Search Results

Search found 18096 results on 724 pages for 'let'.


  • OIM 11g notification framework

    - by Rajesh G Kumar
    OIM 11g introduces an improved, template-based notification framework. The new release removes the old limitation of plain-text (out-of-the-box) emails and adds HTML support, ships with built-in templates for events such as 'Reset Password', 'Create User Self Service' and 'User Deleted', and provides new APIs for sending notifications out of OIM with custom templates. The framework is based on three pieces - events, notification templates and template resolvers - defined as follows:

    - Events are defined as XML files and imported into the MDS database in order to make the notification event available for use.
    - Notification templates are created using the OIM Advanced Administration console. A template contains the text plus substitution 'variables' that are replaced with the data provided by the template resolver. Templates support internationalization and can be defined as HTML or as simple text.
    - A template resolver is a Java class responsible for providing attributes and data at design time and at runtime. It must be deployed following the OIM plug-in framework. The resolver data provided at design time is used by the end user to design the notification template from the available entity variables; at runtime the resolver replaces the designed variables with the values displayed to recipients.

    Defining custom notifications in OIM 11g takes four steps:

    1. Define the notification event
    2. Create the custom template resolver class
    3. Create the template with the notification contents to be sent to recipients
    4. Create the event triggering spots in OIM

    1. Notification event metadata

    The notification event is defined as an XML file which needs to be imported into the MDS database. An event file must be compliant with the schema defined by the notification engine, NotificationEvent.xsd. The event file contains basic information about the event. The XSD location in the MDS database is "/metadata/iam-features-notification/NotificationEvent.xsd"; the schema file can be viewed by exporting it from MDS using the weblogicExportMetadata.sh script.

    Sample notification event metadata definition:

         1: <?xml version="1.0" encoding="UTF-8"?>
         2: <Events xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="../../../metadata/NotificationEvent.xsd">
         3:   <EventType name="Sample Notification">
         4:     <StaticData>
         5:       <Attribute DataType="X2-Entity" EntityName="User" Name="Granted User"/>
         6:     </StaticData>
         7:     <Resolver class="com.iam.oim.demo.notification.DemoNotificationResolver">
         8:       <Param DataType="91-Entity" EntityName="Resource" Name="ResourceInfo"/>
         9:     </Resolver>
        10:   </EventType>
        11: </Events>

    Line-by-line description:

    1. XML declaration
    2. Events is the root tag
    3. EventType declares a unique event name which will be available for template design
    4. The StaticData element lists parameters that are not data dependent; in other words, it defines the static data displayed when the notification is configured. An example of static data is the User entity, which does not depend on any other data and has the same set of attributes for all event instances and notification templates. The available attributes are used as substitution tokens in the template.
    5. Attribute is a child tag of StaticData that declares the entity and its data type under a unique reference name. The User entity is the entity most commonly used as StaticData.
    6. StaticData closing tag
    7. Resolver defines the resolver class, which must be defined for each notification. It determines which parameters are available on the notification creation screen and how those parameters are replaced when the notification is sent. The resolver class resolves the data dynamically at runtime and displays the attributes in the UI.
    8. The Param element lists parameters that are data dependent. An example of a data-dependent, dynamic entity is a resource object which the user selects at runtime; a notification template is configured for that resource object, and a lookup is displayed in the UI for the resource object field. When the user selects the event, the resolver class is called to fetch the fields displayed in the Available Data list, from which the user selects the attributes to be used in the template. Param is a child tag that declares the entity and its data type under a unique reference name.
    9. Resolver closing tag
    10. EventType closing tag
    11. Events closing tag

    Note: DataType needs to be declared as "X2-Entity" for the User entity and "91-Entity" for Resource or Organization entities. The dynamic entities supported for lookup are user, resource, and organization.

    Once the notification event metadata is defined, it needs to be imported into the MDS database. The fully qualified resolver class name must be defined in the XML, but the class does not need to be loaded into OIM yet (it can be loaded later).

    2. Coding the notification resolver

    All event owners have to provide a resolver class which resolves the data dynamically at runtime. A custom resolver class must implement the interface oracle.iam.notification.impl.NotificationEventResolver and override its two methods with an actual implementation:

    1. public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params)

    This API returns the list of available data variables. These variables are shown in the UI while creating or modifying templates and let the user select the variables to embed as tokens in the template messages; the tokens are replaced with the values supplied by the resolver class at runtime. Available data is displayed in a list. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is the map containing the entity name and the corresponding value for which available data is to be fetched.

    Sample code snippet:

        List<NotificationAttribute> list = new ArrayList<NotificationAttribute>();
        long objKey = (Long) params.get("resource");
        // Form field details based on the resource object key
        HashMap<String, Object> formFieldDetail = getObjectFormName(objKey);
        for (Iterator<?> itrd = formFieldDetail.entrySet().iterator(); itrd.hasNext(); ) {
            NotificationAttribute availableData = new NotificationAttribute();
            Map.Entry formDetailEntrySet = (Entry<?, ?>) itrd.next();
            String fieldLabel = (String) formDetailEntrySet.getValue();
            availableData.setName(fieldLabel);
            list.add(availableData);
        }
        return list;

    2. public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params)

    This API returns the resolved values of the variables present on the template at the time the notification is sent. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is the map containing the base values, such as usr_key and obj_key, required by the resolver implementation to resolve the rest of the variables in the template.

    Sample code snippet:

        HashMap<String, Object> resolvedData = new HashMap<String, Object>();
        String firstName = getUserFirstname(params.get("usr_key"));
        resolvedData.put("fname", firstName);
        String lastName = getUserLastName(params.get("usr_key"));
        resolvedData.put("lname", lastName);
        resolvedData.put("count", "1 million");
        return resolvedData;

    This code must be deployed as per the OIM 11g plug-in framework. The XML file defining the plug-in is as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <plugins pluginpoint="oracle.iam.notification.impl.NotificationEventResolver">
            <plugin pluginclass="com.iam.oim.demo.notification.DemoNotificationResolver" version="1.0" name="Sample Notification Resolver"/>
          </plugins>
        </oimplugins>

    3. Defining the template

    To create a notification template:

    - Log in to the Oracle Identity Administration console.
    - Click the System Management tab and then click the Notification tab.
    - From the Actions list on the left pane, select Create.
    - On the Create page, enter values for the following fields under the Template Information section: Template Name: Demo template; Description Text: Demo template.
    - Under the Event Details section, select the event for which the notification template is to be created from the Available Event list. Depending on your selection, other fields are displayed in the Event Details section. Note that the Sample Notification event created in the previous step is used as the notification event here.
    - The contents of the Available Data drop-down are based on the StaticData tag of the event XML; the drop-down lists all attributes of the entities defined in that tag. Once you select an element in the drop-down it shows up in the Selected Data text field, and you can copy it and paste it into either the message subject or the message body, prefixed with a $ symbol. For example, if the list has an attribute First_Name, the message body will contain $First_Name, which the resolver parses and replaces with the actual value at runtime.
    - In the Resource field, select a resource from the lookup. This is the dynamic data defined by the Param element in the XML definition. Based on the selected resource, the getAvailableData method of the resolver is called to fetch the resource object attribute details, provided the method is overridden with the required implementation. In this scenario the Map<String, Object> params argument is populated with the object key as the value and "resource" as the key; this is the only input provided to the resolver at design time. You need to implement the logic that fetches the object attribute details and populates the Available Data list. The list strings must not contain spaces; if an object attribute name contains a space, replace the space with '_' before populating the list (for example, "First Name" becomes "First_Name"). Spaces are not supported when the token is parsed and replaced with the real value at runtime.

    Note that Available Data and Selected Data are used only for defining the substitution tokens; they do not define the final data that will be sent in the notification. OIM invokes the resolver class to get the data and make the substitutions.

    - Under the Locale Information section, enter values in the following fields: for the encoding, select either UTF-8 or ASCII; in the Message Subject field, enter a subject for the notification; from the Type options, select the data type in which you want to send the message (HTML or Text/Plain); in the Short Message field, enter a gist of the message in very few words; in the Long Message field, enter the message that will be sent as the notification, including the Available Data tokens to be replaced by the resolver at runtime.
    - After you have entered the required values in all the fields, click Save. A message is displayed confirming the creation of the notification template. Click OK.

    4. Triggering the event

    A notification event can be triggered from different places in OIM. The logic behind the triggering must be coded and plugged into OIM. Examples of triggering points for notifications:

    - Event handlers: post-process notifications for specific data updates on OIM users
    - Process tasks: to notify users that a provisioning task was executed by OIM
    - Scheduled tasks: to notify something related to the task

    In this example the scheduled job has two parameters: Template Name, which defines the notification template to be sent, and User Login, which defines the user record that will provide the data to be sent in the notification.

    Sample code snippet:

        public void execute(String templateName, String userId) {
            try {
                NotificationService notService = Platform.getService(NotificationService.class);
                NotificationEvent eventToSend = this.createNotificationEvent(templateName, userId);
                notService.notify(eventToSend);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private NotificationEvent createNotificationEvent(String poTemplateName, String poUserId) {
            NotificationEvent event = new NotificationEvent();
            String[] receiverUserIds = { poUserId };
            event.setUserIds(receiverUserIds);
            event.setTemplateName(poTemplateName);
            event.setSender(null);
            HashMap<String, Object> templateParams = new HashMap<String, Object>();
            templateParams.put("USER_LOGIN", poUserId);
            event.setParams(templateParams);
            return event;
        }

        public HashMap getAttributes() { return null; }

        public void setAttributes() {}
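    Putting the two methods together, here is a minimal sketch of what a complete resolver class built from the snippets above might look like. It is an illustration only: the interface and the DemoNotificationResolver name come from this post, but the import locations may need adjusting for your OIM version, and getObjectFormName, getUserFirstname and getUserLastName are hypothetical helpers you would implement against your own form and user data.

        package com.iam.oim.demo.notification;

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        import oracle.iam.notification.impl.NotificationEventResolver;
        // Package assumed; verify against your OIM 11g libraries
        import oracle.iam.notification.vo.NotificationAttribute;

        public class DemoNotificationResolver implements NotificationEventResolver {

            // Design time: populates the Available Data list shown in the template UI
            public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params) {
                List<NotificationAttribute> list = new ArrayList<NotificationAttribute>();
                long objKey = (Long) params.get("resource");
                HashMap<String, Object> formFieldDetail = getObjectFormName(objKey);
                for (Map.Entry<String, Object> entry : formFieldDetail.entrySet()) {
                    NotificationAttribute availableData = new NotificationAttribute();
                    // Tokens cannot contain spaces, e.g. "First Name" becomes "First_Name"
                    availableData.setName(((String) entry.getValue()).replace(' ', '_'));
                    list.add(availableData);
                }
                return list;
            }

            // Runtime: supplies the values substituted for the $tokens in the template
            public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params) {
                HashMap<String, Object> resolvedData = new HashMap<String, Object>();
                String usrKey = String.valueOf(params.get("usr_key"));
                resolvedData.put("fname", getUserFirstname(usrKey));
                resolvedData.put("lname", getUserLastName(usrKey));
                return resolvedData;
            }

            // Hypothetical helpers - replace with real lookups against your form and user data
            private HashMap<String, Object> getObjectFormName(long objKey) {
                return new HashMap<String, Object>();
            }

            private String getUserFirstname(String usrKey) {
                return "";
            }

            private String getUserLastName(String usrKey) {
                return "";
            }
        }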

    Read the article

  • Mobile Friendly Websites with CSS Media Queries

    - by dwahlin
    In a previous post the concept of CSS media queries was introduced and I discussed the fundamentals of how they can be used to target different screen sizes. I showed how they could be used to convert a 3-column wide page into a more vertical view of data that displays better on devices such as an iPhone:     In this post I'll provide an additional look at how CSS media queries can be used to mobile-enable a sample site called "Widget Masters" without having to change any server-side code or HTML code. The site that will be discussed is shown next:     This site has some of the standard items shown in most websites today including a title area, menu bar, and sections where data is displayed. Without including CSS media queries the site is readable but has to be zoomed out to see everything on a mobile device, cuts-off some of the menu items, and requires horizontal scrolling to get to additional content. The following image shows what the site looks like on an iPhone. While the site works on mobile devices it's definitely not optimized for mobile.     Let's take a look at how CSS media queries can be used to override existing styles in the site based on different screen widths. Adding CSS Media Queries into a Site The Widget Masters Website relies on standard CSS combined with HTML5 elements to provide the layout shown earlier. For example, to layout the menu bar shown at the top of the page the nav element is used as shown next. A standard div element could certainly be used as well if desired.   <nav> <ul class="clearfix"> <li><a href="#home">Home</a></li> <li><a href="#products">Products</a></li> <li><a href="#aboutus">About Us</a></li> <li><a href="#contactus">Contact Us</a></li> <li><a href="#store">Store</a></li> </ul> </nav>   This HTML is combined with the CSS shown next to add a CSS3 gradient, handle the horizontal orientation, and add some general hover effects.   nav { width: 100%; } nav ul { border-radius: 6px; height: 40px; width: 100%; margin: 0; padding: 0; background: rgb(125,126,125); /* Old browsers */ background: -moz-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* FF3.6+ */ background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,rgba(125,126,125,1)), color-stop(100%,rgba(14,14,14,1))); /* Chrome,Safari4+ */ background: -webkit-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* Chrome10+,Safari5.1+ */ background: -o-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* Opera 11.10+ */ background: -ms-linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* IE10+ */ background: linear-gradient(top, rgba(125,126,125,1) 0%, rgba(14,14,14,1) 100%); /* W3C */ filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#7d7e7d', endColorstr='#0e0e0e',GradientType=0 ); /* IE6-9 */ } nav ul > li { list-style: none; float: left; margin: 0; padding: 0; } nav ul > li:first-child { margin-left: 8px; } nav ul > li > a { color: #ccc; text-decoration: none; line-height: 2.8em; font-size: 0.95em; font-weight: bold; padding: 8px 25px 7px 25px; font-family: Arial, Helvetica, sans-serif; } nav ul > li a:hover { background-color: rgba(0, 0, 0, 0.1); color: #fff; }   When mobile devices hit the site the layout of the menu items needs to be adjusted so that they're all visible without having to swipe left or right to get to them. This type of modification can be accomplished using CSS media queries by targeting specific screen sizes. 
To start, a media query can be added into the site's CSS file as shown next: @media screen and (max-width:320px) { /* CSS style overrides for this screen width go here */ } This media query targets screens that have a maximum width of 320 pixels. Additional types of queries can also be added – refer to my previous post for more details as well as resources that can be used to test media queries in different devices. In that post I emphasize (and I'll emphasize again) that CSS media queries only modify the overall layout and look and feel of a site. They don't optimize the site as far as the size of the images or content sent to the device which is important to keep in mind. To make the navigation menu more accessible on devices such as an iPhone or Android the CSS shown next can be used. This code changes the height of the menu from 40 pixels to 100%, takes off the li element floats, changes the line-height, and changes the margins.   @media screen and (max-width:320px) { nav ul { height: 100%; } nav ul > li { float: none; } nav ul > li a { line-height: 1.5em; } nav ul > li:first-child { margin-left: 0px; } /* Additional CSS overrides go here */ }   The following image shows an example of what the menu look like when run on a device with a width of 320 pixels:   Mobile devices with a maximum width of 480 pixels need different CSS styles applied since they have 160 additional pixels of width. This can be done by adding a new CSS media query into the stylesheet as shown next. Looking through the CSS you'll see that only a minimal override is added to adjust the padding of anchor tags since the menu fits by default in this screen width.   @media screen and (max-width: 480px) { nav ul > li > a { padding: 8px 10px 7px 10px; } }   Running the site on a device with 480 pixels results in the menu shown next being rendered. Notice that the space between the menu items is much smaller compared to what was shown when the main site loads in a standard browser.     In addition to modifying the menu, the 3 horizontal content sections shown earlier can be changed from a horizontal layout to a vertical layout so that they look good on a variety of smaller mobile devices and are easier to navigate by end users. The HTML5 article and section elements are used as containers for the 3 sections in the site as shown next:   <article class="clearfix"> <section id="info"> <header>Why Choose Us?</header> <br /> <img id="mainImage" src="Images/ArticleImage.png" title="Article Image" /> <p> Post emensos insuperabilis expeditionis eventus languentibus partium animis, quas periculorum varietas fregerat et laborum, nondum tubarum cessante clangore vel milite locato per stationes hibernas. </p> </section> <section id="products"> <header>Products</header> <br /> <img id="gearsImage" src="Images/Gears.png" title="Article Image" /> <p> <ul> <li>Widget 1</li> <li>Widget 2</li> <li>Widget 3</li> <li>Widget 4</li> <li>Widget 5</li> </ul> </p> </section> <section id="FAQ"> <header>FAQ</header> <br /> <img id="faqImage" src="Images/faq.png" title="Article Image" /> <p> <ul> <li>FAQ 1</li> <li>FAQ 2</li> <li>FAQ 3</li> <li>FAQ 4</li> <li>FAQ 5</li> </ul> </p> </section> </article>   To force the sections into a vertical layout for smaller mobile devices the CSS styles shown next can be added into the media queries targeting 320 pixel and 480 pixel widths. Styles to target the display size of the images in each section are also included. 
It's important to note that the original image is still being downloaded from the server and isn't being optimized in any way for the mobile device. It's certainly possible for the CSS to include URL information for a mobile-optimized image if desired. @media screen and (max-width:320px) { section { float: none; width: 97%; margin: 0px; padding: 5px; } #wrapper { padding: 5px; width: 96%; } #mainImage, #gearsImage, #faqImage { width: 100%; height: 100px; } } @media screen and (max-width: 480px) { section { float: none; width: 98%; margin: 0px 0px 10px 0px; padding: 5px; } article > section:last-child { margin-right: 0px; float: none; } #bottomSection { width: 99%; } #wrapper { padding: 5px; width: 96%; } #mainImage, #gearsImage, #faqImage { width: 100%; height: 100px; } }   The following images show the site rendered on an iPhone with the CSS media queries in place. Each of the sections now displays vertically making it much easier for the user to access them. Images inside of each section also scale appropriately to fit properly.     CSS media queries provide a great way to override default styles in a website and target devices with different resolutions. In this post you've seen how CSS media queries can be used to convert a standard browser-based site into a site that is more accessible to mobile users. Although much more can be done to optimize sites for mobile, CSS media queries provide a nice starting point if you don't have the time or resources to create mobile-specific versions of sites.
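    One related option worth mentioning: the media queries shown above don't have to be embedded in the main stylesheet with @media blocks. They can also be attached through the media attribute of a link element so that the mobile overrides live in their own files. The sketch below assumes two extra stylesheets (the file names are placeholders) containing the 320 pixel and 480 pixel rules shown earlier; note that most browsers still download non-matching stylesheets, so this organizes the CSS but doesn't reduce what is sent to the device.

        <link rel="stylesheet" href="Styles/Site.css" />
        <link rel="stylesheet" media="screen and (max-width: 320px)" href="Styles/Site-320.css" />
        <link rel="stylesheet" media="screen and (max-width: 480px)" href="Styles/Site-480.css" />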

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need in order to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back to a completely new authentication scheme that's described in a documentation topic that isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to set up to use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data Link on the main page Select Microsoft Translator from the list This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix, including the free tier which provides 2 megabytes of translations a month at no cost. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Translations max out at 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name The client secret is auto-created and becomes your 'password' For the redirect url provide any https url: https://microsoft.com works Give this application a description of your choice so you can identify it in the list of apps Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a  bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = https://datamarket.accesscontrol.windows.net/v2/OAuth2-13; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
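    For scenarios where you'd rather not hit the auth endpoint on every request, here is a minimal sketch of the kind of caching scheme mentioned above. It wraps the GetBingAuthToken method from the TranslationServices class shown in this post; the use of MemoryCache (System.Runtime.Caching) instead of the ASP.NET Cache object and the 9 minute expiration window are my own assumptions, chosen to stay safely inside the token's 10 minute lifetime.

        using System;
        using System.Runtime.Caching;  // requires a reference to the System.Runtime.Caching assembly

        public class BingAuthTokenCache
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;
            private const string CacheKey = "BingAuthToken";

            // Returns a cached bearer token, re-authenticating shortly before the 10 minute expiry
            public string GetToken(TranslationServices translator, string clientId, string clientSecret)
            {
                var token = Cache.Get(CacheKey) as string;
                if (token != null)
                    return token;

                token = translator.GetBingAuthToken(clientId, clientSecret);
                if (token == null)
                    return null; // check translator.ErrorMessage for the reason

                // Cache for 9 minutes so a nearly-expired token is never handed out
                Cache.Set(CacheKey, token, DateTimeOffset.UtcNow.AddMinutes(9));
                return token;
            }
        }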
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a CLientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available and this one is the simplest one but probably also the most common one that translates a single string. 
    You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method Source on Github | Translating with Google Translate without Google API Keys | Creating a data-driven ASP.NET Resource Provider © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans.   If you are considering using them now for an important trip - reconsider. Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)!  At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia.  I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours.  This is after 5 hours total at the airport.  If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding. Details below (times are approximate): First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok enough mushy - here are the dirty details. 2:30 AM - Early Check-in Attempt - we attempted to check-in for our flight online. Some sort of technology error on website, instructed to checkin at desk. 4:30 AM - Arrive at airport. Try to check-in at kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia provided itinerary does not match the Expedia provided tickets. We are informed that when that happens American, JetBlue, and others that use the same software cannot check you in for the flight because. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system. 4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining. 
Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary get's connected, explains the situation, and then Mary's connection gets terminated. 5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales, a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon, could they please help us. "Yes, I will help you". Again I give the phone to Mary who provides them with a call back number in case we get disconnected again and explains the situation again. More back and forth with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated. 5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe this disconnection isn't an accident. We've now been disconnected 3 times on two different phones. 5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for now having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone iwth Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us. 6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check-in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - it no longer matters why (not to mention the fact that we already knew why, and have known why since 4:30 AM), and have known the solution since 4:30 AM. Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor but that will take 1.5 hours. 
6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, i asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed out flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which how is about 1 hour and 10 minutes away). 6:30 AM - I start tweeting my frustration with iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. Its ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right. 7:45 AM - JetBlue mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why I'm not exactly sure). I say ok. 8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back. 9:35 AM - I put down the iPhone Twitter app and picks up the laptop. You think I made some noise with my iPhone? Heh 11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details".  If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response.  The other 4?  Ads.  Um - #MultiFAIL? To Expedia:  You now have had (as explained above) 8 hours of 3 different people explaining our situation, you know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number.  You also know how upset you have made both me and my new bride by treating us with such a ... non caring, scripted, uncooperative, argumentative, and possibly even deceitful manner.  In the wise words of the great Kenan Thompson of SNL: "FIX IT!".  And no, I'm NOT going away until you make this right. Period. 11:45 AM - Expedia corporate office called.  
The woman I spoke to was very nice and apologetic.  She listened to me tell the story again, she says she understands the problem and she is going to work to resolve it.  I don't have any details on what exactly that resolution might me, she said she will call me back in 20 minutes.  She found out about the problem via Twitter.  Thank you Twitter, and all of you who helped.  Hopefully social media will win my wife and I our honeymoon, and hopefully Expedia will encourage their customer service teams treat their customers properly. 12:22 PM - Spoke to Fran again from Expedia corporate office.  She has a flight for us tonight.  She is booking it now.  We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late.  She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night.  Thank you everyone for your help.  I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed.  For now, I'm going to take a breather and go kiss my wonderful wife! 1:50 PM - Have not yet received the promised phone call.  We did receive an email with a new itinerary for a flight but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together.  With the original booking I carefully selected our seats for every segment of our trip.  I decided to call into the phone number that Fran from the Expedia corporate office gave me.  Its automated voice system identified itself as "Tier 3 Support".  I am currently still on hold with them, I have not gotten through to a human yet. 1:55 PM - Fran from Expedia called me back.  She confirmed us as booked.  She called the airlines to confirm.  Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection.  It is possible that i won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back).  In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom.  Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade).  Since they didn't offer - I asked, and was rejected.  I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution.  If this works out favorably for us, great.  If not - I'm not done making noise, Expedia.  You owe us, and I expect you to make it right.  You haven't quite done that yet. Thanks - Thank you to Twitter.  Thanks to all those who sympathize with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help.  Thanks especially to my PowerShell and Sharepoint friends, my local friends, and those connectors who encouraged me and spread my story. 5:15 PM - Love Wins - After all this, Lauren and I are exhausted.  We both took a short nap, and when we woke up we talked about the last 24 hours.  It was a big, amazing, story-filled 24 hours.  I said that Expedia won, but Lauren said no.  She pointed out how lucky we are.  We are in love and married.  
We have wonderful family and friends.  We are both hard-working successful people who love what they do.  We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives...  That's a lot of good.  Expedia didn't win.  This was (is) a big loss for Expedia.  It is a public blemish for all to see.  But Lauren and I did win, big time.  Expedia may not have made things right - but things are right for us.  Post in progress... I will relay any further comments (or lack of) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates.  I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • Metro: Introduction to CSS 3 Grid Layout

    - by Stephen.Walther
    The purpose of this blog post is to provide you with a quick introduction to the new W3C CSS 3 Grid Layout standard. You can use CSS Grid Layout in Metro style applications written with JavaScript to lay out the content of an HTML page. CSS Grid Layout provides you with all of the benefits of using HTML tables for layout without requiring you to actually use any HTML table elements. Doing Page Layouts without Tables Back in the 1990’s, if you wanted to create a fancy website, then you would use HTML tables for layout. For example, if you wanted to create a standard three-column page layout then you would create an HTML table with three columns like this: <table height="100%"> <tr> <td valign="top" width="300px" bgcolor="red"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </td> <td valign="top" bgcolor="green"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </td> <td valign="top" width="300px" bgcolor="blue"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </td> </tr> </table> When the table above gets rendered out to a browser, you end up with the following three-column layout: The width of the left and right columns is fixed – the width of the middle column expands or contracts depending on the width of the browser. Sometime around the year 2005, everyone decided that using tables for layout was a bad idea. Instead of using tables for layout — it was collectively decided by the spirit of the Web — you should use Cascading Style Sheets instead. Why is using HTML tables for layout bad? Using tables for layout breaks the semantics of the TABLE element. A TABLE element should be used only for displaying tabular information such as train schedules or moon phases. Using tables for layout is bad for accessibility (The Web Content Accessibility Guidelines 1.0 is explicit about this) and using tables for layout is bad for separating content from layout (see http://CSSZenGarden.com). Post 2005, anyone who used HTML tables for layout were encouraged to hold their heads down in shame. That’s all well and good, but the problem with using CSS for layout is that it can be more difficult to work with CSS than HTML tables. For example, to achieve a standard three-column layout, you either need to use absolute positioning or floats. Here’s a three-column layout with floats: <style type="text/css"> #container { min-width: 800px; } #leftColumn { float: left; width: 300px; height: 100%; background-color:red; } #middleColumn { background-color:green; height: 100%; } #rightColumn { float: right; width: 300px; height: 100%; background-color:blue; } </style> <div id="container"> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> </div> The page above contains four DIV elements: a container DIV which contains a leftColumn, middleColumn, and rightColumn DIV. The leftColumn DIV element is floated to the left and the rightColumn DIV element is floated to the right. 
Notice that the rightColumn DIV appears in the page before the middleColumn DIV – this unintuitive ordering is necessary to get the floats to work correctly (see http://stackoverflow.com/questions/533607/css-three-column-layout-problem). The page above (almost) works with the most recent versions of most browsers. For example, you get the correct three-column layout in both Firefox and Chrome: And the layout mostly works with Internet Explorer 9 except for the fact that for some strange reason the min-width doesn’t work so when you shrink the width of your browser, you can get the following unwanted layout: Notice how the middle column (the green column) bleeds to the left and right. People have solved these issues with more complicated CSS. For example, see: http://matthewjamestaylor.com/blog/holy-grail-no-quirks-mode.htm But, at this point, no one could argue that using CSS is easier or more intuitive than tables. It takes work to get a layout with CSS and we know that we could achieve the same layout more easily using HTML tables. Using CSS Grid Layout CSS Grid Layout is a new W3C standard which provides you with all of the benefits of using HTML tables for layout without the disadvantage of using an HTML TABLE element. In other words, CSS Grid Layout enables you to perform table layouts using pure Cascading Style Sheets. The CSS Grid Layout standard is still in a “Working Draft” state (it is not finalized) and it is located here: http://www.w3.org/TR/css3-grid-layout/ The CSS Grid Layout standard is only supported by Internet Explorer 10 and there are no signs that any browser other than Internet Explorer will support this standard in the near future. This means that it is only practical to take advantage of CSS Grid Layout when building Metro style applications with JavaScript. Here’s how you can create a standard three-column layout using a CSS Grid Layout: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } </style> </head> <body> <div id="container"> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> </div> </body> </html> When the page above is rendered in Internet Explorer 10, you get a standard three-column layout: The page above contains four DIV elements: a container DIV which contains a leftColumn DIV, middleColumn DIV, and rightColumn DIV. The container DIV is set to Grid display mode with the following CSS rule: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100%; } The display property is set to the value “-ms-grid”. This property causes the container DIV to lay out its child elements in a grid. (Notice that you use “-ms-grid” instead of “grid”. The “-ms-“ prefix is used because the CSS Grid Layout standard is still preliminary. 
This implementation only works with IE10 and it might change before the final release.) The grid columns and rows are defined with the “-ms-grid-columns” and “-ms-grid-rows” properties. The style rule above creates a grid with three columns and one row. The left and right columns are fixed sized at 300 pixels. The middle column sizes automatically depending on the remaining space available. The leftColumn, middleColumn, and rightColumn DIVs are positioned within the container grid element with the following CSS rules: #leftColumn { -ms-grid-column: 1; background-color:red; } #middleColumn { -ms-grid-column: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; background-color:blue; } The “-ms-grid-column” property is used to specify the column associated with the element selected by the style sheet selector. The leftColumn DIV is positioned in the first grid column, the middleColumn DIV is positioned in the second grid column, and the rightColumn DIV is positioned in the third grid column. I find using CSS Grid Layout to be just as intuitive as using an HTML table for layout. You define your columns and rows and then you position different elements within these columns and rows. Very straightforward. Creating Multiple Columns and Rows In the previous section, we created a super simple three-column layout. This layout contained only a single row. In this section, let’s create a slightly more complicated layout which contains more than one row: The following page contains a header row, a content row, and a footer row. The content row contains three columns: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } #leftColumn { -ms-grid-column: 1; -ms-grid-row: 2; background-color:red; } #middleColumn { -ms-grid-column: 2; -ms-grid-row: 2; background-color:green; } #rightColumn { -ms-grid-column: 3; -ms-grid-row: 2; background-color:blue; } #footer { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 3; background-color: orange; } </style> </head> <body> <div id="container"> <div id="header"> Header, Header, Header </div> <div id="leftColumn"> Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column, Left Column </div> <div id="middleColumn"> Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column, Middle Column </div> <div id="rightColumn"> Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column, Right Column </div> <div id="footer"> Footer, Footer, Footer </div> </div> </body> </html> In the page above, the grid layout is created with the following rule which creates a grid with three rows and three columns: #container { display: -ms-grid; -ms-grid-columns: 300px auto 300px; -ms-grid-rows: 100px 1fr 100px; } The header is created with the following rule: #header { -ms-grid-column: 1; -ms-grid-column-span: 3; -ms-grid-row: 1; background-color: yellow; } The header is positioned in column 1 and row 1. Furthermore, notice that the “-ms-grid-column-span” property is used to span the header across three columns. CSS Grid Layout and Fractional Units When you use CSS Grid Layout, you can take advantage of fractional units. 
Fractional units provide you with an easy way of dividing up remaining space in a page. Imagine, for example, that you want to create a three-column page layout. You want the size of the first column to be fixed at 200 pixels and you want to divide the remaining space among the remaining three columns. The width of the second column is equal to the combined width of the third and fourth columns. The following CSS rule creates four columns with the desired widths: #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } The fr unit represents a fraction. The grid above contains four columns. The second column is two times the size (2fr) of the third (1fr) and fourth (1fr) columns. When you use the fractional unit, the remaining space is divided up using fractional amounts. Notice that the single row is set to a height of 1fr. The single grid row gobbles up the entire vertical space. Here’s the entire HTML page: <!DOCTYPE html> <html> <head> <style type="text/css"> html, body, #container { height: 100%; padding: 0px; margin: 0px; } #container { display: -ms-grid; -ms-grid-columns: 200px 2fr 1fr 1fr; -ms-grid-rows: 1fr; } #firstColumn { -ms-grid-column: 1; background-color:red; } #secondColumn { -ms-grid-column: 2; background-color:green; } #thirdColumn { -ms-grid-column: 3; background-color:blue; } #fourthColumn { -ms-grid-column: 4; background-color:orange; } </style> </head> <body> <div id="container"> <div id="firstColumn"> First Column, First Column, First Column </div> <div id="secondColumn"> Second Column, Second Column, Second Column </div> <div id="thirdColumn"> Third Column, Third Column, Third Column </div> <div id="fourthColumn"> Fourth Column, Fourth Column, Fourth Column </div> </div> </body> </html>   Summary There is more in the CSS 3 Grid Layout standard than discussed in this blog post. My goal was to describe the basics. If you want to learn more than you can read through the entire standard at http://www.w3.org/TR/css3-grid-layout/ In this blog post, I described some of the difficulties that you might encounter when attempting to replace HTML tables with Cascading Style Sheets when laying out a web page. I explained how you can take advantage of the CSS 3 Grid Layout standard to avoid these problems when building Metro style applications using JavaScript. CSS 3 Grid Layout provides you with all of the benefits of using HTML tables for laying out a page without requiring you to use HTML table elements.
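One more property worth knowing about, although the post above does not use it, is row spanning. In the same prefixed IE10 implementation, -ms-grid-row-span is the row counterpart of the -ms-grid-column-span property used for the header and footer earlier; treat the following as a small sketch rather than something taken from the original article. It shows a sidebar that covers both a header row and a content row:

/* Sketch only: assumes the same -ms- prefixed Grid implementation (IE10) used above. */
#container {
    display: -ms-grid;
    -ms-grid-columns: 200px 1fr;   /* fixed sidebar, fluid content */
    -ms-grid-rows: 100px 1fr;      /* header row, content row */
}
#sidebar {
    -ms-grid-column: 1;
    -ms-grid-row: 1;
    -ms-grid-row-span: 2;          /* sidebar covers the header and content rows */
}
#header  { -ms-grid-column: 2; -ms-grid-row: 1; }
#content { -ms-grid-column: 2; -ms-grid-row: 2; }

As with the earlier examples, the 1fr row simply absorbs whatever vertical space the fixed 100px header row leaves over.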


  • SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience

    - by pinaldave
    I recently attended most prestigious SQL Server event SQLPASS between Nov 8-11, 2010 at Seattle. I have only one expression for the event - Best Summit Ever This year the summit was at its best. Instead of writing about my usual routine or the event, I am going to write about the interesting things I did and how I felt about it! Best Summit Ever Trip to Seattle! This was my second trip to Seattle this year and the journey is always long. Here is the travel stats on how long it takes to get to Seattle: 24 hours official air time 36 hours total travel time (connection waits and airport commute) Every time I travel to USA I gain a day and when I travel back to home, I lose a day. However, the total traveling time is around 3 days. The journey is long and very exhausting. However, it is all worth it when you’re attending an event like SQLPASS. Here are few things I carry when I travel for a long journey: Dry Snack packs – I like to have some good Indian Dry Snacks along with me in my backpack so I can have my own snack when I want Amazon Kindle – Loaded with 80+ books A physical book – This is usually a very easy to read book I do not watch movies on the plane and usually spend my time reading something quick and easy. If I can go to sleep, I go for it. I prefer to not to spend time in conversation with the guy sitting next to me because usually I end up listening to their biography, which I cannot blog about. Sheraton Seattle SQLPASS In any case, I love to go to Seattle as the city is great and has everything a brilliant metropolis has to offer. The new Light Train is extremely convenient, and I can take it directly from the airport to the city center. My hotel, the Sheraton, was only few meters (in the USA people count in blocks – 3 blocks) away from the train station. This time I saved USD 40 each round trip due to the Light Train. Sessions I attended! Well, I really wanted to attend most of the sessions but there was great dilemma of which ones to choose. There were many, many sessions to be attended and at any given time there was more than one good session being presented. I had decided to attend sessions in area performance tuning and I attended quite a few sessions this year, compared to what I was able to do last year. Here are few names of the speakers whose sessions I attended (please note, following great speakers are not listed in any order. I loved them and I enjoyed their sessions): Conor Cunningham Rushabh Mehta Buck Woody Brent Ozar Jonathan Kehayias Chris Leonard Bob Ward Grant Fritchey I had great fun attending their sessions. The sessions were meaningful and enlightening. It is hard to rate any session but I have found that the insights learned in Conor Cunningham’s sessions are the highlight of the PASS Summit. Rushabh Mehta at Keynote SQLPASS   Bucky Woody and Brent Ozar I always like the sessions where the speaker is much closer to the audience and has real world experience. I think speakers who have worked in the real world deliver the best content and most useful information. Sessions I did not like! Indeed there were few sessions I did not like it and I am not going to name them here. However, there were strong reasons I did not like their sessions, and here is why: Sessions were all theory and had no real world connections. 
All technical questions ended with confusing answers (lots of “I will get back to you on it,” “it depends,” “let us take this offline” and many more…) “I am God” kind of attitude in the speakers For example, I attended a session of one very well known speaker who is a specialist for one particular area. I was bit late for the session and was surprised to see that in a room that could hold 350 people there were only 30 attendees. After sitting there for 15 minutes, I realized why lots of people left. Very soon I found I preferred to stare out the window instead of listening to that particular speaker. One on One Talk! Many times people ask me what I really like about PASS. I always say the experience of meeting SQL legends and spending time with them one on one and LEARNING! Here is the quick list of the people I met during this event and spent more than 30 minutes with each of them talking about various subjects: Pinal Dave and Brad Shulz Pinal Dave and Rushabh Mehta Michael Coles and Pinal Dave Rushabh Mehta – It is always pleasure to meet with him. He is a man with lots of energy and a passion for community. He recently told me that he really wanted to turn PASS into resource for learning for every SQL Server Developer and Administrator in the world. I had great in-depth discussion regarding how a single person can contribute to a community. Michael Coles – I consider him my best friend. It is always fun to meet him. He is funny and very knowledgeable. I think there are very few people who are as expert as he is in encryption and spatial databases. Worth meeting him every single time. Glenn Berry – A real friend of everybody. He is very a simple person and very true to his heart. I think there is not a single person in whole community who does not like him. He is a friends of all and everybody likes him very much. I once again had time to sit with him and learn so much from him. As he is known as Dr. DMV, I can be his nurse in the area of DMV. Brad Schulz – I always wanted to meet him but never got chance until today. I had great time meeting him in person and we have spent considerable amount of time together discussing various T-SQL tricks and tips. I do not know where he comes up with all the different ideas but I enjoy reading his blog and sharing his wisdom with me. Jonathan Kehayias – He is drill sergeant in US army. If you get the impression that he is a giant with very strong personality – you are wrong. He is very kind and soft spoken DBA with strong performance tuning skills. I asked him how he has kept his two jobs separate and I got very good answer – just work hard and have passion for what you do. I attended his sessions and his presentation style is very unique.  I feel like he is speaking in a language I understand. Louis Davidson – I had never had a chance to sit with him and talk about technology before. He has so much wisdom and he is very kind. During the dinner, I had talked with him for long time and without hesitation he started to draw a schema for me on the menu. It was a wonderful experience to learn from a master at the dinner table. He explained to me the real and practical differences between third normal form and forth normal form. Honestly I did not know earlier, but now I do. Erland Sommarskog – This man needs no introduction, he is very well known and very clear in conveying his ideas. I learned a lot from him during the course of year. Every time I meet him, I learn something new and this time was no exception. 
Joe Webb – Joey is all about community and people, we had interesting conversation about community, MVP and how one can be helpful to community without losing passion for long time. It is always pleasant to talk to him and of course, I had fun time. Ross Mistry – I call him my brother many times because he indeed looks like my cousin. He provided me lots of insight of how one can write book and how he keeps his books simple to appeal to all the readers. A wonderful person and great friend. Ola Hallgren - I did not know he was coming to the summit. I had great time meeting him and had a wonderful conversation with him regarding his scripts and future community activities. Blythe Morrow – She used to be integrated part of SQL Server Community and PASS HQ. It was wonderful to meet her again and re-connect. She is wonderful person and I had a great time talking to her. Solid Quality Mentors – It is difficult to decide who to mention here. Instead of writing all the names, I am going to include a photo of our meeting. I had great fun meeting various members of our global branches. This year I was sitting with my Spanish speaking friends and had great fun as Javier Loria from Solid Quality translated lots of things for me. Party, Party and Parties Every evening there were various parties. I did attend almost all of them. Every party had different theme but the goal of all the parties the same – networking. Here are the few parties where I had lots of fun: Dell Reception Party Exhibitor Party Solid Quality Fun Party Red Gate Friends Party MVP Dinner Microsoft Party MVP Dinner Quest Party Gameworks PASS Party Volunteer Party at Garage Solid Quality Mentors (10 Members out of 120) They were all great networking opportunities and lots of fun. I really had great time meeting people at the various parties. There were few people everywhere – well, I will say I am among them – who hopped parties. NDA – Not Decided Agenda During the event there were few meetings marked “NDA.” Someone asked me “why are these things NDA?”  My response was simple: because they are not sure themselves. NDA stands for Not Decided Agenda. Toys, Giveaways and Luggage I admit, I was like child in Gameworks and was playing to win soft toys. I was doing it for my daughter. I must thank all of the people who gave me their cards to try my luck. I won 4 soft-toys for my daughter and it was fun. Also, thanks to Angel who did a final toy swap with me to get the desired toy for my daughter. I also collected ducks from Idera, as my daughter really loves them. Solid Quality Booth Each of the exhibitors was giving away something and I got so much stuff that my luggage got quite a bit bigger when I returned. Best Exhibitor Idera had SQLDoctor (a real magician and fun guy) to promote their new tool SQLDoctor. I really had a great time participating in the magic myself. At one point, the magician made my watch disappear.  I have seen better magic before, but this time it caught me unexpectedly and I was taken by surprise. I won many ducks again. The Common Question I heard the following common questions: I have seen you somewhere – who are you? – I am Pinal Dave. I did not know that Pinal is your first name and Dave is your last name, how do you pronounce your last name again? – Da-way How old are you? – I am as old as I can be. Are you an Indian because you look like one? – I did not answer this one. Where are you from? This question was usually asked after looking at my badge which says India. So did you really fly from India? 
– Yes, because I have seasickness so I do not prefer the sea journey. How long was the journey? – 24/36/12 (air travel time/total travel time/time zone difference) Why do you write on SQLAuthority.com? – Because I want to. I remember your daughter looks like you. – Is this even a question? Of course, she is daddy’s little girl. There were so many other questions, I will have to write another blog post about it. SQLPASS Again, Best Summit Ever! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: SQLPASS


  • Too nervous to install

    - by The Prop
    Yesterday I (a professional rugby prop of somewhat limited intellect) landed in http://htmlagilitypack.codeplex.com/ and found myself stranded in a town with no signposts. The locals don't need signposts - they know their way around - so who gives a hoot about visitors? Well I'm a visitor and I'm lost. Here's my plea to the good burgesses of Codeplex-sans-signs: HELP!! Let me back-track and explain what landed me at the bottom of this tangled ruck. There's a "Download" button positioned near the top-right of the Codeplex web page, right? Like the Sword of Damocles, a down-arrow to the left of the button indicates, presumably, what a download would include: CURRENT 1.4.0 Stable DATE Fri May 7 2010 at 7:00 AM STATUS Stable With a simple-minded confidence that has since deserted me (the confidence - not the simple-mindedness), I clicked "Download". This introduced 3 new files to my computer: HtmlAgilityPack.dll, HtmlAgilityPack.pdb, and HtmlAgilityPack.XML This is when the first stab of doubt penetrated that globe between my cauliflower ears that I call a head. Where's the dot cs? Somewhere in Codeplex, I'd read advice to another lost soul to "download and build the HTMLAgilityPack solution". As I've done so many times as an All Black prop, I glared at the opposition front row - ah, I mean the 3 new files. Shouldn't one of them have a ".cs" on the back of his jersey - er, on the end of its name? Or is this just how they play the game in Codeplex-sans-signs? Undaunted (props have more courage than sense) I packed into my first C# scrum. The half-back feeds the ball in, and the front rows collapse - er, the debugging stops at this line of my code: "HtmlAgilityPack.HtmlDocument doc = new HtmlAgilityPack.HtmlDocument();" Then the Referee blows his whistle and announces one of those verdicts that's utterly indecipherable to your average loose-head prop: Locating source for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Checksum: MD5 {62 bc f3 7e 9a 92 a6 32 7 d6 5b f8 76 59 7b 5b} The file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs' does not exist. Looking in script documents for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'... Looking in the projects for 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. The file was not found in a project. Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\atlmfc'... Looking in directory 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\vc7\crt'... The debugger will ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs. The user pressed Cancel [a brain-stemmer from the prop] in the Find Source dialog. The debug source files settings for the active solution have been modified so that the debugger will not ask the user to find the file: C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs. The debugger could not locate the source file 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'. Even if it had been the first 50 stanzas of "Eskimo Nell", I couldn't have been more shocked. I'm so shocked, my jaws clamp shut around the opposition hooker's ear. He thumbs me in the iris. With a cornea-torn eye I peer at the Codeplex site. My brain stem sparks and I punch the "View all downloads" link. It sparks four more times on each download link, and.. lo! 
FOUR files this time: HAPExplorer.zip, HtmlAgilityPack.1.4.0.Source.zip, HtmlAgilityPack.1.4.0.zip, HtmlAgilityPack.Documentation.chm But... is this not the same place arrived at recently by my flat-mate Chaz, journalist extraordinaire? (Chaz, if you're reading this, I'm not plugging for nothing - just write kindly about me in your next report, okay?) Didn't these same four files flummox Chaz The Great? He told me about it. Chaz left a message with Codeplex and then solved the problem by just walking away. Typical journalist, huh. But I'm not like that. I don't walk away. I'm made of the sort of stubborn stuff that becomes an All Black prop. Hence this impassioned plea: GOOD TOWNSFOLK OF CODEPLEX-SANS-SIGNS, WHAT SHOULD I DO NEXT? Can somebody point me to Main Street? How does a simpleton install 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\HtmlDocument.cs'? I'm willing to prostrate myself and grovel to the first kind face that passes in front of my rapidly clouding sight. So help me, I'd even tug my forelock if I had one! Should I hold forth my rod over the wilderness, and create a folder called 'C:\Source\htmlagilitypack\Trunk\HtmlAgilityPack\' or some such? If so, what files should I move into it? ANYTHING else a dum-ass should know about? - and I mean ANYTHING - you just don't know how witless a punch-drunk prop can be.. %( Whenever I've installed other programs they've given me an ".exe" or ".msi" that I can click on and it's all done for me like magic. HEY... there's nothing of that nature here, is there? Am I missing something? Something for dummies to click? (From the waiting rooms of Dr I. Sight Phixes) (signed) The Prop
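For readers who land in the same ruck: the three files from the first Download button are the whole library. HtmlAgilityPack.dll is the assembly to reference from your project, while the .pdb and .XML files are optional debugging and IntelliSense companions. The "Locating source" messages quoted above are just the debugger hunting for the library's own source code; they do not mean the install failed. Assuming a plain project reference to HtmlAgilityPack.dll, a minimal usage sketch looks roughly like this:

// Sketch only: typical HtmlAgilityPack usage, assuming a reference to HtmlAgilityPack.dll.
using System;
using HtmlAgilityPack;

class Example
{
    static void Main()
    {
        var doc = new HtmlDocument();
        doc.LoadHtml("<html><body><a href='http://example.com'>link</a></body></html>");

        // Query the parsed DOM with XPath and print each href value.
        foreach (HtmlNode link in doc.DocumentNode.SelectNodes("//a[@href]"))
        {
            Console.WriteLine(link.GetAttributeValue("href", string.Empty));
        }
    }
}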


  • Built-in GZip/Deflate Compression on IIS 7.x

    - by Rick Strahl
    IIS 7 improves internal compression functionality dramatically making it much easier than previous versions to take advantage of compression that’s built-in to the Web server. IIS 7 also supports dynamic compression which allows automatic compression of content created in your own applications (ASP.NET or otherwise!). The scheme is based on content-type sniffing and so it works with any kind of Web application framework. While static compression on IIS 7 is super easy to set up and turned on by default for most text content (text/*, which includes HTML and CSS, as well as for JavaScript, Atom, XAML, XML), setting up dynamic compression is a bit more involved, mostly because the various default compression settings are set in multiple places down the IIS –> ASP.NET hierarchy. Let’s take a look at each of the two approaches available: Static Compression Compresses static content from the hard disk. IIS can cache this content by compressing the file once and storing the compressed file on disk and serving the compressed alias whenever static content is requested and it hasn’t changed. The overhead for this is minimal and should be aggressively enabled. Dynamic Compression Works against application generated output from applications like your ASP.NET apps. Unlike static content, dynamic content must be compressed every time a page that requests it regenerates its content. As such dynamic compression has a much bigger impact than static caching. How Compression is configured Compression in IIS 7.x  is configured with two .config file elements in the <system.WebServer> space. The elements can be set anywhere in the IIS/ASP.NET configuration pipeline all the way from ApplicationHost.config down to the local web.config file. The following is from the the default setting in ApplicationHost.config (in the %windir%\System32\inetsrv\config forlder) on IIS 7.5 with a couple of small adjustments (added json output and enabled dynamic compression): <?xml version="1.0" encoding="UTF-8"?> <configuration> <system.webServer> <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files"> <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/json" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> <staticTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="*/*" enabled="false" /> </staticTypes> </httpCompression> <urlCompression doStaticCompression="true" doDynamicCompression="true" /> </system.webServer> </configuration> You can find documentation on the httpCompression and urlCompression keys here respectively: http://msdn.microsoft.com/en-us/library/ms690689%28v=vs.90%29.aspx http://msdn.microsoft.com/en-us/library/aa347437%28v=vs.90%29.aspx The httpCompression Element – What and How to compress Basically httpCompression configures what types to compress and how to compress them. It specifies the DLL that handles gzip encoding and the types of documents that are to be compressed. Types are set up based on mime-types which looks at returned Content-Type headers in HTTP responses. 
For example, I added the application/json to mime type to my dynamic compression types above to allow that content to be compressed as well since I have quite a bit of AJAX content that gets sent to the client. The UrlCompression Element – Enables and Disables Compression The urlCompression element is a quick way to turn compression on and off. By default static compression is enabled server wide, and dynamic compression is disabled server wide. This might be a bit confusing because the httpCompression element also has a doDynamicCompression attribute which is set to true by default, but the urlCompression attribute by the same name actually overrides it. The urlCompression element only has three attributes: doStaticCompression, doDynamicCompression and dynamicCompressionBeforeCache. The doCompression attributes are the final determining factor whether compression is enabled, so it’s a good idea to be explcit! The default for doDynamicCompression='false”, but doStaticCompression="true"! Static Compression is enabled by Default, Dynamic Compression is not Because static compression is very efficient in IIS 7 it’s enabled by default server wide and there probably is no reason to ever change that setting. Dynamic compression however, since it’s more resource intensive, is turned off by default. If you want to enable dynamic compression there are a few quirks you have to deal with, namely that enabling it in ApplicationHost.config doesn’t work. Setting: <urlCompression doDynamicCompression="true" /> in applicationhost.config appears to have no effect and I had to move this element into my local web.config to make dynamic compression work. This is actually a smart choice because you’re not likely to want dynamic compression in every application on a server. Rather dynamic compression should be applied selectively where it makes sense. However, nowhere is it documented that the setting in applicationhost.config doesn’t work (or more likely is overridden somewhere and disabled lower in the configuration hierarchy). So: remember to set doDynamicCompression=”true” in web.config!!! How Static Compression works Static compression works against static content loaded from files on disk. Because this content is static and not bound to change frequently – such as .js, .css and static HTML content – it’s fairly easy for IIS to compress and then cache the compressed content. The way this works is that IIS compresses the files into a special folder on the server’s hard disk and then reads the content from this location if already compressed content is requested and the underlying file resource has not changed. The semantics of serving an already compressed file are very efficient – IIS still checks for file changes, but otherwise just serves the already compressed file from the compression folder. The compression folder is located at: %windir%\inetpub\temp\IIS Temporary Compressed Files\ApplicationPool\ If you look into the subfolders you’ll find compressed files: These files are pre-compressed and IIS serves them directly to the client until the underlying files are changed. As I mentioned before – static compression is on by default and there’s very little reason to turn that functionality off as it is efficient and just works out of the box. The one tweak you might want to do is to set the compression level to maximum. Since IIS only compresses content very infrequently it would make sense to apply maximum compression. 
You can do this with the staticCompressionLevel setting on the scheme element: <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> Other than that the default settings are probably just fine. Dynamic Compression – not so fast! By default dynamic compression is disabled and that’s actually quite sensible – you should use dynamic compression very carefully and think about what content you want to compress. In most applications it wouldn’t make sense to compress *all* generated content as it would generate a significant amount of overhead. Scott Fortsyth has a great post that details some of the performance numbers and how much impact dynamic compression has. Depending on how busy your server is you can play around with compression and see what impact it has on your server’s performance. There are also a few settings you can tweak to minimize the overhead of dynamic compression. Specifically the httpCompression key has a couple of CPU related keys that can help minimize the impact of Dynamic Compression on a busy server: dynamicCompressionDisableCpuUsage dynamicCompressionEnableCpuUsage By default these are set to 90 and 50 which means that when the CPU hits 90% compression will be disabled until CPU utilization drops back down to 50%. Again this is actually quite sensible as it utilizes CPU power from compression when available and falling off when the threshold has been hit. It’s a good way some of that extra CPU power on your big servers to use when utilization is low. Again these settings are something you likely have to play with. I would probably set the upper limit a little lower than 90% maybe around 70% to make this a feature that kicks in only if there’s lots of power to spare. I’m not really sure how accurate these CPU readings that IIS uses are as Cpu usage on Web Servers can spike drastically even during low loads. Don’t trust settings – do some load testing or monitor your server in a live environment to see what values make sense for your environment. Finally for dynamic compression I tend to add one Mime type for JSON data, since a lot of my applications send large chunks of JSON data over the wire. You can do that with the application/json content type: <add mimeType="application/json" enabled="true" /> What about Deflate Compression? The default compression is GZip. The documentation hints that you can use a different compression scheme and mentions Deflate compression. And sure enough you can change the compression settings to: <scheme name="deflate" dll="%Windir%\system32\inetsrv\gzip.dll" staticCompressionLevel="9" /> to get deflate style compression. The deflate algorithm produces slightly more compact output so I tend to prefer it over GZip but more HTTP clients (other than browsers) support GZip than Deflate so be careful with this option if you build Web APIs. I also had some issues with the above value actually being applied right away. Changing the scheme in applicationhost.config didn’t show up on the site  right away. It required me to do a full IISReset to get that change to show up before I saw the change over to deflate compressed content. Content was slightly more compressed with deflate – not sure if it’s worth the slightly less common compression type, but the option at least is available. IIS 7 finally makes GZip Easy In summary IIS 7 makes GZip easy finally, even if the configuration settings are a bit obtuse and the documentation is seriously lacking. 
But once you know the basic settings I’ve described here, and the fact that you can override all of this in your local web.config, it’s pretty straightforward to configure GZip support and tweak it exactly to your needs. Static compression is a total no-brainer as it adds very little overhead compared to direct static file serving and provides solid compression. Dynamic compression is a little more tricky as it does add some overhead to servers, so it probably will require some tweaking to get the right balance of CPU load vs. compression ratios. Looking at large sites like Amazon, Yahoo, NewEgg etc. – they all use GZip compression. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in IIS7, ASP.NET
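To recap the one setting that trips most people up: the urlCompression switch has to be set in the application's own web.config, while the mime types and the compression scheme stay in applicationHost.config as shown earlier. A minimal sketch of the local web.config, using only the settings discussed above:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <!-- Sketch: static compression is on by default; dynamic compression must be
         switched on here, in the application's local web.config. -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
  </system.webServer>
</configuration>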


  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of new great functionality that unifies many of the features of many of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX, WCF REST specifically - and combines them into a whole more consistent API. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But  RPC style calls that are common with AJAX callbacks in Web applications, have gotten a lot less focus and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks. RPC vs. 'Proper' REST RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns' RPC calls tend simply map a server side operation, passing in parameters and receiving a typed result where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired. RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon.  Proper REST has its place - for 'real' API scenarios that manage and publish/share resources, but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case Web API does a good job of providing both RPC abstraction as well as the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences especially if you're coming from ASP.NET AJAX service or WCF Rest when it comes to multiple parameters. Action Routing for RPC Style Calls If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that. 
Thankfully Web API also supports Action based routing which allows you create RPC style endpoints fairly easily:RouteTable.Routes.MapHttpRoute( name: "AlbumRpcApiAction", routeTemplate: "albums/{action}/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi", action = "GetAblums" } ); This uses traditional MVC style {action} method routing which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like above lets you specify an end point method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself or pass parameters on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters as was easily done with WCF REST or ASP.NET AJAX things are not so obvious. Web API has no issue allowing for single parameter like this:[HttpPost] public string PostAlbum(Album album) { return String.Format("{0} {1:d}", album.AlbumName, album.Entered); } There are actually two ways to call this endpoint: albums/PostAlbum Using the Model Binder with plain POST values In this mechanism you're sending plain urlencoded POST values to the server which the ModelBinder then maps the parameter. Each property value is matched to each matching POST value. This works similar to the way that MVC's  ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:$.ajax( { url: "albums/PostAlbum", type: "POST", data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" }, success: function (result) { alert(result); }, error: function (xhr, status, p3, p4) { var err = "Error " + " " + status + " " + p3; if (xhr.responseText && xhr.responseText[0] == "{") err = JSON.parse(xhr.responseText).message; alert(err); } }); Here's what the POST data looks like for this request: The model binder and it's straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests. Using Web API JSON Formatter The other option is to post data using a JSON string. The process for this is similar except that you create a JavaScript object and serialize it to JSON first.album = { AlbumName: "PowerAge", Entered: new Date(1977,0,1) } $.ajax( { url: "albums/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify(album), success: function (result) { alert(result); } }); Here the data is sent using a JSON object rather than form data and the data is JSON encoded over the wire. The trace reveals that the data is sent using plain JSON (Source above), which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that WebAPI automatically deals with the date. I provided the date as a plain string, rather than a JavaScript date value and the Formatter and ModelBinder both automatically map the date propertly to the Entered DateTime property of the Album object. Passing multiple Parameters to a Web API Controller Single parameters work fine in either of these RPC scenarios and that's to be expected. ModelBinding always works against a single object because it maps a model. 
But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:[HttpPost] public string PostAlbum(Album album, string userToken) Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straight forward either with WCF REST and ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with WebAPI. There a few workarounds that you can use to make this work: Use both POST *and* QueryString Parameters in Conjunction If you have both complex and simple parameters, you can pass simple parameters on the query string. The above would actually work with: /album/PostAlbum?userToken=sekkritt but that's not always possible. In this example it might not be a good idea to pass a user token on the query string though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types. Use a single Object that wraps the two Parameters If you go by service based architecture guidelines every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters and the output should convey status as well as provide the result value. You typically have a xxxRequest and a xxxResponse class that wraps the inputs and outputs. Here's what this method might look like:public PostAlbumResponse PostAlbum(PostAlbumRequest request) { var album = request.Album; var userToken = request.UserToken; return new PostAlbumResponse() { IsSuccess = true, Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered,userToken) }; } with these support types:public class PostAlbumRequest { public Album Album { get; set; } public User User { get; set; } public string UserToken { get; set; } } public class PostAlbumResponse { public string Result { get; set; } public bool IsSuccess { get; set; } public string ErrorMessage { get; set; } }   To call this method you now have to assemble these objects on the client and send it up as JSON:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result.Result); } }); I assemble the individual types first and then combine them in the data: property of the $.ajax() call into the actual object passed to the server, that mimics the structure of PostAlbumRequest server class that has Album, User and UserToken properties. This works well enough but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (like you always pass an album or usertoken) you might be able to abstract this to use a single object that gets reused for all methods, but this gets confusing too: Overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actual can use. Use JObject to parse multiple Property Values out of an Object If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls. 
Rather than directly calling a service you always passed an object which contained properties for each parameter: { parm1: Value, parm2: Value2 } WCF REST/ASP.NET AJAX would then parse this top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for it's JSON serializer you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then using the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:[HttpPost] public string PostAlbum(JObject jsonData) { dynamic json = jsonData; JObject jalbum = json.Album; JObject juser = json.User; string token = json.UserToken; var album = jalbum.ToObject<Album>(); var user = juser.ToObject<User>(); return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token); } This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jquery code:var album = { AlbumName: "PowerAge", Entered: "1/1/1977" } var user = { Name: "Rick" } var userToken = "sekkritt"; $.ajax( { url: "samples/PostAlbum", type: "POST", contentType: "application/json", data: JSON.stringify({ Album: album, User: user, UserToken: userToken }), success: function (result) { alert(result); } }); Summary ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things like passing multiple strongly typed object parameters will work a bit differently. It's not insurmountable, but just knowing what options are available to simulate this behavior is good to know. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters or a single content parameter at least along with some identifier parameters that can be passed on the querystring. But saying that doesn't mean that occasionally you don't run into a situation where you have the need to pass several objects to the server and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API. 
At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api
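A small postscript to the first workaround (POST body plus query string): Web API's binding attributes let you state that split explicitly instead of relying on the default binding rules. The following is a sketch, assuming the same Album type and albums/{action} route used above:

// Sketch: complex type bound from the JSON body, simple token bound from the query string.
// Called as: POST /albums/PostAlbum?userToken=sekkritt  with an Album object as the JSON payload.
[HttpPost]
public string PostAlbum([FromBody] Album album, [FromUri] string userToken)
{
    return String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken);
}

The attributes are optional here, since complex types default to body binding and simple types default to URI binding, but they make the intent obvious to the next person reading the controller.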


  • CodePlex Daily Summary for Monday, December 06, 2010

    CodePlex Daily Summary for Monday, December 06, 2010Popular ReleasesAura: Aura Preview 1: Rewritten from scratch. This release supports getting color only from icon of foreground window.myCollections: Version 1.2: New in version 1.2: Big performance improvement. New Design (Added Outlook style View, New detail view, New Groub By...) Added Sort by Media Added Manage Movie Studio Zoom preference is now saved. Media name are now editable. Added Portuguese version You can now Hide details panel Add support for FLAC tags You can now imports books from BibTex Xml file BugFixingmytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web Web for install hosting System Requirements: NET 4.0, MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) mytrip.mvc 1.0.49.0 beta src System Requirements: Visual Studio 2010 or Web Deweloper 2010 MSSQL 2008 or MySql (auto creation table to database) if .\SQLEXPRESS auto creation database (App_Data folder) Connector/Net 6.3.4, MVC3 RC WARNING For run and debug mytrip.mvc 1.0.49.0 beta src download and ...Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported(where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element(all content is now placed in OverlayCanvas and is accessed by the new ...SubtitleTools: SubtitleTools 1.0: First public releaseMiniTwitter: 1.62: MiniTwitter 1.62 ???? ?? ??????????????????????????????????????? 140 ?????????????????????????? ???????????????????????????????? ?? ??????????????????????????????????Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targetted for stable daily use. With improved performance and enhanced compatibility with several latest PHP open source applications; it makes this release perfect replacement of your old PHP runtime. Changes made within this release include following and much more: Performance improvements based on real-world applications experience. We determined biggest bottlenecks and we found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...Chronos WPF: Chronos v2.0 Beta 3: Release notes: Updated introduction document. Updated Visual Studio 2010 Extension (vsix) package. Added horizontal scrolling to the main window TaskBar. Added new styles for ListView, ListViewItem, GridViewColumnHeader, ... Added a new WindowViewModel class (allowing to fetch data). Added a new Navigate method (with several overloads) to the NavigationViewModel class (protected). Reimplemented Task usage for the WorkspaceViewModel.OnDelete method. Removed the reflection effect...MDownloader: MDownloader-0.15.26.7024: Fixed updater; Fixed MegauploadDJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? 
Update to support jQuery 1.4.2 Update to support jQuery ui 1.8.6 Update to Visual Studio 2010 New WebControls with samples added Autocomplete WebControl Button WebControl ToggleButt WebControl The example web site is including in source code project.LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Unterschiede zur Vorgängerversion: - Zusätzliche Interior Properties - Group / Ungroup Methoden für Range - Bugfix COM Reference Handling für Application Objekt in einigen Klassen Release+Samples V0.7g: - Enthält Laufzeit DLL und Beispielprojekte Beispielprojekte: COMAddinExample - Demonstriert ein versionslos angebundenes COMAddin Example01 - Background Colors und Borders für Cells Example02 - Font Attributes undAlignment für Cells Example03 - Numberformats Example04 - Shapes, WordArts, P...ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1 Added Windows Phone 7 build. New controls added: InfoWindow ChildPage (Windows Phone 7 only) See what's new here full details for : http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: Requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes Atomic CMS installation guide Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, Today we are releasing Visifire 3.6.5 beta with the following new feature: New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. Also this release includes few bug fixes: AxisXLabel label were getting clipped if angle was set for AxisLabels and ScrollingEnabled was not set in Chart. If LabelStyle property was set as 'Inside', size of the Pie was not proper. Yo...EnhSim: EnhSim 2.1.1: 2.1.1This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...AI: Initial 0.0.1: It’s simply just one code file; it simulates AI and machine in a simulated world. The AI has a little understanding of its body machine and parts, and able to use its feet to do actions just start and stop walking. The world is all of white with nothing but just the machine on a white planet. Colors, odors and position information make no sense. 
I’m previous C# programmer and I’m learning F# during this project, although I’m still not a good F# programmer, in this project I learning to prog...NKinect: NKinect Preview: Build features: Accelerometer reading Motor serial number property Realtime image update Realtime depth calculation Export to PLY (On demand) Control motor LED Control Kinect tiltMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, lightning-fast indexing and query via Lucene. Limita...Minecraft GPS: Minecraft GPS 1.1.1: New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes Fixed style so listbox won't grow the window size. Fixed open file dialog issue on non-vista kernel machines.New ProjectsAboutTime: The AboutTime WPF controls project is aimed at developing custom controls that relate to time.aReader: aReader is a free software, it's used as an XPS document reader. It's developed in C# Language and use Windows Presentation Foundation technology with .NET Framework 3.5. Mixed with Ribbon Controls Library for GUI (Graphic User Interface) make this application user friendly.Battle Net Info: Battle Net Info provides information of the StarCraft2 player from his profile pageBencoder: Library for encode/decode bencode file or string. It's developing on C#.BiBongNet: BiBongNet Project.Binhnt: BinhntC++ Bloom Filter Library: C++ Bloom Filter LibraryChild Sponsorship Manager: Sponsorship Manager is developed for a NPO that provides child sponsorship in developing countries. It is possible to track sponsor child relations, gifts and payments. It is developed in visual basic . netDocBlogger: This is a tool for automatically converting existing XML comments from your project into MSDN style HTML for posting to the codeplex site. This will use the MetaBlog API to post code, but can be used in a copy paste fashion right away.Dynamic Rdlc WebControl: "Dynamic Rdlc WebControl" is an ASP .NET WebControl to generate dynamic reports in RDLC format without generate physical files. Suports groups and totalizers. 
It is developed with Microsoft Visual Studio 2010, ASP .NET and C# 4.Fake Call for Windows Phone 7: Coding4Fun Windows Phone 7 fake call applicationFlow launcher: Flow is the worlds fastest application launcher, using an onscreen keyboard and mnemonics to achieve lightning fast shortcut launching.GaDotNet: GaDotNet is an open source library designed to make it easy to log page views, events and transactions, through c# code, without using JavaScript or even needing to have a browser.HackerNews for WP7: HackerNews is a WP7 client for the HackerNews website.How much is this meeting costing us?: Coding4Fun Windows Phone 7 "How much is this meeting costing us?" applicationKLAB: KLABMap Navigator: Map Navigator - it's a silverlight application intended to work with maps.MNRT: MNRT implements (demonstrates) several techniques to realize fast global illumination for dynamic scenes on Graphics Processing Units (GPUs) using CUDA. A GPU-based kd-tree was implemented to accelerate both ray tracing and photon mapping.MVC Helpers: MVC Helper makes developing views easier. It contains extended helpers classes to render view content. It is developed in C#.Net Extended Helpers for Grid has been created so far. MVCPets: This is a projected dedicated to providing a free platform to be used by animal rescue organizations. The hope is that this project can fill the void for those rescue groups that can't afford to pay a professional web designer/developer.MyGraphicProgram: ???????????????NAI: This project is a step by step illustration of some Numerical Analysis methods.Nemono: Nemono is an application that runs in the background, and is activated by pressing a key combination like ALT+W. When activated, Nemono uses context awareness to present relevant shortcuts to the user, and mnemonics to execute shortcuts.opojo: opojoOxyPlot: OxyPlot is a .NET library for making XY line plots. The focus is on simplicity and performance. The library contains custom controls for WPF and Windows Forms. The plots can also be exported to SVG, PDF and PNG.PowerChumby: PowerChumby is a Perl CGI script and a PowerShell module that gives you a PowerShell way of controlling your Chumby.RHoK Berlin Visio Projekt: Random Hacks of Kindness - Berlin Projekt für die Senatsverwaltung für Gesundheit, Umwelt und Verbraucherschutz Query, integrate and display external data in Microsoft Visio. It's developed in C#.sc2md: starcraft.md news portalSlide Show: Coding4Fun Windows Phone 7 Slide Show applicationsmartcon: smart control centerTFS Fav source: Favourites for source location in VSTwitter Followers Monitor: Free and Open Source tool that will let you monitor any Twitter account for its new & lost followers even if it's not yours and you don't have its credentials. It allows you to add several Twitter accounts and be updated right from your desktop.


  • Configure Forms based authentication in SharePoint 2010

    - by sreejukg
      Configuring form authentication is a straight forward task in SharePoint. Mostly public facing websites built on SharePoint requires form based authentication. Recently, one of the WCM implementation where I was included in the project team required registration system. Any internet user can register to the site and the site offering them some membership specific functionalities once the user logged in. Since the registration open for all, I don’t want to store all those users in Active Directory. I have decided to use Forms based authentication for those users. This is a typical scenario of form authentication in SharePoint implementation. To implement form authentication you require the following A data store where you are storing the users – technically this can be active directory, SQL server database, LDAP etc. Form authentication will redirect the user to the login page, if the request is not authenticated. In the login page, there will be controls that validate the user inputs against the configured data store. In this article, I am going to use SQL server database with ASP.Net membership API’s to configure form based authentication in SharePoint 2010. This article assumes that you have SQL membership database available. I already configured the membership and roles database using aspnet_regsql command. If you want to know how to configure membership database using aspnet_regsql command, read the below blog post. http://weblogs.asp.net/sreejukg/archive/2011/06/16/usage-of-aspnet-regsql-exe-in-asp-net-4.aspx The snapshot of the database after implementing membership and role manager is as follows. I have used the database name “aspnetdb_claim”. Make sure you have created the database and make sure your database contains tables and stored procedures for membership. Create a web application with claims based authentication. This article assumes you already created a web application using claims based authentication. If you want to enable forms based authentication in SharePoint 2010, you must enable claims based authentication. Read this post for creating a web application using claims based authentication. http://weblogs.asp.net/sreejukg/archive/2011/06/15/create-a-web-application-in-sharepoint-2010-using-claims-based-authentication.aspx  You make sure, you have selected enable form authentication, and then selected Membership provider and Role manager name. To make sure you are done with the configuration, navigate to central administration website, from central administration, navigate to the Web Applications page, select the web application and click on icon, you will see the authentication providers for the current web application. Go to the section Claims authentication types, and make sure you have enabled forms based authentication. As mentioned in the snapshot, I have named the membership provider as SPFormAuthMembership and role manager as SPFormAuthRoleManager. You can choose your own names as you need. Modify the configuration files(Web.Config) to enable form authentication There are three applications that needs to be configured to support form authentication. The following are those applications. Central Administration If you want to assign permissions to web application using the credentials from form authentication, you need to update Central Administration configuration. If you do not want to access form authentication credentials from Central Administration, just leave this step.  
STS service application Security Token service is the service application that issues security token when users are logging in. You need to modify the configuration of STS application to make sure users are able to login. To find the STS application, follow the following steps Go to the IIS Manager Expand the sites Node, you will see SharePoint Web Services Expand SharePoint Web Services, you can see SecurityTokenServiceApplication Right click SecuritytokenServiceApplication and click explore, it will open the corresponding file system. By default, the path for STS is C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\WebServices\SecurityToken You need to modify the configuration file available in the mentioned location. The web application that needs to be enabled with form authentication. You need to modify the configuration of your web application to make sure your web application identifies users from the form authentication.   Based on the above, I am going to modify the web configuration. At end of each step, I have mentioned the expected output. I recommend you to go step by step and after each step, make sure the configuration changes are working as expected. If you do everything all together, and test your application at the end, you may face difficulties in troubleshooting the configuration errors. Modifications for Central Administration Web.Config Open the web.config for the Central administration in a text editor. I always prefer Visual Studio, for editing web.config. In most cases, the path of the web.config for the central administration website is as follows C:\inetpub\wwwroot\wss\VirtualDirectories\<port number> Make sure you keep a backup copy of the web.config, before editing it. Let me summarize what we are going to do with Central Administration web.config. First I am going to add a connection string that points to the form authentication database, that I created as mentioned in previous steps. Then I need to add a membership provider and a role manager with the corresponding connectionstring. Then I need to update the peoplepickerwildcards section to make sure the users are appearing in search results. By default there is no connection string available in the web.config of Central Administration. Add a connection string just after the configsections element. The below is the connection string I have used all over the article. <add name="FormAuthConnString" connectionString="Initial Catalog=yourdatabasename;data source=databaseservername;Integrated Security=SSPI;" /> Once you added the connection string, the web.config look similar to Now add membership provider to the code. In web.config for CA, there will be <membership> tag, search for it. You will find membership and role manager under the <system.web> element. Under the membership providers section add the below code… <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding memberhip element, see the snapshot of the web.config. Now you need to add role manager element to the web.config. Insider providers element under rolemanager, add the below code. 
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> After adding, your role manager will look similar to the following. As a last step, you need to update the people picker wildcard element in web.config, so that the users from your membership provider are available for browsing in Central Administration. Search for PeoplePickerWildcards in the web.config, add the following inside the <PeoplePickerWildcards> tag. <add key="SPFormAuthMembership" value="%" /> After adding this element, your web.config will look like After completing these steps, you can browse the users available in the SQL server database from central administration website. Go to the site collection administrator’s page from central administration. Select the site collection you have created for form authentication. Click on the people picker icon, choose Forms Auth and click on the search icon, you will see the users listed from the SQL server database. Once you complete these steps, make sure the users are available for browsing from central administration website. If you are unable to find the users, there must be some errors in the configuration, check windows event logs to find related errors and fix them. Change the web.config for STS application Open the web.config for STS application in text editor. By default, STS web.config does not have system.Web or connectionstrings section. Just after the System.Webserver element, add the following code. <connectionStrings> <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" /> </connectionStrings> <system.web> <roleManager enabled="true" cacheRolesInCookie="false" cookieName=".ASPXROLES" cookieTimeout="30" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="All" createPersistentCookie="false" maxCachedResults="25"> <providers> <add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </roleManager> <membership userIsOnlineTimeWindow="15" hashAlgorithmType=""> <providers> <add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" /> </providers> </membership> </system.web> See the snapshot of the web.config after adding the required elements. After adding this, you should be able to login using the credentials from SQL server. Try assigning a user as primary/secondary administrator for your site collection from Central Administration and login to your site using form authentication. If you made everything correct, you should be able to login. This means you have successfully completed configuration of STS Configuration of Web Application for Form Authentication As a last step, you need to modify the web.config of the form authentication web application. Once you have done this, you should be able to grant permissions to users stored in the membership database. Open the Web.config of the web application you created for form authentication. 
You can find the web.config for the application under the path C:\inetpub\wwwroot\wss\VirtualDirectories\<port number>. Basically you need to add the connection string, the membership provider, and the role manager, and update the people picker wildcard configuration. Add the connection string (the same one you added to the web.config in Central Administration). See the screenshot after the connection string has been added. Search for <membership> in the web.config; you will find it inside the system.web element. There will be other providers already available there. Add your form authentication membership provider (similar to the one added to the Central Administration web.config) to the providers element under membership. Find the snapshot of the membership configuration as follows. Search for the <roleManager> element in web.config and add the new provider under the providers section of the roleManager element. See the snapshot of web.config after the new provider has been added. Now you need to configure the peoplePickerWildcards setting in web.config. As I specified earlier, this is to make sure you can locate the users by entering a part of their username. Add the same SPFormAuthMembership wildcard entry under the <PeoplePickerWildcards> element that you added in Central Administration. See the screenshot of the peoplePickerWildcards element after the element has been added. Now you have completed all the setup for form authentication. Navigate to the web application. From Site Actions -> Site Settings, go to People and Groups. Click on New -> Add Users; it will pop up the people picker dialog. Click on the icon, select Form Auth, enter a username in the search textbox, and click on the search icon. See the screenshot of the admin search when I tried searching the users. If it displays the user, it means you are done with the configuration. If you add users to the form authentication database, the users will be able to access the SharePoint portal as normal.
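For reference, here is a minimal sketch of how the three additions to the content web application's web.config hang together, assuming the same provider names, application name, and connection string used throughout this walkthrough (SPFormAuthMembership, SPFormAuthRoleManager, FormAuthApplication, FormAuthConnString); leave the attributes and providers SharePoint has already placed in these elements untouched and only add the new entries.

<!-- added just after <configSections> -->
<connectionStrings>
  <add name="FormAuthConnString" connectionString="Initial Catalog=aspnetdb_claim;data source=sp2010_db;Integrated Security=SSPI;" />
</connectionStrings>

<!-- added inside the existing <membership><providers> element under <system.web> -->
<add name="SPFormAuthMembership" type="System.Web.Security.SqlMembershipProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />

<!-- added inside the existing <roleManager><providers> element under <system.web> -->
<add name="SPFormAuthRoleManager" type="System.Web.Security.SqlRoleProvider, System.Web, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" applicationName="FormAuthApplication" connectionStringName="FormAuthConnString" />

<!-- added inside the existing <PeoplePickerWildcards> element -->
<add key="SPFormAuthMembership" value="%" />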

    Read the article

  • Elfsign Object Signing on Solaris

    - by danx
Elfsign Object Signing on Solaris Don't let this happen to you—use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and ELF format files distributed with Solaris, since Solaris 10, are signed by either Sun Microsystems or its successor, Oracle Corporation. When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, the algorithm OID, the signer CN/OU, and a time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature). ELF executable files may also be signed by a 3rd party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently-released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time. Elfsign Algorithms Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that's computed from the signature and the RSA public key. Originally elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1, and then in Solaris 11.1 SRU 7 (5/2013), the elfsign crypto algorithms available have been expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:
Elfsign Algorithm | Solaris Release | Comments
elfsign sign -F rsa_md5_sha1 | S10, S11.0, S11.1 | Default for S10. Not recommended*
elfsign sign -F rsa_sha1 | S11.1 | Default for S11.1. Not recommended
elfsign sign -F rsa_sha256 | S11.1 SRU 7 and later | Recommended
*Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems. RSA Key Length. I recommend using an RSA-2048 key length with elfsign as the best balance between a long expected "life time", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67). Step 1: create or obtain a key and cert The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or create your own self-signed key and cert. I'll briefly explain both methods. Obtaining a Certificate from a CA To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions of the CA on their website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
You need to request a RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA_2048 key and a CSR for sending to a CA: $ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" \ outkey=MYPRIVATEKEY.key $ openssl rsa -noout -text -in MYPRIVATEKEY.key Private-Key: (2048 bit) modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 publicExponent: 65537 (0x10001) privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81 prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69 prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99 exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5 coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b $ openssl req -noout -text -in MYCSR.p10 Certificate Request: Data: Version: 2 (0x2) Subject: OU=Canine SW object signing, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7 Exponent: 65537 (0x10001) Attributes: Signature Algorithm: sha1WithRSAEncryption b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4 Secure storage of RSA private key. The private key needs to be protected if the key signing is used for production (as opposed to just testing). That is, protect the key to protect against unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore. The private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore using pktool(1). Otherwise others can sign your signature. Other secure key storage mechanisms include a SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key. Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR. $ pktool setpin # use if PIN not set yet Enter token passphrase: changeme Create new passphrase: Re-enter new passphrase: Passphrase changed. $ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \ format=pem outcsr=MYCSR.p10 \ subject="CN=canineswworks.com,OU=Canine SW object signing" $ pktool list keystore=pkcs11 Enter PIN for Sun Software PKCS#11 softtoken: Found 1 asymmetric public keys. Key #1 - RSA public key: MYPRIVATEKEY Here's another example that uses openssl instead of pktool to generate a private key and CSR: $ openssl genrsa -out cert.key 2048 $ openssl req -new -key cert.key -out MYCSR.p10 Self-Signed Cert You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above. 
The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool and displays the contents with the openssl command. $ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \ keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \ subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com" $ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem Found 1 certificates. 1. (X.509 certificate) Filename: MYSELFSIGNED.pem ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46 Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Not Before: Oct 17 23:18:00 2013 GMT Not After: Oct 12 23:18:00 2033 GMT Serial: 0xD06F00D0 Signature Algorithm: sha256WithRSAEncryption $ openssl x509 -noout -text -in MYSELFSIGNED.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3496935632 (0xd06f00d0) Signature Algorithm: sha256WithRSAEncryption Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Validity Not Before: Oct 17 23:18:00 2013 GMT Not After : Oct 12 23:18:00 2033 GMT Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com Subject Public Key Info: Public Key Algorithm: rsaEncryption Public-Key: (2048 bit) Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 Exponent: 65537 (0x10001) Signature Algorithm: sha256WithRSAEncryption 9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8 $ openssl rsa -noout -text -in MYSELFSIGNED.key Private-Key: (2048 bit) modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77 publicExponent: 65537 (0x10001) privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . 9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24: c1 prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3 exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1 exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf Step 2: Sign the ELF File object By now you should have your private key, and obtained, by hook or crook, a cert (either from a CA or use one you created (a self-signed cert). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, signs, verifies, and lists the contents of the ELF signature. $ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}'>hello.c $ make hello cc -o hello hello.c $ elfsign verify -v -c MYSELFSIGNED.pem -e hello elfsign: no signature found in hello. $ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello elfsign: hello signed successfully. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:22:49 PM PDT. 
$ elfsign list -f format -e hello rsa_sha256 $ elfsign list -f signer -e hello O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com $ elfsign list -f time -e hello October 17, 2013 04:22:49 PM PDT $ elfsign verify -v -c MYSELFSIGNED.key -e hello elfsign: verification of hello failed. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:22:49 PM PDT. Signing using the pkcs11 keystore To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-K MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATKEY is the pkcs11 token label. Step 3: Install the cert and test on another system Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example: $ su Password: # rm /etc/certs/MYSELFSIGNED.key # cp MYSELFSIGNED.pem /etc/certs # exit $ elfsign verify -v hello elfsign: verification of hello passed. format: rsa_sha256. signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com. signed on: October 17, 2013 04:24:20 PM PDT. After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied. Under the Hood: elfsign verification Here's the steps taken to verify a ELF file signed with elfsign. The steps to sign the file are similar except the private key exponent is used instead of the public key exponent and the .SUNW_signature section is written to the ELF file instead of being read from the file. Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table Extract the RSA signature (RSA-2048) from the .SUNW_signature section Extract the RSA public key modulus and public key exponent (65537) from the public key cert Calculate the expected digest as follows:     signaturepublicKeyExponent % publicKeyModulus Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00. If the actual digest == expected digest, the ELF file is verified (OK). Further Information elfsign(1), pktool(1), and openssl(1) man pages. "Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign. "Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates. "How to Create a Certificate by Using the pktool gencert Command" System Administration Guide: Security Services (available at docs.oracle.com)
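As a footnote to the verification steps above, the modular-exponentiation and padding-strip steps can be illustrated with nothing but openssl. This is only a hedged sketch, not part of elfsign itself: it assumes the raw RSA signature bytes have already been extracted from the .SUNW_signature section into a file named sig.bin (that extraction is not shown), and it demonstrates the math rather than a complete verification.

$ openssl x509 -in MYSELFSIGNED.pem -pubkey -noout > MYPUBKEY.pem
$ openssl rsautl -verify -pubin -inkey MYPUBKEY.pem -in sig.bin | od -An -tx1

The rsautl -verify operation computes signature^publicKeyExponent mod publicKeyModulus and strips the PKCS#1 padding, so the bytes printed by od should match the SHA-256 digest elfsign computes over the loaded ELF sections (ELF header, .SUNW_signature section, and symbol table excluded).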

    Read the article

  • Solaris 11.1: Changes to included FOSS packages

    - by alanc
    Besides the documentation changes I mentioned last time, another place you can see Solaris 11.1 changes before upgrading is in the online package repository, now that the 11.1 packages have been published to http://pkg.oracle.com/solaris/release/, as the “0.175.1.0.0.24.2” branch. (Oracle Solaris Package Versioning explains what each field in that version string means.) When you’re ready to upgrade to the packages from either this repo, or the support repository, you’ll want to first read How to Update to Oracle Solaris 11.1 Using the Image Packaging System by Pete Dennis, as there are a couple issues you will need to be aware of to do that upgrade, several of which are due to changes in the Free and Open Source Software (FOSS) packages included with Solaris, as I’ll explain in a bit. Solaris 11 can update more readily than Solaris 10 In the Solaris 10 and older update models, the way the updates were built constrained what changes we could make in those releases. To change an existing SVR4 package in those releases, we created a Solaris Patch, which applied to a given version of the SVR4 package and replaced, added or deleted files in it. These patches were released via the support websites (originally SunSolve, now My Oracle Support) for applying to existing Solaris 10 installations, and were also merged into the install images for the next Solaris 10 update release. (This Solaris Patches blog post from Gerry Haskins dives deeper into that subject.) Some of the restrictions of this model were that package refactoring, changes to package dependencies, and even just changing the package version number, were difficult to do in this hybrid patch/OS update model. For instance, when Solaris 10 first shipped, it had the Xorg server from X11R6.8. Over the first couple years of update releases we were able to keep it up to date by replacing, adding, & removing files as necessary, taking it all the way up to Xorg server release 1.3 (new version numbering begun after the X11R7 split of the X11 tree into separate modules gave each module its own version). But if you run pkginfo on the SUNWxorg-server package, you’ll see it still displayed a version number of 6.8, confusing users as to which version was actually included. We stopped upgrading the Xorg server releases in Solaris 10 after 1.3, as later versions added new dependencies, such as HAL, D-Bus, and libpciaccess, which were very difficult to manage in this patching model. (We later got libpciaccess to work, but HAL & D-Bus would have been much harder due to the greater dependency tree underneath those.) Similarly, every time the GNOME team looked into upgrading Solaris 10 past GNOME 2.6, they found these constraints made it so difficult it wasn’t worthwhile, and eventually GNOME’s dependencies had changed enough it was completely infeasible. Fortunately, this worked out for both the X11 & GNOME teams, with our management making the business decision to concentrate on the “Nevada” branch for desktop users - first as Solaris Express Desktop Edition, and later as OpenSolaris, so we didn’t have to fight to try to make the package updates fit into these tight constraints. Meanwhile, the team designing the new packaging system for Solaris 11 was seeing us struggle with these problems, and making this much easier to manage for both the development teams and our users was one of their big goals for the IPS design they were working on. 
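As a quick illustration of the version-reporting gap described above, compare what the two packaging systems tell you about the X server (commands only; exact output varies by system and patch level, so none is shown):

$ pkginfo -l SUNWxorg-server     (Solaris 10: the SVR4 VERSION field still reflects the original 6.8 base)
$ pkg info x11/server/xorg       (Solaris 11.x: IPS reports the component's actual version)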
Now that we’ve reached the first update release to Solaris 11, we can start to see the fruits of their labors, with more FOSS updates in 11.1 than we had in many Solaris 10 update releases, keeping software more up to date with the upstream communities. Of course, just because we can more easily update now, doesn’t always mean we should or will do so, it just removes the package system limitations from forcing the decision for us. So while we’ve upgraded the X Window System in the 11.1 release from X11R7.6 to 7.7, the Solaris GNOME team decided it was not the right time to try to make the jump from GNOME 2 to GNOME 3, though they did update some individual components of the desktop, especially those with security fixes like Firefox. In other parts of the system, decisions as to what to update were prioritized based on how they affected other projects, or what customer requests we’d gotten for them. So with all that background in place, what packages did we actually update or add between Solaris 11.0 and 11.1? Core OS Functionality One of the FOSS changes with the biggest impact in this release is the upgrade from Grub Legacy (0.97) to Grub 2 (1.99) for the x64 platform boot loader. This is the cause of one of the upgrade quirks, since to go from Solaris 11.0 to 11.1 on x64 systems, you first need to update the Boot Environment tools (such as beadm) to a new version that can handle boot environments that use the Grub2 boot loader. System administrators can find the details they need to know about the new Grub in the Administering the GRand Unified Bootloader chapter of the Booting and Shutting Down Oracle Solaris 11.1 Systems guide. This change was necessary to be able to support new hardware coming into the x64 marketplace, including systems using UEFI firmware or booting off disk drives larger than 2 terabytes. For both platforms, Solaris 11.1 adds rsyslog as an optional alternative to the traditional syslogd, and OpenSCAP for checking security configuration settings are compliant with site policies. Note that the support repo actually has newer versions of BIND & fetchmail than the 11.1 release, as some late breaking critical fixes came through from the community upstream releases after the Solaris 11.1 release was frozen, and made their way to the support repository. These are responsible for the other big upgrade quirk in this release, in which to upgrade a system which already installed those versions from the support repo, you need to either wait for those packages to make their way to the 11.1 branch of the support repo, or follow the steps in the aforementioned upgrade walkthrough to let the package system know it's okay to temporarily downgrade those. Developer Stack While Solaris 11.0 included Python 2.7, many of the bundled python modules weren’t packaged for it yet, limiting its usability. For 11.1, many more of the python modules include 2.7 versions (enough that I filtered them out of the below table, but you can always search on the package repository server for them. For other language runtimes and development tools, 11.1 expands the use of IPS mediated links to choose which version of a package is the default when the packages are designed to allow multiple versions to install side by side. For instance, in Solaris 11.0, GNU automake 1.9 and 1.10 were provided, and developers had to run them as either automake-1.9 or automake-1.10. 
In Solaris 11.1, when automake 1.11 was added, also added was a /usr/bin/automake mediated link, which points to the automake-1.11 program by default, but can be changed to another version by running the pkg set-mediator command. Mediated links were also used for the Java runtime & development kits in 11.1, changing the default versions to the Java 7 releases (the 1.7.0.x package versions), while allowing admins to switch links such as /usr/bin/javac back to Java 6 if they need to for their site, to deal with Java 7 compatibility or other issues, without having to update each usage to use the full versioned /usr/jdk/jdk1.6.0_35/bin/javac paths for every invocation. Desktop Stack As I mentioned before, we upgraded from X11R7.6 to X11R7.7, since a pleasant coincidence made the X.Org release dates line up nicely with our feature & code freeze dates for this release. (Or perhaps it wasn’t so coincidental, after all, one of the benefits of being the person making the release is being able to decide what schedule is most convenient for you, and this one worked well for me.) For the table below, I’ve skipped listing the packages in which we use the X11 “katamari” version for the Solaris package version (mainly packages combining elements of multiple upstream modules with independent version numbers), since they just all changed from 7.6 to 7.7. In the graphics drivers, we worked with Intel to update the Intel Integrated Graphics Processor support to support 3D graphics and kernel mode setting on the Ivy Bridge chipsets, and updated Nvidia’s non-FOSS graphics driver from 280.13 to 295.20. Higher up in the desktop stack, PulseAudio was added for audio support, and liblouis for Braille support, and the GNOME applications were built to use them. The Mozilla applications, Firefox & Thunderbird moved to the current Extended Support Release (ESR) versions, 10.x for each, to bring up-to-date security fixes without having to be on Mozilla’s agressive 6 week feature cycle release train. Detailed list of changes This table shows most of the changes to the FOSS packages between Solaris 11.0 and 11.1. As noted above, some were excluded for clarity, or to reduce noise and duplication. All the FOSS packages which didn't change the version number in their packaging info are not included, even if they had updates to fix bugs, security holes, or add support for new hardware or new features of Solaris. 
Package11.011.1 archiver/unrar 3.8.5 4.1.4 audio/sox 14.3.0 14.3.2 backup/rdiff-backup 1.2.1 1.3.3 communication/im/pidgin 2.10.0 2.10.5 compress/gzip 1.3.5 1.4 compress/xz not included 5.0.1 database/sqlite-3 3.7.6.3 3.7.11 desktop/remote-desktop/tigervnc 1.0.90 1.1.0 desktop/window-manager/xcompmgr 1.1.5 1.1.6 desktop/xscreensaver 5.12 5.15 developer/build/autoconf 2.63 2.68 developer/build/autoconf/xorg-macros 1.15.0 1.17 developer/build/automake-111 not included 1.11.2 developer/build/cmake 2.6.2 2.8.6 developer/build/gnu-make 3.81 3.82 developer/build/imake 1.0.4 1.0.5 developer/build/libtool 1.5.22 2.4.2 developer/build/makedepend 1.0.3 1.0.4 developer/documentation-tool/doxygen 1.5.7.1 1.7.6.1 developer/gnu-binutils 2.19 2.21.1 developer/java/jdepend not included 2.9 developer/java/jdk-6 1.6.0.26 1.6.0.35 developer/java/jdk-7 1.7.0.0 1.7.0.7 developer/java/jpackage-utils not included 1.7.5 developer/java/junit 4.5 4.10 developer/lexer/jflex not included 1.4.1 developer/parser/byaccj not included 1.14 developer/parser/java_cup not included 0.10 developer/quilt 0.47 0.60 developer/versioning/git 1.7.3.2 1.7.9.2 developer/versioning/mercurial 1.8.4 2.2.1 developer/versioning/subversion 1.6.16 1.7.5 diagnostic/constype 1.0.3 1.0.4 diagnostic/nmap 5.21 5.51 diagnostic/scanpci 0.12.1 0.13.1 diagnostic/wireshark 1.4.8 1.8.2 diagnostic/xload 1.1.0 1.1.1 editor/gnu-emacs 23.1 23.4 editor/vim 7.3.254 7.3.600 file/lndir 1.0.2 1.0.3 image/editor/bitmap 1.0.5 1.0.6 image/gnuplot 4.4.0 4.6.0 image/library/libexif 0.6.19 0.6.21 image/library/libpng 1.4.8 1.4.11 image/library/librsvg 2.26.3 2.34.1 image/xcursorgen 1.0.4 1.0.5 library/audio/pulseaudio not included 1.1 library/cacao 2.3.0.0 2.3.1.0 library/expat 2.0.1 2.1.0 library/gc 7.1 7.2 library/graphics/pixman 0.22.0 0.24.4 library/guile 1.8.4 1.8.6 library/java/javadb 10.5.3.0 10.6.2.1 library/java/subversion 1.6.16 1.7.5 library/json-c not included 0.9 library/libedit not included 3.0 library/libee not included 0.3.2 library/libestr not included 0.1.2 library/libevent 1.3.5 1.4.14.2 library/liblouis not included 2.1.1 library/liblouisxml not included 2.1.0 library/libtecla 1.6.0 1.6.1 library/libtool/libltdl 1.5.22 2.4.2 library/nspr 4.8.8 4.8.9 library/openldap 2.4.25 2.4.30 library/pcre 7.8 8.21 library/perl-5/subversion 1.6.16 1.7.5 library/python-2/jsonrpclib not included 0.1.3 library/python-2/lxml 2.1.2 2.3.3 library/python-2/nose not included 1.1.2 library/python-2/pyopenssl not included 0.11 library/python-2/subversion 1.6.16 1.7.5 library/python-2/tkinter-26 2.6.4 2.6.8 library/python-2/tkinter-27 2.7.1 2.7.3 library/security/nss 4.12.10 4.13.1 library/security/openssl 1.0.0.5 (1.0.0e) 1.0.0.10 (1.0.0j) mail/thunderbird 6.0 10.0.6 network/dns/bind 9.6.3.4.3 9.6.3.7.2 package/pkgbuild not included 1.3.104 print/filter/enscript not included 1.6.4 print/filter/gutenprint 5.2.4 5.2.7 print/lp/filter/foomatic-rip 3.0.2 4.0.15 runtime/java/jre-6 1.6.0.26 1.6.0.35 runtime/java/jre-7 1.7.0.0 1.7.0.7 runtime/perl-512 5.12.3 5.12.4 runtime/python-26 2.6.4 2.6.8 runtime/python-27 2.7.1 2.7.3 runtime/ruby-18 1.8.7.334 1.8.7.357 runtime/tcl-8/tcl-sqlite-3 3.7.6.3 3.7.11 security/compliance/openscap not included 0.8.1 security/nss-utilities 4.12.10 4.13.1 security/sudo 1.8.1.2 1.8.4.5 service/network/dhcp/isc-dhcp 4.1 4.1.0.6 service/network/dns/bind 9.6.3.4.3 9.6.3.7.2 service/network/ftp (ProFTPD) 1.3.3.0.5 1.3.3.0.7 service/network/samba 3.5.10 3.6.6 shell/conflict 0.2004.9.1 0.2010.6.27 shell/pipe-viewer 1.1.4 1.2.0 shell/zsh 4.3.12 4.3.17 
system/boot/grub 0.97 1.99 system/font/truetype/liberation 1.4 1.7.2 system/library/freetype-2 2.4.6 2.4.9 system/library/libnet 1.1.2.1 1.1.5 system/management/cim/pegasus 2.9.1 2.11.0 system/management/ipmitool 1.8.10 1.8.11 system/management/wbem/wbemcli 1.3.7 1.3.9.1 system/network/routing/quagga 0.99.8 0.99.19 system/rsyslog not included 6.2.0 terminal/luit 1.1.0 1.1.1 text/convmv 1.14 1.15 text/gawk 3.1.5 3.1.8 text/gnu-grep 2.5.4 2.10 web/browser/firefox 6.0.2 10.0.6 web/browser/links 1.0 1.0.3 web/java-servlet/tomcat 6.0.33 6.0.35 web/php-53 not included 5.3.14 web/php-53/extension/php-apc not included 3.1.9 web/php-53/extension/php-idn not included 0.2.0 web/php-53/extension/php-memcache not included 3.0.6 web/php-53/extension/php-mysql not included 5.3.14 web/php-53/extension/php-pear not included 5.3.14 web/php-53/extension/php-suhosin not included 0.9.33 web/php-53/extension/php-tcpwrap not included 1.1.3 web/php-53/extension/php-xdebug not included 2.2.0 web/php-common not included 11.1 web/proxy/squid 3.1.8 3.1.18 web/server/apache-22 2.2.20 2.2.22 web/server/apache-22/module/apache-sed 2.2.20 2.2.22 web/server/apache-22/module/apache-wsgi not included 3.3 x11/diagnostic/xev 1.1.0 1.2.0 x11/diagnostic/xscope 1.3 1.3.1 x11/documentation/xorg-docs 1.6 1.7 x11/keyboard/xkbcomp 1.2.3 1.2.4 x11/library/libdmx 1.1.1 1.1.2 x11/library/libdrm 2.4.25 2.4.32 x11/library/libfontenc 1.1.0 1.1.1 x11/library/libfs 1.0.3 1.0.4 x11/library/libice 1.0.7 1.0.8 x11/library/libsm 1.2.0 1.2.1 x11/library/libx11 1.4.4 1.5.0 x11/library/libxau 1.0.6 1.0.7 x11/library/libxcb 1.7 1.8.1 x11/library/libxcursor 1.1.12 1.1.13 x11/library/libxdmcp 1.1.0 1.1.1 x11/library/libxext 1.3.0 1.3.1 x11/library/libxfixes 4.0.5 5.0 x11/library/libxfont 1.4.4 1.4.5 x11/library/libxft 2.2.0 2.3.1 x11/library/libxi 1.4.3 1.6.1 x11/library/libxinerama 1.1.1 1.1.2 x11/library/libxkbfile 1.0.7 1.0.8 x11/library/libxmu 1.1.0 1.1.1 x11/library/libxmuu 1.1.0 1.1.1 x11/library/libxpm 3.5.9 3.5.10 x11/library/libxrender 0.9.6 0.9.7 x11/library/libxres 1.0.5 1.0.6 x11/library/libxscrnsaver 1.2.1 1.2.2 x11/library/libxtst 1.2.0 1.2.1 x11/library/libxv 1.0.6 1.0.7 x11/library/libxvmc 1.0.6 1.0.7 x11/library/libxxf86vm 1.1.1 1.1.2 x11/library/mesa 7.10.2 7.11.2 x11/library/toolkit/libxaw7 1.0.9 1.0.11 x11/library/toolkit/libxt 1.0.9 1.1.3 x11/library/xtrans 1.2.6 1.2.7 x11/oclock 1.0.2 1.0.3 x11/server/xdmx 1.10.3 1.12.2 x11/server/xephyr 1.10.3 1.12.2 x11/server/xorg 1.10.3 1.12.2 x11/server/xorg/driver/xorg-input-keyboard 1.6.0 1.6.1 x11/server/xorg/driver/xorg-input-mouse 1.7.1 1.7.2 x11/server/xorg/driver/xorg-input-synaptics 1.4.1 1.6.2 x11/server/xorg/driver/xorg-input-vmmouse 12.7.0 12.8.0 x11/server/xorg/driver/xorg-video-ast 0.91.10 0.93.10 x11/server/xorg/driver/xorg-video-ati 6.14.1 6.14.4 x11/server/xorg/driver/xorg-video-cirrus 1.3.2 1.4.0 x11/server/xorg/driver/xorg-video-dummy 0.3.4 0.3.5 x11/server/xorg/driver/xorg-video-intel 2.10.0 2.18.0 x11/server/xorg/driver/xorg-video-mach64 6.9.0 6.9.1 x11/server/xorg/driver/xorg-video-mga 1.4.13 1.5.0 x11/server/xorg/driver/xorg-video-openchrome 0.2.904 0.2.905 x11/server/xorg/driver/xorg-video-r128 6.8.1 6.8.2 x11/server/xorg/driver/xorg-video-trident 1.3.4 1.3.5 x11/server/xorg/driver/xorg-video-vesa 2.3.0 2.3.1 x11/server/xorg/driver/xorg-video-vmware 11.0.3 12.0.2 x11/server/xserver-common 1.10.3 1.12.2 x11/server/xvfb 1.10.3 1.12.2 x11/server/xvnc 1.0.90 1.1.0 x11/session/sessreg 1.0.6 1.0.7 x11/session/xauth 1.0.6 1.0.7 x11/session/xinit 1.3.1 1.3.2 x11/transset 
0.9.1 1.0.0 x11/trusted/trusted-xorg 1.10.3 1.12.2 x11/x11-window-dump 1.0.4 1.0.5 x11/xclipboard 1.1.1 1.1.2 x11/xclock 1.0.5 1.0.6 x11/xfd 1.1.0 1.1.1 x11/xfontsel 1.0.3 1.0.4 x11/xfs 1.1.1 1.1.2 P.S. To get the version numbers for this table, I ran a quick perl script over the output from: % pkg contents -H -r -t depend -a type=incorporate -o fmri \ `pkg contents -H -r -t depend -a type=incorporate -o fmri [email protected],5.11-0.175.1.0.0.24` \ | sort /tmp/11.1 % pkg contents -H -r -t depend -a type=incorporate -o fmri \ `pkg contents -H -r -t depend -a type=incorporate -o fmri [email protected],5.11-0.175.0.0.0.2` \ | sort /tmp/11.0
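Returning to the mediated links mentioned earlier, here is roughly how an administrator would inspect and switch a mediation, using the java example from above; the command forms are those documented for pkg(1), and the values are only illustrative:

$ pkg mediator java                (show which version /usr/bin/java, javac, etc. currently resolve to)
# pkg set-mediator -V 1.6 java     (as a privileged user, point the mediated links back at the Java 6 delivery)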

    Read the article

  • jQuery, ASP.NET, and Browser History

    - by Stephen Walther
    One objection that people always raise against Ajax applications concerns browser history. Because an Ajax application updates its content by performing sneaky Ajax postbacks, the browser backwards and forwards buttons don’t work as you would normally expect. In a normal, non-Ajax application, when you click the browser back button, you return to a previous state of the application. For example, if you are paging through a set of movie records, you might return to the previous page of records. In an Ajax application, on the other hand, the browser backwards and forwards buttons do not work as you would expect. If you navigate to the second page in a list of records and click the backwards button, you won’t return to the previous page. Most likely, you will end up navigating away from the application entirely (which is very unexpected and irritating). Bookmarking presents a similar problem. You cannot bookmark a particular page of records in an Ajax application because the address bar does not reflect the state of the application. The Ajax Solution There is a solution to both of these problems. To solve both of these problems, you must take matters into your own hands and take responsibility for saving and restoring your application state yourself. Furthermore, you must ensure that the address bar gets updated to reflect the state of your application. In this blog entry, I demonstrate how you can take advantage of a jQuery library named bbq that enables you to control browser history (and make your Ajax application bookmarkable) in a cross-browser compatible way. The JavaScript Libraries In this blog entry, I take advantage of the following four JavaScript files: jQuery-1.4.2.js – The jQuery library. Available from the Microsoft Ajax CDN at http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js jquery.pager.js – Used to generate pager for navigating records. Available from http://plugins.jquery.com/project/Pager microtemplates.js – John Resig’s micro-templating library. Available from http://ejohn.org/blog/javascript-micro-templating/ jquery.ba-bbq.js – The Back Button and Query (BBQ) Library. Available from http://benalman.com/projects/jquery-bbq-plugin/ All of these libraries, with the exception of the Micro-templating library, are available under the MIT open-source license. The Ajax Application Let’s start by building a simple Ajax application that enables you to page through a set of movie database records, 3 records at a time. We’ll use my favorite database named MoviesDB. This database contains a Movies table that looks like this: We’ll create a data model for this database by taking advantage of the ADO.NET Entity Framework. The data model looks like this: Finally, we’ll expose the data to the universe with the help of a WCF Data Service named MovieService.svc. The code for the data service is contained in Listing 1. 
Listing 1 – MovieService.svc using System.Data.Services; using System.Data.Services.Common; namespace WebApplication1 { public class MovieService : DataService<MoviesDBEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Movies", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } } The WCF Data Service in Listing 1 exposes the movies so that you can query the movie database table with URLs that looks like this: http://localhost:2474/MovieService.svc/Movies -- Returns all movies http://localhost:2474/MovieService.svc/Movies?$top=5 – Returns 5 movies The HTML page in Listing 2 enables you to page through the set of movies retrieved from the WCF Data Service. Listing 2 – Original.html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Movies with History</title> <link href="Design/Pager.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Page <span id="pageNumber"></span> of <span id="pageCount"></span></h1> <div id="pager"></div> <br style="clear:both" /><br /> <div id="moviesContainer"></div> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> <script src="App_Scripts/jquery.pager.js" type="text/javascript"></script> <script type="text/javascript"> var pageSize = 3, pageIndex = 0; // Show initial page of movies showMovies(); function showMovies() { // Build OData query var query = "/MovieService.svc" // base URL + "/Movies" // top-level resource + "?$skip=" + pageIndex * pageSize // skip records + "&$top=" + pageSize // take records + " &$inlinecount=allpages"; // include total count of movies // Make call to WCF Data Service $.ajax({ dataType: "json", url: query, success: showMoviesComplete }); } function showMoviesComplete(result) { // unwrap results var movies = result["d"]["results"]; var movieCount = result["d"]["__count"] // Show movies using template var showMovie = tmpl("<li><%=Id%> - <%=Title %></li>"); var html = ""; for (var i = 0; i < movies.length; i++) { html += showMovie(movies[i]); } $("#moviesContainer").html(html); // show pager $("#pager").pager({ pagenumber: (pageIndex + 1), pagecount: Math.ceil(movieCount / pageSize), buttonClickCallback: selectPage }); // Update page number and page count $("#pageNumber").text(pageIndex + 1); $("#pageCount").text(movieCount); } function selectPage(pageNumber) { pageIndex = pageNumber - 1; showMovies(); } </script> </body> </html> The page in Listing 3 has the following three functions: showMovies() – Performs an Ajax call against the WCF Data Service to retrieve a page of movies. showMoviesComplete() – When the Ajax call completes successfully, this function displays the movies by using a template. This function also renders the pager user interface. selectPage() – When you select a particular page by clicking on a page number in the pager UI, this function updates the current page index and calls the showMovies() function. Figure 1 illustrates what the page looks like when it is opened in a browser. Figure 1 If you click the page numbers then the browser history is not updated. Clicking the browser forward and backwards buttons won’t move you back and forth in browser history. 
Furthermore, the address displayed in the address bar does not change when you navigate to different pages. You cannot bookmark any page except for the first page. Adding Browser History The Back Button and Query (bbq) library enables you to add support for browser history and bookmarking to a jQuery application. The bbq library supports two important methods: jQuery.bbq.pushState(object) – Adds state to browser history. jQuery.bbq.getState(key) – Gets state from browser history. The bbq library also supports one important event: hashchange – This event is raised when the part of an address after the hash # is changed. The page in Listing 3 demonstrates how to use the bbq library to add support for browser navigation and bookmarking to an Ajax page. Listing 3 – Default.html <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Movies with History</title> <link href="Design/Pager.css" rel="stylesheet" type="text/css" /> </head> <body> <h1>Page <span id="pageNumber"></span> of <span id="pageCount"></span></h1> <div id="pager"></div> <br style="clear:both" /><br /> <div id="moviesContainer"></div> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/jquery.ba-bbq.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> <script src="App_Scripts/jquery.pager.js" type="text/javascript"></script> <script type="text/javascript"> var pageSize = 3, pageIndex = 0; $(window).bind('hashchange', function (e) { pageIndex = e.getState("pageIndex") || 0; pageIndex = parseInt(pageIndex); showMovies(); }); $(window).trigger('hashchange'); function showMovies() { // Build OData query var query = "/MovieService.svc" // base URL + "/Movies" // top-level resource + "?$skip=" + pageIndex * pageSize // skip records + "&$top=" + pageSize // take records +" &$inlinecount=allpages"; // include total count of movies // Make call to WCF Data Service $.ajax({ dataType: "json", url: query, success: showMoviesComplete }); } function showMoviesComplete(result) { // unwrap results var movies = result["d"]["results"]; var movieCount = result["d"]["__count"] // Show movies using template var showMovie = tmpl("<li><%=Id%> - <%=Title %></li>"); var html = ""; for (var i = 0; i < movies.length; i++) { html += showMovie(movies[i]); } $("#moviesContainer").html(html); // show pager $("#pager").pager({ pagenumber: (pageIndex + 1), pagecount: Math.ceil(movieCount / pageSize), buttonClickCallback: selectPage }); // Update page number and page count $("#pageNumber").text(pageIndex + 1); $("#pageCount").text(movieCount); } function selectPage(pageNumber) { pageIndex = pageNumber - 1; $.bbq.pushState({ pageIndex: pageIndex }); } </script> </body> </html> Notice the first chunk of JavaScript code in Listing 3: $(window).bind('hashchange', function (e) { pageIndex = e.getState("pageIndex") || 0; pageIndex = parseInt(pageIndex); showMovies(); }); $(window).trigger('hashchange'); When the hashchange event occurs, the current pageIndex is retrieved by calling the e.getState() method. The value is returned as a string and the value is cast to an integer by calling the JavaScript parseInt() function. Next, the showMovies() method is called to display the page of movies. 
The $(window).trigger() method is called to raise the hashchange event so that the initial page of records will be displayed. When you click a page number, the selectPage() method is invoked. This method adds the current page index to the address by calling the following method: $.bbq.pushState({ pageIndex: pageIndex }); For example, if you click on page number 2, then page index 1 is saved to the URL and the browser address is updated to look like: /Default.htm#pageIndex=1 If you click on page 3 then the browser address is updated to look like: /Default.htm#pageIndex=2 Because the browser address is updated when you navigate to a new page number, the browser backwards and forwards buttons will work to navigate you backwards and forwards through the page numbers. When you click page 2, and click the backwards button, you will navigate back to page 1. Furthermore, you can bookmark a particular page of records. For example, if you bookmark the URL /Default.htm#pageIndex=1 then you will get the second page of records whenever you open the bookmark. Summary You should not avoid building Ajax applications because of worries concerning browser history or bookmarks. By taking advantage of a JavaScript library such as the bbq library, you can make your Ajax applications behave in exactly the same way as a normal web application.
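To show how the same pattern extends beyond a single value, here is a minimal sketch (not from the listings above) that keeps two keys in the hash. It assumes jquery.ba-bbq.js is loaded, that showMovies() has been changed to accept both values, and that the sortBy key is purely hypothetical:

$(window).bind('hashchange', function (e) {
    var pageIndex = parseInt(e.getState("pageIndex") || 0, 10);
    var sortBy = e.getState("sortBy") || "Title";  // hypothetical second piece of state
    showMovies(pageIndex, sortBy);                 // assumes showMovies() now takes both arguments
});

function selectPage(pageNumber) {
    // pushState merges the new value into the existing hash, so sortBy is preserved
    $.bbq.pushState({ pageIndex: pageNumber - 1 });
}

function selectSort(column) {
    $.bbq.pushState({ sortBy: column });
}

$(window).trigger('hashchange');

Because every combination of pageIndex and sortBy ends up in the address bar, each one is both navigable with the back button and bookmarkable.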

    Read the article

  • Ops Center 12c - Provisioning Solaris Using a Card-Based NIC

    - by scottdickson
    It's been a long time since last I added something here, but having some conversations this last week, I got inspired to update things. I've been spending a lot of time with Ops Center for managing and installing systems these days.  So, I suspect a number of my upcoming posts will be in that area. Today, I want to look at how to provision Solaris using Ops Center when your network is not connected to one of the built-in NICs.  We'll talk about how this can work for both Solaris 10 and Solaris 11, since they are pretty similar.  In both cases, WANboot is a key piece of the story. Here's what I want to do:  I have a Sun Fire T2000 server with a Quad-GbE nxge card installed.  The only network is connected to port 2 on that card rather than the built-in network interfaces.  I want to install Solaris on it across the network, either Solaris 10 or Solaris 11.  I have met with a lot of customers lately who have a similar architecture.  Usually, they have T4-4 servers with the network connected via 10GbE connections. Add to this mix the fact that I use Ops Center to manage the systems in my lab, so I really would like to add this to Ops Center.  If possible, I would like this to be completely hands free.  I can't quite do that yet. Close, but not quite. WANBoot or Old-Style NetBoot? When a system is installed from the network, it needs some help getting the process rolling.  It has to figure out what its network configuration (IP address, gateway, etc.) ought to be.  It needs to figure out what server is going to help it boot and install, and it needs the instructions for the installation.  There are two different ways to bootstrap an installation of Solaris on SPARC across the network.   The old way uses a broadcast of RARP or more recently DHCP to obtain the IP configuration and the rest of the information needed.  The second is to explicitly configure this information in the OBP and use WANBoot for installation WANBoot has a number of benefits over broadcast-based installation: it is not restricted to a single subnet; it does not require special DHCP configuration or DHCP helpers; it uses standard HTTP and HTTPS protocols which traverse firewalls much more easily than NFS-based package installation.  But, WANBoot is not available on really old hardware and WANBoot requires the use o Flash Archives in Solaris 10.  Still, for many people, this is a great approach. As it turns out, WANBoot is necessary if you plan to install using a NIC on a card rather than a built-in NIC. Identifying Which Network Interface to Use One of the trickiest aspects to this process, and the one that actually requires manual intervention to set up, is identifying how the OBP and Solaris refer to the NIC that we want to use to boot.  The OBP already has device aliases configured for the built-in NICs called net, net0, net1, net2, net3.  The device alias net typically points to net0 so that when you issue the command  "boot net -v install", it uses net0 for the boot.  Our task is to figure out the network instance for the NIC we want to use.  We will need to get to the OBP console of the system we want to install in order to figure out what the network should be called.  I will presume you know how to get to the ok prompt.  Once there, we have to see what networks the OBP sees and identify which one is associated with our NIC using the OBP command show-nets. SunOS Release 5.11 Version 11.0 64-bit Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. 
{4} ok banner Sun Fire T200, No Keyboard Copyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548. Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c. {4} ok show-nets a) /pci@7c0/pci@0/pci@2/network@0,1 b) /pci@7c0/pci@0/pci@2/network@0 c) /pci@780/pci@0/pci@8/network@0,3 d) /pci@780/pci@0/pci@8/network@0,2 e) /pci@780/pci@0/pci@8/network@0,1 f) /pci@780/pci@0/pci@8/network@0 g) /pci@780/pci@0/pci@1/network@0,1 h) /pci@780/pci@0/pci@1/network@0 q) NO SELECTION Enter Selection, q to quit: d /pci@780/pci@0/pci@8/network@0,2 has been selected. Type ^Y ( Control-Y ) to insert it in the command line. e.g. ok nvalias mydev ^Y for creating devalias mydev for /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias ... net3 /pci@7c0/pci@0/pci@2/network@0,1 net2 /pci@7c0/pci@0/pci@2/network@0 net1 /pci@780/pci@0/pci@1/network@0,1 net0 /pci@780/pci@0/pci@1/network@0 net /pci@780/pci@0/pci@1/network@0 ... name aliases By looking at the devalias and the show-nets output, we can see that our Quad-GbE card must be the device nodes starting with  /pci@780/pci@0/pci@8/network@0.  The cable for our network is plugged into the 3rd slot, so the device address for our network must be /pci@780/pci@0/pci@8/network@0,2. With that, we can create a device alias for our network interface.  Naming the device alias may take a little bit of trial and error, especially in Solaris 11 where the device alias seems to matter more with the new virtualized network stack. So far in my testing, since this is the "next" network interface to be used, I have found success in naming it net4, even though it's a NIC in the middle of a card that might, by rights, be called net6 (assuming the 0th interface on the card is the next interface identified by Solaris and this is the 3rd interface on the card).  So, we will call it net4.  We need to assign a device alias to it: {4} ok nvalias net4 /pci@780/pci@0/pci@8/network@0,2 {4} ok devalias net4 /pci@780/pci@0/pci@8/network@0,2 ... We also may need to have the MAC for this particular interface, so let's get it, too.  To do this, we go to the device and interrogate its properties. {4} ok cd /pci@780/pci@0/pci@8/network@0,2 {4} ok .properties assigned-addresses 82060210 00000000 03000000 00000000 01000000 82060218 00000000 00320000 00000000 00008000 82060220 00000000 00328000 00000000 00008000 82060230 00000000 00600000 00000000 00100000 local-mac-address 00 21 28 20 42 92 phy-type mif ... From this, we can see that the MAC for this interface is  00:21:28:20:42:92.  We will need this later. This is all we need to do at the OBP.  Now, we can configure Ops Center to use this interface. Network Boot in Solaris 10 Solaris 10 turns out to be a little simpler than Solaris 11 for this sort of a network boot.  Since WANBoot in Solaris 10 fetches a specified In order to install the system using Ops Center, it is necessary to create a OS Provisioning profile and its corresponding plan.  I am going to presume that you already know how to do this within Ops Center 12c and I will just cover the differences between a regular profile and a profile that can use an alternate interface. Create a OS Provisioning profile for Solaris 10 as usual.  However, when you specify the network resources for the primary network, click on the name of the NIC, probably GB_0, and rename it to GB_N/netN, where N is the instance number you used previously in creating the device alias.  This is where the trial and error may come into play.  
You may need to try a few instance numbers before you, the OBP, and Solaris all agree on the instance number.  Mark this as the boot network. For Solaris 10, you ought to be able to then apply the OS Provisioning profile to the server and it should install using that interface.  And if you put your cards in the same slots and plug the networks into the same NICs, this profile is reusable across multiple servers. Why This Works If you watch the console as Solaris boots during the OSP process, Ops Center is going to look for the device alias netN.  Since WANBoot requires a device alias called just net, Ops Center uses the value of your netN device alias and assigns that device to the net alias.  That means that boot net will automatically use this device.  Very cool!  Here's a trace from the console as Ops Center provisions a server: Sun Sun Fire T200, No KeyboardCopyright (c) 1998, 2010, Oracle and/or its affiliates. All rights reserved.OpenBoot 4.30.4.b, 32640 MB memory available, Serial #69057548.Ethernet address 0:14:4f:1d:bc:c, Host ID: 841dbc0c.auto-boot? =            false{0} ok  {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=0100144F1DBC0C,file=http://10.140.204.22:8004/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2 See what happened?  Ops Center looked for the network device alias called net4 that we specified in the profile, took the value from it, and made it the net device alias for the boot.  Pretty cool! WANBoot and Solaris 11 Solaris 11 requires an additional step since the Automated Installer in Solaris 11 uses the MAC address of the network to figure out which manifest to use for system installation.  In order to make sure this is available, we have to take an extra step to associate the MAC of the NIC on the card with the host.  So, in addition to creating the device alias like we did above, we also have to declare to Ops Center that the host has this new MAC. Declaring the NIC Start out by discovering the hardware as usual.  Once you have discovered it, take a look under the Connectivity tab to see what networks it has discovered.  In the case of this system, it shows the 4 built-in networks, but not the networks on the additional cards.  These are not directly visible to the system controller.  In order to add the additional network interface to the hardware asset, it is necessary to Declare it.  We will declare that we have a server with this additional NIC, but we will also  specify the existing GB_0 network so that Ops Center can associate the right resources together.  
The GB_0 acts as sort of a key to tie our new declaration to the old system already discovered.  Go to the Assets tab, select All Assets, and then in the Actions tab, select Add Asset.  Rather than going through a discovery this time, we will manually declare a new asset. When we declare it, we will give the hostname, IP address, system model that match those that have already been discovered.  Then, we will declare both GB_0 with its existing MAC and the new GB_4 with its MAC.  Remember that we collected the MAC for GB_4 when we created its device alias. After you declare the asset, you will see the new NIC in the connectivity tab for the asset.  You will notice that only the NICs you listed when you declared it are seen now.  If you want Ops Center to see all of the existing NICs as well as the additional one, declare them as well.  Add the other GB_1, GB_2, GB_3 links and their MACs just as you did GB_0 and GB_4.  Installing the OS  Once you have declared the asset, you can create an OS Provisioning profile for Solaris 11 in the same way that you did for Solaris 10.  The only difference from any other provisioning profile you might have created already is the network to use for installation.  Again, use GB_N/netN where N is the interface number you used for your device alias and in your declaration.  And away you go.  When the system boots from the network, the automated installer (AI) is able to see which system manifest to use, based on the new MAC that was associated, and the system gets installed. {0} ok {0} ok printenv network-boot-argumentsnetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok devalias net net                      /pci@780/pci@0/pci@1/network@0{0} ok devalias net4 net4                     /pci@780/pci@0/pci@8/network@0,2{0} ok devalias net /pci@780/pci@0/pci@8/network@0,2{0} ok setenv network-boot-arguments host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cginetwork-boot-arguments =  host-ip=10.140.204.234,router-ip=10.140.204.1,subnet-mask=255.255.254.0,hostname=atl-sewr-52,client-id=01002128204292,file=http://10.140.204.22:5555/cgi-bin/wanboot-cgi{0} ok {0} ok boot net - installBoot device: /pci@780/pci@0/pci@8/network@0,2  File and args: - install/pci@780/pci@0/pci@8/network@0,2: 1000 Mbps link up<time unavailable> wanboot info: WAN boot messages->console<time unavailable> wanboot info: configuring /pci@780/pci@0/pci@8/network@0,2...SunOS Release 5.11 Version 11.0 64-bitCopyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved.Remounting root read/writeProbing for device nodes ...Preparing network image for useDownloading solaris.zlib--2012-02-17 15:10:17--  http://10.140.204.22:5555/var/js/AI/sparc//solaris.zlibConnecting to 10.140.204.22:5555... connected.HTTP request sent, awaiting response... 200 OKLength: 126752256 (121M) [text/plain]Saving to: `/tmp/solaris.zlib'100%[======================================>] 126,752,256 28.6M/s   in 4.4s    2012-02-17 15:10:21 (27.3 MB/s) - `/tmp/solaris.zlib' saved [126752256/126752256] Conclusion So, why go to all of this trouble?  More and more, I find that customers are wiring their data center to only use higher speed networks - 10GbE only to the hosts.  
Some customers are moving aggressively toward consolidated networks, combining storage and network on CNA NICs.  All of this means that network-based provisioning cannot rely exclusively on the built-in network interfaces, so it's important to be able to provision a system using something other than the built-in networks.  It turns out that this is pretty straightforward for both Solaris 10 and Solaris 11, and it fits into the Ops Center deployment process quite nicely. Hopefully, you will be able to use this as you build out your own private cloud solutions with Ops Center.
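One small aside on the MAC-collection step earlier in this post: the local-mac-address property comes out of the OBP as space-separated hex bytes, while the declaration in Ops Center wants the usual colon-separated form, and the network-boot-arguments traces above show a client-id that appears to be the same MAC prefixed with 01 and stripped of separators. Here is a minimal, hypothetical Python sketch of that formatting, using the value captured for this particular card; the client-id derivation is just an observation from the traces above, not an official rule.

# Hypothetical helper: format the OBP "local-mac-address" bytes.
# Input is the space-separated hex string shown by ".properties",
# e.g. "00 21 28 20 42 92" for the NIC used in this post.

def format_mac(obp_bytes):
    # Colon-separated MAC, e.g. "00:21:28:20:42:92"
    return ":".join(obp_bytes.split())

def client_id(obp_bytes):
    # The traces above suggest client-id is "01" + the MAC with no separators.
    return "01" + "".join(obp_bytes.split()).upper()

if __name__ == "__main__":
    raw = "00 21 28 20 42 92"   # from ".properties" on network@0,2
    print(format_mac(raw))      # 00:21:28:20:42:92
    print(client_id(raw))       # 01002128204292 (matches the Solaris 11 trace)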

    Read the article

  • Ball to Ball Collision - Detection and Handling

    - by Simucal
With the help of the Stack Overflow community I've written a pretty basic, but fun, physics simulator. You click and drag the mouse to launch a ball. It will bounce around and eventually stop on the "floor". The next big feature I want to add is ball-to-ball collision. The ball's movement is broken up into an x and a y speed vector. I have gravity (a small reduction of the y vector each step) and friction (a small reduction of both vectors on each collision with a wall). The balls honestly move around in a surprisingly realistic way. I guess my question has two parts: What is the best method to detect ball-to-ball collision? Do I just have an O(n^2) loop that iterates over each ball and checks every other ball to see if its radius overlaps? What equations do I use to handle the ball-to-ball collisions? Physics 101: How does it affect the two balls' x/y speed vectors? What is the resulting direction the two balls head off in? How do I apply this to each ball? Handling the collision detection with the "walls" and the resulting vector changes was easy, but I see more complications with ball-ball collisions. With walls I simply had to take the negative of the appropriate x or y vector and off it would go in the correct direction. With balls I don't think it is that way. Some quick clarifications: for simplicity I'm OK with a perfectly elastic collision for now; also, all my balls have the same mass right now, but I might change that in the future. In case anyone is interested in playing with the simulator I have made so far, I've uploaded the source here (EDIT: Check the updated source below). Edit: Resources I have found useful 2d Ball physics with vectors: 2-Dimensional Collisions Without Trigonometry.pdf 2d Ball collision detection example: Adding Collision Detection Success! I have the ball collision detection and response working great! Relevant code: Collision Detection: for (int i = 0; i < ballCount; i++) { for (int j = i + 1; j < ballCount; j++) { if (balls[i].colliding(balls[j])) { balls[i].resolveCollision(balls[j]); } } } This checks for collisions between every pair of balls but skips redundant checks (if you have already checked whether ball 1 collides with ball 2, you don't need to check whether ball 2 collides with ball 1; it also skips checking a ball against itself). 
Then, in my ball class I have my colliding() and resolveCollision() methods: public boolean colliding(Ball ball) { float xd = position.getX() - ball.position.getX(); float yd = position.getY() - ball.position.getY(); float sumRadius = getRadius() + ball.getRadius(); float sqrRadius = sumRadius * sumRadius; float distSqr = (xd * xd) + (yd * yd); if (distSqr <= sqrRadius) { return true; } return false; } public void resolveCollision(Ball ball) { // get the mtd Vector2d delta = (position.subtract(ball.position)); float d = delta.getLength(); // minimum translation distance to push balls apart after intersecting Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius())-d)/d); // resolve intersection -- // inverse mass quantities float im1 = 1 / getMass(); float im2 = 1 / ball.getMass(); // push-pull them apart based off their mass position = position.add(mtd.multiply(im1 / (im1 + im2))); ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2))); // impact speed Vector2d v = (this.velocity.subtract(ball.velocity)); float vn = v.dot(mtd.normalize()); // sphere intersecting but moving away from each other already if (vn > 0.0f) return; // collision impulse float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2); Vector2d impulse = mtd.multiply(i); // change in momentum this.velocity = this.velocity.add(impulse.multiply(im1)); ball.velocity = ball.velocity.subtract(impulse.multiply(im2)); } Source Code: Complete source for ball to ball collider. Binary: Compiled binary in case you just want to try bouncing some balls around. If anyone has some suggestions for how to improve this basic physics simulator let me know! One thing I have yet to add is angular momentum so the balls will roll more realistically. Any other suggestions? Leave a comment!
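For reference, here is the math that resolveCollision() above is modeled on, written out as the standard frictionless impulse response with a unit collision normal (the code builds its mtd vector in this same direction, normalizes it for the vn test, and then applies essentially this impulse); positions are p_1 and p_2, velocities v_1 and v_2, masses m_1 and m_2, and e is the restitution constant:

\hat{n} = \frac{p_1 - p_2}{\lVert p_1 - p_2 \rVert}
v_n = (v_1 - v_2) \cdot \hat{n}
j = \frac{-(1 + e)\, v_n}{\tfrac{1}{m_1} + \tfrac{1}{m_2}} \quad \text{(applied only when } v_n < 0 \text{, i.e. the balls are approaching)}
v_1' = v_1 + \frac{j}{m_1}\,\hat{n}, \qquad v_2' = v_2 - \frac{j}{m_2}\,\hat{n}

With equal masses and e = 1 (the perfectly elastic, same-mass case described in the question), this reduces to simply swapping the two velocities' components along \hat{n} while leaving the tangential components untouched.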

    Read the article

  • 10 Essential Tools for building ASP.NET Websites

    - by Stephen Walther
    I recently put together a simple public website created with ASP.NET for my company at Superexpert.com. I was surprised by the number of free tools that I ended up using to put together the website. Therefore, I thought it would be interesting to create a list of essential tools for building ASP.NET websites. These tools work equally well with both ASP.NET Web Forms and ASP.NET MVC. Performance Tools After reading Steve Souders two (very excellent) books on front-end website performance High Performance Web Sites and Even Faster Web Sites, I have been super sensitive to front-end website performance. According to Souders’ Performance Golden Rule: “Optimize front-end performance first, that's where 80% or more of the end-user response time is spent” You can use the tools below to reduce the size of the images, JavaScript files, and CSS files used by an ASP.NET application. 1. Sprite and Image Optimization Framework CSS sprites were first described in an article written for A List Apart entitled CSS sprites: Image Slicing’s Kiss of Death. When you use sprites, you combine multiple images used by a website into a single image. Next, you use CSS trickery to display particular sub-images from the combined image in a webpage. The primary advantage of sprites is that they reduce the number of requests required to display a webpage. Requesting a single large image is faster than requesting multiple small images. In general, the more resources – images, JavaScript files, CSS files – that must be moved across the wire, the slower your website. However, most people avoid using sprites because they require a lot of work. You need to combine all of the images and write just the right CSS rules to display the sub-images. The Microsoft Sprite and Image Optimization Framework enables you to avoid all of this work. The framework combines the images for you automatically. Furthermore, the framework includes an ASP.NET Web Forms control and an ASP.NET MVC helper that makes it easy to display the sub-images. You can download the Sprite and Image Optimization Framework from CodePlex at http://aspnet.codeplex.com/releases/view/50869. The Sprite and Image Optimization Framework was written by Morgan McClean who worked in the office next to mine at Microsoft. Morgan was a scary smart Intern from Canada and we discussed the Framework while he was building it (I was really excited to learn that he was working on it). Morgan added some great advanced features to this framework. For example, the Sprite and Image Optimization Framework supports something called image inlining. When you use image inlining, the actual image is stored in the CSS file. Here’s an example of what image inlining looks like: .Home_StephenWalther_small-jpg { width:75px; height:100px; background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEsAAABkCAIAAABB1lpeAAAAB GdBTUEAALGOfPtRkwAAACBjSFJNAACHDwAAjA8AAP1SAACBQAAAfXkAAOmLAAA85QAAGcxzPIV3AAAKL s+zNfREAAAAASUVORK5CYII=) no-repeat 0% 0%; } The actual image (in this case a picture of me that is displayed on the home page of the Superexpert.com website) is stored in the CSS file. If you visit the Superexpert.com website then very few separate images are downloaded. For example, all of the images with a red border in the screenshot below take advantage of CSS sprites: Unfortunately, there are some significant Gotchas that you need to be aware of when using the Sprite and Image Optimization Framework. There are workarounds for these Gotchas. 
I plan to write about these Gotchas and workarounds in a future blog entry. 2. Microsoft Ajax Minifier Whenever possible you should combine, minify, compress, and cache (with a far-future header) all of your JavaScript and CSS files. The Microsoft Ajax Minifier makes it easy to minify JavaScript and CSS files. Don’t confuse minification and compression. You need to do both. According to Souders, you can reduce the size of a JavaScript file by an additional 20% (on average) by minifying it in addition to compressing it. When you minify a JavaScript or CSS file, you use various tricks to reduce the size of the file before you compress the file. For example, you can minify a JavaScript file by replacing long JavaScript variable names with short variable names and removing unnecessary white space and comments. You can minify a CSS file by doing such things as replacing long color names such as #ffffff with shorter equivalents such as #fff. The Microsoft Ajax Minifier was created by Microsoft employee Ron Logan. Internally, this tool was being used by several large Microsoft websites. We also used the tool heavily on the ASP.NET team. I convinced Ron to publish the tool on CodePlex so that everyone in the world could take advantage of it. You can download the tool from the ASP.NET Ajax website and read documentation for the tool here. I created the installer for the Microsoft Ajax Minifier. When creating the installer, I also created a Visual Studio build task to make it easy to minify all of your JavaScript and CSS files automatically whenever you do a build within Visual Studio. Read the Ajax Minifier Quick Start to learn how to configure the build task. 3. ySlow The ySlow tool is a free add-on for Firefox, created by Yahoo, that enables you to test the front-end of your website. For example, here are the current test results for the Superexpert.com website: The Superexpert.com website has an overall score of B (not perfect but not bad). The ySlow tool is not perfect. For example, the Superexpert.com website received a failing grade of F for not using a Content Delivery Network even though the website is using the Microsoft Ajax Content Delivery Network for JavaScript files such as jQuery. Uptime After publishing a website live to the world, you want to ensure that the website does not encounter any issues and that it stays live. I use the following tools to monitor the Superexpert.com website now that it is live. 4. ELMAH ELMAH stands for Error Logging Modules and Handlers for ASP.NET. ELMAH enables you to record any errors that happen at your website so you can review them in the future. You can download ELMAH for free from the ELMAH project website. ELMAH works great with both ASP.NET Web Forms and ASP.NET MVC. You can configure ELMAH to store errors in a number of different stores including XML files, the Event Log, an Access database, a SQL database, an Oracle database, or in computer RAM. You also can configure ELMAH to email error messages to you when they happen. By default, you can access ELMAH by requesting the elmah.axd page from a website with ELMAH installed. Here’s what the elmah page looks like from the Superexpert.com website (this page is password-protected because secret information can be revealed in an error message): If you click on a particular error message, you can view the original Yellow Screen ASP.NET error message (even when the error message was never displayed to the actual user). 
I installed ELMAH by taking advantage of the new package manager for ASP.NET named NuGet (originally named NuPack). You can read the details about NuGet in the following blog entry by Scott Guthrie. You can download NuGet from CodePlex. 5. Pingdom I use Pingdom to verify that the Superexpert.com website is always up. You can sign up for Pingdom by visiting Pingdom.com. You can use Pingdom to monitor a single website for free. At the Pingdom website, you configure the frequency with which your website gets pinged. I verify that the Superexpert.com website is up every 5 minutes. I have the Pingdom service verify that it can retrieve the string “Contact Us” from the website homepage (a tiny scripted sketch of this kind of content check appears at the end of this post). If your website goes down, you can configure Pingdom so that it sends an email, Twitter, SMS, or iPhone alert. I use the Pingdom iPhone app, which looks like this: 6. Host Tracker If your website does go down then you need some way of determining whether it is a problem with your local network or whether your website is down for everyone. I use a website named Host-Tracker.com to check how badly a website is down. Here’s what the Host-Tracker website displays for the Superexpert.com website when the website can be successfully pinged from everywhere in the world: Notice that Host-Tracker pinged the Superexpert.com website from 68 locations including Roubaix, France and Scranton, PA. Debugging I mean debugging in the broadest possible sense. I use the following tools when building a website to verify that I have not made a mistake. 7. HTML Spell Checker Why doesn’t Visual Studio have a built-in spell checker? I don’t know – I’ve always found this mysterious. Fortunately, however, a former member of the ASP.NET team wrote a free spell checker that you can use with your ASP.NET pages. I find a spell checker indispensable. It is easy to delude yourself that you are capable of perfect spelling. I’m always super embarrassed when I actually run the spell checking tool and discover all of my spelling mistakes. The fastest way to add the HTML Spell Checker extension to Visual Studio is to select the menu option Tools, Extension Manager within Visual Studio. Click on Online Gallery and search for HTML Spell Checker: 8. IIS SEO Toolkit If people cannot find your website through Google then you should not even bother to create it. Microsoft has a great extension for IIS named the IIS Search Engine Optimization Toolkit that you can use to identify issues with your website that would hurt its page rank. You also can use this tool to quickly create a sitemap for your website that you can submit to Google or Bing. You can even generate the sitemap for an ASP.NET MVC website. Here’s what the report overview for the Superexpert.com website looks like: Notice that the Superexpert.com website had plenty of violations. For example, there are 65 cases in which a page has a broken hyperlink. You can drill into these violations to identify the exact page and location where these violations occur. 9. LinqPad If your ASP.NET website accesses a database then you should be using LINQ to Entities with the Entity Framework. Using LINQ involves some magic. LINQ queries written in C# get converted into SQL queries for you. If you are not careful about how you write your LINQ queries, you could unintentionally build a really badly performing website. LinqPad is a free tool that enables you to experiment with your LINQ queries. It even works with Microsoft SQL CE 4 and Azure. You can use LinqPad to execute a LINQ to Entities query and see the results. 
You also can use it to see the resulting SQL that gets executed against the database: 10. .NET Reflector I use .NET Reflector daily. The .NET Reflector tool enables you to take any assembly and disassemble the assembly into C# or VB.NET code. You can use .NET Reflector to see the “Source Code” of an assembly even when you do not have the actual source code. You can download a free version of .NET Reflector from the Redgate website. I use .NET Reflector primarily to help me understand what code is doing internally. For example, I used .NET Reflector with the Sprite and Image Optimization Framework to better understand how the MVC Image helper works. Here’s part of the disassembled code from the Image helper class: Summary In this blog entry, I’ve discussed several of the tools that I used to create the Superexpert.com website. These are tools that I use to improve the performance, improve the SEO, verify the uptime, or debug the Superexpert.com website. All of the tools discussed in this blog entry are free. Furthermore, all of these tools work with both ASP.NET Web Forms and ASP.NET MVC. Let me know if there are any tools that you use daily when building ASP.NET websites.
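As a footnote to the uptime tools above (see #5, Pingdom): the basic check described there - fetch the homepage and confirm that an expected string such as “Contact Us” comes back - is simple enough to sketch. Here is a minimal, hypothetical Python version of that idea; the URL is a placeholder and this is only an illustration of the check, not a replacement for a hosted monitoring service with alerting.

# Hypothetical sketch of a Pingdom-style content check:
# fetch the homepage and confirm an expected string is present.
import urllib.request

SITE_URL = "http://www.example.com/"   # placeholder URL
EXPECTED_TEXT = "Contact Us"           # same string the post checks for

def site_is_up(url, expected, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            body = response.read().decode("utf-8", errors="replace")
        return expected in body
    except Exception:
        return False   # any network or HTTP error counts as "down"

if __name__ == "__main__":
    if site_is_up(SITE_URL, EXPECTED_TEXT):
        print("Site is up and contains the expected text.")
    else:
        print("Site check failed - time to send an alert.")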

    Read the article

  • Why didn't 12.04 install?

    - by Josephisscrewed
Ok, so I've installed Ubuntu many times on my computer, normally on the same partition, and Windows would always delete Ubuntu (I don't know how; it just happens) if I go away from the keyboard during boot and it chooses Windows automatically because I took too long. So I tried to reinstall again, but after the fifth time it wouldn't let me, and told me to check "wubi-12.04-rev266.log". It took a while to find, but when I found it, I had no idea what any of it meant, as I'm no programmer. I first tried this the day Precise Pangolin came out. So skip ahead 2.5 months, when I finally found this file, and I then got the idea of making a new partition to install Ubuntu on, but I used Wubi, like I always did. It didn't look like it would mess anything up, so I did it. It went through all the downloads, extracting, etc., which took about 40 minutes total, then ended with an error message saying to check "wubi-12.04-rev266.log". I did. Here's what it says: 07-10 23:33 INFO root: === wubi 12.04 rev266 === 07-10 23:33 DEBUG root: Logfile is c:\users\joseph\appdata\local\temp\wubi-12.04-rev266.log 07-10 23:33 DEBUG root: sys.argv = ['main.pyo', '--exefile="C:\\Users\\Joseph\\Downloads\\wubi.exe"'] 07-10 23:33 DEBUG CommonBackend: data_dir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data 07-10 23:33 DEBUG WindowsBackend: 7z=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\bin\7z.exe 07-10 23:33 DEBUG WindowsBackend: startup_folder=C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup 07-10 23:33 DEBUG CommonBackend: Fetching basic info... 07-10 23:33 DEBUG CommonBackend: original_exe=C:\Users\Joseph\Downloads\wubi.exe 07-10 23:33 DEBUG CommonBackend: platform=win32 07-10 23:33 DEBUG CommonBackend: osname=nt 07-10 23:33 DEBUG CommonBackend: language=en_US 07-10 23:33 DEBUG CommonBackend: encoding=cp1252 07-10 23:33 DEBUG WindowsBackend: arch=amd64 07-10 23:33 DEBUG CommonBackend: Parsing isolist=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data\isolist.ini 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-amd64 07-10 23:33 DEBUG WindowsBackend: Fetching host info... 
07-10 23:33 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 07-10 23:33 DEBUG WindowsBackend: windows version=vista 07-10 23:33 DEBUG WindowsBackend: windows_version2=Windows 7 Home Premium 07-10 23:33 DEBUG WindowsBackend: windows_sp=None 07-10 23:33 DEBUG WindowsBackend: windows_build=7600 07-10 23:33 DEBUG WindowsBackend: gmt=-8 07-10 23:33 DEBUG WindowsBackend: country=US 07-10 23:33 DEBUG WindowsBackend: timezone=America/Los_Angeles 07-10 23:33 DEBUG WindowsBackend: windows_username=Joseph 07-10 23:33 DEBUG WindowsBackend: user_full_name=Joseph 07-10 23:33 DEBUG WindowsBackend: user_directory=C:\Users\Joseph 07-10 23:33 DEBUG WindowsBackend: windows_language_code=1033 07-10 23:33 DEBUG WindowsBackend: windows_language=English 07-10 23:33 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 07-10 23:33 DEBUG WindowsBackend: bootloader=vista 07-10 23:33 DEBUG WindowsBackend: system_drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(D: hd 4303.48046875 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free udf) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(U: hd 79907.8320313 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: uninstaller_path=None 07-10 23:33 DEBUG WindowsBackend: previous_target_dir=None 07-10 23:33 DEBUG WindowsBackend: previous_distro_name=None 07-10 23:33 DEBUG WindowsBackend: keyboard_id=67699721 07-10 23:33 DEBUG WindowsBackend: keyboard_layout=us 07-10 23:33 DEBUG WindowsBackend: keyboard_variant= 07-10 23:33 DEBUG CommonBackend: python locale=('en_US', 'cp1252') 07-10 23:33 DEBUG CommonBackend: locale=en_US.UTF-8 07-10 23:33 DEBUG WindowsBackend: total_memory_mb=3893.859375 07-10 23:33 DEBUG CommonBackend: Searching ISOs on USB devices 07-10 23:33 DEBUG CommonBackend: Searching for local CDs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain 
C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid 
Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 INFO root: Running the installer... 07-10 23:33 DEBUG WindowsFrontend: __init__... 07-10 23:33 DEBUG WindowsFrontend: on_init... 
07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG WinuiInstallationPage: target_drive=U:, installation_size=30000MB, distro_name=Ubuntu, language=en_US, locale=en_US.UTF-8, username=joseph 07-10 23:35 INFO root: Received settings 07-10 23:35 DEBUG CommonBackend: Searching for local CD 07-10 23:35 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:35 DEBUG CommonBackend: Searching for local ISO 07-10 23:35 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG TaskList: # Running tasklist... 07-10 23:35 DEBUG TaskList: ## Running select_target_dir... 07-10 23:35 INFO WindowsBackend: Installing into U:\ubuntu 07-10 23:35 DEBUG TaskList: ## Finished select_target_dir 07-10 23:35 DEBUG TaskList: ## Running create_dir_structure... 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot\grub 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot\grub 07-10 23:35 DEBUG TaskList: ## Finished create_dir_structure 07-10 23:35 DEBUG TaskList: ## Running create_uninstaller... 
07-10 23:35 DEBUG WindowsBackend: Copying uninstaller C:\Users\Joseph\Downloads\wubi.exe -> U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir U:\ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon U:\ubuntu\Ubuntu.ico 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 12.04-rev266 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 07-10 23:35 DEBUG TaskList: ## Finished create_uninstaller 07-10 23:35 DEBUG TaskList: ## Running create_preseed_diskimage... 07-10 23:35 DEBUG TaskList: ## Finished create_preseed_diskimage 07-10 23:35 DEBUG TaskList: ## Running get_diskimage... 07-10 23:35 DEBUG TaskList: New task download 07-10 23:35 DEBUG TaskList: ### Running download... 07-10 23:35 DEBUG downloader: downloading http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz > U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz 07-10 23:35 DEBUG downloader: Download start filename=U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz, url=http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz, basename=ubuntu-12.04-wubi-amd64.tar.xz, length=512730488, text=None 07-11 00:00 DEBUG TaskList: ### Finished download 07-11 00:00 DEBUG downloader: download finished (read 512730488 bytes) 07-11 00:00 DEBUG TaskList: ## Finished get_diskimage 07-11 00:00 DEBUG TaskList: ## Running extract_diskimage... 07-11 00:03 DEBUG TaskList: ## Finished extract_diskimage 07-11 00:03 DEBUG TaskList: ## Running choose_disk_sizes... 07-11 00:03 DEBUG WindowsBackend: total size=30000 root=29744 swap=256 home=0 usr=0 07-11 00:03 DEBUG TaskList: ## Finished choose_disk_sizes 07-11 00:03 DEBUG TaskList: ## Running expand_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished expand_diskimage 07-11 00:05 DEBUG TaskList: ## Running create_swap_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished create_swap_diskimage 07-11 00:05 DEBUG TaskList: ## Running modify_bootloader... 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ### Running modify_bcd... 07-11 00:05 DEBUG WindowsBackend: modify_bcd Drive(C: hd 78696.8203125 mb free ntfs) 07-11 00:05 ERROR TaskList: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. 
>>stdout= Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: # Cancelling tasklist 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 ERROR root: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ## Finished modify_bootloader 07-11 00:05 DEBUG TaskList: # Finished tasklist What have I done wrong? What can I do? If I turn off my laptop, will I actually be able to turn it back on? If you want me to post the log from the first day it happened, i'd be glad to in the comments, in the main body it made it over 30000 characters.

    Read the article

  • Cold Start

    - by antony.reynolds
Well, we had snow drifts 3ft deep on Saturday, so it must be spring time.  In preparation for spring we decided to move the lawn tractor.  Of course, after sitting in the garage all winter it refused to start.  I then come into the office and need to start my 11g SOA Suite installation.  I thought about this and decided my tractor might be cranky, but at least I can script the startup of my SOA Suite 11g installation. So with this in mind I created 6 scripts.  I created them for Linux, but they should translate to Windows without too many problems.  This is left as an exercise to the reader; note that you will have to hardcode more than I did in the Linux scripts and create separate script files for the sqlplus and WLST sections (a sketch of such a separate WLST file appears at the end of this post). Order to start things I believe there should be order in all things, especially starting the SOA Suite.  So here is my preferred order. Start Database This is needed by EM and the rest of SOA Suite, so it is best to start it before the Admin Server and managed servers. Start Node Manager on all machines This is needed if you want the scripts to work across machines. Start Admin Server Once this is done, in theory you can manually start the managed servers using the WebLogic console.  But then you have to wait for the console to be available.  Scripting it all is a quicker and easier way of starting. Start Managed Servers & Clusters Best to start them one per physical machine at a time to avoid undue load on the machines.  A non-clustered install will have just soa_server1 and bam_server1 by default.  Clusters will have at least SOA and BAM clusters that can be started as a group or individually.  I have provided scripts for standalone servers, but it is easy to change them to work with clusters. Starting Database I have provided a very primitive script (available here) to start the database, the listener and the DB console.  The section highlighted in red needs to match your database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#####################" echo "# Starting Database #" echo "#####################" sqlplus / as sysdba <<-EOF startup exit EOF echo "#####################" echo "# Starting Listener #" echo "#####################" lsnrctl start echo "######################" echo "# Starting dbConsole #" echo "######################" emctl start dbconsole read -p "Hit <enter> to continue" Starting SOA Suite My script for starting the SOA Suite (available here) breaks the task down into five sections. Setting the Environment First set up the environment variables.  The variables highlighted in red probably need changing for your environment. #!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME Starting the Node Manager I start node manager with a nohup to stop it exiting when the script terminates, and I redirect the standard output and standard error to a file in a logs directory. cd $DOMAIN_HOME echo "#########################" echo "# Starting Node Manager #" echo "#########################" nohup $WL_HOME/server/bin/startNodeManager.sh >logs/NodeManager.out 2>&1 & Starting the Admin Server I had problems starting the Admin Server from Node Manager, so I decided to start it using the command-line script.  
I again use nohup and redirect output. echo "#########################" echo "# Starting Admin Server #" echo "#########################" nohup ./startWebLogic.sh >logs/AdminServer.out 2>&1 & Starting the Managed Servers I then used WLST (WebLogic Scripting Tool) to start the managed servers.  First I waited for the Admin Server to come up by putting a connect command in a loop.  I could have put the WLST commands into a separate script file but I wanted to reduce the number of files I was using and so used redirected input (here syntax). $ORACLE_HOME/common/bin/wlst.sh <<-EOF import time sleep=time.sleep print "#####################################" print "# Waiting for Admin Server to Start #" print "#####################################" while True:   try:     connect(adminServerName="AdminServer")     break   except:     sleep(10) I then start the SOA server and tell WLST to wait until it is started before returning.  If starting a cluster then the start command would be modified accordingly to start the SOA cluster. print "#######################" print "# Starting SOA Server #" print "#######################" start(name="soa_server1", block="true") I then start the BAM server in the same way as the SOA server. print "#######################" print "# Starting BAM Server #" print "#######################" start(name="bam_server1", block="true") EOF Finally I let people know the servers are up and wait for input in case I am running in a separate window, in which case the result would be lost without the read command. echo "#####################" echo "# SOA Suite Started #" echo "#####################" read -p "Hit <enter> to continue" Stopping the SOA Suite My script for shutting down the SOA Suite (available here)  is basically the reverse of my startup script.  After setting the environment I connect to the Admin Server using WLST and shut down the managed servers and the admin server.  Again the script would need modifying for a cluster. Stopping the Servers If I cannot connect to the Admin Server I try to connect to the node manager, in case the Admin Server is down but the managed servers are up. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME cd $DOMAIN_HOME $MW_HOME/Oracle_SOA/common/bin/wlst.sh <<-EOF try:   print("#############################")   print("# Connecting to AdminServer #")   print("#############################")   connect(username='weblogic',password='welcome1',url='t3://localhost:7001') except:   print "#########################################"   print "#   Unable to connect to Admin Server   #"   print "# Attempting to connect to Node Manager #"   print "#########################################"   nmConnect(domainName=os.getenv("DOMAIN_NAME")) print "#######################" print "# Stopping BAM Server #" print "#######################" shutdown('bam_server1') print "#######################" print "# Stopping SOA Server #" print "#######################" shutdown('soa_server1') print "#########################" print "# Stopping Admin Server #" print "#########################" shutdown('AdminServer') disconnect() nmDisconnect() EOF Stopping the Node Manager I stopped the node manager by searching for the java node manager process using the ps command and then killing that process. echo "#########################" echo "# Stopping Node Manager #" echo "#########################" kill -9 `ps -ef | grep java | grep NodeManager |  awk '{print $2;}'` echo "#####################" echo "# SOA Suite Stopped #" echo "#####################" read -p "Hit <enter> to continue" Stopping the Database Again my script for shutting down the database is the reverse of my start script.  It is available here.  The only change needed might be to the database name. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "######################" echo "# Stopping dbConsole #" echo "######################" emctl stop dbconsole echo "#####################" echo "# Stopping Listener #" echo "#####################" lsnrctl stop echo "#####################" echo "# Stopping Database #" echo "#####################" sqlplus / as sysdba <<-EOF shutdown immediate exit EOF read -p "Hit <enter> to continue" Cleaning Up Cleaning SOA Suite I often run tests and want to clean up all the log files.  The following script (available here) does this for the WebLogic servers in a given domain on a machine.  After setting the domain I just remove all files under the servers logs directories.  It also cleans up the log files I created with my startup scripts.  These scripts could be enhanced to copy off the log files if you needed them but in my test environments I don’t need them and would prefer to reclaim the disk space. 
#!/bin/sh echo "###########################" echo "# Setting SOA Environment #" echo "###########################" export MW_HOME=~oracle/Middleware11gPS1 export WL_HOME=$MW_HOME/wlserver_10.3 export ORACLE_HOME=$MW_HOME/Oracle_SOA export DOMAIN_NAME=soa_std_domain export DOMAIN_HOME=$MW_HOME/user_projects/domains/$DOMAIN_NAME echo "##########################" echo "# Cleaning SOA Log Files #" echo "##########################" cd $DOMAIN_HOME rm -Rf logs/* servers/*/logs/* read -p "Hit <enter> to continue" Cleaning Database I also created a script to clean up the dump files of an Oracle database instance and also the EM log files (available here).  This relies on the machine name being correct as the EM log files are stored in a directory that is based on the hostname and the Oracle SID. #!/bin/sh echo "##############################" echo "# Setting Oracle Environment #" echo "##############################" . oraenv <<-EOF orcl EOF echo "#############################" echo "# Cleaning Oracle Log Files #" echo "#############################" rm -Rf $ORACLE_BASE/admin/$ORACLE_SID/*dump/* rm -Rf $ORACLE_HOME/`hostname`_$ORACLE_SID/sysman/log/* read -p "Hit <enter> to continue" Summary Hope you find the above scripts useful.  They certainly stop me hanging around waiting for things to happen on my test machine and make it easy to run a test, change parameters, bounce the SOA Suite and clean the logs between runs so I can see exactly what is happening. Now I need to get that mower started…
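As mentioned near the top of this post, on Windows the WLST portions would live in their own script files rather than in shell here-documents. Below is a rough, hedged sketch of what such a separate WLST start script might look like; it assumes the same non-clustered server names, Admin Server URL and credentials used in the scripts above, and it would be passed to wlst.cmd (or wlst.sh) as a file. Treat it as an illustration of the idea rather than a tested script.

# startManagedServers.py - standalone version of the WLST section above.
# Assumes the same AdminServer URL, credentials and server names as this post.
import time

ADMIN_URL = "t3://localhost:7001"   # adjust for your domain
USERNAME  = "weblogic"
PASSWORD  = "welcome1"

# Wait for the Admin Server to accept connections, as in the here-document version.
while True:
    try:
        connect(username=USERNAME, password=PASSWORD, url=ADMIN_URL)
        break
    except:
        time.sleep(10)

print "Starting SOA server"
start(name="soa_server1", block="true")

print "Starting BAM server"
start(name="bam_server1", block="true")

disconnect()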

    Read the article

  • Can't focus fancybox iframe input

    - by bswinnerton
    So I'm using fancybox to load up a login iframe, and I would like it onComplete to bring focus to the username field. If someone could take a look at this code and let me know what's wrong, that'd be great. Gracias. /* Function to resize the height of the fancybox window */ (function($){ $.fn.resize = function(width, height) { if (!width || (width == "inherit")) inner_width = parent.$("#fancybox-inner").width(); if (!height || (height == "inherit")) inner_height = parent.$("#fancybox-inner").height(); inner_width = width; outer_width = (inner_width + 14); inner_height = height; outer_height = (inner_height + 14); parent.$("#fancybox-inner").css({'width':inner_width, 'height':inner_height}); parent.$("#fancybox-outer").css({'width':outer_width, 'height':outer_height}); } })(jQuery); $(document).ready(function(){ var pasturl = parent.location.href; $("a.iframe#register").fancybox({ 'transitionIn' : 'fade', 'transitionOut' : 'fade', 'speedIn' : 600, 'speedOut' : 350, 'width' : 450, 'height' : 385, 'scrolling' : 'no', 'autoScale' : false, 'autoDimensions' : false, 'overlayShow' : true, 'overlayOpacity' : 0.7, 'padding' : 7, 'hideOnContentClick': false, 'titleShow' : false, 'onStart' : function() { $.fn.resize(450,385); }, 'onComplete' : function() { $("#realname").focus(); }, 'onClosed' : function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); } }); $("a.iframe#login").fancybox({ 'transitionIn' : 'fade', 'transitionOut' : 'fade', 'speedIn' : 600, 'speedOut' : 350, 'width' : 400, 'height' : 250, 'scrolling' : 'no', 'autoScale' : false, 'overlayShow' : true, 'overlayOpacity' : 0.7, 'padding' : 7, 'hideOnContentClick': false, 'titleShow' : false, 'onStart' : function() { $.fn.resize(400,250); }, 'onComplete' : function() { $("#login_username").focus(); }, 'onClosed' : function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); } }); $("#register").click(function() { $("#login_form").hide(); $(".registertext").hide(); $.fn.resize(450,385); $("label").addClass("#register_form label"); }); $("#login").click(function() { $.fn.resize(400,250); $("label").addClass("#login_form label"); }); $("#register_form").bind("submit", function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); if ($("#realname").val().length < 1 || $("#password").val().length < 1 || $("#username").val().length < 1) { $("#no_fields").addClass("warningmsg").show().resize(inherit,405); return false; } if ($("#password").val() != $("#password2").val()) { $("#no_pass_match").addClass("errormsg").show().resize(); return false; } $.fancybox.showActivity(); $.post( "../../admin/users/create_submit.php", { realname:$('#realname').val(), email:$('#email').val(), username:$('#username').val(), password:MD5($('#password').val()), rand:Math.random() } ,function(data){ if(data == "OK"){ $(".registerbox").hide(); $.fancybox.hideActivity(); $.fn.resize(inherit,300); $("#successful_login").addClass("successfulmsg").show(); } else if(data == "user_taken"){ $.fancybox.hideActivity(); $("#user_taken").addClass("errormsg").show().resize(inherit,405); $("#username").val(""); } else { $.fancybox.hideActivity(); document.write("Well, that was weird. 
Give me a shout at [email protected]."); } return false; }); return false; }); $("#login_form").bind("submit", function() { $(".warningmsg").hide(); $(".errormsg").hide(); $(".successfulmsg").hide(); if ($("#login_username").val().length < 1 || $("#login_password").val().length < 1) { $("#no_fields").addClass("warningmsg").show().resize(inherit,280); return false; } $.fancybox.showActivity(); $.post( "../../admin/users/login_submit.php", { username:$('#login_username').val(), password:MD5($('#login_password').val()), rand:Math.random() } ,function(data){ if(data == "authenticated"){ $(".loginbox").hide(); $(".registertext").hide(); $.fancybox.hideActivity(); $("#successful_login").addClass("successfulmsg").show(); parent.document.location.href=pasturl; } else if(data == "no_user"){ $.fancybox.hideActivity(); $("#no_user").addClass("errormsg").show().resize(); $("#login_username").val(""); $("#login_password").val(""); } else if(data == "wrong_password"){ $.fancybox.hideActivity(); $("#wrong_password").addClass("warningmsg").show().resize(); $("#login_password").val(""); } else { $.fancybox.hideActivity(); document.write("Well, that was weird."); } return false; }); return false; }); }); And here is the HTML: <p><a class="iframe" id="login" href="/login/">Login</a></p>


  • CodePlex Daily Summary for Monday, March 05, 2012

    CodePlex Daily Summary for Monday, March 05, 2012Popular ReleasesSimple Injector: Simple Injector v1.4.1: This release adds two small improvements to the SimpleInjector.Extensions.dll. No changes have been made to the core library. New features and improvements in this release for the SimpleInjector.Extensions.dll The RegisterManyForOpenGeneric extension methods now accept non-generic decorator, as long as they implement the given open generic service type. GetTypesToRegister methods added to the OpenGenericBatchRegistrationExtensions class which allows to customize the behavior. Note that the...SQL Scriptz Runner: Application: Scriptz Runner source code and applicationCommonLibrary: Code: CodePowerGUI Visual Studio Extension: PowerGUI VSX 1.5.2: Added support for PowerGUI 3.2.Path Copy Copy: 10.0: New version with the following biggest new features: Regular expression support in custom commands (see this work item) Import/export feature for custom commands (see this work item) This version also addresses the following work items: 11353 11348 This version is a recommended upgrade for all users. Note: new custom commands that user regular expressions won't be compatible with earlier versions (if installed by registry manipulation or via network installation), but everything else is ...VidCoder: 1.3.1: Updated HandBrake core to 0.9.6 release (svn 4472). Removed erroneous "None" container choice. Change some logic and help text to stop assuming you have to pick the VIDEO_TS folder for a DVD scan. This should make previewing DVD titles on the Queue Multiple Titles window possible when you've picked the root DVD directory.ASP.NET MVC Framework - Abstracting Data Annotations, HTML5, Knockout JS techs: Version 1.0: Please download the source code. I am not associating any dll for release.ExtAspNet: ExtAspNet v3.1.0: ExtAspNet - ?? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ?????????? ExtAspNet ????? ExtJS ??? ASP.NET 2.0 ???,????? AJAX ??????????。 ExtAspNet ??????? JavaScript,?? CSS,?? UpdatePanel,?? ViewState,?? WebServices ???????。 ??????: IE 7.0, Firefox 3.6, Chrome 3.0, Opera 10.5, Safari 3.0+ ????:Apache License 2.0 (Apache) ??:http://extasp.net/ ??:http://bbs.extasp.net/ ??:http://extaspnet.codeplex.com/ ??:http://sanshi.cnblogs.com/ ????: +2012-03-04 v3.1.0 -??Hidden???????(〓?〓)。 -?PageManager??...AcDown????? - Anime&Comic Downloader: AcDown????? v3.9.1: ?? ●AcDown??????????、??、??????,????1M,????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。??????AcPlay?????,??????、????????????????。 ● AcDown???????????????????????????,???,???????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86),?????"?????????"??? 
??????????????,??????????: ??"AcDo...Windows Phone Commands for VS2010: Version 1.0: Initial Release Version 1.0 Connect from device or emulator (Monitors the connection) Show Device information (Plataform, build , version, avaliable memory, total memory, architeture Manager installed applications (Launch, uninstall and explorer isolate storage files) Manager core applications (Launch blocked applications from emulator (Office, Calculator, alarm, calendar , etc) Manager blocked settings from emulator (Airplane Mode, Celullar Network, Wifi, etc) Deploy and update ap...DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.01.00: Changes on Version 06.01.00 Fixed issue on GraySmallTitle container, that breaks the layout Fixed issue on Blue Metro7 Skin where the Search, Login, Register, Date is missing Fixed issue with the Version numbers on the target file Fixed issue where the jQuery and jQuery-UI files not deleted on upgrade from Version 01.00.00 Added a internal page where the Image Slider would be replaces with a BannerPaneMedia Companion: MC 3.433b Release: General More GUI tweaks (mostly imperceptible!) Updates for mc_com.exe TV The 'Watched' button has been re-instigated Added TV Menu sub-option to search ALL for new Episodes (includes locked shows) Movies Added 'Source' field (eg DVD, Bluray, HDTV), customisable in Advanced Preferences (try it out, let us know how it works!) Added HTML <<format>> tag with optional parameters for video container, source, and resolution (updated HTML tags to be added to Documentation shortly) Known Issu...Picturethrill: Version 2.3.2.0: Release includes Self-Update feature for Picturethrill. What that means for users is that they are always guaranteed to have a fresh copy of Picturethrill on their computers with all latest fixes. When Picturethrill adds a new website to get pictures from, you will get it too!Simple MVVM Toolkit for Silverlight, WPF and Windows Phone: Simple MVVM Toolkit v3.0.0.0: Added support for Silverlight 5.0 and Windows Phone 7.1. Upgraded project templates and samples. Upgraded installer. There are some new prerequisites required for this version, namely Silverlight 5 Tools, Expression Blend Preview for Silverlight 5 (until the SDK is released), Windows Phone 7.1 SDK. Because it is in the experimental band, I have also removed the dependency on the Silverlight Testing Framework. You can use it if you wish, but the Ria Services project template no longer uses ...CODE Framework: 4.0.20301: The latest version adds a number of new features to the WPF system (such as stylable and testable messagebox support) as well as various new features throughout the system (especially in the Utilities namespace).MyRouter (Virtual WiFi Router): MyRouter 1.0.2 (Beta): A friendlier User Interface. A logger file to catch exceptions so you may send it to use to improve and fix any bugs that may occur. A feedback form because we always love hearing what you guy's think of MyRouter. Check for update menu item for you to stay up to date will the latest changes. Facebook fan page so you may spread the word and share MyRouter with friends and family And Many other exciting features were sure your going to love!WPF Sound Visualization Library: WPF SVL 0.3 (Source, Binaries, Examples, Help): Version 0.3 of WPFSVL. 
This includes three new controls: an equalizer, a digital clock, and a time editor.Orchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-NotesNetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls. Thanks to t...SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with following changes since last version: Added stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving list of online computers from last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...New ProjectsAbsolute Risk Game: Risk Game is a classic "World Domination Risk" game where you try to conquer the world.Architecture Document Catalog: The Architecture Document Catalog is a catalog for various documentation used by software solution architects. The initial architecture framework supported is from Rozanski and Woods, also called Viewpoints and Perspectives. It's in C#, Silverlight, WCF RIA Services.Background Worker Job Execution & Job Scheduler: Background worker is a multi-threaded job execution and scheduling engine. Very similar to Quartz, but with a slightly different focus.BWOS: preparing a new operating system. and name BWOS. CommonLibrary: Common modules and componentesCustomizeTool: CustomizeToolData Foundry: Data Foundry is a Swiss army knife for database administrator. With capabilities to cross reference check data between tables to find orphaned data and table relations.Data Grid Extensions: Modular extensions for the DataGrid control. Attach filtering capabilities to your existing DataGrid.dk.Helper: dk.Helper makes it easier for tribal wars game players (divoke-kmene.sk) to manage common activities in game. It's developed in C#.ExEn: ExEn is a high-performance implementation of a subset of the XNA API that runs on Silverlight, iOS and Android.Extended Methods for .NET: Developed in VB.NET, this DLL includes some functions I came across over the internet. Originally they were functions. I made some changes to suit my work and recreated them as Extended methods. Currently includes two extended methods for the Image class and one extended method for the String class. I will update it with new methods as I go along.GestionarComenzi: Gestioneaza Comenzi - Firma transporturiGIS Library Management: GIS Library Management System.iFinity Cache Master for DotNetNuke: The iFinity DotNetNuke Cache Master is an Administration module for DotNetNuke. This module allows administrators to inspect the current contents of the ASP.NET Cache in their running DotNetNuke installation. 
A very useful tool for developers working with the DotNetNuke cache.intelliEssay Document Format Checker: This is a product that tries to check a document's format to see if it conforms to certain given standard.Just T[he]IP: "Just The IP" leverages the Bing API v2 and the ip: Bing Advanced Operator to list the other web sites hosted at this IP Address (that are indexed by Bing).KTool: Ktool is a tool for learning japanese kanji. This program is still in developed. Current version can only display and find a kanji word (Press ctrl-F on main screen for search). Developed in WPF, .NET 4.0Lab Checker: Lab Checker makes it easy for teacher to check students' programs. It allows a student to test his program on a set of test cases which eliminates the need to run the program manually.MeoBox Vera Control Plugin: This is my project for creating a plugin to control my MeoBox ( Scientific Atlanta ) using Vera ( www.micasaverde.com ) Should work with other TV2Client boxesNUnitTestHelper: NUnitTestHelper is a helper class for developing unit tests for C# applications with NUnit framework. This helper library provides set of classes and method which can be used for accomplishing faster unit test development for any kind of project. NUnitTestBase – Supports basic functionality for writing a unit test with NUnit framework. This base class provides built in support for mocking framework. This version provides support for Rhino mocking framework.OnlineExam: Common online examination system based on Asp.net MVC3 can used for knowledge test, I/Q test, college test or most other test project.Orchard Code Generation Extensions: Code Generation Extensions module for Orchard.OrchardTranstion_CN: Orchar?? OsAvatar: to next step showOwnTools: About my toolsPhone Finder App: PhoneFinder will be a Windows Phone platform app alternative for finding a lost or stolen phone. Our plan is to have additional functionality the Windows software does not include. It will be developed using ASP.NET MVC, SQL Server, C#, etc.PoshChat: PoshChat is a client/server chat program written in PowerShell. Supports multiple client connections to a single server to chat.Process Attachment And Secure Text: ProcessAttachmentAndSecureText is an Exchange Transport Agent DLL and associated ASPX page. It is used to 1) strip out attachments and send them to an upload portal, and 2) to strip out text from emails and send it to an upload portal. It is currently coded for Exchange 2010Resource Viewer - Visual Studio Extension: Simply put the “resource viewer extension” enables you to visually view your ResourceDictionary. To open it go to: View – Other Windows – Resource Viewer. When working with WPF/Silverlight you put your reusable resources in a common ResourceDictionary, those resources might be of type Style, SolidColorBrush, DrawingBrush, BitmapImage and more. The problems starts when you have that ResourceDictionary you have no way to see how your resources look like, making the work process (of both t...Scumm XNA: Scumm XNA is an engine that runs old school LucasArts graphical adventure games. It is written completely in C# and will run on PC, Xbox 360 and Windows phone. The code is inspired by ScummVM but this project is not a port, it is a complete rewrite in order to optimize the engine for the CLR. Of course, you will need to own the orinigal games in order to use it. 
Currently, I will focus my work on the great "Day of Tentacle" game specifically the CD version.SDX DataGrid: SDXDataGrid is a comprehensive data grid component for Microsoft .NET 3.5 web application developers. It is designed to ease the exhausting process of implementing the necessary code for sorting, navigation, grouping, searching and data editing in a data representation object.SQL Scriptz Runner: Features are : Drag And Drop script files Run a directory of script files Sql Script out put messages during execution Script passed or failed that are colored green and red (yellow for running) Stop on error option Open script on error option Run report with time taken for each uComponents Demo: A demo for the code of running uComponents in Umbraco siteUnixTable: UnixTable makes it easier to realize application to access database, with no code but only with visual instrument at runtime. You'll no longer have to write query or code to read table from database. It's developed in Visual Basic for .NET 4. VOA Player.NET: VOA Player is a lightweight windows client for listening VOA Special English. Because of GFW blocking it fetch RSS data via rss2proxy.appspot.com indirectly. User can view article page and listen MP3 stream. The project is a C# implementation of voaplayer.sinaapp.com.Windows Metafile Library: The library supports reading and writing WMF files. Source code is written in pure .NET from scratch following the Windows Metafile Format Specification.Windows Phone 7.1 + MicroFramework (a phone device as remote control): Windows Phone 7.1 + .Net MicroFramework (How use a windows phone device as remote control of a Fez Panda II/MicroFramework board). A windows phone device can be connected to a wifi network (for example a wifi router). If you have also a MicroFramework board connected to the wifi network; you can send some http rest commands from the windows phone device to the MicroFramework board. The microframework board, it must support the tcp/ip protocol through a connect shield. In this exaple it will...XBMC Cache Manager: A Windows Service to manage a shared XBMC MySQL database and shared cache folder.


  • The Art of Productivity

    - by dwahlin
    Getting things done has always been a challenge regardless of gender, age, race, skill, or job position. No matter how hard some people try, they end up procrastinating tasks until the last minute. Some people simply focus better when they know they’re out of time and can’t procrastinate any longer. How many times have you put off working on a term paper in school until the very last minute? With only a few hours left your mental energy and focus seem to kick in to high gear especially as you realize that you either get the paper done now or risk failing. It’s amazing how a little pressure can turn into a motivator and allow our minds to focus on a given task. Some people seem to specialize in procrastinating just about everything they do while others tend to be the “doers” who get a lot done and ultimately rise up the ladder at work. What’s the difference between these types of people? Is it pure laziness or are other factors at play? I think that some people are certainly more motivated than others, but I also think a lot of it is based on the process that “doers” tend to follow - whether knowingly or unknowingly. While I’ve certainly fought battles with procrastination, I’ve always had a knack for being able to get a lot done in a relatively short amount of time. I think a lot of my “get it done” attitude goes back to the the strong work ethic my parents instilled in me at a young age. I remember my dad saying, “You need to learn to work hard!” when I was around 5 years old. I remember that moment specifically because I was on a tractor with him the first time I heard it while he was trying to move some large rocks into a pile. The tractor was big but so were the rocks and my dad had to balance the tractor perfectly so that it didn’t tip forward too far. It was challenging work and somewhat tedious but my dad finished the task and taught me a few important lessons along the way including persistence, the importance of having a skill, and getting the job done right without skimping along the way. In this post I’m going to list a few of the techniques and processes I follow that I hope may be beneficial to others. I blogged about the general concept back in 2009 but thought I’d share some updated information and lessons learned since then. Most of the ideas that follow came from learning and refining my daily work process over the years. However, since most of the ideas are common sense (at least in my opinion), I suspect they can be found in other productivity processes that are out there. Let’s start off with one of the most important yet simple tips: Start Each Day with a List. Start Each Day with a List What are you planning to get done today? Do you keep track of everything in your head or rely on your calendar? While most of us think that we’re pretty good at managing “to do” lists strictly in our head you might be surprised at how affective writing out lists can be. By writing out tasks you’re forced to focus on the most important tasks to accomplish that day, commit yourself to those tasks, and have an easy way to track what was supposed to get done and what actually got done. Start every morning by making a list of specific tasks that you want to accomplish throughout the day. I’ll even go so far as to fill in times when I’d like to work on tasks if I have a lot of meetings or other events tying up my calendar on a given day. 
I’m not a big fan of using paper since I type a lot faster than I write (plus I write like a 3rd grader according to my wife), so I use the Sticky Notes feature available in Windows. Here’s an example of yesterday’s sticky note: What do you add to your list? That’s the subject of the next tip. Focus on Small Tasks It’s no secret that focusing on small, manageable tasks is more effective than trying to focus on large and more vague tasks. When you make your list each morning only add tasks that you can accomplish within a given time period. For example, if I only have 30 minutes blocked out to work on an article I don’t list “Write Article”. If I do that I’ll end up wasting 30 minutes stressing about how I’m going to get the article done in 30 minutes and ultimately get nothing done. Instead, I’ll list something like “Write Introductory Paragraphs for Article”. The next day I may add, “Write first section of article” or something that’s small and manageable – something I’m confident that I can get done. You’ll find that once you’ve knocked out several smaller tasks it’s easy to continue completing others since you want to keep the momentum going. In addition to keeping my tasks focused and small, I also make a conscious effort to limit my list to 4 or 5 tasks initially. I’ve found that if I list more than 5 tasks I feel a bit overwhelmed which hurts my productivity. It’s easy to add additional tasks as you complete others and you get the added benefit of that confidence boost of knowing that you’re being productive and getting things done as you remove tasks and add others. Getting Started is the Hardest (Yet Easiest) Part I’ve always found that getting started is the hardest part and one of the biggest contributors to procrastination. Getting started working on tasks is a lot like getting a large rock pushed to the bottom of a hill. It’s difficult to get the rock rolling at first, but once you manage to get it rocking some it’s really easy to get it rolling on its way to the bottom. As an example, I’ve written 100s of articles for technical magazines over the years and have really struggled with the initial introductory paragraphs. Keep in mind that these are the paragraphs that don’t really add that much value (in my opinion anyway). They introduce the reader to the subject matter and nothing more. What a waste of time for me to sit there stressing about how to start the article. On more than one occasion I’ve spent more than an hour trying to come up with 2-3 paragraphs of text.  Talk about a productivity killer! Whether you’re struggling with a writing task, some code for a project, an email, or other tasks, jumping in without thinking too much is the best way to get started I’ve found. I’m not saying that you shouldn’t have an overall plan when jumping into a task, but on some occasions you’ll find that if you simply jump into the task and stop worrying about doing everything perfectly that things will flow more smoothly. For my introductory paragraph problem I give myself 5 minutes to write out some general concepts about what I know the article will cover and then spend another 10-15 minutes going back and refining that information. That way I actually have some ideas to work with rather than a blank sheet of paper. If I still find myself struggling I’ll write the rest of the article first and then circle back to the introductory paragraphs once I’m done. To sum this tip up: Jump into a task without thinking too hard about it. 
It’s better to to get the rock at the top of the hill rocking some than doing nothing at all. You can always go back and refine your work.   Learn a Productivity Technique and Stick to It There are a lot of different productivity programs and seminars out there being sold by companies. I’ve always laughed at how much money people spend on some of these motivational programs/seminars because I think that being productive isn’t that hard if you create a re-useable set of steps and processes to follow. That’s not to say that some of these programs/seminars aren’t worth the money of course because I know they’ve definitely benefited some people that have a hard time getting things done and staying focused. One of the best productivity techniques I’ve ever learned is called the “Pomodoro Technique” and it’s completely free. This technique is an extremely simple way to manage your time without having to remember a bunch of steps, color coding mechanisms, or other processes. The technique was originally developed by Francesco Cirillo in the 80s and can be implemented with a simple timer. In a nutshell here’s how the technique works: Pick a task to work on Set the timer to 25 minutes and work on the task Once the timer rings record your time Take a 5 minute break Repeat the process Here’s why the technique works well for me: It forces me to focus on a single task for 25 minutes. In the past I had no time goal in mind and just worked aimlessly on a task until I got interrupted or bored. 25 minutes is a small enough chunk of time for me to stay focused. Any distractions that may come up have to wait until after the timer goes off. If the distraction is really important then I stop the timer and record my time up to that point. When the timer is running I act as if I only have 25 minutes total for the task (like you’re down to the last 25 minutes before turning in your term paper….frantically working to get it done) which helps me stay focused and turns into a “beat the clock” type of game. It’s actually kind of fun if you treat it that way and really helps me focus on a the task at hand. I automatically know how much time I’m spending on a given task (more on this later) by using this technique. I know that I have 5 minutes after each pomodoro (the 25 minute sprint) to waste on anything I’d like including visiting a website, stepping away from the computer, etc. which also helps me stay focused when the 25 minute timer is counting down. I use this technique so much that I decided to build a program for Windows 8 called Pomodoro Focus (I plan to blog about how it was built in a later post). It’s a Windows Store application that allows people to track tasks, productive time spent on tasks, interruption time experienced while working on a given task, and the number of pomodoros completed. If a time estimate is given when the task is initially created, Pomodoro Focus will also show the task completion percentage. I like it because it allows me to track my tasks, time spent on tasks (very useful in the consulting world), and even how much time I wasted on tasks (pressing the pause button while working on a task starts the interruption timer). I recently added a new feature that charts productive and interruption time for tasks since I wanted to see how productive I was from week to week and month to month. A few screenshots from the Pomodoro Focus app are shown next, I had a lot of fun building it and use it myself to as I work on tasks.   
There are certainly many other productivity techniques and processes out there (and a slew of books describing them), but the Pomodoro Technique has been the simplest and most effective technique I’ve ever come across for staying focused and getting things done.   Persistence is Key Getting things done is great but one of the biggest lessons I’ve learned in life is that persistence is key especially when you’re trying to get something done that at times seems insurmountable. Small tasks ultimately lead to larger tasks getting accomplished, however, it’s not all roses along the way as some of the smaller tasks may come with their own share of bumps and bruises that lead to discouragement about the end goal and whether or not it is worth achieving at all. I’ve been on several long-term projects over my career as a software developer (I have one personal project going right now that fits well here) and found that repeating, “Persistence is the key!” over and over to myself really helps. Not every project turns out to be successful, but if you don’t show persistence through the hard times you’ll never know if you succeeded or not. Likewise, if you don’t persistently stick to the process of creating a daily list, follow a productivity process, etc. then the odds of consistently staying productive aren’t good.   Track Your Time How much time do you actually spend working on various tasks? If you don’t currently track time spent answering emails, on phone calls, and working on various tasks then you might be surprised to find out that a task that you thought was going to take you 30 minutes ultimately ended up taking 2 hours. If you don’t track the time you spend working on tasks how can you expect to learn from your mistakes, optimize your time better, and become more productive? That’s another reason why I like the Pomodoro Technique – it makes it easy to stay focused on tasks while also tracking how much time I’m working on a given task.   Eliminate Distractions I blogged about this final tip several years ago but wanted to bring it up again. If you want to be productive (and ultimately successful at whatever you’re doing) then you can’t waste a lot of time playing games or on Twitter, Facebook, or other time sucking websites. If you see an article you’re interested in that has no relation at all to the tasks you’re trying to accomplish then bookmark it and read it when you have some spare time (such as during a pomodoro break). Fighting the temptation to check your friends’ status updates on Facebook? Resist the urge and realize how much those types of activities are hurting your productivity and taking away from your focus. I’ll admit that eliminating distractions is still tough for me personally and something I have to constantly battle. But, I’ve made a conscious decision to cut back on my visits and updates to Facebook, Twitter, Google+ and other sites. Sure, my Klout score has suffered as a result lately, but does anyone actually care about those types of scores aside from your online “friends” (few of whom you’ve actually met in person)? :-) Ultimately it comes down to self-discipline and how badly you want to be productive and successful in your career, life goals, hobbies, or whatever you’re working on. Rather than having your homepage take you to a time wasting news site, game site, social site, picture site, or others, how about adding something like the following as your homepage? Every time your browser opens you’ll see a personal message which helps keep you on the right track. 
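The author's actual page is only available from the download link mentioned next, but as a rough illustration, such a homepage can be nothing more than a tiny static HTML file with a reminder message of your own choosing:

<!DOCTYPE html>
<html>
  <head>
    <title>Get back to work</title>
  </head>
  <body style="font-family: sans-serif; text-align: center; margin-top: 15%;">
    <!-- Replace this placeholder message with whatever keeps you on task. -->
    <h1>Is what you are about to do on today's task list?</h1>
  </body>
</html>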
You can download my uber-sophisticated homepage here if interested. Summary Is there a single set of steps that, if followed, can ultimately lead to productivity? I don’t think so since one size has never fit all. Every person is different, works in their own unique way, and has their own set of motivators, distractions, and more. While I certainly don’t consider myself to be an expert on the subject of productivity, I do think that if you learn what steps work best for you and gradually refine them over time that you can come up with a personal productivity process that can serve you well. Productivity is definitely an “art” that anyone can learn with a little practice and persistence. You’ve seen some of the steps that I personally like to follow and I hope you find some of them useful in boosting your productivity. If you have others you use, please leave a comment. I’m always looking for ways to improve.


  • On automating a split-mirror ASM backup with EMC TimeFinder ...

    - by [email protected]
Hi clerks,
Offloading the backup operation to another host using disk cloning could really improve the performance on highly busy databases ( 24x7, zero downtime and all this stuff ...). There are well-known white papers on this subject, ASM included, but today I'm showing you a nice way to automate the procedure using shell scripting with EMC TimeFinder technologies:
Assumptions:
***********
ASM diskgroups name:
+data_${db_name} : asm data diskgroup
+fra_${db_name} : asm fra diskgroup
EMC TimeFinder sync groups name:
rac_${DB_NAME}_data_tf : data group
rac_${DB_NAME}_fra_tf: fra group
There are two scripts, one located on the production box ( bck_database.sh ) and the other one on the backup server node ( bck_database_mirror.sh ). The second one is remotely executed from the production host. There are a bunch of variables along the code with self-explanatory names I guess, anyway let me know if you want some help.
#!/bin/ksh ### ### Copyright (c) 1988, 2010, Oracle Corporation. All Rights Reserved. ### ### NAME ### bck_database.sh ### ### DESCRIPTION ### Database backup on third mirror ### ### RETURNS ### ### NOTES ### ### MODIFIED (DD/MM/YY) ### Oracle 28/01/10 - Creation ### V_DATE=`/bin/date +%Y%m%d_%H%M%S` V_FICH_LOG=`dirname $0`/trace_dir_location/`basename $0`.${V_DATE}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ADMIN_DIR=`dirname $0` . ${ADMIN_DIR}/setenv_instance.sh -- This script should set the instance vars like Oracle Home, Sid, db_name ... if [ $? -ne 0 ] then   echo "Error when setting the environment."   exit 1 fi   echo "${V_DATE} ####################################################" echo "Executing database backup: ${DB_NAME}" echo "####################################################################"   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm data diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf establish -noprompt if [ $? -ne 0 ] then   echo "Error when sync asm data diskgroups"   exit 2 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm data disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm data diskgroups"   exit 3 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Sync asm fra diskgroups ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf establish -noprompt if [ $?
-ne 0 ] then   echo "Error when sync asm fra diskgroups"   exit 4 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Verifying asm fra disks ..." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf -i 30 verify if [ $? -ne 0 ] then   echo "Error when verifying asm fra diskgroups"   exit 5 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "ASM sync sucessfully completed!" echo "####################################################################"     V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to BEGIN BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter system archive log current;   alter database ${DB_NAME} begin backup; ! if [ $? -ne 0 ] then   echo "Error when updating database status to BEGIN backup"   exit 6 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm data disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_data_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm data disks"   exit 7 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Updating status ${DB_NAME} to END BACKUP ..." echo "####################################################################" sqlplus -s /nolog <<-!   whenever sqlerror exit 1   connect / as sysdba   whenever sqlerror exit   alter database ${DB_NAME} end backup;   alter system archive log current; ! if [ $? -ne 0 ] then   echo "Error when updating database status to END backup"   exit 8 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Generating controlfile copies...." echo "####################################################################" rman<<-! connect target / run { allocate channel ch1 type DISK; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_mount.ctl'; copy current controlfile to '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl'; } ! if [ $? -ne 0 ] then   echo "Error generating controlfile copies"   exit 9 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Resync RMAN catalog ....." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} resync catalog; ! if [ $? -ne 0 ] then   echo "Error when resyncing RMAN catalog"   exit 10 fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Splitting asm fra disks....." echo "####################################################################" sudo symmir -g rac_${DB_NAME}_fra_tf split -noprompt if [ $? -ne 0 ] then   echo "Error when splitting asm fra disks"   exit 11 fi     echo "WARNING!: Calling bck_database_mirror.sh host ${NODE_BCK_SERVER}..." ssh ${NODO_BCK_SERVER} ${ADMIN_DIR_BCK}/bck_database_mirror.sh if [ $? 
-ne 0 ] then   echo "Error, when remote executing the backup "   exit 12 fi V_DATE=`/bin/date +%Y%m%d_%H%M%S` echo "${V_DATE} ####################################################" echo "Cleaning the archived redo logs already copied to tape ..." echo "####################################################################" rman<<-! connect target / connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG} run { resync catalog; delete noprompt archivelog all backed up 1 times to device type sbt; } ! if [ $? -ne 0 ] then   echo "Error when cleaning the archived redo logs"   exit 13 fi echo "${V_DATE} ####################################################" echo "Backup sucessfully executed!!" echo "####################################################################" exit 0   ------------------------------------------------------------------------------ ------------------------** BACKUP SERVER NODE ** ----------------------------- ------------------------------------------------------------------------------   #!/bin/ksh ### ###  Copyright (c) 1988, 2010, Oracle Corporation.  All Rights Reserved. ### ###    ###    NAME ###     bck_database_mirror.sh ### ###    DESCRIPTION ###      Backup @ backup server ### ###    RETURNS ### ###    NOTES ### ###    MODIFIED                                 (DD/MM/YY) ###      Oracle                    28/01/10     - Creacion         V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ASM instance ..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_asm.sh -- This script is supposed to start the ASM instance in the backup server   if [ $? -ne 0 ]   then     echo "Error when tying to start ASM instance."     exit 1   fi       . ${V_ADMIN_DIR}/setenv_asm.sh -- This script is supposed to set the env. variables of the ASM instance   if [ $? -ne 0 ]   then     echo "Error when setting the ASM environment"     exit 1   fi       V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "The asm diskgroups/disks dettected are the following ..."   echo "####################################################################"     sqlplus /nolog <<-!     whenever sqlerror exit 1     connect / as sysdba     whenever sqlerror exit     SET LINES 200     COL PATH FORMAT A25     SELECT DISK.MOUNT_STATUS, DISK.PATH, DISK.NAME, DISK_GROUP.NAME, DISK_GROUP.TOTAL_MB FROM V\$ASM_DISK DISK, V\$ASM_DISKGROUP DISK_GROUP WHERE DISK.GROUP_NUMBER=DISK_GROUP.GROUP_NUMBER; !       V_ADMIN_DIR=`dirname $0`   . ${V_ADMIN_DIR}/setenv_instance.sh -- This script is supposed to set the env. variables of the database instance   if [ $? -ne 0 ]   then     echo "Error when setting the database instance environment"     exit 1   fi     V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Starting ${DB_NAME} in MOUNT mode..."   echo "####################################################################"   ${V_ADMIN_DIR}/start_instance_mount.sh -- This script is supposed to do a startup mount   if [ $? -ne 0 ]   then     echo "Error starting  ${DB_NAME} in MOUNT mode"     exit 1   fi   V_DATE=`/bin/date +%Y%m%d_%H%M%S`   echo "${V_DATE} ####################################################"   echo "Executing RMAN backup..."   echo "####################################################################"   rman<<-!   
connect target /   connect catalog ${V_RMAN_USR}/${V_RMAN_PWD}@${V_DB_CATALOG}   run {   allocate channel ch1 type 'SBT_TAPE' parms'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)'; -- TDPO Media Library   crosscheck archivelog all;   backup tag BCK_CONTROLFILE_ST_${DB_NAME}   format 'ctl_%d_%s__%p_%t'   controlfilecopy '+FRA_${DB_NAME}/${DB_NAME}/CONTROLFILE/control_backup.ctl';   backup tag BCK_DATAFILE_ST_${DB_NAME} full   format 'db_%d_%s_%p_%t'database;   backup tag BCK_ARCHLOG_ST_${DB_NAME} format 'al_%d_%s_%p_%t' archivelog all;   release channel ch1;   } !   if [ $? -ne 0 ]   then     echo "Error executing the RMAN backup"     exit 1   fi     ${V_ADMIN_DIR}/stop_instance_immediate.sh -- This script is supposed to do a shutdown immediate of the database instance   ${ADMIN_DIR}/stop_asm_immediate.sh -- This script is supposed to do a shutdown immediate of the ASM instance   exit 0     fi   Hope it helps someone! --L


  • BizTalk server problem

    - by WtFudgE
Hi, we have a BizTalk server (a virtual one (1!)...) at our company, and an SQL server where the data is kept. Now we have a lot of data traffic. I'm talking about hundreds of thousands. So I'm actually not even sure if one server is safe enough, but our company is not that easy to convince. Now recently we have a lot of problems. Allow me to lay out the situation in detail, so I'm not missing anything: Our server has 5 applications: One with 3 orchestrations, 12 send ports, 16 receive locations. One with 4 orchestrations, 32 send ports, 20 receive locations. One with 4 orchestrations, 24 send ports, 20 receive locations. One with 47 (yes 47) orchestrations, 37 send ports, 6 receive locations. One common application with a couple of resources. Our problems have occurred since we deployed the application with the 47 orchestrations. A lot of these orchestrations use assign shapes which use C# code to do the mapping. This is because we use HL7 extensions and this is kind of special, so by using C# code & XPath it was a lot easier to do the mapping because a lot of these schemas look alike. The C# reads in XmlNodes received through XPath, and returns XmlNodes which are then assigned again to BizTalk messages. I'm not sure if this could be the cause, but I thought I'd mention it. The send and receive ports have a lot of different types: File, MQSeries, SQL, MLLP, FTP. Each of these types has a different host instance, to balance out the load. Our orchestrations use the BiztalkApplication host. On this server a couple of scripts are also running, mostly FTP upload scripts and also a zipper script, which zips files every half an hour into a daily zip and deletes the zip files after a month. We use this zip script on our backup files (we back up a lot, and backups are also on our server); we did this because the server had problems with sending files to a location where there were a lot (A LOT) of files, so after the files were reduced to zips it went better. Now the problems we are having recently are mainly two major problems: Our most important problem is the following. We kept a receive location with a lot of messages on a queue for testing. After we start this receive location which uses the 47 orchestrations, the running service instances start to skyrocket. OK, this is pretty normal. Let's say about 10000, and then we stop the receive location to see how BizTalk handles these 10000 instances. Normally they would go down pretty fast, and it does sometimes, but after a while it starts to "throttle", meaning they just stop being processed and the service instances stay at the same number; for example, in 30 seconds it goes down from 10000 to 4000 and then it stays at 4000 and it lowers very very very slowly, like 30 in 5 minutes or something. So this means that all the other service instances of the other applications are also stuck in here, and they are also not processed. We noticed that after restarting our host instances the instance number went down fast again. So we tried to selectively restart different host instances to locate the problem. We noticed that eventually restarting the file send/receive host instance would do the trick. So we thought file sends would be the problem. Considering that we make a lot of backups, we replaced the file type backups with MQSeries backups. The same problem occurred, and funny thing, restarting the file send/receive host still fixes the problem. No errors can be found in the event viewer either. A second problem we're having is the following.
Sometimes at around 6 am, all or some of the host instances are stopped. In the event viewer we noticed the following errors (there are more than one): The receive location "MdnBericht SQL" with URL "SQL://ZNACDBPEG/mdnd0001/" is shutting down. Details:"The error threshold has been exceeded. The receive location is shutting down.". The Messaging Engine failed to add a receive location "M2m Othello Export Start Bestand" with URL "\m2mservices\Othello_import$\DataFilter Start*.xml" to the adapter "FILE". Reason: "The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password. ". The FILE adapter cannot access the folder \m2mservices\Othello_import$\DataFilter Start. Verify this folder exists. Error: Logon failure: unknown user name or bad password. An attempt to connect to "BizTalkMsgBoxDb" SQL Server database on server "ZNACDBBTS" failed. Error: "Login failed for user ''. The user is not associated with a trusted SQL Server connection." It would seem that there's a login failure at this time and that because of it other services are also experiencing problems, and eventually they are shut down. The thing is, our user is admin, and it's impossible that its password is wrong "sometimes". We have considered that the problem could be due to an infrastructure problem, but that's not really our department. I know it's a long post, but we're not sure anymore what to do. Would adding another server and balancing the load solve our problems? Is there a way to measure our load balance and know where to start splitting? What are normal load numbers, etc.? I appreciate any answers because these issues are getting worse and we're also on a deadline. Thanks a lot for the replies!


  • Install Control Center Agent on Oracle Application Server

    - by qianqian.wu
Control Center Agent (CCA) The Control Center Agent is the OWB component that runs the Template Mappings in the Oracle Containers for J2EE (OC4J) server; also referred to as the J2EE Runtime. The Control Center Agent provides a Java-based runtime environment that can be installed on Oracle and non-Oracle database hosts. The Control Center Agent provides fundamental infrastructure for the heterogeneous, Code Template-based mapping support and Web services-related features of OWB in this release. In Oracle Warehouse Builder 11gR2 the Control Center Agent, by default, runs in the built-in OC4J that is bundled in the Oracle Home. Besides that, you also have the ability to install the Control Center Agent in an Oracle Application Server install. In this article, you will find step-by-step instructions on how to install the Control Center Agent on an Oracle Application Server instance. The instructions cover the following tasks: Task 1: Install and Configure the Application Server Task 2: Deploy the Control Center Agent to the Application Server Task 3: Optional Configuration Tasks   Task 1: Install and Configure the Application Server Before configuring the Application Server, you need to install it from the Oracle Application Server CD-ROM, or by downloading the installation program from Oracle Technology Network (OTN). Once the installation is completed, you are ready to configure the Application Server. The purpose of the configuration task is to make sure the Control Center Agent ear file can be deployed and runs in the Application Server successfully. The essential configuration tasks are outlined below: · Modify the OC4J Startup Script · Set up Control Center Agent Server Side Logging · Set up Audit Table Data Source · Copy ct_permissions.properties File · Set up Security Roles for Control Center Agent · Create JMS Queues · Install JDBC Drivers to OC4J Modify the OC4J Startup Script The OC4J startup script “opmn.xml” is located in the Application Server configuration directory, $AS_HOME/opmn/conf. $AS_HOME stands for the root home directory of the application server. Open the file opmn.xml in a text editor, and alter the contents of the file so that: · MaxPermSize is set to 128M. This ensures that you allocate enough PermGen space to OC4J to run the Control Center Agent and will prevent java.lang.OutOfMemoryError when running the agent. · The python.path property sets the path for the Python library files used by the Control Center Agent: jython_lib.zip and jython_owblib.jar. These two files are in the $OWB_HOME/owb/lib/int directory, where $OWB_HOME is the directory where OWB is installed. · The km_security_needed property determines whether restrictions will be applied to the kinds of operating system commands allowed to be executed by the OWB Code Template scripts executed by the Control Center Agent. Setting km_security_needed to “true” enforces such restrictions, while setting it to “false” removes them. Set up Control Center Agent Server Side Logging Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file j2ee-logging.xml in a text editor and add a handler to the log handler section; jrt-internal-log-handler is the handler used by the Control Center Agent runtime logger to create log files. Then add an entry to the loggers section to create the logger for Control Center Agent runtime auditing.
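As a rough illustration of the two files discussed above, the relevant fragments might look something like the following. The -D property names, handler class, log path, and logger name shown here are assumptions for illustration only, not values taken from this article, so check them against your own OC4J and OWB installation.

<!-- opmn.xml (sketch): start parameters for the OC4J instance that runs the Control Center Agent.
     The property names and paths are illustrative assumptions. -->
<data id="java-options"
      value="-server -XX:MaxPermSize=128M
             -Dpython.path=$OWB_HOME/owb/lib/int/jython_lib.zip:$OWB_HOME/owb/lib/int/jython_owblib.jar
             -Dkm_security_needed=true"/>

<!-- j2ee-logging.xml (sketch): log handler and runtime audit logger.
     The handler class, path, and logger name are illustrative assumptions. -->
<log_handler name="jrt-internal-log-handler"
             class="oracle.core.ojdl.logging.ODLHandlerFactory">
  <property name="path" value="../log/jrt"/>
  <property name="maxFileSize" value="10485760"/>
  <property name="maxLogSize" value="104857600"/>
  <property name="encoding" value="UTF-8"/>
</log_handler>
<logger name="oracle.wh.jrt.audit" level="NOTIFICATION:1" useParentHandlers="false">
  <handler name="jrt-internal-log-handler"/>
</logger>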
Set up Audit Table Data Source To enable Audit Table logging, a managed data source and connection pool need to be set up before Control Center Agent deployment. Ensure that you are in the Application Server configuration directory, $AS_HOME/j2ee/home/config. Open the file data-sources.xml in a text editor. Define the audit data source shown below in the file, <managed-data-source name="AuditDS" connection-pool-name="OWBSYS Audit   Connection Pool" jndi-name="jdbc/AuditDS"/> <connection-pool name="OWBSYS Audit Connection Pool">   <connection-factory factory-class="oracle.jdbc.pool.OracleDataSource"     user="owbsys_audit" password="owbsys_audit"     url="jdbc:oracle:thin:@//localhost:1521/ORCL"/> </connection-pool> Copy ct_permissions.properties File The ct_permissions.properties can be obtained from $OWB_HOME /owb/jrt/config/ directory. You need to copy the file to $AS_HOME/j2ee/home/config directory.This properties file takes effect when the setting km-security is set to true in Control Center Agent. By default the ALLOWED_CMD is commented out in ct_permissions.properties file. This prevents all system command from being invoked from scripts executed in Control Center Agent (when km-security is set to true). To allow certain system commands to be invoked, ALLOWED_CMD needs to be uncommented out, and the system commands (allowed to be invoked) need to be added to the ALLOWED_CMD. Set up Security Roles for Control Center Agent You can set up the Control Center Agent security roles through Oracle Enterprise Manager. In a web browser, navigate to Enterprise Manager Homepage (e.g. http://hostname:8889/em). 1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Security. Click the task icon next to Security Providers. 2. On Security Providers page click on the button “Instance Level Security”. On Instance Level Security page, go to “Realms” tab. You will see a row for the default realm “jazn.com” in the results table. It has a “Roles” column and a “Users” column. Click on the number in “Roles” column. In the “Roles” page it will display all the roles available for the realm. Click on “Create” button to create a new role “OWB_J2EE_ EXECUTOR”. 3. On the Add Role screen, enter Name OWB_J2EE_EXECUTOR, and click OK. 4. Follow the same steps as before, and create a new role “OWB_J2EE_OPERATOR”. 5. Assign role “oc4j-administrators” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_OPERATOR” by moving these roles from “Available Roles” and click “OK” to save. 6. Go back to Instance Level Security page and create a new role “OWB_J2EE_ADMINISTRATOR”. 7. Assign roles “OWB_J2EE_ OPERATOR” and “OWB_J2EE_EXECUTOR” to the role “OWB_J2EE_ ADMINISTRATOR” by moving these roles from “Available Roles” and click “OK” to save. 8.Go back to Instance Level Security page. This time, click on the number in “Users” column for the realm “jazn.com”. In the “Users” page, it shows all the users defined for this realm. Locate the user “oc4jadmin” in the results table and click on it. 9. Assign the roles “OWB_J2EE_ADMINISTRATOR” and “oc4j-app-administrators” to this user by moving the role from the “Available Roles” selection box to “Selected Roles” box and click “Apply” to save. 10. Go back to Instance Level Security page and create a new role “OWB_INTERNAL_USERS”, assign no user or role to this role. 
Simply click “OK” to create this role. Now you have finished creating the security roles required for Control Center Agent. Create JMS Queues You need to create two JMS queues for Control Center Agent: owbQueue and abort_owbQueue. 1. Now go to OC4J home Page. On the OC4J home screen, click the Administration tab. On the Administration Tasks screen, expand Services and then expand Enterprise Messaging Service. Click the task icon next to JMS Destinations. 2. On JMS Destinations page, click “Create New” button to create a new JMS queue. On Add Destination page, choose “Queue” as Destination Type. Put “owbQueue” as Destination Name. Select “In Memory Persistence Only” as the Persistence Type and put “jms/owbQueue” as JNDI Location and click on “OK” to finish. 3. Follow the same instruction as above to create the owb_abortQueue. Now you have finished creating the JMS queues required for Control Center Agent. Install JDBC Drivers to OC4J In order to execute Code Templates using commercial databases other than Oracle, e.g. DB2, SQL Server etc, the corresponding jdbc driver files need to be added to $AS_HOME/j2ee/home/applib directory. 1. To install other JDBC drivers to OC4J, first obtain the .jar file containing the JDBC driver. All the external JDBC drivers .jar files can be found in the directory: $OWB_HOME/owb/lib/ext/. For DB2, the files needed are db2jcc.jar and db2jcc_license_cu.jar. For SQL Server the file is sqljdbc.jar. For sunopsis JDBC drivers, the file needed is snpsxmlo.jar. 2. Copy the required JDBC driver file into the directory $AS_HOME/j2ee/home/applib. Now you have finished the Application Server configuration. To make the configuration to take an effect, you need to restart the Application Server.   Task 2: Deploy the Control Center Agent to the Application Server Now you can deploy the Control Center Agent to the Application Server. In a web browser, navigate to Enterprise Manager Homepage (e.g. http://hostname:8889/em). 1. Log in using the oc4jadmin credentials. After the Cluster Topology page is loaded, click home (the OC4J instance). This takes you to the home page of the OC4J instance. On the OC4J home screen, click the Applications tab. Click Deploy to begin deploying Control Center Agent. 2. On the Deploy: Select Archive screen, under Archive, select Archive is present on local host. Upload the archive to the server where Application Server Control is running. Click Browse and locate the jrt.ear file in the $OWB_HOME/owb/jrt/applications directory. Under Deployment Plan, select Automatically create a new deployment plan. Click Next. 3. Wait for the ear file to be uploaded to Application Server. On the Deploy: Application Attributes screen, enter Application Name jrt, and Context Root jrt. Leave the other attributes at their default values. Click Next. 4. On Deploy: Deployment Settings screen, leave all attributes at their default values, and click Deploy. This will take about 1 minute or so and when the application is deployed successfully, a confirmation message will be displayed. Now the Control Center Agent is started automatically. Go back to OC4J home page and click on Applications tab to make sure the deployed application jrt is showing in the applications list.   Task 3: Optional Configuration Tasks The optional configuration tasks contain: · Secure Control Center Agent Web Service · Setting the PATH Environment Variable Secure Control Center Agent Web Service If you want to use JRTWebService with a secure website, you need to do the following steps, 1. 
Create a file “secure-web-site.xml” in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. A rough sketch of secure-web-site.xml appears at the end of this article. We need to modify the “protocol” to “https”, change “secure” to “true”, and also choose a port as the secure HTTP port. We also need to add the “ssl-config” entry in the file. Remember to use the absolute path for the key store file. 2. Modify the file “server.xml” that is located in the $AS_HOME/j2ee/home/config directory. Then add the <web-site> element in the file for the secure-web-site. 3. Create a key store file “serverkeystore.jks” in the $AS_HOME/j2ee/home/config directory. The file can be obtained from the $OWB_HOME/owb/jrt/config directory. After the three files are altered, restart the application server. Now you can access the JRTWebService over SSL through https://hostname:4443/jrt/webservice. Setting the PATH Environment Variable Sometimes system commands such as the Linux ls, sh, etc. cannot be executed successfully during script execution because they are not found in PATH. To ensure they work normally, you can set up the PATH environment variable. Let’s navigate to the Enterprise Manager Homepage. 1. Go to the OC4J home screen and click the Administration tab. Expand Administration Tasks, then expand Properties. Click the task icon next to Server Properties. 2. On the Server Properties screen, scroll down to the Environment Variables section. Under Environment Variables, click Add Another Row. Enter PATH in Name, and fill Value with the directories that contain the system commands. Click Apply.   After you work through this article, I believe you have developed a deeper understanding of the Control Center Agent installation process, and you can apply this knowledge in other installation plans, such as a Control Center Agent installation on Standalone OC4J.
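For reference, a minimal sketch of the secure-web-site.xml described in step 1 above could look like the following. The port, web-app names, keystore path, and password are illustrative assumptions only; the matching server.xml change would typically just add a <web-site path="./secure-web-site.xml"/> element alongside the existing web-site entries.

<!-- secure-web-site.xml (sketch): an HTTPS web site definition for OC4J.
     Port, web-app names, keystore location, and password are illustrative assumptions. -->
<web-site port="4443" protocol="https" secure="true" display-name="OC4J Secure Web Site">
  <default-web-app application="default" name="defaultWebApp"/>
  <web-app application="jrt" name="jrt" root="/jrt"/>
  <access-log path="../log/secure-web-site.log"/>
  <ssl-config keystore="/u01/app/oracle/as/j2ee/home/config/serverkeystore.jks"
              keystore-password="change_me"/>
</web-site>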

