Search Results

Search found 34595 results on 1384 pages for 'vendor id'.


  • AGENT: The World's Smartest Watch

    - by Rob Chartier
    AGENT: The World's Smartest Watch by Secret Labs + House of Horology Disclaimer: Most if not all of this content has been gleaned from the comments on the Kickstarter project page and comments section. Any discrepancies between this post and any documentation on agentwatches.com, kickstarter.com, etc.., those official sites take precedence. Overview The next generation smartwatch with brand-new technology. World-class developer tools, unparalleled battery life, Qi wireless charging. Kickstarter Page, Comments Funding period : May 21, 2013 - Jun 20, 2013 MSRP : $249 Other Urls http://www.agentwatches.com/ https://www.facebook.com/agentwatches http://twitter.com/agentwatches http://pinterest.com/agentwatches/ http://paper.li/robchartier/1371234640 Developer Story The first official launch of the preview SDK and emulator will happen on 20-Jun-2013.  All development will be done in Visual Studio 2012, using the .NET Micro Framework SDK 2.3.  The SDK will ship with the first round of the expected API for developers along with an emulator. With that said, there is no need to wait for the SDK.  You can download the tooling now and get started with Apps and Faces immediately.  The only thing that you will not be able to work with is the API; but for example, watch faces, you can start building the basic face rendering with the Bitmap graphics drawing in the .NET Micro Framework.   Does it look good? Before we dig into any more of the gory details, here are a few photos of the current available prototype models.   The watch on the tiny QI Charter   If you wander too far away from your phone, your watch will let you know with a vibration and a message, all but one button will dismiss the message.   An app showing the premium weather data!   Nice stitching on the straps, leather and silicon will be available, along with a few lengths to choose from (short, regular, long lengths). On to those gory details…. Hardware Specs Processor 120MHz ARM Cortex-M4 processor (ATSAM4SD32) with secondary AVR co-processor Flash & RAM 2MB of onboard flash and 160KB of RAM 1/4 of the onboard flash will be used by the OS The flash is permanent (non-volatile) storage. Bluetooth Bluetooth 4.0 BD/EDR + LE Bluetooth 4.0 is backwards compatible with Bluetooth 2.1, so classic Bluetooth functions (BD/EDR, SPP/AVRCP/PBAP/etc.) will work fine. Sensors 3D Accelerometer (Motion) ST LSM303DLHC Ambient Light Sensor Hardware power metering Vibration Motor (You can pulse it to create vibration patterns, not sure about the vibration strength - driven with PWM) No piezo/speaker or microphone. Other QI Wireless Charging, no NFC, no wall adapter included Custom LED Backlight No GPS in the watch. It uses the GPS in your phone. AGENT watch apps are deployed and debugged wirelessly from your PC via Bluetooth. RoHS, Pb-free Battery Expected to use a CR2430-sized rechargeable battery – replaceable (Mouser, Amazon) Estimated charging time from empty is 2 hours with provided charger 7 Days typical with Bluetooth on, 30 days with Bluetooth off (watch-face only mode) The battery should last at least 2 years, with 100s of charge cycles. Physical dimensions Roughly 38mm top-to-bottom on the front face 35mm left-to-right on the front face and around 12mm in depth 22mm strap Two ~1/16" hex screws to attach the watch pin The top watchcase material candidates are PVD stainless steel, brushed matte ceramic, and high-quality polycarbonate (TBD). 
The glass lens is mineral glass, Anti-glare glass lens Strap options Leather and silicon straps will be available Expected to have three sizes Display 1.28" Sharp Memory Display The display stays on 100% of the time. Dimensions: 128x128 pixels Buttons Custom "Pusher" buttons, they will not make noise like a mouse click, and are very durable. The top-left button activates the backlight; bottom-left changes apps; three buttons on the right are up/select/down and can be used for custom purposes by apps. Backup reset procedure is currently activated by holding the home/menu button and the top-right user button for about ten seconds Device Support Android 2.3 or newer iPhone 4S or newer Windows Phone 8 or newer Heart Rate monitors - Bluetooth SPP or Bluetooth LE (GATT) is what you'll want the heart monitor to support. Almost limitless Bluetooth device support! Internationalization & Localization Full UTF8 Support from the ground up. AGENT's user interface is in English. Your content (caller ID, music tracks, notifications) will be in your native language. We have a plan to cover most major character sets, with Latin characters pre-loaded on the watch. Simplified Chinese will be available Feature overview Phone lost alert Caller ID Music Control (possible volume control) Wireless Charging Timer Stopwatch Vibrating Alarm (possibly custom vibrations for caller id) A few default watch faces Airplane mode (by demand or low power) Can be turned off completely Customizable 3rd party watch faces, applications which can be loaded over bluetooth. Sample apps that maybe installed Weather Sample Apps not installed Exercise App Other Possible Skype integration over Bluetooth. They will provide an AGENT app for your smartphone (iPhone, Android, Windows Phone). You'll be able to use it to load apps onto the watch.. You will be able to cancel phone calls. With compatible phones you can also answer, end, etc. They are adopting the standard hands-free profile to provide these features and caller ID.
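The Kickstarter updates describe building watch faces with the .NET Micro Framework's bitmap drawing before the official SDK lands. As a feel for what that looks like, here is a minimal sketch that draws the current time into a 128x128 bitmap using the Microsoft.SPOT Bitmap drawing API; the class name, coordinates, and the caller-supplied font are illustrative assumptions, not part of the AGENT SDK.

    using System;
    using Microsoft.SPOT;
    using Microsoft.SPOT.Presentation.Media;

    // Hypothetical watch-face sketch (not from the AGENT SDK): renders the
    // current time into an off-screen bitmap sized for the 128x128 display.
    public static class SimpleFace
    {
        public static Bitmap Render(DateTime now, Font font)
        {
            var face = new Bitmap(128, 128);
            face.Clear();

            // Build "HH:mm" by hand rather than relying on custom format strings.
            string time = now.Hour.ToString() + ":" +
                          (now.Minute < 10 ? "0" : "") + now.Minute.ToString();
            face.DrawText(time, font, Color.White, 28, 52);

            // A simple divider line under the time.
            face.DrawLine(Color.White, 1, 10, 90, 118, 90);
            return face;
        }
    }

On hardware or in the emulator the returned bitmap would then be flushed to the display; the point is only that the drawing surface is a plain Bitmap, so face layout can be prototyped before the watch-specific API arrives.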

    Read the article

  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
Running the application will give us the result shown in the screenshot (with a tooltip example). That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy! Additional Resources: USGS Earthquake Data Feeds; Brad Abrams shows how RIA Services and MVVM can work together.

    Read the article

  • Netflix, jQuery, JSONP, and OData

    - by Stephen Walther
    At the last MIX conference, Netflix announced that they are exposing their catalog of movie information using the OData protocol. This is great news! This means that you can take advantage of all of the advanced OData querying features against a live database of Netflix movies. In this blog entry, I’ll demonstrate how you can use Netflix, jQuery, JSONP, and OData to create a simple movie lookup form. The form enables you to enter a movie title, or part of a movie title, and display a list of matching movies. For example, Figure 1 illustrates the movies displayed when you enter the value robot into the lookup form.   Using the Netflix OData Catalog API You can learn about the Netflix OData Catalog API at the following website: http://developer.netflix.com/docs/oData_Catalog The nice thing about this website is that it provides plenty of samples. It also has a good general reference for OData. For example, the website includes a list of OData filter operators and functions. The Netflix Catalog API exposes 4 top-level resources: Titles – A database of Movie information including interesting movie properties such as synopsis, BoxArt, and Cast. People – A database of people information including interesting information such as Awards, TitlesDirected, and TitlesActedIn. Languages – Enables you to get title information in different languages. Genres – Enables you to get title information for specific movie genres. OData is REST based. This means that you can perform queries by putting together the right URL. For example, if you want to get a list of the movies that were released after 2010 and that had an average rating greater than 4, then you can enter the following URL in the address bar of your browser: http://odata.netflix.com/Catalog/Titles?$filter=ReleaseYear gt 2010 and AverageRating gt 4 Entering this URL returns the movies in Figure 2. Creating the Movie Lookup Form The complete code for the Movie Lookup form is contained in Listing 1. 
Listing 1 – MovieLookup.htm <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Netflix with jQuery</title> <style type="text/css"> #movieTemplateContainer div { width:400px; padding: 10px; margin: 10px; border: black solid 1px; } </style> <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> </head> <body> <label>Search Movies:</label> <input id="movieName" size="50" /> <button id="btnLookup">Lookup</button> <div id="movieTemplateContainer"></div> <script id="movieTemplate" type="text/html"> <div> <img src="<%=BoxArtSmallUrl %>" /> <strong><%=Name%></strong> <p> <%=Synopsis %> </p> </div> </script> <script type="text/javascript"> $("#btnLookup").click(function () { // Build OData query var movieName = $("#movieName").val(); var query = "http://odata.netflix.com/Catalog" // netflix base url + "/Titles" // top-level resource + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name + "&$callback=callback" // jsonp request + "&$format=json"; // json request // Make JSONP call to Netflix $.ajax({ dataType: "jsonp", url: query, jsonpCallback: "callback", success: callback }); }); function callback(result) { // unwrap result var movies = result["d"]["results"]; // show movies in template var showMovie = tmpl("movieTemplate"); var html = ""; for (var i = 0; i < movies.length; i++) { // flatten movie movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl; // render with template html += showMovie(movies[i]); } $("#movieTemplateContainer").html(html); } </script> </body> </html> The HTML page in Listing 1 includes two JavaScript libraries: <script src="http://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.js" type="text/javascript"></script> <script src="App_Scripts/Microtemplates.js" type="text/javascript"></script> The first script tag retrieves jQuery from the Microsoft Ajax CDN. You can learn more about the Microsoft Ajax CDN by visiting the following website: http://www.asp.net/ajaxLibrary/cdn.ashx The second script tag is used to reference Resig’s micro-templating library. Because I want to use a template to display each movie, I need this library: http://ejohn.org/blog/javascript-micro-templating/ When you enter a value into the Search Movies input field and click the button, the following JavaScript code is executed: // Build OData query var movieName = $("#movieName").val(); var query = "http://odata.netflix.com/Catalog" // netflix base url + "/Titles" // top-level resource + "?$filter=substringof('" + escape(movieName) + "',Name)" // filter by movie name + "&$callback=callback" // jsonp request + "&$format=json"; // json request // Make JSONP call to Netflix $.ajax({ dataType: "jsonp", url: query, jsonpCallback: "callback", success: callback }); This code Is used to build a query that will be executed against the Netflix Catalog API. For example, if you enter the search phrase King Kong then the following URL is created: http://odata.netflix.com/Catalog/Titles?$filter=substringof(‘King%20Kong’,Name)&$callback=callback&$format=json This query includes the following parameters: $filter – You assign a filter expression to this parameter to filter the movie results. $callback – You assign the name of a JavaScript callback method to this parameter. OData calls this method to return the movie results. 
$format – you assign either the value json or xml to this parameter to specify how the format of the movie results. Notice that all of the OData parameters -- $filter, $callback, $format -- start with a dollar sign $. The Movie Lookup form uses JSONP to retrieve data across the Internet. Because WCF Data Services supports JSONP, and Netflix uses WCF Data Services to expose movies using the OData protocol, you can use JSONP when interacting with the Netflix Catalog API. To learn more about using JSONP with OData, see Pablo Castro’s blog: http://blogs.msdn.com/pablo/archive/2009/02/25/adding-support-for-jsonp-and-url-controlled-format-to-ado-net-data-services.aspx The actual JSONP call is performed by calling the $.ajax() method. When this call successfully completes, the JavaScript callback() method is called. The callback() method looks like this: function callback(result) { // unwrap result var movies = result["d"]["results"]; // show movies in template var showMovie = tmpl("movieTemplate"); var html = ""; for (var i = 0; i < movies.length; i++) { // flatten movie movies[i].BoxArtSmallUrl = movies[i].BoxArt.SmallUrl; // render with template html += showMovie(movies[i]); } $("#movieTemplateContainer").html(html); } The movie results from Netflix are passed to the callback method. The callback method takes advantage of Resig’s micro-templating library to display each of the movie results. A template used to display each movie is passed to the tmpl() method. The movie template looks like this: <script id="movieTemplate" type="text/html"> <div> <img src="<%=BoxArtSmallUrl %>" /> <strong><%=Name%></strong> <p> <%=Synopsis %> </p> </div> </script>   This template looks like a server-side ASP.NET template. However, the template is rendered in the client (browser) instead of the server. Summary The goal of this blog entry was to demonstrate how well jQuery works with OData. We managed to use a number of interesting open-source libraries and open protocols while building the Movie Lookup form including jQuery, JSONP, JSON, and OData.

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes. Summary for lazy readers: Visual Studio 2013 is now available for download (on the Visual Studio site and on MSDN subscriber downloads); Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch; Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0; the new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application); new core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system; Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc. Top links: Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext; ASP.NET and Web Tools for Visual Studio 2013 Release Notes; short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen; Announcing release of ASP.NET and Web Tools for Visual Studio 2013 post on the official .NET Web Development and Tools Blog; Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework. Okay, for those of you who are still with me, let's dig in a bit. Quick web dev notes on downloading and installing Visual Studio 2013 I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported.  I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing. Visual Studio 2013 development settings and color theme When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset. Web Development settings Web Development (code only) settings Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later. Color theme Sigh. 
Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter. Signing in During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like  syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do. Overview of shiny new things in ASP.NET land There are a lot of good new things in ASP.NET. I'll list some of my favorite here, but you can read more on the ASP.NET site. One ASP.NET You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, add data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the Model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family. Authentication In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host) The default is Individual User Accounts: This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365. Identity There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensiblity. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features: There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works. It supports standard username / password as well as external authentication (OAuth, etc.). It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system. It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc. You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted. Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever). It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable. You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon). New Bootstrap based project templates The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits: It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu. The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern. Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc.). Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display: When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown: For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the boostrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme: Scaffolding The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon. Project Readme page This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop stuff Visual Studio specific HTML docs in the Visual Studio browser. ASP.NET MVC 5 The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC. Attribute routing ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used AttributeRouting to configure it to /spaghetti/with-nesting/where-is-waldo public class SampleController : Controller { [Route("spaghetti/with-nesting/where-is-waldo")] public string Hiding() { return "You found me!"; } } I enable that in my RouteConfig.cs, and I can use that in conjunction with my other MVC routes like this: public class RouteConfig { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapMvcAttributeRoutes(); routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } ); } } You can read more about Attribute Routing in ASP.NET MVC 5 here. Filter enhancements There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers. ASP.NET Web API 2 ASP.NET Web API 2 includes a lot of new features. Attribute Routing ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article. OAuth 2.0 ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications. OData Improvements ASP.NET Web API now has full OData support. That required adding in some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson. Lots more There's a huge list of other features, including CORS (cross-origin request sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes. OWIN and Katana I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interfaces for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. 
You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable. Visual Studio Web Tools Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time. Browser Link Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit. New HTML editor The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS Class and ID IntelliSense (so you type style="" and get a list of classes and ID's for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen. Lots more Visual Studio web dev features That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features. Lots, lots more Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!
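
The post shows attribute routing on the MVC side only; as a companion, here is a minimal hedged sketch of the same feature in ASP.NET Web API 2 (the route prefix, controller, and anonymous result below are illustrative, not taken from the article):

    using System.Web.Http;

    // Hypothetical Web API 2 controller using attribute routing; it takes effect
    // once config.MapHttpAttributeRoutes() is called in WebApiConfig.Register.
    [RoutePrefix("api/movies")]
    public class MoviesController : ApiController
    {
        // GET api/movies/5
        [Route("{id:int}")]
        public IHttpActionResult GetMovie(int id)
        {
            return Ok(new { Id = id, Title = "Sample title" });
        }
    }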

    Read the article

  • Batch Best Practices and Technical Best Practices Updated

    - by ACShorten
    The Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) and Technical Best Practices for Oracle Utilities Application Framework Based Products (Doc Id: 560367.1) have been updated with new and revised advice for the various versions of the Oracle Utilities Application Framework based products. These documents cover the following products: Oracle Utilities Customer Care And Billing (V2 and above); Oracle Utilities Meter Data Management (V2 and above); Oracle Utilities Mobile Workforce Management (V2 and above); Oracle Utilities Smart Grid Gateway (V2 and above) – all editions; Oracle Enterprise Taxation Management (all versions); Oracle Enterprise Taxation and Policy Management (all versions). Whilst there is new advice, some of which has been posted on this blog, a lot of sections have also been revised based upon feedback from customers, partners, consultants, our development teams and our hard-working Support personnel. All whitepapers are available from My Oracle Support.

    Read the article

  • Validating the SharePoint InputFormTextBox / RichText Editor using JavaScript

    - by Jignesh Gangajaliya
    In the previous post I described how to manipulate the SharePoint PeoplePicker control using JavaScript; in this post I will explain how to validate the InputFormTextBox control using JavaScript. Here is a nice post by Becky Isserman on why not to use RequiredFieldValidator or InputFormRequiredFieldValidator with InputFormTextBox. function ValidateComments() {     //retrieve the text from rich text editor.     var text = RTE_GetRichEditTextOnly("<%= rteComments.ClientID %>");     if (text != "")     {         return true;     }     else     {         alert('Please enter your comments.');         //set focus back to the rich text editor.         RTE_GiveEditorFocus("<%= rteComments.ClientID %>");         return false;     }     return true; } <SharePoint:InputFormTextBox ID="rteComments" runat="server" RichText="true" RichTextMode="Compatible" Rows="10" TextMode="MultiLine" CausesValidation="true" ></SharePoint:InputFormTextBox> <asp:Button ID="btnSubmit" runat="server" Text="Submit" OnClick="btnSubmit_Click" OnClientClick="return ValidateComments()" CausesValidation="true" /> - Jignesh
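
    The markup wires the button to a btnSubmit_Click handler that the post does not show. Below is a minimal sketch of what that code-behind might look like, re-checking the value on the server as well; the handler body is an illustrative assumption, and note that the rich text value can contain markup even when it looks empty, which is exactly why the client-side check uses RTE_GetRichEditTextOnly.

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            // Hypothetical server-side counterpart to ValidateComments():
            // never rely on client-side validation alone.
            string comments = rteComments.Text ?? string.Empty;
            if (comments.Trim().Length == 0)
            {
                return; // nothing to save
            }

            // ... persist the comments (e.g. to a list item field) here ...
        }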

    Read the article

  • How to use jQuery Date Range Picker plugin in asp.net

    - by alaa9jo
I stepped by this page: http://www.filamentgroup.com/lab/date_range_picker_using_jquery_ui_16_and_jquery_ui_css_framework/ and let me tell you, this is one of the best and coolest date range pickers on the web, in my opinion; they did a great job extending the original jQuery UI DatePicker. Of course, I made enhancements to the original plugin (fixed a few bugs) and added a new option (Clear) to clear the textbox. In this article I will use that updated plugin and show you how to use it in ASP.NET; you will definitely like it. So, what do I need? 1- jQuery library: you can use 1.3.2 or 1.4.2, which is the latest version so far; in my article I will use the latest version. 2- jQuery UI library (1.8): as I mentioned earlier, the daterangepicker plugin is based on the original jQuery UI DatePicker, so that library should be included in your page. 3- jQuery DateRangePicker plugin: you can go to the author's page or use the modified one (it's included in the attachment); in this article I will use the modified one. 4- Visual Studio 2005 or later: very funny :D, in my article I will use VS 2008. Note: in the attachment, I included all CSS and JS files, so don't worry. How to use it? First, you will have to include all of the CSS and JS files in your page like this: <script src="Scripts/jquery-1.4.2.min.js" type="text/javascript"></script> <script src="Scripts/jquery-ui-1.8.custom.min.js" type="text/javascript"></script> <script src="Scripts/daterangepicker.jQuery.js" type="text/javascript"></script> <link href="CSS/redmond/jquery-ui-1.8.custom.css" rel="stylesheet" type="text/css" /> <link href="CSS/ui.daterangepicker.css" rel="stylesheet" type="text/css" /> <style type="text/css"> .ui-daterangepicker { font-size: 10px; } </style> Then add this HTML: <asp:TextBox ID="TextBox1" runat="server" Font-Size="10px"></asp:TextBox><asp:Button ID="SubmitButton" runat="server" Text="Submit" OnClick="SubmitButton_Click" /> <span>First Date:</span><asp:Label ID="FirstDate" runat="server"></asp:Label> <span>Second Date:</span><asp:Label ID="SecondDate" runat="server"></asp:Label> As you can see, it includes TextBox1, which we are going to attach the daterangepicker to, and two labels that I will use later in code to show how to read the dates from the textbox and set them on the labels. Now we have to attach the daterangepicker to the textbox using jQuery (Note: visit the author's website for more info on the daterangepicker's options and how to use them): <script type="text/javascript"> $(function() { $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly"); $("#<%= TextBox1.ClientID %>").attr("unselectable", "on"); $("#<%= TextBox1.ClientID %>").daterangepicker({ presetRanges: [], arrows: true, dateFormat: 'd M, yy', clearValue: '', datepickerOptions: { changeMonth: true, changeYear: true} }); }); </script> Finally, add this C# code: protected void SubmitButton_Click(object sender, EventArgs e) { if (TextBox1.Text.Trim().Length == 0) { return; } string selectedDate = TextBox1.Text; if (selectedDate.Contains("-")) { DateTime startDate; DateTime endDate; string[] splittedDates = selectedDate.Split("-".ToCharArray(), StringSplitOptions.RemoveEmptyEntries); if (splittedDates.Count() == 2 && DateTime.TryParse(splittedDates[0], out startDate) && DateTime.TryParse(splittedDates[1], out endDate)) { FirstDate.Text = startDate.ToShortDateString(); SecondDate.Text = endDate.ToShortDateString(); } else { //maybe the client has modified/altered the input, i.e. hacking tools } } else { DateTime selectedDateObj; if (DateTime.TryParse(selectedDate, out selectedDateObj)) { FirstDate.Text = selectedDateObj.ToShortDateString(); SecondDate.Text = string.Empty; } else { //maybe the client has modified/altered the input, i.e. hacking tools } } } This is how you read the dates from the textbox - that's it! FAQ: 1- Why did you add this code?: <style type="text/css"> .ui-daterangepicker { font-size: 10px; } </style> A: For two reasons: 1) To show the daterangepicker at a smaller size, because its original size is huge. 2) To show you how to control its size. 2- Can I change the theme? A: Yes, you can. You will notice that I'm using the Redmond theme, which you can find on the jQuery UI website; visit their website and download a different theme. You may also have to make modifications to the daterangepicker's CSS - it's all yours. 3- Why did you add a font size to the textbox? A: To make the design look better; try removing it and see for yourself. 4- Can I register the script in code-behind? A: Yes, you can (see the code-behind sketch below). 5- I see you have added these two lines, what do they do? $("#<%= TextBox1.ClientID %>").attr("readonly", "readonly"); $("#<%= TextBox1.ClientID %>").attr("unselectable", "on"); A: The first line makes the textbox not editable by the user; the second blocks the blinking typing cursor from appearing if the user clicks on the textbox. You will notice that both lines need to be used together; you can't just use one of them... for logical reasons, of course. Finally, I hope everyone liked the article. As always, your feedback is always welcome, and if anyone has any suggestions or has made any modifications that might be useful for anyone else, then please post them at the author's website and post a reference to your post here.
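As a follow-up to FAQ item 4, here is one way the same setup could be registered from code-behind instead of the inline script block. This is a sketch of mine rather than the author's code; it assumes the same TextBox1 control and the same picker options shown above.

// Possible code-behind registration of the daterangepicker setup (not from the original article).
protected void Page_Load(object sender, EventArgs e)
{
    string script =
        "$(function() {" +
        "  $('#" + TextBox1.ClientID + "').attr('readonly', 'readonly');" +
        "  $('#" + TextBox1.ClientID + "').attr('unselectable', 'on');" +
        "  $('#" + TextBox1.ClientID + "').daterangepicker({" +
        "    presetRanges: [], arrows: true, dateFormat: 'd M, yy', clearValue: ''," +
        "    datepickerOptions: { changeMonth: true, changeYear: true }" +
        "  });" +
        "});";

    // The final 'true' asks ASP.NET to wrap the script in <script> tags for us.
    ClientScript.RegisterStartupScript(this.GetType(), "DateRangePickerInit", script, true);
}

If you register the script this way, the inline <script> block shown earlier can be removed.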

    Read the article

  • Low graphics performance with Intel HD graphics

    - by neil
Hey, my laptop should be capable of running some games fine, but it doesn't. Examples are egoboo and tome. http://www.ebuyer.com/product/237739 is my laptop. I tried the gears test and I only get 60 FPS; on IRC they said that's a big issue and that I should try the forums. I am using Ubuntu 11.04 and was told I should have the newest drivers. neil@neil-K52F:~$ /usr/lib/nux/unity_support_test --print OpenGL vendor string: Tungsten Graphics, Inc OpenGL renderer string: Mesa DRI Intel(R) Ironlake Mobile GEM 20100330 DEVELOPMENT OpenGL version string: 2.1 Mesa 7.10.2 Not software rendered: yes Not blacklisted: yes GLX fbconfig: yes GLX texture from pixmap: yes GL npot or rect textures: yes GL vertex program: yes GL fragment program: yes GL vertex buffer object: yes GL framebuffer object: yes GL version is 1.4+: yes Unity supported: yes

    Read the article

  • SQL SERVER – Enumerations in Relational Database – Best Practice

    - by pinaldave
Marko Parkkola This article has been submitted by Marko Parkkola, Data systems designer at Saarionen Oy, Finland. Marko is an excellent developer and is always thinking at the next level. You can read his earlier comment, which created a very interesting discussion, here: SQL SERVER- IF EXISTS(Select null from table) vs IF EXISTS(Select 1 from table). I must express my special thanks to Marko for sending this best practice for Enumerations in Relational Database. He has written an excellent piece here, and comments are welcome. Enumerations in Relational Database This is a very basic subject in relational databases, but it is often not very well understood and is sometimes badly implemented. There are of course many ways to do this, but I will concentrate on only two cases: one which is "the right way" and one which is definitely the wrong way. The concept Let's say we have a table Person in our database. Person has properties/fields like Firstname, Lastname, Birthday and so on. Then there's a field that tells the person's marital status; let's name it the same way: MaritalStatus. Now MaritalStatus is an enumeration. In C# I would definitely make it an enumeration with values like Single, InRelationship, Married, Divorced. Now here comes the problem: SQL doesn't have enumerations. The wrong way This is, in my opinion, absolutely the wrong way to do this. It has one upside though; you'll see the enumeration's description instantly when you do a simple SELECT query, and you don't have to deal with mysterious values. There are plenty of downsides too, and one of them is database fragmentation. Consider this (I've left all indexes and constraints out of the query on purpose). CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] NVARCHAR(10) ) You have an nvarchar(10) field in the table that tells the marital status. The obvious problem with this is: what if you create a new value which doesn't fit into 10 characters? You'll have to come back and alter the table. There are other problems too, but I'll leave those for the reader to think about. The correct way Here's how I've done this in many projects. This model still has one problem, but it can be alleviated in the application layer or with CHECK constraints if you like. First I will create a namespace table which tells the name of the enumeration. I will add one row to it too. I'll write all the indexes and constraints here too. CREATE TABLE [CodeNamespace] ( [Id] INT IDENTITY(1, 1), [Name] NVARCHAR(100) NOT NULL, CONSTRAINT [PK_CodeNamespace] PRIMARY KEY ([Id]), CONSTRAINT [IXQ_CodeNamespace_Name] UNIQUE NONCLUSTERED ([Name]) ) GO INSERT INTO [CodeNamespace] SELECT 'MaritalStatus' GO Then I create a table that holds the actual values and which references the namespace table in order to group the values under different namespaces. I'll add a couple of rows here too. CREATE TABLE [CodeValue] ( [CodeNamespaceId] INT NOT NULL, [Value] INT NOT NULL, [Description] NVARCHAR(100) NOT NULL, [OrderBy] INT, CONSTRAINT [PK_CodeValue] PRIMARY KEY CLUSTERED ([CodeNamespaceId], [Value]), CONSTRAINT [FK_CodeValue_CodeNamespace] FOREIGN KEY ([CodeNamespaceId]) REFERENCES [CodeNamespace] ([Id]) ) GO -- 1 is the 'MaritalStatus' namespace INSERT INTO [CodeValue] SELECT 1, 1, 'Single', 1 INSERT INTO [CodeValue] SELECT 1, 2, 'In relationship', 2 INSERT INTO [CodeValue] SELECT 1, 3, 'Married', 3 INSERT INTO [CodeValue] SELECT 1, 4, 'Divorced', 4 GO Now there are four columns in the CodeValue table.
CodeNamespaceId tells under which namespace values belongs to. Value tells the enumeration value which is used in Person table (I’ll show how this is done below). Description tells what the value means. You can use this, for example, column in UI’s combo box. OrderBy tells if the values needs to be ordered in some way when displayed in the UI. And here’s the Person table again now with correct columns. I’ll add one row here to show how enumerations are to be used. CREATE TABLE [dbo].[Person] ( [Firstname] NVARCHAR(100), [Lastname] NVARCHAR(100), [Birthday] datetime, [MaritalStatus] INT ) GO INSERT INTO [Person] SELECT 'Marko', 'Parkkola', '1977-03-04', 3 GO Now I said earlier that there is one problem with this. MaritalStatus column doesn’t have any database enforced relationship to the CodeValue table so you can enter any value you like into this field. I’ve solved this problem in the application layer by selecting all the values from the CodeValue table and put them into a combobox / dropdownlist (with Value field as value and Description as text) so the end user can’t enter any illegal values; and of course I’ll check the entered value in data access layer also. I said in the “The wrong way” section that there is one benefit to it. In fact, you can have the same benefit here by using a simple view, which I schema bound so you can even index it if you like. CREATE VIEW [dbo].[Person_v] WITH SCHEMABINDING AS SELECT p.[Firstname], p.[Lastname], p.[BirthDay], c.[Description] MaritalStatus FROM [dbo].[Person] p JOIN [dbo].[CodeValue] c ON p.[MaritalStatus] = c.[Value] JOIN [dbo].[CodeNamespace] n ON n.[Id] = c.[CodeNamespaceId] AND n.[Name] = 'MaritalStatus' GO -- Select from View SELECT * FROM [dbo].[Person_v] GO This is excellent write up byMarko Parkkola. Do you have this kind of design setup at your organization? Let us know your opinion. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Database, DBA, Readers Contribution, Software Development, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
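To round out the application-layer side that Marko mentions (the database cannot enforce the legal values here, so the application must), one option is a C# enum whose underlying values mirror the CodeValue rows. This mapping is a sketch of mine, not part of the original article; the numeric values simply match the INSERT statements above.

using System;

// Sketch: a C# enum kept in sync with the CodeValue rows of the 'MaritalStatus'
// namespace, so the int stored in Person.MaritalStatus round-trips as a typed value.
public enum MaritalStatus
{
    Single = 1,
    InRelationship = 2,
    Married = 3,
    Divorced = 4
}

public class Person
{
    public string Firstname { get; set; }
    public string Lastname { get; set; }
    public DateTime Birthday { get; set; }
    public MaritalStatus MaritalStatus { get; set; }
}

// Reading and writing the int column then becomes a cast, for example:
//   person.MaritalStatus = (MaritalStatus)reader.GetInt32(ordinal);
//   command.Parameters.AddWithValue("@MaritalStatus", (int)person.MaritalStatus);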

    Read the article

  • Installation doesn't detect existing partitions

    - by retrac1324
    I am trying to install Ubuntu 11.10 in a dual boot with my existing Windows 7 but the installer does not detect any existing partitions. I have tried resetting my BCD using EasyBCD and doing fixmbr from the Windows startup disc. A while ago I had to use TestDisk to recover my partition table so this might be the cause but I have installed Ubuntu and Windows many times before with no problems. fdisk -l output: Disk /dev/sda: 640.1 GB, 640135028736 bytes 255 heads, 63 sectors/track, 77825 cylinders, total 1250263728 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x360555e5 Device Boot Start End Blocks Id System /dev/sda1 * 2048 1250274689 625136321 7 HPFS/NTFS/exFAT Disk /dev/sdf: 7803 MB, 7803174912 bytes 255 heads, 63 sectors/track, 948 cylinders, total 15240576 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x6f795a8d Device Boot Start End Blocks Id System /dev/sdf1 * 63 15240575 7620256+ c W95 FAT32 (LBA)

    Read the article

  • Install the Ajax Control Toolkit from NuGet

    - by Stephen Walther
    The Ajax Control Toolkit is now available from NuGet. This makes it super easy to add the latest version of the Ajax Control Toolkit to any Web Forms application. If you haven’t used NuGet yet, then you are missing out on a great tool which you can use with Visual Studio to add new features to an application. You can use NuGet with both ASP.NET MVC and ASP.NET Web Forms applications. NuGet is compatible with both Websites and Web Applications and it works with both C# and VB.NET applications. For example, I habitually use NuGet to add the latest version of ELMAH, Entity Framework, jQuery, jQuery UI, and jQuery Templates to applications that I create. To download NuGet, visit the NuGet website at: http://NuGet.org Imagine, for example, that you want to take advantage of the Ajax Control Toolkit RoundedCorners extender to create cross-browser compatible rounded corners in a Web Forms application. Follow these steps. Right click on your project in the Solution Explorer window and select the option Add Library Package Reference. In the Add Library Package Reference dialog, select the Online tab and enter AjaxControlToolkit in the search box: Click the Install button and the latest version of the Ajax Control Toolkit will be installed. Installing the Ajax Control Toolkit makes several modifications to your application. First, a reference to the Ajax Control Toolkit is added to your application. In a Web Application Project, you can see the new reference in the References folder: Installing the Ajax Control Toolkit NuGet package also updates your Web.config file. The tag prefix ajaxToolkit is registered so that you can easily use Ajax Control Toolkit controls within any page without adding a @Register directive to the page. <configuration> <system.web> <compilation debug="true" targetFramework="4.0" /> <pages> <controls> <add tagPrefix="ajaxToolkit" assembly="AjaxControlToolkit" namespace="AjaxControlToolkit" /> </controls> </pages> </system.web> </configuration> You should do a rebuild of your application by selecting the Visual Studio menu option Build, Rebuild Solution so that Visual Studio picks up on the new controls (You won’t get Intellisense for the Ajax Control Toolkit controls until you do a build). After you add the Ajax Control Toolkit to your application, you can start using any of the 40 Ajax Control Toolkit controls in your application (see http://www.asp.net/ajax/ajaxcontroltoolkit/samples/ for a reference for the controls). <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="WebForm1.aspx.cs" Inherits="WebApplication1.WebForm1" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title>Rounded Corners</title> <style type="text/css"> #pnl1 { background-color: gray; width: 200px; color:White; font: 14pt Verdana; } #pnl1_contents { padding: 10px; } </style> </head> <body> <form id="form1" runat="server"> <div> <asp:Panel ID="pnl1" runat="server"> <div id="pnl1_contents"> I have rounded corners! </div> </asp:Panel> <ajaxToolkit:ToolkitScriptManager ID="sm1" runat="server" /> <ajaxToolkit:RoundedCornersExtender TargetControlID="pnl1" runat="server" /> </div> </form> </body> </html> The page contains the following three controls: Panel – The Panel control named pnl1 contains the content which appears with rounded corners. ToolkitScriptManager – Every page which uses the Ajax Control Toolkit must contain a single ToolkitScriptManager. 
The ToolkitScriptManager loads all of the JavaScript files used by the Ajax Control Toolkit. RoundedCornersExtender – This Ajax Control Toolkit extender targets the Panel control. It makes the Panel control appear with rounded corners. You can control the “roundiness” of the corners by modifying the Radius property. Notice that you get Intellisense when typing the Ajax Control Toolkit tags. As soon as you type <ajaxToolkit, all of the available Ajax Control Toolkit controls appear: When you open the page in a browser, then the contents of the Panel appears with rounded corners. The advantage of using the RoundedCorners extender is that it is cross-browser compatible. It works great with Internet Explorer, Opera, Firefox, Chrome, and Safari even though different browsers implement rounded corners in different ways. The RoundedCorners extender even works with an ancient browser such as Internet Explorer 6. Getting the Latest Version of the Ajax Control Toolkit The Ajax Control Toolkit continues to evolve at a rapid pace. We are hard at work at fixing bugs and adding new features to the project. We plan to have a new release of the Ajax Control Toolkit each month. The easiest way to get the latest version of the Ajax Control Toolkit is to use NuGet. You can open the NuGet Add Library Package Reference dialog at any time to update the Ajax Control Toolkit to the latest version.
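If you prefer configuring the extender from code-behind rather than markup, the same Radius and Corners properties can be set there. The snippet below is a sketch under the assumption that the RoundedCornersExtender in the markup has been given ID="rce1"; the property values are only examples.

// Sketch: configuring the RoundedCornersExtender from code-behind
// (assumes the extender was declared in markup with ID="rce1").
protected void Page_Load(object sender, EventArgs e)
{
    rce1.TargetControlID = "pnl1";
    rce1.Radius = 10;                                   // a larger value gives rounder corners
    rce1.Corners = AjaxControlToolkit.BoxCorners.All;   // or limit rounding to specific corners
}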

    Read the article

  • AMD-V is not enable in virtualbox in amd APU

    - by shantanu
I am running a dual-core AMD E-450 APU. When I tried to run a 64-bit OS that requires hardware virtualization using VirtualBox, it showed me the error "AMD-V is not enabled". My AMD processor should provide AMD-V support, and I can find no option for AMD-V in the BIOS. How can I solve this problem? How can I enable AMD-V for my APU? Thanks in advance. lscpu: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 2 On-line CPU(s) list: 0,1 Thread(s) per core: 1 Core(s) per socket: 2 Socket(s): 1 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 20 Model: 2 Stepping: 0 CPU MHz: 1650.000 BogoMIPS: 3291.72 Virtualization: AMD-V L1d cache: 32K L1i cache: 32K L2 cache: 512K NUMA node0 CPU(s): 0,1 EDITED: Error from VirtualBox: Failed to open a session for the virtual machine XXX. AMD-V is disabled in the BIOS. (VERR_SVM_DISABLED). Result Code: NS_ERROR_FAILURE (0x80004005) Component: Console Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}

    Read the article

  • Image Preview in ASP.NET MVC

    - by imran_ku07
      Introduction :         Previewing an image is a great way to improve the UI of your site. Also it is always best to check the file type, size and see a preview before submitting the whole form. There are some ways to do this using simple JavaScript but not work in all browsers (like FF3).In this Article I will show you how do this using ASP.NET MVC application. You also see how this will work in case of nested form.   Description :          Create a new ASP.NET MVC project and then add a file upload and image control into your View. <form id="form1" method="post" action="NerdDinner/ImagePreview/AjaxSubmit">            <table>                <tr>                    <td>                        <input type="file" name="imageLoad1" id="imageLoad1"  onchange="ChangeImage(this,'#imgThumbnail')" />                    </td>                </tr>                <tr>                    <td align="center">                        <img src="images/TempImage.gif" id="imgThumbnail" height="200px" width="200px">                     </td>                </tr>            </table>        </form>           Note that here NerdDinner is refers to the virtual directory name, ImagePreview is the Controller and ImageLoad is the action name which you will see shortly          I will use the most popular jQuery form plug-in, that turns a form into an AJAX form with very little code. Therefore you must get these from Jquery site and then add these files into your page.          <script src="NerdDinner/Scripts/jquery-1.3.2.js" type="text/javascript"></script>        <script src="NerdDinner/Scripts/jquery.form.js" type="text/javascript"></script>            Then add the javascript function. <script type="text/javascript">function ChangeImage(fileId,imageId){ $("#form1").ajaxSubmit({success: function(responseText){ var d=new Date(); $(imageId)[0].src="NerdDinner/ImagePreview/ImageLoad?a="+d.getTime(); } });}</script>             This function simply submit the form named form1 asynchronously to ImagePreviewController's method AjaxSubmit and after successfully receiving the response, it will set the image src property to the action method ImageLoad. Here I am also adding querystring, preventing the browser to serve the cached image.           Now I will create a new Controller named ImagePreviewController. public class ImagePreviewController : Controller { [AcceptVerbs(HttpVerbs.Post)] public ActionResult AjaxSubmit(int? id) { Session["ContentLength"] = Request.Files[0].ContentLength; Session["ContentType"] = Request.Files[0].ContentType; byte[] b = new byte[Request.Files[0].ContentLength]; Request.Files[0].InputStream.Read(b, 0, Request.Files[0].ContentLength); Session["ContentStream"] = b; return Content( Request.Files[0].ContentType+";"+ Request.Files[0].ContentLength ); } public ActionResult ImageLoad(int? id) { byte[] b = (byte[])Session["ContentStream"]; int length = (int)Session["ContentLength"]; string type = (string)Session["ContentType"]; Response.Buffer = true; Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = type; Response.BinaryWrite(b); Response.Flush(); Session["ContentLength"] = null; Session["ContentType"] = null; Session["ContentStream"] = null; Response.End(); return Content(""); } }             The AjaxSubmit action method will save the image in Session and return content type and content length in response. ImageLoad action method will return the contents of image in response.Then clear these Sessions.           
Just run your application and see the effect.   Checking Size and Content Type of File:          You may notice that AjaxSubmit action method is returning both content type and content length. You can check both properties before submitting your complete form.     $(myform).ajaxSubmit({success: function(responseText)            {                                var contentType=responseText.substring(0,responseText.indexOf(';'));                var contentLength=responseText.substring(responseText.indexOf(';')+1);                // Here you can do your validation                var d=new Date();                $(imageId)[0].src="http://weblogs.asp.net/MoneypingAPP/ImagePreview/ImageLoad?a="+d.getTime();            }        });  Handling Nested Form Case:          The above code will work if you have only one form. But this is not the case always.You may have a form control which wraps all the controls and you do not want to submit the whole form, just for getting a preview effect.           In this case you need to create a dynamic form control using JavaScript, and then add file upload control to this form and submit the form asynchronously  function ChangeImage(fileId,imageId)         {            var myform=document.createElement("form");                    myform.action="NerdDinner/ImagePreview/AjaxSubmit";            myform.enctype="multipart/form-data";            myform.method="post";            var imageLoad=document.getElementById(fileId).cloneNode(true);            myform.appendChild(imageLoad);            document.body.appendChild(myform);            $(myform).ajaxSubmit({success: function(responseText)                {                                    var contentType=responseText.substring(0,responseText.indexOf(';'));                    var contentLength=responseText.substring(responseText.indexOf(';')+1);                    var d=new Date();                    $(imageId)[0].src="http://weblogs.asp.net/MoneypingAPP/ImagePreview/ImageLoad?a="+d.getTime();                    document.body.removeChild(myform);                }            });        }            You also need append the child in order to send request and remove them after receiving response.
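One thing the article leaves to the reader is rejecting bad uploads on the server before they are cached in Session. The variation below is a sketch rather than Imran's code: the 1 MB limit and the allowed content types are illustrative assumptions, and the method is meant as a drop-in replacement for the AjaxSubmit action inside the same ImagePreviewController.

// Sketch: AjaxSubmit with basic server-side checks before caching the upload in Session.
// The size limit and MIME whitelist are illustrative, not from the original article.
[AcceptVerbs(HttpVerbs.Post)]
public ActionResult AjaxSubmit(int? id)
{
    HttpPostedFileBase file = Request.Files.Count > 0 ? Request.Files[0] : null;
    if (file == null || file.ContentLength == 0)
    {
        return Content("error;no file uploaded");
    }

    bool isImage = file.ContentType == "image/jpeg"
                || file.ContentType == "image/png"
                || file.ContentType == "image/gif";

    if (!isImage || file.ContentLength > 1024 * 1024)   // reject non-images and files over 1 MB
    {
        return Content("error;invalid file");
    }

    byte[] b = new byte[file.ContentLength];
    file.InputStream.Read(b, 0, file.ContentLength);

    Session["ContentLength"] = file.ContentLength;
    Session["ContentType"] = file.ContentType;
    Session["ContentStream"] = b;

    return Content(file.ContentType + ";" + file.ContentLength);
}

The JavaScript success callback that splits the response on ';' would then need to check for the "error" marker before updating the preview image.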

    Read the article

  • Create an iTunes Account without a credit card

    - by Matthew Guay
    iTunes Store offers a large variety of free content, but to download it you have to have an account. Usually you have to enter your credit card information to sign up, but here’s an easy way to get an iTunes account for free downloads without entering any payment info. Although iTunes Store is known for paid downloads of movies, music, and more, it also has a treasure trove of free media.  Some of it, including Podcasts and iTunes U educational content do not require an account to download.  However, any other free content, including free iPhone/iPod Touch apps and free or promotional music, videos, and TV Shows all require an account to download.  If you try to download a free movie or music download, you will be required to enter payment information. Even though your card will not be charged, it will be kept on file so you can be charged if you download a for-pay item.  However, if you only plan to download free items, it may be preferable to not have your account linked to a credit card. The following steps will get you an account without entering your credit card info. Getting Started First, make sure you have iTunes installed.  If you don’t already have it, download and install it (link below) with the default settings. Now open iTunes, and click the iTunes Store link on the left. Click the App Store link on the top of this page. Select a free app to download.  A simple way to do this is to scroll down to the Top Free Apps box on the right side, hover your mouse over the first item, and click on the Free button that appears when you hover over it. A popup will open asking you to sign in with your Apple ID.  Click “Create New Account”. Click Continue to create your account. Check the box to accept the Store Terms and Conditions, and click Continue.   Enter your email address, password, security question, and date of birth, and uncheck the boxes to get email if you don’t want it…then click Continue. Now, you will be asked to provide a payment method.  Notice now that the last option says None!  Click that bullet option… Then enter your billing address.  Simply enter your normal billing address, even though you are not entering a payment method.  Click Continue and your account will be created! If you get the Address Verification screen just verify your county and click Done. An email will be sent to you to verify your account… Click on the link in your email to verify your account, iTunes will launch and you’re prompted to enter in the Apple ID and Password you just created. Your account is successfully created! Now you can easily download any free media from iTunes.  Keep an eye on the Free on iTunes box on the bottom of the iTunes Store page for interesting downloads, or if you have an iPhone or iPod Touch, watch the popular Free downloads on the Apps page. And of course there is always great content on iTunes U to grab free as well. Purchasing for-pay media If you want to purchase an item on the iTunes store later, simply click on the item to download as normal.  Click Buy to proceed with the purchase. iTunes will prompt you that you need to enter payment information to complete the purchase.  Enter your Apple ID email and password, and then add the payment information as prompted.   Remove Payment Information from an iTunes Account If you’ve already entered payment information into your iTunes account, and would like to remove it, click Store in the top iTunes menu, and select View My Account. Enter your Apple ID email and password, and click View Account.   
This will open your account information.  Click the Edit Payment Information button.   Now, click the None button to remove your payment information.  Click Done to save the changes. Your account will now prompt you to enter payment information if you try to make a purchase.  You could repeat these steps after making a purchase if you do not want iTunes to keep your payment info on file. Conclusion This is a great way to make an iTunes account without entering your credit card, or to remove your credit card info from your account.  Parents may especially enjoy this tip, as they can have an iTunes account on their kids computer or iPod Touch without worrying about them spending money with it. Links Download iTunes

    Read the article

  • Toorcon 15 (2013)

    - by danx
    The Toorcon gang (senior staff): h1kari (founder), nfiltr8, and Geo Introduction to Toorcon 15 (2013) A Tale of One Software Bypass of MS Windows 8 Secure Boot Breaching SSL, One Byte at a Time Running at 99%: Surviving an Application DoS Security Response in the Age of Mass Customized Attacks x86 Rewriting: Defeating RoP and other Shinanighans Clowntown Express: interesting bugs and running a bug bounty program Active Fingerprinting of Encrypted VPNs Making Attacks Go Backwards Mask Your Checksums—The Gorry Details Adventures with weird machines thirty years after "Reflections on Trusting Trust" Introduction to Toorcon 15 (2013) Toorcon 15 is the 15th annual security conference held in San Diego. I've attended about a third of them and blogged about previous conferences I attended here starting in 2003. As always, I've only summarized the talks I attended and interested me enough to write about them. Be aware that I may have misrepresented the speaker's remarks and that they are not my remarks or opinion, or those of my employer, so don't quote me or them. Those seeking further details may contact the speakers directly or use The Google. For some talks, I have a URL for further information. A Tale of One Software Bypass of MS Windows 8 Secure Boot Andrew Furtak and Oleksandr Bazhaniuk Yuri Bulygin, Oleksandr ("Alex") Bazhaniuk, and (not present) Andrew Furtak Yuri and Alex talked about UEFI and Bootkits and bypassing MS Windows 8 Secure Boot, with vendor recommendations. They previously gave this talk at the BlackHat 2013 conference. MS Windows 8 Secure Boot Overview UEFI (Unified Extensible Firmware Interface) is interface between hardware and OS. UEFI is processor and architecture independent. Malware can replace bootloader (bootx64.efi, bootmgfw.efi). Once replaced can modify kernel. Trivial to replace bootloader. Today many legacy bootkits—UEFI replaces them most of them. MS Windows 8 Secure Boot verifies everything you load, either through signatures or hashes. UEFI firmware relies on secure update (with signed update). You would think Secure Boot would rely on ROM (such as used for phones0, but you can't do that for PCs—PCs use writable memory with signatures DXE core verifies the UEFI boat loader(s) OS Loader (winload.efi, winresume.efi) verifies the OS kernel A chain of trust is established with a root key (Platform Key, PK), which is a cert belonging to the platform vendor. Key Exchange Keys (KEKs) verify an "authorized" database (db), and "forbidden" database (dbx). X.509 certs with SHA-1/SHA-256 hashes. Keys are stored in non-volatile (NV) flash-based NVRAM. Boot Services (BS) allow adding/deleting keys (can't be accessed once OS starts—which uses Run-Time (RT)). Root cert uses RSA-2048 public keys and PKCS#7 format signatures. SecureBoot — enable disable image signature checks SetupMode — update keys, self-signed keys, and secure boot variables CustomMode — allows updating keys Secure Boot policy settings are: always execute, never execute, allow execute on security violation, defer execute on security violation, deny execute on security violation, query user on security violation Attacking MS Windows 8 Secure Boot Secure Boot does NOT protect from physical access. Can disable from console. Each BIOS vendor implements Secure Boot differently. There are several platform and BIOS vendors. It becomes a "zoo" of implementations—which can be taken advantage of. Secure Boot is secure only when all vendors implement it correctly. 
Allow only UEFI firmware signed updates protect UEFI firmware from direct modification in flash memory protect FW update components program SPI controller securely protect secure boot policy settings in nvram protect runtime api disable compatibility support module which allows unsigned legacy Can corrupt the Platform Key (PK) EFI root certificate variable in SPI flash. If PK is not found, FW enters setup mode wich secure boot turned off. Can also exploit TPM in a similar manner. One is not supposed to be able to directly modify the PK in SPI flash from the OS though. But they found a bug that they can exploit from User Mode (undisclosed) and demoed the exploit. It loaded and ran their own bootkit. The exploit requires a reboot. Multiple vendors are vulnerable. They will disclose this exploit to vendors in the future. Recommendations: allow only signed updates protect UEFI fw in ROM protect EFI variable store in ROM Breaching SSL, One Byte at a Time Yoel Gluck and Angelo Prado Angelo Prado and Yoel Gluck, Salesforce.com CRIME is software that performs a "compression oracle attack." This is possible because the SSL protocol doesn't hide length, and because SSL compresses the header. CRIME requests with every possible character and measures the ciphertext length. Look for the plaintext which compresses the most and looks for the cookie one byte-at-a-time. SSL Compression uses LZ77 to reduce redundancy. Huffman coding replaces common byte sequences with shorter codes. US CERT thinks the SSL compression problem is fixed, but it isn't. They convinced CERT that it wasn't fixed and they issued a CVE. BREACH, breachattrack.com BREACH exploits the SSL response body (Accept-Encoding response, Content-Encoding). It takes advantage of the fact that the response is not compressed. BREACH uses gzip and needs fairly "stable" pages that are static for ~30 seconds. It needs attacker-supplied content (say from a web form or added to a URL parameter). BREACH listens to a session's requests and responses, then inserts extra requests and responses. Eventually, BREACH guesses a session's secret key. Can use compression to guess contents one byte at-a-time. For example, "Supersecret SupersecreX" (a wrong guess) compresses 10 bytes, and "Supersecret Supersecret" (a correct guess) compresses 11 bytes, so it can find each character by guessing every character. To start the guess, BREACH needs at least three known initial characters in the response sequence. Compression length then "leaks" information. Some roadblocks include no winners (all guesses wrong) or too many winners (multiple possibilities that compress the same). The solutions include: lookahead (guess 2 or 3 characters at-a-time instead of 1 character). Expensive rollback to last known conflict check compression ratio can brute-force first 3 "bootstrap" characters, if needed (expensive) block ciphers hide exact plain text length. Solution is to align response in advance to block size Mitigations length: use variable padding secrets: dynamic CSRF tokens per request secret: change over time separate secret to input-less servlets Future work eiter understand DEFLATE/GZIP HTTPS extensions Running at 99%: Surviving an Application DoS Ryan Huber Ryan Huber, Risk I/O Ryan first discussed various ways to do a denial of service (DoS) attack against web services. One usual method is to find a slow web page and do several wgets. Or download large files. 
Apache is not well suited at handling a large number of connections, but one can put something in front of it Can use Apache alternatives, such as nginx How to identify malicious hosts short, sudden web requests user-agent is obvious (curl, python) same url requested repeatedly no web page referer (not normal) hidden links. hide a link and see if a bot gets it restricted access if not your geo IP (unless the website is global) missing common headers in request regular timing first seen IP at beginning of attack count requests per hosts (usually a very large number) Use of captcha can mitigate attacks, but you'll lose a lot of genuine users. Bouncer, goo.gl/c2vyEc and www.github.com/rawdigits/Bouncer Bouncer is software written by Ryan in netflow. Bouncer has a small, unobtrusive footprint and detects DoS attempts. It closes blacklisted sockets immediately (not nice about it, no proper close connection). Aggregator collects requests and controls your web proxies. Need NTP on the front end web servers for clean data for use by bouncer. Bouncer is also useful for a popularity storm ("Slashdotting") and scraper storms. Future features: gzip collection data, documentation, consumer library, multitask, logging destroyed connections. Takeaways: DoS mitigation is easier with a complete picture Bouncer designed to make it easier to detect and defend DoS—not a complete cure Security Response in the Age of Mass Customized Attacks Peleus Uhley and Karthik Raman Peleus Uhley and Karthik Raman, Adobe ASSET, blogs.adobe.com/asset/ Peleus and Karthik talked about response to mass-customized exploits. Attackers behave much like a business. "Mass customization" refers to concept discussed in the book Future Perfect by Stan Davis of Harvard Business School. Mass customization is differentiating a product for an individual customer, but at a mass production price. For example, the same individual with a debit card receives basically the same customized ATM experience around the world. Or designing your own PC from commodity parts. Exploit kits are another example of mass customization. The kits support multiple browsers and plugins, allows new modules. Exploit kits are cheap and customizable. Organized gangs use exploit kits. A group at Berkeley looked at 77,000 malicious websites (Grier et al., "Manufacturing Compromise: The Emergence of Exploit-as-a-Service", 2012). They found 10,000 distinct binaries among them, but derived from only a dozen or so exploit kits. Characteristics of Mass Malware: potent, resilient, relatively low cost Technical characteristics: multiple OS, multipe payloads, multiple scenarios, multiple languages, obfuscation Response time for 0-day exploits has gone down from ~40 days 5 years ago to about ~10 days now. So the drive with malware is towards mass customized exploits, to avoid detection There's plenty of evicence that exploit development has Project Manager bureaucracy. They infer from the malware edicts to: support all versions of reader support all versions of windows support all versions of flash support all browsers write large complex, difficult to main code (8750 lines of JavaScript for example Exploits have "loose coupling" of multipe versions of software (adobe), OS, and browser. This allows specific attacks against specific versions of multiple pieces of software. Also allows exploits of more obscure software/OS/browsers and obscure versions. Gave examples of exploits that exploited 2, 3, 6, or 14 separate bugs. 
However, these complete exploits are more likely to be buggy or fragile in themselves and easier to defeat. Future research includes normalizing malware and Javascript. Conclusion: The coming trend is that mass-malware with mass zero-day attacks will result in mass customization of attacks. x86 Rewriting: Defeating RoP and other Shinanighans Richard Wartell Richard Wartell The attack vector we are addressing here is: First some malware causes a buffer overflow. The malware has no program access, but input access and buffer overflow code onto stack Later the stack became non-executable. The workaround malware used was to write a bogus return address to the stack jumping to malware Later came ASLR (Address Space Layout Randomization) to randomize memory layout and make addresses non-deterministic. The workaround malware used was to jump t existing code segments in the program that can be used in bad ways "RoP" is Return-oriented Programming attacks. RoP attacks use your own code and write return address on stack to (existing) expoitable code found in program ("gadgets"). Pinkie Pie was paid $60K last year for a RoP attack. One solution is using anti-RoP compilers that compile source code with NO return instructions. ASLR does not randomize address space, just "gadgets". IPR/ILR ("Instruction Location Randomization") randomizes each instruction with a virtual machine. Richard's goal was to randomize a binary with no source code access. He created "STIR" (Self-Transofrming Instruction Relocation). STIR disassembles binary and operates on "basic blocks" of code. The STIR disassembler is conservative in what to disassemble. Each basic block is moved to a random location in memory. Next, STIR writes new code sections with copies of "basic blocks" of code in randomized locations. The old code is copied and rewritten with jumps to new code. the original code sections in the file is marked non-executible. STIR has better entropy than ASLR in location of code. Makes brute force attacks much harder. STIR runs on MS Windows (PEM) and Linux (ELF). It eliminated 99.96% or more "gadgets" (i.e., moved the address). Overhead usually 5-10% on MS Windows, about 1.5-4% on Linux (but some code actually runs faster!). The unique thing about STIR is it requires no source access and the modified binary fully works! Current work is to rewrite code to enforce security policies. For example, don't create a *.{exe,msi,bat} file. Or don't connect to the network after reading from the disk. Clowntown Express: interesting bugs and running a bug bounty program Collin Greene Collin Greene, Facebook Collin talked about Facebook's bug bounty program. Background at FB: FB has good security frameworks, such as security teams, external audits, and cc'ing on diffs. But there's lots of "deep, dark, forgotten" parts of legacy FB code. Collin gave several examples of bountied bugs. Some bounty submissions were on software purchased from a third-party (but bounty claimers don't know and don't care). We use security questions, as does everyone else, but they are basically insecure (often easily discoverable). Collin didn't expect many bugs from the bounty program, but they ended getting 20+ good bugs in first 24 hours and good submissions continue to come in. Bug bounties bring people in with different perspectives, and are paid only for success. Bug bounty is a better use of a fixed amount of time and money versus just code review or static code analysis. The Bounty program started July 2011 and paid out $1.5 million to date. 
14% of the submissions have been high priority problems that needed to be fixed immediately. The best bugs come from a small % of submitters (as with everything else)—the top paid submitters are paid 6 figures a year. Spammers like to backstab competitors. The youngest sumitter was 13. Some submitters have been hired. Bug bounties also allows to see bugs that were missed by tools or reviews, allowing improvement in the process. Bug bounties might not work for traditional software companies where the product has release cycle or is not on Internet. Active Fingerprinting of Encrypted VPNs Anna Shubina Anna Shubina, Dartmouth Institute for Security, Technology, and Society (I missed the start of her talk because another track went overtime. But I have the DVD of the talk, so I'll expand later) IPsec leaves fingerprints. Using netcat, one can easily visually distinguish various crypto chaining modes just from packet timing on a chart (example, DES-CBC versus AES-CBC) One can tell a lot about VPNs just from ping roundtrips (such as what router is used) Delayed packets are not informative about a network, especially if far away from the network More needed to explore about how TCP works in real life with respect to timing Making Attacks Go Backwards Fuzzynop FuzzyNop, Mandiant This talk is not about threat attribution (finding who), product solutions, politics, or sales pitches. But who are making these malware threats? It's not a single person or group—they have diverse skill levels. There's a lot of fat-fingered fumblers out there. Always look for low-hanging fruit first: "hiding" malware in the temp, recycle, or root directories creation of unnamed scheduled tasks obvious names of files and syscalls ("ClearEventLog") uncleared event logs. Clearing event log in itself, and time of clearing, is a red flag and good first clue to look for on a suspect system Reverse engineering is hard. Disassembler use takes practice and skill. A popular tool is IDA Pro, but it takes multiple interactive iterations to get a clean disassembly. Key loggers are used a lot in targeted attacks. They are typically custom code or built in a backdoor. A big tip-off is that non-printable characters need to be printed out (such as "[Ctrl]" "[RightShift]") or time stamp printf strings. Look for these in files. Presence is not proof they are used. Absence is not proof they are not used. Java exploits. Can parse jar file with idxparser.py and decomile Java file. Java typially used to target tech companies. Backdoors are the main persistence mechanism (provided externally) for malware. Also malware typically needs command and control. Application of Artificial Intelligence in Ad-Hoc Static Code Analysis John Ashaman John Ashaman, Security Innovation Initially John tried to analyze open source files with open source static analysis tools, but these showed thousands of false positives. Also tried using grep, but tis fails to find anything even mildly complex. So next John decided to write his own tool. His approach was to first generate a call graph then analyze the graph. However, the problem is that making a call graph is really hard. For example, one problem is "evil" coding techniques, such as passing function pointer. First the tool generated an Abstract Syntax Tree (AST) with the nodes created from method declarations and edges created from method use. Then the tool generated a control flow graph with the goal to find a path through the AST (a maze) from source to sink. 
The algorithm is to look at adjacent nodes to see if any are "scary" (a vulnerability), using heuristics for search order. The tool, called "Scat" (Static Code Analysis Tool), currently looks for C# vulnerabilities and some simple PHP. Later, he plans to add more PHP, then JSP and Java. For more information see his posts in Security Innovation blog and NRefactory on GitHub. Mask Your Checksums—The Gorry Details Eric (XlogicX) Davisson Eric (XlogicX) Davisson Sometimes in emailing or posting TCP/IP packets to analyze problems, you may want to mask the IP address. But to do this correctly, you need to mask the checksum too, or you'll leak information about the IP. Problem reports found in stackoverflow.com, sans.org, and pastebin.org are usually not masked, but a few companies do care. If only the IP is masked, the IP may be guessed from checksum (that is, it leaks data). Other parts of packet may leak more data about the IP. TCP and IP checksums both refer to the same data, so can get more bits of information out of using both checksums than just using one checksum. Also, one can usually determine the OS from the TTL field and ports in a packet header. If we get hundreds of possible results (16x each masked nibble that is unknown), one can do other things to narrow the results, such as look at packet contents for domain or geo information. With hundreds of results, can import as CSV format into a spreadsheet. Can corelate with geo data and see where each possibility is located. Eric then demoed a real email report with a masked IP packet attached. Was able to find the exact IP address, given the geo and university of the sender. Point is if you're going to mask a packet, do it right. Eric wouldn't usually bother, but do it correctly if at all, to not create a false impression of security. Adventures with weird machines thirty years after "Reflections on Trusting Trust" Sergey Bratus Sergey Bratus, Dartmouth College (and Julian Bangert and Rebecca Shapiro, not present) "Reflections on Trusting Trust" refers to Ken Thompson's classic 1984 paper. "You can't trust code that you did not totally create yourself." There's invisible links in the chain-of-trust, such as "well-installed microcode bugs" or in the compiler, and other planted bugs. Thompson showed how a compiler can introduce and propagate bugs in unmodified source. But suppose if there's no bugs and you trust the author, can you trust the code? Hell No! There's too many factors—it's Babylonian in nature. Why not? Well, Input is not well-defined/recognized (code's assumptions about "checked" input will be violated (bug/vunerabiliy). For example, HTML is recursive, but Regex checking is not recursive. Input well-formed but so complex there's no telling what it does For example, ELF file parsing is complex and has multiple ways of parsing. Input is seen differently by different pieces of program or toolchain Any Input is a program input executes on input handlers (drives state changes & transitions) only a well-defined execution model can be trusted (regex/DFA, PDA, CFG) Input handler either is a "recognizer" for the inputs as a well-defined language (see langsec.org) or it's a "virtual machine" for inputs to drive into pwn-age ELF ABI (UNIX/Linux executible file format) case study. Problems can arise from these steps (without planting bugs): compiler linker loader ld.so/rtld relocator DWARF (debugger info) exceptions The problem is you can't really automatically analyze code (it's the "halting problem" and undecidable). 
Only solution is to freeze code and sign it. But you can't freeze everything! Can't freeze ASLR or loading—must have tables and metadata. Any sufficiently complex input data is the same as VM byte code Example, ELF relocation entries + dynamic symbols == a Turing Complete Machine (TM). @bxsays created a Turing machine in Linux from relocation data (not code) in an ELF file. For more information, see Rebecca "bx" Shapiro's presentation from last year's Toorcon, "Programming Weird Machines with ELF Metadata" @bxsays did same thing with Mach-O bytecode Or a DWARF exception handling data .eh_frame + glibc == Turning Machine X86 MMU (IDT, GDT, TSS): used address translation to create a Turning Machine. Page handler reads and writes (on page fault) memory. Uses a page table, which can be used as Turning Machine byte code. Example on Github using this TM that will fly a glider across the screen Next Sergey talked about "Parser Differentials". That having one input format, but two parsers, will create confusion and opportunity for exploitation. For example, CSRs are parsed during creation by cert requestor and again by another parser at the CA. Another example is ELF—several parsers in OS tool chain, which are all different. Can have two different Program Headers (PHDRs) because ld.so parses multiple PHDRs. The second PHDR can completely transform the executable. This is described in paper in the first issue of International Journal of PoC. Conclusions trusting computers not only about bugs! Bugs are part of a problem, but no by far all of it complex data formats means bugs no "chain of trust" in Babylon! (that is, with parser differentials) we need to squeeze complexity out of data until data stops being "code equivalent" Further information See and langsec.org. USENIX WOOT 2013 (Workshop on Offensive Technologies) for "weird machines" papers and videos.

    Read the article

  • SQL SERVER – Disabled Index and Update Statistics

    - by pinaldave
    When we try to update the statistics, it throws an error as if the clustered index is disabled. Now let us enable the clustered index only and attempt to update the statistics of the table right after that. Have you ever come across the situation where a conversation never gets over and it continues even though original point of discussion has passed. I am facing the same situation in the case of Disabled Index. Here is the link to original conversations. SQL SERVER – Disable Clustered Index and Data Insert – Reader had a issue here with Disabled Index SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index – Reader asked the effect of Rebuilding Indexes The same reader asked me today – “I understood what the disabled indexes do; what is their effect on statistics. Is it true that even though indexes are disabled, they continue updating the statistics?“ The answer is very interesting: If you have disabled clustered index, you will be not able to update the statistics at all for any index. If you have enabled clustered index and disabled non clustered index when you update the statistics of the table, it automatically updates the statistics of the ALL (disabled and enabled – both) the indexes on the table. If you are not satisfied with the answer, let us go over a simple example. I have written necessary comments in the code itself to have a clear idea. USE tempdb GO -- Drop Table if Exists IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U')) DROP TABLE [dbo].[TableName] GO -- Create Table CREATE TABLE [dbo].[TableName]( [ID] [int] NOT NULL, [FirstCol] [varchar](50) NULL ) GO -- Insert Some data INSERT INTO TableName SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third' UNION ALL SELECT 4, 'Fourth' UNION ALL SELECT 5, 'Five' GO -- Create Clustered Index ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC) GO -- Create Nonclustered Index CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC) GO -- Check that all the indexes are enabled SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'TableName' GO Now let us update the statistics of the table and check the statistics update date. -- Update the stats of table UPDATE STATISTICS TableName WITH FULLSCAN GO -- Check Statistics Last Updated Datetime SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated FROM sys.indexes WHERE OBJECT_ID = OBJECT_ID('TableName') GO Now let us disable the indexes and check if they are disabled using sys.indexes. -- Disable Indexes -- Disable Nonclustered Index ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE GO -- Disable Clustered Index ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE GO -- Check that all the indexes are disabled SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'TableName' GO Let us try to update the statistics of the table. -- Update the stats of table UPDATE STATISTICS TableName WITH FULLSCAN GO /* -- Above operation should thrown following error Msg 1974, Level 16, State 1, Line 1 Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled. */ When we try to update the statistics it throws an error as it clustered index is disabled. 
Now let us enable the clustered index only and attempt to update the statistics of the table right after that. -- Now let us rebuild clustered index only ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD GO -- Check the status of all the indexes SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'TableName' GO -- Check Statistics Last Updated Datetime SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated FROM sys.indexes WHERE OBJECT_ID = OBJECT_ID('TableName') GO -- Update the stats of table UPDATE STATISTICS TableName WITH FULLSCAN GO -- Check Statistics Last Updated Datetime SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated FROM sys.indexes WHERE OBJECT_ID = OBJECT_ID('TableName') GO We can clearly see that even though the nonclustered index is disabled, its statistics are also updated. If you do not need a nonclustered index, I suggest you drop it, as keeping it disabled is an overhead on your system. This is because every time the statistics of the table are updated, the statistics of the disabled indexes are also updated. -- Clean up DROP TABLE [TableName] GO The complete script is given below for easy reference. USE tempdb GO -- Drop Table if Exists IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[TableName]') AND type IN (N'U')) DROP TABLE [dbo].[TableName] GO -- Create Table CREATE TABLE [dbo].[TableName]( [ID] [int] NOT NULL, [FirstCol] [varchar](50) NULL ) GO -- Insert Some data INSERT INTO TableName SELECT 1, 'First' UNION ALL SELECT 2, 'Second' UNION ALL SELECT 3, 'Third' UNION ALL SELECT 4, 'Fourth' UNION ALL SELECT 5, 'Five' GO -- Create Clustered Index ALTER TABLE [TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY CLUSTERED ([ID] ASC) GO -- Create Nonclustered Index CREATE UNIQUE NONCLUSTERED INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] ([FirstCol] ASC) GO -- Check that all the indexes are enabled SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'TableName' GO -- Update the stats of table UPDATE STATISTICS TableName WITH FULLSCAN GO -- Check Statistics Last Updated Datetime SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated FROM sys.indexes WHERE OBJECT_ID = OBJECT_ID('TableName') GO -- Disable Indexes -- Disable Nonclustered Index ALTER INDEX [IX_NonClustered_TableName] ON [dbo].[TableName] DISABLE GO -- Disable Clustered Index ALTER INDEX [PK_TableName] ON [dbo].[TableName] DISABLE GO -- Check that all the indexes are disabled SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'TableName' GO -- Update the stats of table UPDATE STATISTICS TableName WITH FULLSCAN GO /* -- The above operation should throw the following error Msg 1974, Level 16, State 1, Line 1 Cannot perform the specified operation on table 'TableName' because its clustered index 'PK_TableName' is disabled.
*/ -- Now let us rebuild clustered index only ALTER INDEX [PK_TableName] ON [dbo].[TableName] REBUILD GO -- Check that all the indexes status SELECT OBJECT_NAME(OBJECT_ID), Name, type_desc, is_disabled FROM sys.indexes WHERE OBJECT_NAME(OBJECT_ID) = 'TableName' GO -- Check Statistics Last Updated Datetime SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated FROM sys.indexes WHERE OBJECT_ID = OBJECT_ID('TableName') GO -- Update the stats of table UPDATE STATISTICS TableName WITH FULLSCAN GO -- Check Statistics Last Updated Datetime SELECT name AS index_name, STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated FROM sys.indexes WHERE OBJECT_ID = OBJECT_ID('TableName') GO -- Clean up DROP TABLE [TableName] GO Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
In my previous post I demonstrated how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in the Windows Azure platform. It's low-cost, and it provides the IIS and IISNode components so that we can host our Node.js application through Git, FTP and WebMatrix without any configuration or component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js application on a worker role. Below are some benefits of using a worker role.
- WAWS leverages IIS and IISNode to host the Node.js application, which runs in x86 WOW mode. This reduces performance compared with x64 in some cases.
- A WACS worker role does not need IIS, hence there are no IIS restrictions, such as the 8000 concurrent request limitation.
- WACS provides more flexibility and control to developers. For example, we can RDP to the virtual machines of our worker role instances.
- WACS provides service configuration features which can be changed while the role is running.
- WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site, while in WACS we can have up to 20 instances in a subscription.
- Since with a WACS worker role we start Node.js ourselves in a process, we can control the input, output and error streams. We can also control the version of Node.js.
Run Node.js in Worker Role Node.js can be started by just having its executable file. This means that in Windows Azure we can have a worker role with “node.exe” and the Node.js source files, then start it in the Run method of the worker role entry class. Let's create a new Windows Azure project in Visual Studio and add a new worker role. Since we need our worker role to execute “node.exe” with our application code, we need to add “node.exe” into our project. Right-click on the worker role project and add an existing item. By default Node.js is installed in the “Program Files\nodejs” folder, so we can navigate there and add “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, because it's hosted by IIS and IISNode and IISNode only accepts “server.js”. But here, as we control everything, we can choose any file as the entry code. For example, I created a new JavaScript file named “index.js” in the project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file and then rename it with a JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always” or “Copy if Newer”. Otherwise they will not be included in the package when deployed. Let's paste a very simple piece of Node.js code into “index.js” as below. As you can see, I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role is started. This can be done in its Run method. I find node.exe and the entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to exit.
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in the worker role properties, but we also hardcoded the listening port in Node.js. Ideally Node.js should retrieve the endpoint from the service configuration, but I can tell you that won't work with the approach shown here. In the next post I will describe another way to execute “node.exe” and the Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • New Reference Configuration: Accelerate Deployment of Virtual Infrastructure

    - by monica.kumar
    Today, Oracle announced the availability of Oracle VM blade cluster reference configuration based on Sun servers, storage and Oracle VM software. Assembling and integrating software and hardware systems from different vendors can be a huge barrier to deploying virtualized infrastructures as it is often a complicated, time-consuming, risky and expensive process. Using this tested configuration can help reduce the time to configure and deploy a virtual infrastructure by up to 98% as compared to putting together multi-vendor configurations. Once ready, the infrastructure can be used to easily deploy enterprise applications in a matter of minutes to hours as opposed to days/weeks, by using Oracle VM Templates. Find out more: Press Release Business whitepaper Technical whitepaper

    Read the article

  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks backs I posted a blog post  about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly -  is the ability to map POST form variables to simple parameters of a Web API method. For example imagine you have this form and you want to post this data to a Web API end point like this via AJAX: <form> Name: <input type="name" name="name" value="Rick" /> Value: <input type="value" name="value" value="12" /> Entered: <input type="entered" name="entered" value="12/01/2011" /> <input type="button" id="btnSend" value="Send" /> </form> <script type="text/javascript"> $("#btnSend").click( function() { $.post("samples/PostMultipleSimpleValues?action=kazam", $("form").serialize(), function (result) { alert(result); }); }); </script> or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:$.post("samples/PostMultipleSimpleValues?action=kazam", { name: "Rick", value: 1, entered: "12/01/2012" }, $("form").serialize(), function (result) { alert(result); }); On the wire this generates a simple POST request with Url Encoded values in the content:POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: application/json Connection: keep-alive Content-Type: application/x-www-form-urlencoded; charset=UTF-8 X-Requested-With: XMLHttpRequest Referer: http://localhost/AspNetWebApi/FormPostTest.html Content-Length: 41 Pragma: no-cache Cache-Control: no-cachename=Rick&value=12&entered=12%2F10%2F2011 Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle request out of the box. If I create a method like this:[HttpPost] public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null) { return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action); }You'll find that you get an HTTP 404 error and { "Message": "No HTTP resource was found that matches the request URI…"} Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use Model Binding for this - mapping the post parameters to a strongly typed .NET object, not to single parameters. Alternately you can also accept a FormDataCollection parameter on your API method to get a name value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. ModelBinding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more so a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation. 
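For completeness, here is a rough sketch (not from the original post) of the FormDataCollection alternative mentioned above. The action receives the whole form as a name/value collection and converts each field by hand - it works, but you lose typed parameters and repeat the conversion code in every method, which is exactly the friction the custom binding below removes. The controller and field names here are illustrative only:

using System;
using System.Net.Http.Formatting;
using System.Web.Http;

public class SamplesController : ApiController
{
    [HttpPost]
    public string PostMultipleSimpleValuesViaForm(FormDataCollection formData)
    {
        // Read the raw POSTed name/value pairs and convert each field manually
        var fields = formData.ReadAsNameValueCollection();
        string name = fields["name"];
        int value = int.Parse(fields["value"]);
        DateTime entered = DateTime.Parse(fields["entered"]);

        return string.Format("Name: {0}, Value: {1}, Date: {2}", name, value, entered);
    }
}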
Creating a Custom Parameter Binder Luckily Web API is greatly extensible and there's a way to create a custom Parameter Binding to provide this functionality! Although this solution took me a long while to find and then only with the help of some folks Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:public class SimplePostVariableParameterBinding : HttpParameterBinding { private const string MultipleBodyParameters = "MultipleBodyParameters"; public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor) : base(descriptor) { } /// <summary> /// Check for simple binding parameters in POST data. Bind POST /// data as well as query string data /// </summary> public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider, HttpActionContext actionContext, CancellationToken cancellationToken) { // Body can only be read once, so read and cache it NameValueCollection col = TryReadBody(actionContext.Request); string stringValue = null; if (col != null) stringValue = col[Descriptor.ParameterName]; // try reading query string if we have no POST/PUT match if (stringValue == null) { var query = actionContext.Request.GetQueryNameValuePairs(); if (query != null) { var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower()); if (matches.Count() > 0) stringValue = matches.First().Value; } } object value = StringToType(stringValue); // Set the binding result here SetValue(actionContext, value); // now, we can return a completed task with no result TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>(); tcs.SetResult(default(AsyncVoid)); return tcs.Task; } private object StringToType(string stringValue) { object value = null; if (stringValue == null) value = null; else if (Descriptor.ParameterType == typeof(string)) value = stringValue; else if (Descriptor.ParameterType == typeof(int)) value = int.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int32)) value = Int32.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int64)) value = Int64.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(decimal)) value = decimal.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(double)) value = double.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(DateTime)) value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(bool)) { value = false; if (stringValue == "true" || stringValue == "on" || stringValue == "1") value = true; } else value = stringValue; return value; } /// <summary> /// Read and cache the request body /// </summary> /// <param name="request"></param> /// <returns></returns> private NameValueCollection TryReadBody(HttpRequestMessage request) { object result = null; // try to read out of cache first if (!request.Properties.TryGetValue(MultipleBodyParameters, out result)) { // parsing the string like firstname=Hongmei&lastname=Ge result = request.Content.ReadAsFormDataAsync().Result; 
request.Properties.Add(MultipleBodyParameters, result); } return result as NameValueCollection; } private struct AsyncVoid { } }   The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request the Binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available the value of the parameter is read, and the value is translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:GlobalConfiguration.Configuration.ParameterBindingRules .Insert(0, (HttpParameterDescriptor descriptor) => { var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods; // Only apply this binder on POST and PUT operations if (supportedMethods.Contains(HttpMethod.Post) || supportedMethods.Contains(HttpMethod.Put)) { var supportedTypes = new Type[] { typeof(string), typeof(int), typeof(decimal), typeof(double), typeof(bool), typeof(DateTime) }; if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0) return new SimplePostVariableParameterBinding(descriptor); } // let the default bindings do their work return null; });   The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook up code is in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings. Summary Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Micrsoft to provide a base example as I asked around so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible to make it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved  - in the last two months I've had two customers with projects that decided not to use Web API in AJAX heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind to still switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST to parameter mapping makes things much easier. 
I said this in my last post on POST data and I'll say it again here: I think POST-to-method-parameter mapping should have shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added. This is especially true since this binding doesn't affect existing bindings. Resources SimplePostVariableParameterBinding Source on GitHub Global.asax hookup source Mapping URL Encoded Post Values in ASP.NET Web API © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Web Api  AJAX

    Read the article

  • Customizing the processing of ListItems for asp:RadioButtonList with "Flow" layout and "Horizontal"

    - by evovision
    Hi, recently I was asked to add an ability to pad specific elements from each other to a certain distance in RadioButtonList control. Not quite common everyday task I would say :)   Ok, let's get started!   Prerequisites: ASP.NET Page having RadioButtonList control with RepeatLayout="Flow" RepeatDirection="Horizontal" properties set.   Implementation:  The underlying data was coming from another source, so the only fast way to add meta information about padding was the text value itself (yes, not very optimal solution): Id = 1, Name = "This is first element" and for padding we agreed to use <space/> meta tag: Id = 2, Name = "<space padcount="30px"/>This is second padded element"   To handle items rendering in RadioButtonList control I've created custom class and subclassed from it:    public class CustomRadioButtonList : RadioButtonList    {        private Action<ListItem, HtmlTextWriter> _preProcess;         protected override void RenderItem(ListItemType itemType, int repeatIndex, RepeatInfo repeatInfo, HtmlTextWriter writer)        {            if (_preProcess != null)            {                _preProcess(this.Items[repeatIndex], writer);            }             base.RenderItem(itemType, repeatIndex, repeatInfo, writer);        }         public void SetPrePrenderItemFunction(Action<ListItem, HtmlTextWriter> func)        {            _preProcess = func;        }    }   It is pretty straightforward approach, the key is to override RenderItem method. Class has SetPrePrenderItemFunction method which is used to pass custom processing function that takes 2 parameters: ListItem and HtmlTextWriter objects.   Now update existing RadioButtonList control in Default.aspx: add this to beginning of the page:   <%@ Register Namespace="Sample.Controls" TagPrefix="uc1" %>   and update the control to:   <uc1:CustomRadioButtonList ID="customRbl" runat="server" DataValueField="Id" DataTextField="Name"            RepeatLayout="Flow" RepeatDirection="Horizontal"></uc1:CustomRadioButtonList>   Now, from codebehind of the page:   Add regular expression that will be used for parsing:   private Regex _regex = new Regex(@"(?:[<]space padcount\s*?=\s*?(?:'|"")(?<padcount>\d+)(?:(?:\s+)?px)?(?:'|"")\s*?/>)(?<content>.*)?", RegexOptions.IgnoreCase | RegexOptions.Compiled);   and finally setup the processing function in Page_Load:   protected void Page_Load(object sender, EventArgs e)    {        customRbl.DataSource = DataObjects;         customRbl.SetPrePrenderItemFunction((listItem, writer) =>        {            Match match = _regex.Match(listItem.Text);            if (match.Success)            {                writer.Write(string.Format(@"<span style=""padding-left:{0}"">Extreme values: </span>", match.Groups["padcount"].Value + "px"));                 // if you need to pad listitem use code below                //x.Attributes.CssStyle.Add("padding-left", match.Groups["padcount"].Value + "px");                 // remove meta tag from text                listItem.Text = match.Groups["content"].Value;            }        });         customRbl.DataBind();    }   That's it! :)   Run the attached sample application:     P.S.: of course several other approaches could have been used for that purpose including events and the functionality for processing could also be embedded inside control itself. Current solution suits slightly better due some other reasons for situation where it was used, in your case consider this as a kick start for your own implementation :)   Source application: CustomRadioButtonList.zip
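As a rough, hypothetical sketch of the alternative mentioned in the P.S. - embedding the processing inside the control itself instead of passing a delegate - the same regex and RenderItem override could simply live in the subclass (the class name below is illustrative, not from the original sample):

using System.Text.RegularExpressions;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace Sample.Controls
{
    public class SelfContainedRadioButtonList : RadioButtonList
    {
        // Same <space/> meta-tag pattern as used in the article
        private static readonly Regex _regex = new Regex(
            @"(?:[<]space padcount\s*?=\s*?(?:'|"")(?<padcount>\d+)(?:(?:\s+)?px)?(?:'|"")\s*?/>)(?<content>.*)?",
            RegexOptions.IgnoreCase | RegexOptions.Compiled);

        protected override void RenderItem(ListItemType itemType, int repeatIndex, RepeatInfo repeatInfo, HtmlTextWriter writer)
        {
            ListItem item = this.Items[repeatIndex];
            Match match = _regex.Match(item.Text);
            if (match.Success)
            {
                // Emit a padding span and strip the meta tag from the item text
                writer.Write(string.Format(@"<span style=""padding-left:{0}px""></span>", match.Groups["padcount"].Value));
                item.Text = match.Groups["content"].Value;
            }
            base.RenderItem(itemType, repeatIndex, repeatInfo, writer);
        }
    }
}

The trade-off, as the author hints, is that the padding rule is then baked into the control rather than supplied by the page.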

    Read the article

  • Exception:Cannot Start your application.The Workgroup information file is missing or opened exclusiv

    - by Jeev
We were getting this error when trying to connect to a password-protected Access file. This is what the connection string looked like:
string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source="Path to your access file";User Id=;Password=password";
To fix the issue, this is what we did:
string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source="Path to your access file";Jet OLEDB:Database Password=password";
We removed the User Id and Password keywords and supplied the password as Jet OLEDB:Database Password instead. Hope this helps someone.
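For context, here is a minimal, hypothetical sketch of how the corrected connection string is typically used with the Jet OLE DB provider from C# (the file path and table name below are placeholders, not from the original post):

using System;
using System.Data.OleDb;

class AccessConnectionDemo
{
    static void Main()
    {
        // For a password-protected Access file, supply the database password
        // via "Jet OLEDB:Database Password" rather than the plain Password keyword.
        string conString = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                           @"Data Source=C:\Data\MyDatabase.mdb;" +
                           @"Jet OLEDB:Database Password=password";

        using (var conn = new OleDbConnection(conString))
        using (var cmd = new OleDbCommand("SELECT COUNT(*) FROM SomeTable", conn))
        {
            conn.Open();
            Console.WriteLine("Row count: {0}", cmd.ExecuteScalar());
        }
    }
}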

    Read the article

  • Qml and QfileSystemModel interaction problem

    - by user136432
    I'm having some problem in realizing an interaction between QML and C++ to obtain a very basic file browser that is shown within a ListView. I tried to use as model for my data the QT class QFileSystemModel, but it did't work as I expected, probably I didn't fully understand the QT class documentation about the use of this model or the example I found on the internet. Here is the code that I am using: File main.cpp #include <QModelIndex> #include <QFileSystemModel> #include <QQmlContext> #include <QApplication> #include "qtquick2applicationviewer.h" int main(int argc, char *argv[]) { QApplication app(argc, argv); QFileSystemModel* model = new QFileSystemModel; model->setRootPath("C:/"); model->setFilter(QDir::Files | QDir::AllDirs); QtQuick2ApplicationViewer viewer; // Make QFileSystemModel* available for QML use. viewer.rootContext()->setContextProperty("myFileModel", model); viewer.setMainQmlFile(QStringLiteral("qml/ProvaQML/main.qml")); viewer.showExpanded(); return app.exec(); } File main.qml Rectangle { id: main width: 800 height: 600 ListView { id: view property string root_path: "C:/Users" x: 40 y: 20 width: parent.width - (2*x) height: parent.height - (2*y) VisualDataModel { id: myVisualModel model: myFileModel // Get the model QFileSystemModel exposed from C++ delegate { Rectangle { width: 210; height: 20; radius: 5; border.width: 2; border.color: "orange"; color: "yellow"; Text { text: fileName; x: parent.x + 10; } MouseArea { anchors.fill: parent onDoubleClicked: { myVisualModel.rootIndex = myVisualModel.modelIndex(index) } } } } } highlight: Rectangle { color: "lightsteelblue"; radius: 5 } focus: true } } The first problem with this code is that first elements that I can see within my list are my PC logical drives even if I set a specific path. Then when I first double click on drive "C:\" it shows the list of files and directories on that path, but when I double click on a directory a second time the screen flickers for one moment and then it shows again the PC logical drives. Can anyone tell me how should I use the QFileSystemModel class with a ListView QML object? Thanks in advance! Carlo

    Read the article

  • Creating a JSONP Formatter for ASP.NET Web API

    - by Rick Strahl
    Out of the box ASP.NET WebAPI does not include a JSONP formatter, but it's actually very easy to create a custom formatter that implements this functionality. JSONP is one way to allow Browser based JavaScript client applications to bypass cross-site scripting limitations and serve data from the non-current Web server. AJAX in Web Applications uses the XmlHttp object which by default doesn't allow access to remote domains. There are number of ways around this limitation <script> tag loading and JSONP is one of the easiest and semi-official ways that you can do this. JSONP works by combining JSON data and wrapping it into a function call that is executed when the JSONP data is returned. If you use a tool like jQUery it's extremely easy to access JSONP content. Imagine that you have a URL like this: http://RemoteDomain/aspnetWebApi/albums which on an HTTP GET serves some data - in this case an array of record albums. This URL is always directly accessible from an AJAX request if the URL is on the same domain as the parent request. However, if that URL lives on a separate server it won't be easily accessible to an AJAX request. Now, if  the server can serve up JSONP this data can be accessed cross domain from a browser client. Using jQuery it's really easy to retrieve the same data with JSONP:function getAlbums() { $.getJSON("http://remotedomain/aspnetWebApi/albums?callback=?",null, function (albums) { alert(albums.length); }); } The resulting callback the same as if the call was to a local server when the data is returned. jQuery deserializes the data and feeds it into the method. Here the array is received and I simply echo back the number of items returned. From here your app is ready to use the data as needed. This all works fine - as long as the server can serve the data with JSONP. What does JSONP look like? JSONP is a pretty simple 'protocol'. All it does is wrap a JSON response with a JavaScript function call. The above result from the JSONP call looks like this:Query17103401925975181569_1333408916499( [{"Id":"34043957","AlbumName":"Dirty Deeds Done Dirt Cheap",…},{…}] ) The way JSONP works is that the client (jQuery in this case) sends of the request, receives the response and evals it. The eval basically executes the function and deserializes the JSON inside of the function. It's actually a little more complex for the framework that does this, but that's the gist of what happens. JSONP works by executing the code that gets returned from the JSONP call. JSONP and ASP.NET Web API As mentioned previously, JSONP support is not natively in the box with ASP.NET Web API. But it's pretty easy to create and plug-in a custom formatter that provides this functionality. The following code is based on Christian Weyers example but has been updated to the latest Web API CodePlex bits, which changes the implementation a bit due to the way dependent objects are exposed differently in the latest builds. 
Here's the code:  using System; using System.IO; using System.Net; using System.Net.Http.Formatting; using System.Net.Http.Headers; using System.Threading.Tasks; using System.Web; using System.Net.Http; namespace Westwind.Web.WebApi { /// <summary> /// Handles JsonP requests when requests are fired with /// text/javascript or application/json and contain /// a callback= (configurable) query string parameter /// /// Based on Christian Weyers implementation /// https://github.com/thinktecture/Thinktecture.Web.Http/blob/master/Thinktecture.Web.Http/Formatters/JsonpFormatter.cs /// </summary> public class JsonpFormatter : JsonMediaTypeFormatter { public JsonpFormatter() { SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/json")); SupportedMediaTypes.Add(new MediaTypeHeaderValue("text/javascript")); //MediaTypeMappings.Add(new UriPathExtensionMapping("jsonp", "application/json")); JsonpParameterName = "callback"; } /// <summary> /// Name of the query string parameter to look for /// the jsonp function name /// </summary> public string JsonpParameterName {get; set; } /// <summary> /// Captured name of the Jsonp function that the JSON call /// is wrapped in. Set in GetPerRequestFormatter Instance /// </summary> private string JsonpCallbackFunction; public override bool CanWriteType(Type type) { return true; } /// <summary> /// Override this method to capture the Request object /// and look for the query string parameter and /// create a new instance of this formatter. /// /// This is the only place in a formatter where the /// Request object is available. /// </summary> /// <param name="type"></param> /// <param name="request"></param> /// <param name="mediaType"></param> /// <returns></returns> public override MediaTypeFormatter GetPerRequestFormatterInstance(Type type, HttpRequestMessage request, MediaTypeHeaderValue mediaType) { var formatter = new JsonpFormatter() { JsonpCallbackFunction = GetJsonCallbackFunction(request) }; return formatter; } /// <summary> /// Override to wrap existing JSON result with the /// JSONP function call /// </summary> /// <param name="type"></param> /// <param name="value"></param> /// <param name="stream"></param> /// <param name="contentHeaders"></param> /// <param name="transportContext"></param> /// <returns></returns> public override Task WriteToStreamAsync(Type type, object value, Stream stream, HttpContentHeaders contentHeaders, TransportContext transportContext) { if (!string.IsNullOrEmpty(JsonpCallbackFunction)) { return Task.Factory.StartNew(() => { var writer = new StreamWriter(stream); writer.Write( JsonpCallbackFunction + "("); writer.Flush(); base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext).Wait(); writer.Write(")"); writer.Flush(); }); } else { return base.WriteToStreamAsync(type, value, stream, contentHeaders, transportContext); } } /// <summary> /// Retrieves the Jsonp Callback function /// from the query string /// </summary> /// <returns></returns> private string GetJsonCallbackFunction(HttpRequestMessage request) { if (request.Method != HttpMethod.Get) return null; var query = HttpUtility.ParseQueryString(request.RequestUri.Query); var queryVal = query[this.JsonpParameterName]; if (string.IsNullOrEmpty(queryVal)) return null; return queryVal; } } } Note again that this code will not work with the Beta bits of Web API - it works only with post beta bits from CodePlex and hopefully this will continue to work until RTM :-) This code is a bit different from Christians original code as the API has changed. 
The biggest change is that the Read/Write functions no longer receive a global context object that gives access to the Request and Response objects as the older bits did. Instead you now have to override the GetPerRequestFormatterInstance() method, which receives the Request as a parameter. You can capture the Request there, or use the request to pick up the values you need and store them on the formatter. Note that I also have to create a new instance of the formatter since I'm storing request-specific state on the instance (whether the callback= query string is present), so I return a new instance of this formatter. Other than that the code should be straightforward: it basically writes out the function pre- and post-amble and defers to the base formatter to produce the JSON that the function call wraps. The code uses the async APIs to write this data out (this will take some getting used to seeing all over the place for me). Hooking up the JsonpFormatter Once you've created a formatter, it has to be added to the request processing sequence by adding it to the formatter collection. Web API is configured via the static GlobalConfiguration object.  protected void Application_Start(object sender, EventArgs e) { // Verb Routing RouteTable.Routes.MapHttpRoute( name: "AlbumsVerbs", routeTemplate: "albums/{title}", defaults: new { title = RouteParameter.Optional, controller = "AlbumApi" } ); GlobalConfiguration .Configuration .Formatters .Insert(0, new Westwind.Web.WebApi.JsonpFormatter()); }   That's all it takes. Note that I added the formatter at the top of the list of formatters rather than at the end - this ordering is required because the JSONP formatter needs to fire before any other JSON formatter, since it relies on the JSON formatter to encode the actual JSON data. If you reverse the order the JSONP output never shows up. So, in general, when adding new formatters be aware of the order in which they are added. Resources JsonpFormatter Code on GitHub © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Web Api

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me -  installing Visual Studio before explicitly installing .NET 3.5 from Windows Features - in particular. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Feature dialog, I thought - .NET 3.5 - who needs it. I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and install a non-SP1 one version of it, you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer is a little whacky. After some back and forth Alex Skorkin on Twitter pointed me in the right direction: He asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy because the standalone installer doesn't work on Windows 8. I can't get either .NET 3.5 installer or the SP 1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
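If you prefer to script the registry check described above rather than browsing regedit by hand, a small sketch along these lines should work (it assumes .NET 4.0 or later for RegistryView and simply reads the SP value from both the 64 bit view and the 32 bit Wow6432Node view):

using System;
using Microsoft.Win32;

class CheckNet35ServicePack
{
    static void Main()
    {
        // Read the .NET 3.5 "SP" value from both registry views;
        // the 32 bit view corresponds to the Wow6432Node branch mentioned above.
        foreach (var view in new[] { RegistryView.Registry64, RegistryView.Registry32 })
        {
            using (var baseKey = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine, view))
            using (var key = baseKey.OpenSubKey(@"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5"))
            {
                object sp = (key != null) ? key.GetValue("SP") : null;
                Console.WriteLine("{0}: .NET 3.5 SP = {1}", view, sp ?? "(not found)");
            }
        }
    }
}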
How did this happen I'm not sure exactly whether this is cause and effect, but I suspect the story goes like this:
- Installed Windows 8 without support for .NET 3.5
- Installed Visual Studio 2010, which installs .NET 3.5 (no SP)
I now had .NET 3.5 installed but without SP1. I then:
- Tried to install CodeRush - Error: .NET 3.5 SP1 required
- Enabled .NET 3.5 in Windows Features
I figured enabling the .NET 3.5 Windows Feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32 bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left at least the 32 bit install with no SP1 installed. How to Fix it My final solution was to completely uninstall .NET 3.5 *and* to reboot:
- Go to Windows Features
- Uncheck the .NET Framework 3.5
- Restart Windows
- Go to Windows Features
- Check .NET Framework 3.5
And voila, I now have a proper installation of .NET 3.5. I tried this before but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5! More Problems The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks or wasn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows: dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong. © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Windows  .NET

    Read the article

  • September 2012 Release of the Ajax Control Toolkit

    - by Stephen.Walther
    I’m excited to announce the September 2012 release of the Ajax Control Toolkit! This is the first release of the Ajax Control Toolkit which supports the .NET 4.5 framework. We also continue to support ASP.NET 3.5 and ASP.NET 4.0. With this release, we’ve made several important bug fixes. The Superexpert team focused on fixing the highest voted issues associated with the CascadingDropDown control. I’ve created a list of these bug fixes later in this blog post. You can download the latest release of the Ajax Control Toolkit by visiting the following page at CodePlex: http://AjaxControlToolkit.CodePlex.com Alternatively, you can install the latest version of the Ajax Control Toolkit using NuGet by firing off the following command from the Package Manager Console: Install-Package AjaxControlToolkit Using the Ajax Control Toolkit with ASP.NET 4.5 Let me walk through the steps for using the Ajax Control Toolkit with ASP.NET 4.5. First, I’ll create a new ASP.NET 4.5 website with Visual Studio 2012. I’ll create the new website with the ASP.NET Web Forms Application template: When you create a new ASP.NET 4.5 site with the ASP.NET Web Forms Application template, you get a starter website. If you run the site, then you get a page with default content: Let me show you how you can add the Ajax Control Toolkit Calendar control to the homepage of this starter site. The first step is to use NuGet to install the Ajax Control Toolkit. Right-click the References folder in the Solution Explorer window and select the menu option Manage NuGet Packages. In the Manage NuGet Packages dialog, use the search box to search for the Ajax Control Toolkit (enter “AjaxControlToolkit”). After you find it, click the Install button to add the Ajax Control Toolkit to your project. That’s all you have to do to install the Ajax Control Toolkit! Now we are ready to start using the Ajax Control Toolkit controls. Open the default.aspx page so we can modify the contents of the page. Erase everything contained in the Content control with the ID of BodyContent. After erasing the content, declare the following two controls: <asp:TextBox ID="vacationDate" runat="server" /> <ajaxToolkit:CalendarExtender TargetControlID="vacationDate" runat="server" /> The first control is a standard ASP.NET TextBox control and the second control is an Ajax Control Toolkit Calendar control. You should get intellisense as you type out the Ajax Control Toolkit Calendar control. If you don’t, then close and re-open the Default.aspx page. Now, let’s run our app. Hit the F5 button or select Debug, Start Debugging from the Visual Studio menu. You will get the error message “MsAjaxBundle is not a valid script name”. Don’t despair! We need to update the Master Page so it uses the ToolkitScriptManager instead of the default ScriptManager. Open the Site.Master file and find where the ScriptManager is declared. 
The ScriptManager should look like this: <asp:ScriptManager runat="server"> <Scripts> <%--Framework Scripts--%> <asp:ScriptReference Name="MsAjaxBundle" /> <asp:ScriptReference Name="jquery" /> <asp:ScriptReference Name="jquery.ui.combined" /> <asp:ScriptReference Name="WebForms.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebForms.js" /> <asp:ScriptReference Name="WebUIValidation.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebUIValidation.js" /> <asp:ScriptReference Name="MenuStandards.js" Assembly="System.Web" Path="~/Scripts/WebForms/MenuStandards.js" /> <asp:ScriptReference Name="GridView.js" Assembly="System.Web" Path="~/Scripts/WebForms/GridView.js" /> <asp:ScriptReference Name="DetailsView.js" Assembly="System.Web" Path="~/Scripts/WebForms/DetailsView.js" /> <asp:ScriptReference Name="TreeView.js" Assembly="System.Web" Path="~/Scripts/WebForms/TreeView.js" /> <asp:ScriptReference Name="WebParts.js" Assembly="System.Web" Path="~/Scripts/WebForms/WebParts.js" /> <asp:ScriptReference Name="Focus.js" Assembly="System.Web" Path="~/Scripts/WebForms/Focus.js" /> <asp:ScriptReference Name="WebFormsBundle" /> <%--Site Scripts--%> </Scripts> </asp:ScriptManager> We need to make three changes to the ScriptManager: 1) We need to replace the asp:ScriptManager with the ajaxToolkit:ToolkitScriptManager 2) We need to remove the MsAjaxBundle bundle from the ScriptReferences 3) We need to remove the Assembly=”System.Web” attributes from the ScriptReferences After you make these three changes, the ToolkitScriptManager should looks like this: <ajaxToolkit:ToolkitScriptManager runat="server"> <Scripts> <%--Framework Scripts--%> <asp:ScriptReference Name="jquery" /> <asp:ScriptReference Name="jquery.ui.combined" /> <asp:ScriptReference Name="WebForms.js" Path="~/Scripts/WebForms/WebForms.js" /> <asp:ScriptReference Name="WebUIValidation.js" Path="~/Scripts/WebForms/WebUIValidation.js" /> <asp:ScriptReference Name="MenuStandards.js" Path="~/Scripts/WebForms/MenuStandards.js" /> <asp:ScriptReference Name="GridView.js" Path="~/Scripts/WebForms/GridView.js" /> <asp:ScriptReference Name="DetailsView.js" Path="~/Scripts/WebForms/DetailsView.js" /> <asp:ScriptReference Name="TreeView.js" Path="~/Scripts/WebForms/TreeView.js" /> <asp:ScriptReference Name="WebParts.js" Path="~/Scripts/WebForms/WebParts.js" /> <asp:ScriptReference Name="Focus.js" Path="~/Scripts/WebForms/Focus.js" /> <asp:ScriptReference Name="WebFormsBundle" /> <%--Site Scripts--%> </Scripts> </ajaxToolkit:ToolkitScriptManager> After we make these changes, the app should run successfully. You’ll get a page which contains a text field. When you click inside the text field, a popup calendar is displayed. Ajax Control Toolkit and jQuery You might have noticed that the ScriptManager includes a reference to jQuery by default. We did not remove that reference when we converted the ScriptManager to a ToolkitScriptManager. You can use the Ajax Control Toolkit and jQuery side-by-side. Here’s how you can modify the Default.aspx page so that it contains two popup calendars. The first popup calendar is created with the Ajax Control Toolkit and the second popup calendar is created with jQuery: <asp:TextBox ID="vacationDate" runat="server" /> <ajaxToolkit:CalendarExtender TargetControlID="vacationDate" runat="server" /> <input id="birthDate" /> <script> $("#birthDate").datepicker(); </script> Before you can start using jQuery UI plugins, you need to complete one more step. 
You need to add the jQuery UI themes bundle to the HEAD of the Site.Master page like this: <head runat="server"> <meta charset="utf-8" /> <title><%: Page.Title %> - My ASP.NET Application</title> <asp:PlaceHolder runat="server"> <%: Scripts.Render("~/bundles/modernizr") %> </asp:PlaceHolder> <webopt:BundleReference runat="server" Path="~/Content/css" /> <webopt:BundleReference runat="server" Path="~/Content/themes/base/css" /> <link href="~/favicon.ico" rel="shortcut icon" type="image/x-icon" /> <meta name="viewport" content="width=device-width" /> <asp:ContentPlaceHolder runat="server" ID="HeadContent" /> </head> The markup above includes a reference to the jQuery UI themes bundle: <webopt:BundleReference runat="server" Path="~/Content/themes/base/css" /> Now that we have made these changes, we can use the Ajax Control Toolkit and jQuery at the same time. When you run your app, you get two popup calendars. When you click in the first text field, the Ajax Control Toolkit calendar appears. When you click in the second text field, the jQuery UI popup calendar appears: Bug Fixes in this Release We made several important bug fixes with this release of the Ajax Control Toolkit and integrated several Pull Requests contributed by the community. Our primary focus during this sprint was fixing issues with the CascadingDropDown control. We fixed the following issues associated with the CascadingDropDown: · 9490 – Don’t disable dropdowns in CascadingDropDown · 14223 – CascadingDropDown Reset or Setting SelectedValue from WebMethod · 12189 – CascadingDropDown not obeying disabled state of DropDownList · 22942 – CascadingDropDown infinite loop (with solution) · 8671 – CascadingDropdown options is null or undefined · 14407 – CascadingDropDown: populated client event happens too often · 17148 – CascadingDropDown – Add “UseHttpGet” property · 10221 – No NotNull check in CascadingDropDown · 12228 – Provide property for case-insensitive DefaultValue lookup in CascadingDropdown We also fixed the following two issues which are not directly related to the CascadingDropDown control: · 27108 – CalendarExtender: Bug when selecting December shifts to January. · 27041 – Input controls with HTML5 types do not post back in Firefox, Chrome, Safari Finally, we integrated several Pull Requests submitted by the community (Thank you community!): · Added French localized resources for the AjaxFileUpload · Resolved an issue which prevented the AjaxFileUpload control from working with pages that require query string variables. · Extended the AjaxFileUploadEventArgs class to include the current file index in the queue and the total number of files in the queue. · Fixed an issue with TabContainer and TabPanel which caused the OnActiveTabChanged event to fire too often. Summary I’m happy to see the Ajax Control Toolkit move forward into the brave new world of ASP.NET 4.5! In this latest release, we focused on ensuring that the Ajax Control Toolkit works smoothly with ASP.NET 4.5 applications. We also fixed the highest voted bugs associated with the CascadingDropDown control and integrated several Pull Request submitted by the community. Once again, I want to thank the Superexpert team for their hard work on this release!

    Read the article

< Previous Page | 406 407 408 409 410 411 412 413 414 415 416 417  | Next Page >