Search Results

Search found 58245 results on 2330 pages for 'asp net authentication'.


  • VB.net (aspx) mysql connection

    - by StealthRT
    Hey all, I am new to ASP.NET and VB.NET code-behind. I have a classic ASP page that connects to the MySQL server with the following code:

        Set oConnection = Server.CreateObject("ADODB.Connection")
        Set oRecordset = Server.CreateObject("ADODB.Recordset")
        oConnection.Open "DRIVER={MySQL ODBC 3.51 Driver}; SERVER=xxx.com; PORT=3306; DATABASE=xxx; USER=xxx; PASSWORD=xxx; OPTION=3;"
        sqltemp = "select * from userinfo WHERE emailAddress = '" & theUN & "'"
        oRecordset.Open sqltemp, oConnection, 3, 3
        if oRecordset.EOF then ...

    However, I am unable to find anything that connects to MySQL in ASP.NET (VB.NET). I have only found this piece of code, which does not seem to work once it gets to the "Dim conn As New OdbcConnection(MyConString)" line:

        Dim MyConString As String = "DRIVER={MySQL ODBC 3.51 Driver};" & _
            "SERVER=xxx.com;" & _
            "DATABASE=xxx;" & _
            "UID=xxx;" & _
            "PASSWORD=xxx;" & _
            "OPTION=3;"
        Dim conn As New OdbcConnection(MyConString)
        MyConnection.Open()
        Dim MyCommand As New OdbcCommand
        MyCommand.Connection = MyConnection
        MyCommand.CommandText = "select * from userinfo WHERE emailAddress = '" & theUN & "'"
        MyCommand.ExecuteNonQuery()
        MyConnection.Close()

    I have these import statements also:

        <%@ Import Namespace=System %>
        <%@ Import Namespace=System.IO %>
        <%@ Import Namespace=System.Web %>
        <%@ Import Namespace=System.ServiceProcess %>
        <%@ Import Namespace=Microsoft.Data.Odbc %>
        <%@ Import Namespace=MySql.Data.MySqlClient %>
        <%@ Import Namespace=MySql.Data %>
        <%@ Import Namespace=System.Data %>

    So any help would be great! :o) David
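    A likely first culprit in the snippet above: it declares conn but then opens MyConnection, so it cannot compile as posted, and concatenating theUN into the SQL invites injection. A minimal sketch of the same lookup using the MySql.Data.MySqlClient classes the page already imports, shown here in C# (the VB translation is mechanical; server, database and credentials are placeholders):

        using MySql.Data.MySqlClient;   // Connector/Net client, already imported by the page

        // Placeholder connection string - substitute real server/database/credentials.
        string connStr = "Server=xxx.com;Port=3306;Database=xxx;Uid=xxx;Pwd=xxx;";

        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand(
            "SELECT * FROM userinfo WHERE emailAddress = @email", conn))
        {
            // Parameterize instead of concatenating user input into the SQL string.
            cmd.Parameters.AddWithValue("@email", theUN);

            conn.Open();
            using (var reader = cmd.ExecuteReader())   // SELECT needs ExecuteReader, not ExecuteNonQuery
            {
                if (!reader.HasRows)
                {
                    // same branch as the classic ASP oRecordset.EOF check
                }
            }
        }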

    Read the article

  • Rendering a control generates security exception in .Net 4

    - by Jason Short
    I am having a problem with code that worked fine in .Net 2 giving this error under .Net 4. Build (web): Inheritance security rules violated while overriding member: 'Controls.RelatedPosts.RenderControl(System.Web.UI.HtmlTextWriter)'. Security accessibility of the overriding method must match the security accessibility of the method being overriden. This is in DotNetBlogEngine. There were several other security demands in the code that .Net 4 didn't seem to like. I followed some of the advice I found on blogs (and here) and got rid of all the other errors. But this one still eludes me. The main BlogEngine core dll is no longer set for security demands and is compiled for .Net 4 as well. This error is on the website side, attempting to use the dll. There are controls that call a RenderControl method taking an HtmlTextWriter. Apparently the text writer now has some sort of security attributes set on it. Each of the controls implements a custom interface (public interface ICustomFilter); there are no security permissions present or demands. The site is running full trust on my local dev machine.
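    For anyone hitting the same wall: this is typically .NET 4's new level-2 security transparency model objecting to the override. A hedged sketch of the two common workarounds (assuming you can edit and recompile the assembly that declares the control; the control shape below is illustrative, not BlogEngine's actual code):

        // Option 1 (AssemblyInfo.cs): opt the assembly back into the .NET 2.0
        // (level 1) security rules so the old inheritance behavior applies.
        using System.Security;
        [assembly: SecurityRules(SecurityRuleSet.Level1)]

        // Option 2: stay on level 2 and annotate the override so its security
        // accessibility matches the base method.
        public class RelatedPosts : System.Web.UI.Control
        {
            [SecuritySafeCritical]
            public override void RenderControl(System.Web.UI.HtmlTextWriter writer)
            {
                base.RenderControl(writer);
            }
        }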

    Read the article

  • Project setup for an ADO.NET/WCF DataService

    - by Slauma
    I'd like to implement an ADO.NET/WCF DataService and I am wondering what's the best way to set up a project in VS2008 SP1 for this purpose. Currently I have an ASP.NET web application project (not of "WebSite" project type). The data access layer is an Entity model (EF version 1) with a SQL Server database. I have the Entity Model in a separate DLL project and the web application project references this assembly for all data access. The ADO.NET/WCF DataService needs to communicate with the Entity model/database as well. It has to be hosted on the same web server (IIS 7.5) together with the web application. Since the DataService is not directly related to that specific web application (though it will provide and modify data from/in the same database the web application uses as well), my basic idea was to separate the DataService into its own new project (which also references the Entity Model DLL). Now I have seen that there is no project type "ADO.NET/WCF DataService" in VS2008 SP1. It seems only possible to add a DataService as an element to other existing projects, for instance Web Application projects. Why isn't there a separate DataService project type? Does this mean now that I have to add the DataService as an element to my Web Application project? Or shall I create a new Web Application project and add a DataService to it? (I could delete the pregenerated default.aspx since I do not need any web pages in this project.) What's the best way? Thank you for suggestions in advance!
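    For context, the VS2008 SP1 item template ("ADO.NET Data Service") only generates an .svc endpoint plus a code-behind class, which is why it exists as a project item rather than a project type - and why it can live in its own stripped-down Web Application project that simply references the Entity Model DLL. A minimal sketch using the .NET 3.5 SP1 API (the entity context name is a placeholder):

        using System.Data.Services;

        // NorthwindEntities stands in for the EF ObjectContext from the model assembly.
        public class ProductsDataService : DataService<NorthwindEntities>
        {
            // Called once to configure the service's access rules.
            public static void InitializeService(IDataServiceConfiguration config)
            {
                // Expose all entity sets read-only; tighten per entity set as needed.
                config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            }
        }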

    Read the article

  • Converting Python Script to Vb.NET - Involves Post and Input XML String

    - by Jason Shoulders
    I'm trying to convert a Python script to VB.NET. The Python script appears to accept some XML input data, take it to a web URL, and do a "POST". I tried some VB.NET code to do it, but I think my approach is off, because I got an error back ("BadXmlDataErr") and I can't really format my input XML very well - I'm only sending name/value strings. The input XML is richer than that. Here is an example of what the XML input data looks like in the Python script:

        <obj is="MyOrg:realCommand_v1/" >
          <int name="priority" val="1" />
          <real name="value" val="9.5" />
          <str name="user" val="MyUserName" />
          <reltime name="overrideTime" val="PT60S"/>
        </obj>

    Here's the VB.NET code I attempted to convert that with:

        Dim reqparm As New Specialized.NameValueCollection
        reqparm.Add("priority", "1")
        reqparm.Add("value", "9.5")
        reqparm.Add("user", "MyUserName")
        reqparm.Add("overrideTime", "PT60S")

        Using client As New Net.WebClient
            Dim sTheUrl As String = "[My URL]"
            Dim responsebytes = client.UploadValues(sTheUrl, "POST", MyReqparm)
            Dim responsebody = (New System.Text.UTF8Encoding).GetString(responsebytes)
        End Using

    I feel like I should be doing something else. Can anyone point me in the right direction?
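    One observation that may help: the Python script sends the XML document itself as the POST body, while UploadValues sends form-encoded name/value pairs - which would explain both the BadXmlDataErr and the formatting problem. A hedged C# sketch of posting the raw XML instead (the URL is a placeholder and the text/xml content type is an assumption about the service):

        using System.Net;

        string xml =
            "<obj is=\"MyOrg:realCommand_v1/\">" +
            "<int name=\"priority\" val=\"1\" />" +
            "<real name=\"value\" val=\"9.5\" />" +
            "<str name=\"user\" val=\"MyUserName\" />" +
            "<reltime name=\"overrideTime\" val=\"PT60S\"/>" +
            "</obj>";

        using (var client = new WebClient())
        {
            // Send the XML as the request body, not as form fields.
            client.Headers[HttpRequestHeader.ContentType] = "text/xml";
            string response = client.UploadString("http://example.com/endpoint", "POST", xml);
            // Inspect 'response' for the service's acknowledgement or error detail.
        }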

    Read the article

  • Adding a hyperlink in a client report definition file (RDLC)

    - by rajbk
    This post shows you how to add a hyperlink to your RDLC report. In a previous post, I showed you how to create an RDLC report. We have been given a new requirement for the report we created earlier, the Northwind Product report: add a column of hyperlinks that are unique per row. The URLs will be RESTful, with the ProductID at the end. Clicking on the URL will take the user to a website like http://localhost/products/3, where 3 is the primary key of the product row clicked on.

    To start off, open the RDLC and add a new column to the product table. Add text to the header (Details) and row (Product Website). Right click on the row (not the header) and select “TextBox properties”. Select Action – Go to URL. You could hard-code a URL here, but what we need is a URL that changes based on the ProductID. Click on the expression button (fx). The expression builder gives you access to several functions and constants, including the fields in your dataset. See this reference for more details: Common Expressions for ReportViewer Reports. Add the following expression:

        = "http://localhost/products/" & Fields!ProductID.Value

    Click OK to exit the Expression Builder. The report will not render yet because hyperlinks are disabled by default in the ReportViewer control. To enable them, add the following in your page load event (where rvProducts is the ID of your ReportViewer control):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                rvProducts.LocalReport.EnableHyperlinks = true;
            }
        }

    We want our links to open in a new window, so set the HyperLinkTarget property of the ReportViewer control to “_blank”. We are done adding hyperlinks to our report. Clicking on the links for each product pops open a new window. The URL has the ProductID added at the end. Enjoy!
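    If you prefer to keep both settings together in code rather than set the target in the designer, the page load wiring can carry them side by side (a sketch; verify the HyperlinkTarget property name and casing against your ReportViewer version's docs):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                // Hyperlinks are disabled by default for safety; opt in explicitly.
                rvProducts.LocalReport.EnableHyperlinks = true;

                // Code equivalent of setting HyperLinkTarget in the property grid.
                rvProducts.HyperlinkTarget = "_blank";
            }
        }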

    Read the article

  • Tulsa SharePoint Interest Group – SharePoint 2010 Mini-Launch Event - Review

    - by dmccollough
    The Tulsa SharePoint Interest Group set a record for attendance last night at our SharePoint 2010 Mini-Launch Event. More than 40 people showed up to listen to SharePoint MVP Eric Shupps, The SharePoint Cowboy, discuss all of the new features for both administrators and developers. All of the Tulsa SharePoint Interest Group officers worked very hard to ensure that this event happened. We hosted our event at our local Dave & Busters, and it was a great location with good food and great service. All of the officers of the Tulsa SharePoint Interest Group would like to extend a big Thank You to all of our sponsors that helped us in making our SharePoint 2010 Mini-Launch Event a reality.

    Read the article

  • Creating a dynamic proxy generator – Part 1 – Creating the Assembly builder, Module builder and caching mechanism

    - by SeanMcAlinden
    I’ve recently started a project with a few mates to learn the ins and outs of Dependency Injection, AOP and a number of other pretty crucial patterns of development, as we’ve all been using these patterns for a while but have relied totally on third-party solutions to do the magic. We thought it would be interesting to really get into the details by rolling our own IoC container and hopefully learn a lot on the way, and you never know, we might even create an excellent framework. The open source project is called Rapid IoC and is hosted at http://rapidioc.codeplex.com/

    One of the most interesting tasks for me is creating the dynamic proxy generator for enabling Aspect Oriented Programming (AOP). In this series of articles, I’m going to track each step I take for creating the dynamic proxy generator and I’ll try my best to explain what everything means - mainly as I’ll be using Reflection.Emit to emit a fair amount of intermediate language code (IL) to create the proxy types at runtime, which can be a little taxing to read. It’s worth noting that building the proxy is without a doubt going to be slightly painful, so I imagine there will be plenty of areas I’ll need to change along the way. Anyway, let’s get started…

    Part 1 - Creating the Assembly builder, Module builder and caching mechanism

    Part 1 is going to be a really nice simple start: I’m just going to start by creating the assembly, module and type caches. The reason we need to create caches for the assembly, module and types is simply to save the overhead of recreating proxy types that have already been generated; this will be one of the important steps to ensure that the framework is fast… kind of important as we’re calling the IoC container ‘Rapid’ – it will be a little bit embarrassing if we manage to create the slowest framework.

    The Assembly builder

    The assembly builder is what is used to create an assembly at runtime. We’re going to have two overloads: one will be for the actual use of the proxy generator, the other will be mainly for testing purposes, as it will also save the assembly so we can use Reflector to examine the code that has been created. Here’s the code:

    DynamicAssemblyBuilder

        using System;
        using System.Reflection;
        using System.Reflection.Emit;

        namespace Rapid.DynamicProxy.Assembly
        {
            /// <summary>
            /// Class for creating an assembly builder.
            /// </summary>
            internal static class DynamicAssemblyBuilder
            {
                #region Create

                /// <summary>
                /// Creates an assembly builder.
                /// </summary>
                /// <param name="assemblyName">Name of the assembly.</param>
                public static AssemblyBuilder Create(string assemblyName)
                {
                    AssemblyName name = new AssemblyName(assemblyName);

                    AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                            name, AssemblyBuilderAccess.Run);

                    DynamicAssemblyCache.Add(assembly);

                    return assembly;
                }

                /// <summary>
                /// Creates an assembly builder and saves the assembly to the passed in location.
                /// </summary>
                /// <param name="assemblyName">Name of the assembly.</param>
                /// <param name="filePath">The file path.</param>
                public static AssemblyBuilder Create(string assemblyName, string filePath)
                {
                    AssemblyName name = new AssemblyName(assemblyName);

                    AssemblyBuilder assembly = AppDomain.CurrentDomain.DefineDynamicAssembly(
                            name, AssemblyBuilderAccess.RunAndSave, filePath);

                    DynamicAssemblyCache.Add(assembly);

                    return assembly;
                }

                #endregion
            }
        }

    So hopefully the above class is fairly self-explanatory: an AssemblyName is created using the passed in string for the actual name of the assembly. An AssemblyBuilder is then constructed with the current AppDomain and, depending on the overload used, it is either just run in the current context or it is set up ready for saving. It is then added to the cache.

    DynamicAssemblyCache

        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Exceptions;
        using Rapid.DynamicProxy.Resources.Exceptions;

        namespace Rapid.DynamicProxy.Assembly
        {
            /// <summary>
            /// Cache for storing the dynamic assembly builder.
            /// </summary>
            internal static class DynamicAssemblyCache
            {
                #region Declarations

                private static object syncRoot = new object();
                internal static AssemblyBuilder Cache = null;

                #endregion

                #region Adds a dynamic assembly to the cache.

                /// <summary>
                /// Adds a dynamic assembly builder to the cache.
                /// </summary>
                /// <param name="assemblyBuilder">The assembly builder.</param>
                public static void Add(AssemblyBuilder assemblyBuilder)
                {
                    lock (syncRoot)
                    {
                        Cache = assemblyBuilder;
                    }
                }

                #endregion

                #region Gets the cached assembly

                /// <summary>
                /// Gets the cached assembly builder.
                /// </summary>
                /// <returns></returns>
                public static AssemblyBuilder Get
                {
                    get
                    {
                        lock (syncRoot)
                        {
                            if (Cache != null)
                            {
                                return Cache;
                            }
                        }

                        throw new RapidDynamicProxyAssertionException(AssertionResources.NoAssemblyInCache);
                    }
                }

                #endregion
            }
        }

    The cache is simply a static property that will store the AssemblyBuilder (I know it’s a little weird that I’ve made it public; this is for testing purposes, I know that’s a bad excuse but hey…). There are two methods for using the cache – Add and Get – and these just provide thread safe access to the cache.

    The Module Builder

    The module builder is required as the created proxy classes will need to live inside a module within the assembly. Here’s the code:

    DynamicModuleBuilder

        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Assembly;

        namespace Rapid.DynamicProxy.Module
        {
            /// <summary>
            /// Class for creating a module builder.
            /// </summary>
            internal static class DynamicModuleBuilder
            {
                /// <summary>
                /// Creates a module builder using the cached assembly.
                /// </summary>
                public static ModuleBuilder Create()
                {
                    string assemblyName = DynamicAssemblyCache.Get.GetName().Name;

                    ModuleBuilder moduleBuilder = DynamicAssemblyCache.Get.DefineDynamicModule
                        (assemblyName, string.Format("{0}.dll", assemblyName));

                    DynamicModuleCache.Add(moduleBuilder);

                    return moduleBuilder;
                }
            }
        }

    As you can see, the module builder is created on the assembly that lives in the DynamicAssemblyCache; the module is given the assembly name and also a string representing the filename if the assembly is to be saved. It is then added to the DynamicModuleCache.

    DynamicModuleCache

        using System.Reflection.Emit;
        using Rapid.DynamicProxy.Exceptions;
        using Rapid.DynamicProxy.Resources.Exceptions;

        namespace Rapid.DynamicProxy.Module
        {
            /// <summary>
            /// Class for storing the module builder.
            /// </summary>
            internal static class DynamicModuleCache
            {
                #region Declarations

                private static object syncRoot = new object();
                internal static ModuleBuilder Cache = null;

                #endregion

                #region Add

                /// <summary>
                /// Adds a dynamic module builder to the cache.
                /// </summary>
                /// <param name="moduleBuilder">The module builder.</param>
                public static void Add(ModuleBuilder moduleBuilder)
                {
                    lock (syncRoot)
                    {
                        Cache = moduleBuilder;
                    }
                }

                #endregion

                #region Get

                /// <summary>
                /// Gets the cached module builder.
                /// </summary>
                /// <returns></returns>
                public static ModuleBuilder Get
                {
                    get
                    {
                        lock (syncRoot)
                        {
                            if (Cache != null)
                            {
                                return Cache;
                            }
                        }

                        throw new RapidDynamicProxyAssertionException(AssertionResources.NoModuleInCache);
                    }
                }

                #endregion
            }
        }

    The DynamicModuleCache is very similar to the assembly cache; it is simply a statically stored module with thread safe Add and Get methods.

    The DynamicTypeCache

    To end off this post, I’m going to create the cache for storing the generated proxy classes. I’ve spent a fair amount of time thinking about the type of collection I should use to store the types and have finally decided that for the time being I’m going to use a generic dictionary. This may change when I can actually performance test the proxy generator, but for the time being I think it makes good sense in theory, mainly as it pretty much maintains its performance with varying numbers of items – almost constant, O(1). Plus I won’t ever need to loop through the items, which is not the dictionary’s strong point. Here’s the code as it currently stands:

    DynamicTypeCache

        using System;
        using System.Collections.Generic;
        using System.Security.Cryptography;
        using System.Text;

        namespace Rapid.DynamicProxy.Types
        {
            /// <summary>
            /// Cache for storing proxy types.
            /// </summary>
            internal static class DynamicTypeCache
            {
                #region Declarations

                static object syncRoot = new object();
                public static Dictionary<string, Type> Cache = new Dictionary<string, Type>();

                #endregion

                /// <summary>
                /// Adds a proxy to the type cache.
                /// </summary>
                /// <param name="type">The type.</param>
                /// <param name="proxy">The proxy.</param>
                public static void AddProxyForType(Type type, Type proxy)
                {
                    lock (syncRoot)
                    {
                        Cache.Add(GetHashCode(type.AssemblyQualifiedName), proxy);
                    }
                }

                /// <summary>
                /// Tries to get the proxy for the given type.
                /// </summary>
                /// <param name="type">The type.</param>
                /// <returns></returns>
                public static Type TryGetProxyForType(Type type)
                {
                    lock (syncRoot)
                    {
                        Type proxyType;
                        Cache.TryGetValue(GetHashCode(type.AssemblyQualifiedName), out proxyType);
                        return proxyType;
                    }
                }

                #region Private Methods

                private static string GetHashCode(string fullName)
                {
                    SHA1CryptoServiceProvider provider = new SHA1CryptoServiceProvider();
                    Byte[] buffer = Encoding.UTF8.GetBytes(fullName);
                    Byte[] hash = provider.ComputeHash(buffer, 0, buffer.Length);
                    return Convert.ToBase64String(hash);
                }

                #endregion
            }
        }

    As you can see, there are two public methods: one for adding to the cache and one for getting from the cache. Hopefully they should be clear enough; the Get is a TryGet as I do not want the dictionary to throw an exception if a proxy doesn’t exist within the cache. Other than that, I’ve decided to create a key using the SHA1CryptoServiceProvider. This may change, but my initial thought is that the SHA1 algorithm is pretty fast to put together using the provider and it is also very unlikely to have any hashing collisions. (There is some maths behind how unlikely this is – here’s the wiki if you’re interested: http://en.wikipedia.org/wiki/SHA_hash_functions)

    Anyway, that’s the end of part 1 – although I haven’t started any of the fun stuff (by fun I mean hair-pulling, teeth-grating Reflection.Emit style fun), I’ve got the basis of the DynamicProxy in place, so all we have to worry about now is creating the types, interceptor classes, method invocation information classes and finally a really nice fluent interface that will abstract all of the hard-core craziness away and leave us with a lightning fast, easy to use AOP framework. Hope you find the series interesting. All of the source code can be viewed and/or downloaded at our codeplex site - http://rapidioc.codeplex.com/ Kind Regards, Sean.
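    As a quick illustration of how these pieces are meant to hang together (a sketch only - EmitProxyType and ICustomerService below are hypothetical placeholders, and the real type emission arrives later in the series), typical usage of the builders and caches might look like:

        // Build the assembly and module once, up front.
        AssemblyBuilder assembly = DynamicAssemblyBuilder.Create("Rapid.Dynamic");
        ModuleBuilder module = DynamicModuleBuilder.Create();

        // When a proxy is requested, consult the type cache first...
        Type proxy = DynamicTypeCache.TryGetProxyForType(typeof(ICustomerService));
        if (proxy == null)
        {
            // ...and only pay the Reflection.Emit cost on a cache miss.
            proxy = EmitProxyType(module, typeof(ICustomerService)); // hypothetical
            DynamicTypeCache.AddProxyForType(typeof(ICustomerService), proxy);
        }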

    Read the article

  • The perfect DotNetNuke Christmas present

    - by Chris Hammond
    Are you racking your brain trying to come up with a gift for that DotNetNuke person in your life? If so, I’ve got just the solution: you can buy them my book! DotNetNuke 5 User’s Guide: Get your website up and running! It’s the perfect item for the DotNetNuke love of your life. If you buy a copy and want it signed, I’ll even offer to sign it if you mail it to me. Please be sure to include postage both ways. You probably won’t be able to get it to me and back in time for Christmas, but the signing can happen...(read more)

    Read the article

  • “Unplugged” LIDNUG online talk with me on Monday (April 16th)

    - by ScottGu
    This coming Monday (April 16th) I’m doing another online LIDNUG session.  The talk will be from 10am to 11:30am (Pacific Time).  I do these talks a few times a year and they tend to be pretty fun.  Attendees can ask me any questions they want, and listen to me answer them live via LiveMeeting.  We usually end up having some really good discussions on a wide variety of topics.  Any topic or question is fair game. You can learn more and register to attend the online event for free here. I’ll update this post with a download link to a recorded audio version of the talk after the event is over. Hope to get a chance to chat with some of you there! Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Windows Azure: General Availability of Web Sites + Mobile Services, New AutoScale + Alerts Support, No Credit Card Needed for MSDN

    - by ScottGu
    This morning we released a major set of updates to Windows Azure.  These updates included:

    Web Sites: General Availability Release of Windows Azure Web Sites with SLA
    Mobile Services: General Availability Release of Windows Azure Mobile Services with SLA
    Auto-Scale: New automatic scaling support for Web Sites, Cloud Services and Virtual Machines
    Alerts/Notifications: New email alerting support for all Compute Services (Web Sites, Mobile Services, Cloud Services, and Virtual Machines)
    MSDN: No more credit card requirement for sign-up

    All of these improvements are now available to use immediately (note: some are still in preview).  Below are more details about them.

    Web Sites: General Availability Release of Windows Azure Web Sites

    I’m incredibly excited to announce the General Availability release of Windows Azure Web Sites. The Windows Azure Web Sites service is perfect for hosting a web presence, building customer engagement solutions, and delivering business web apps.  Today’s General Availability release means we are taking off the “preview” tag from the Free and Standard (formerly called Reserved) tiers of Windows Azure Web Sites.  This means we are providing:

    A 99.9% monthly SLA (Service Level Agreement) for the Standard tier
    Microsoft Support available on a 24x7 basis (with plans that range from developer plans to enterprise Premier support)

    The Free tier runs in a shared compute environment and supports up to 10 web sites. While the Free tier does not come with an SLA, it works great for rapid development and testing and enables you to quickly spike out ideas at no cost. The Standard tier, which was called “Reserved” during the preview, runs using dedicated per-customer VM instances for great performance, isolation and scalability, and enables you to host up to 500 different web sites within them.  You can easily scale your Standard instances on-demand using the Windows Azure Management Portal.  You can adjust VM instance sizes from a Small instance size (1 core, 1.75GB of RAM), up to a Medium instance size (2 cores, 3.5GB of RAM), or a Large instance (4 cores and 7GB of RAM).  You can choose to run between 1 and 10 Standard instances, enabling you to easily scale up your web backend to 40 cores of CPU and 70GB of RAM.

    Today’s release also includes general availability support for custom domain SSL certificate bindings for web sites running using the Standard tier. Customers will be able to utilize certificates they purchase for their custom domains and use either SNI or IP based SSL encryption. SNI encryption is available for all modern browsers and does not require an IP address.  SSL certificates can be used for individual sites or wild-card mapped across multiple sites (we charge extra for the use of an SSL cert – but the fee is per-cert and not per-site, which means you pay once for it regardless of how many sites you use it with).  Today’s release also includes the following new features:

    Auto-Scale support

    Today’s Windows Azure release adds preview support for auto-scaling web sites.  This enables you to set up automatic scale rules based on the activity of your instances – allowing you to automatically scale down (and save money) when they are below a CPU threshold you define, and automatically scale up quickly when traffic increases.  See below for more details.

    64-bit and 32-bit mode support

    You can now choose to run your Standard tier instances in either 32-bit or 64-bit mode (previously they only ran in 32-bit mode).
    This enables you to address even more memory within individual web applications.

    Memory dumps

    Memory dumps can be very useful for diagnosing issues and debugging apps. Using a REST API, you can now get a memory dump of your sites, which you can then use for investigating issues in the Visual Studio Debugger, WinDbg, and other tools.

    Scaling Sites Independently

    Prior to today’s release, all sites scaled up/down together whenever you scaled any site in a sub-region. So you may have had to keep your proof-of-concept or testing sites in a separate sub-region if you wanted to keep them in the Free tier. This will no longer be necessary.  Windows Azure Web Sites can now mix different tier levels in the same geographic sub-region. This allows you, for example, to selectively move some of your sites in the West US sub-region up to the Standard tier when they require the features, scalability, and SLA of the Standard tier.

    Full pricing details on Windows Azure Web Sites can be found here.  Note that the “Shared Tier” of Windows Azure Web Sites remains in preview mode (and continues to have discounted preview pricing).

    Mobile Services: General Availability Release of Windows Azure Mobile Services

    I’m incredibly excited to announce the General Availability release of Windows Azure Mobile Services.  Mobile Services is perfect for building scalable cloud back-ends for Windows 8.x, Windows Phone, Apple iOS, Android, and HTML/JavaScript applications.

    Customers

    We’ve seen tremendous adoption of Windows Azure Mobile Services since we first previewed it last September, and more than 20,000 customers are now running mobile back-ends in production using it.  These customers range from startups like Yatterbox, to university students using Mobile Services to complete apps like Sly Fox in their spare time, to media giants like Verdens Gang finding new ways to deliver content, and telcos like TalkTalk Business delivering the up-to-the-minute information their customers require.  In today’s Build keynote, we demonstrated how TalkTalk Business is using Windows Azure Mobile Services to deliver service, outage and billing information to its customers, wherever they might be.

    Partners

    When we unveiled the source control and Custom API features I blogged about two weeks ago, we enabled a range of new scenarios, one of which is a more flexible way to work with third party services.  The following blogs, samples and tutorials from our partners cover great ways you can extend Mobile Services to help you build rich modern apps:

    New Relic allows developers to monitor and manage the end-to-end performance of iOS and Android applications connected to Mobile Services.
    SendGrid eliminates the complexity of sending email from Mobile Services, saving time and money, while providing reliable delivery to the inbox.
    Twilio provides a telephony infrastructure web service in the cloud that you can use with Mobile Services to integrate phone calls, text messages and IP voice communications into your mobile apps.
    Xamarin provides a Mobile Services add-on that makes it easy to build cross-platform connected mobile apps.
    Pusher lets you quickly and securely add scalable real-time messaging functionality to Mobile Services-based web and mobile apps.

    Visual Studio 2013 and Windows 8.1

    This week during the //build/ keynote, we demonstrated how Visual Studio 2013, Mobile Services and Windows 8.1 make building connected apps easier than ever.
    Developers building Windows 8 applications in Visual Studio can now connect them to Windows Azure Mobile Services by simply right-clicking and then choosing Add Connected Service. You can either create a new Mobile Service or choose an existing Mobile Service in the Add Connected Service dialog. Once completed, Visual Studio adds a reference to the Mobile Services SDK to your project and generates a Mobile Services client initialization snippet automatically (a rough sketch of what that snippet looks like appears at the end of this post).

    Add Push Notifications

    Push Notifications and Live Tiles are key to building engaging experiences. Visual Studio 2013 and Mobile Services make it super easy to add push notifications to your Windows 8.1 app, by clicking the Add a Push Notification item. The Add Push Notification wizard will then guide you through the registration with the Windows Store as well as connecting your app to a new or existing mobile service. Upon completion of the wizard, Visual Studio will configure your mobile service with the WNS credentials, as well as add sample logic to your client project and your mobile service that demonstrates how to send push notifications to your app.

    Server Explorer Integration

    In Visual Studio 2013 you can also now view your Mobile Services in the Server Explorer. You can add tables, edit, and save server side scripts without ever leaving Visual Studio, as shown on the image below:

    Pricing

    With today’s general availability release we are announcing that we will be offering Mobile Services in three tiers – Free, Standard, and Premium.  Each tier is metered using a simple pricing model based on the # of API calls (bandwidth is included at no extra charge), and the Standard and Premium tiers are backed by 99.9% monthly SLAs.  You can elastically scale up or down the number of instances you have of each tier to increase the # of API requests your service can support – allowing you to efficiently scale as your business grows. You can find the full details of the new pricing model here.

    Build Conference Talks

    The //BUILD/ conference will be packed with sessions covering every aspect of developing connected applications with Mobile Services. The best part is that, even if you can’t be with us in San Francisco, every session is being streamed live. Be sure not to miss these talks:

    Mobile Services – Soup to Nuts — Josh Twist
    Building Cross-Platform Apps with Windows Azure Mobile Services — Chris Risner
    Connected Windows Phone Apps made Easy with Mobile Services — Yavor Georgiev
    Build Connected Windows 8.1 Apps with Mobile Services — Nick Harris
    Who’s that user? Identity in Mobile Apps — Dinesh Kulkarni
    Building REST Services with JavaScript — Nathan Totten
    Going Live and Beyond with Windows Azure Mobile Services — Kirill Gavrylyuk, Paul Batum
    Protips for Windows Azure Mobile Services — Chris Risner

    AutoScale: Dynamically scale up/down your app based on real-world usage

    One of the key benefits of Windows Azure is that you can dynamically scale your application in response to changing demand. In the past, though, you have had to either manually change the scale of your application, or use additional tooling (such as WASABi or MetricsHub) to automatically scale your application. Today, we’re announcing that AutoScale will be built into Windows Azure directly.  With today’s release it is now enabled for Cloud Services, Virtual Machines and Web Sites (Mobile Services support will come soon).
    Auto-scale enables you to configure Windows Azure to automatically scale your application dynamically on your behalf (without any manual intervention) so you can achieve the ideal performance and cost balance. Once configured, it will regularly adjust the number of instances running in response to the load in your application. Currently, we support two different load metrics:

    CPU percentage
    Storage queue depth (Cloud Services and Virtual Machines only)

    We’ll enable automatic scaling on even more scale metrics in future updates.

    When to use Auto-Scale

    The following are good criteria for services/apps that will benefit from the use of auto-scale:

    The service/app can scale horizontally (e.g. it can be duplicated to multiple instances)
    The service/app load changes over time

    If your app meets these criteria, then you should look to leverage auto-scale.

    How to Enable Auto-Scale

    To enable auto-scale, simply navigate to the Scale tab in the Windows Azure Management Portal for the app/service you wish to enable.  Within the Scale tab, turn the Auto-Scale setting on to either CPU or Queue (for Cloud Services and VMs) to enable auto-scale.  Then change the instance count and target CPU settings to configure the auto-scale ranges you want to maintain.

    The image below demonstrates how to enable auto-scale on a Windows Azure web site.  I’ve configured the web site so that it will run using between 1 and 5 VM instances.  The exact # used will depend on the aggregate CPU of the VMs using the 40-70% range I’ve configured below.  If the aggregate CPU goes above 70%, then Windows Azure will automatically add new VMs to the pool (up to the maximum of 5 instances I’ve configured it to use).  If the aggregate CPU drops below 40%, then Windows Azure will automatically start shutting down VMs to save me money.

    Once you’ve turned auto-scale on, you can return to the Scale tab at any point and select Off to manually set the number of instances.

    Using the Auto-Scale Preview

    With today’s update you can now, in just a few minutes, have Windows Azure automatically adjust the number of instances you have running in your apps to keep your service performant at an even better cost. Auto-scale is being released today as a preview feature, and will be free until General Availability. During preview, each subscription is limited to 10 separate auto-scale rules across all of the resources they have (Web Sites, Cloud Services or Virtual Machines). If you hit the 10-rule limit, you can disable auto-scale for any resource to enable it for another.

    Alerts and Notifications

    Starting today we are now providing the ability to configure threshold-based alerts on monitoring metrics. This feature is available for compute services (Cloud Services, VMs, Web Sites and Mobile Services). Alerts provide you the ability to get proactively notified of active or impending issues within your application.  You can define alert rules for:

    Virtual machine monitoring metrics that are collected from the host operating system (CPU percentage, network in/out, disk read bytes/sec and disk write bytes/sec) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured.
    Cloud service monitoring metrics that are collected from the host operating system (same as VM), monitoring metrics from the guest VM (from performance counters within the VM) and on monitoring metrics from monitoring web endpoint urls (response time and uptime) that you have configured.
    For Web Sites and Mobile Services, alerting rules can be configured on monitoring metrics from monitoring endpoint urls (response time and uptime) that you have configured.

    Creating Alert Rules

    You can add an alert rule for a monitoring metric by navigating to the Settings -> Alerts tab in the Windows Azure Management Portal. Click on the Add Rule button to create an alert rule. Give the alert rule a name and optionally add a description. Then pick the service which you want to define the alert rule on. The next step in the alert creation wizard will then filter the monitoring metrics based on the service you selected. Once created, the rule will show up in your alerts list within the Settings tab. The rule above is defined as “not activated” since it hasn’t tripped over the CPU threshold we set.  If the CPU on the above machine goes over the limit, though, I’ll get an email notifying me from a Windows Azure Alerts email address ([email protected]). And when I log into the portal and revisit the alerts tab I’ll see it highlighted in red.  Clicking it will then enable me to see what is causing it to fail, as well as view the history of when it has happened in the past.

    Alert Notifications

    With today’s initial preview you can now easily create alerting rules based on monitoring metrics and get notified of active or impending issues within your application that require attention. During preview, each subscription is limited to 10 alert rules across all of the services that support alert rules.

    No More Credit Card Requirement for MSDN Subscribers

    Earlier this month (during TechEd 2013), Windows Azure announced that MSDN users will get Windows Azure credits every month that they can use for any Windows Azure services they want. You can read details about this in my previous Dev/Test blog post. Today we are making further updates to enable an easier Windows Azure signup for MSDN users. MSDN users will now not be required to provide payment information (e.g. no credit card) during sign-up, so long as they use the service within the included monetary credit for the billing period. For usage beyond the monetary credit, they can enable overages by providing the payment information and removing the spending limit. This enables a super easy, one-page sign-up experience for MSDN users.  Simply sign up for your Windows Azure trial using the same Microsoft ID that you use to manage your MSDN account, then complete the one-page sign-up form and you will be able to spend your free monthly MSDN credits (up to $150 each month) on any Windows Azure resource for dev/test. This makes it trivially easy for every MSDN customer to start using Windows Azure today.  If you haven’t signed up yet, I definitely recommend checking it out.

    Summary

    Today’s release includes a ton of great features that enable you to build even better cloud solutions.  If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu
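    As a rough idea of what the Mobile Services client initialization snippet mentioned above looks like in practice (a sketch based on the 2013-era Mobile Services .NET SDK; the service URL, application key and TodoItem type are placeholders):

        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.MobileServices;

        // Generated-style client initialization (placeholder URL and key).
        public static MobileServiceClient MobileService = new MobileServiceClient(
            "https://yourservice.azure-mobile.net/",
            "YOUR-APPLICATION-KEY");

        // Hypothetical type mapped to a Mobile Services table.
        public class TodoItem
        {
            public string Id { get; set; }
            public string Text { get; set; }
        }

        // Insert a record into the TodoItem table.
        public static async Task AddItemAsync(string text)
        {
            await MobileService.GetTable<TodoItem>()
                               .InsertAsync(new TodoItem { Text = text });
        }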

    Read the article

  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that if your website uses URL rewriting to strip or add a www to the domain name, the URL you specify for the silent post matches the URL rewriting rule, because Authorize.Net's silent post feature won't resubmit the post request to the URL specified via the redirect response.

    I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule, and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account. You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accommodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.)

    Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that we had not shown any recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore, or the page that was processing them wasn't doing so correctly. I started by verifying that our Authorize.Net account was properly set up to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx. Next, I wanted to determine what information was getting sent to that URL. The application was set up to log the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however, we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire if they logged the HTTP request sent via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to save every incoming HTTP request to the silent post page to a text file on the server. Upon the next recurring payment, I was able to see the HTTP request being received by the page:

        GET /AuthorizeNetProcessingPage.aspx HTTP/1.1
        Connection: Close
        Accept: */*
        Host: www.example.com

    That was it. Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along the details of the recurring payment transaction. What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www). About a month ago - the same time these recurring payment transaction details were no longer being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic from example.com to www.example.com. Could that be the problem? I contacted Authorize.Net's support again and asked them if their silent post algorithm would follow the 301 HTTP response and repost the recurring payment transaction details. They said, Yes, the silent post would follow redirects. Their reports didn't jibe with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments are correctly being processed again! If you use Authorize.Net and the silent post feature, and you notice that your processing page is no longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!
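    For anyone who needs to reproduce the diagnostic step above, a hedged sketch of a silent post page that both dumps the raw request (via Request.SaveAs, as described) and reads the gateway's form fields - the x_response_code and x_subscription_id names follow Authorize.Net's AIM-style conventions, so verify them against the current documentation:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Dump the raw incoming request (headers + body) for troubleshooting.
            Request.SaveAs(Server.MapPath("~/App_Data/authnet-" +
                DateTime.Now.Ticks + ".txt"), true);

            // Silent post data arrives as a form POST.
            string responseCode = Request.Form["x_response_code"];     // "1" = approved
            string subscriptionId = Request.Form["x_subscription_id"];

            if (responseCode == "1")
            {
                // extend the customer's service for subscriptionId...
            }
            else
            {
                // flag or cancel the subscription...
            }
        }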

    Read the article

  • Configuring ASP.NET MVC2 on Apache 2.2 using mod_aspdotnet

    - by user40684
    Trying to get an MVC2 website to run on an Apache 2.2 web server (running on Windows) that utilizes the mod_aspdotnet module. I have several ASP.NET virtual hosts running and am trying to add another. MVC2 has NO default page (like the first version of MVC had, e.g. default.aspx). I have tried various changes to the config: commented out 'DirectoryIndex', changed it to '/', set 'AspNet' to 'Virtual' - it will not load the first page, and I always get: '403 Forbidden, You don't have permission to access / on this server.' Below is from my httpd.conf:

        LoadModule aspdotnet_module "modules/mod_aspdotnet.so"
        AddHandler asp.net asax ascx ashx asmx aspx axd config cs csproj licx rem resources resx soap vb vbproj vsdisco webinfo

        <IfModule aspdotnet_module>
          # Mount the ASP.NET /asp application
          #AspNetMount /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"
          Alias /MyWebSiteName "D:/ApacheNET/MyWebSiteName.com"

          <VirtualHost *:80>
            DocumentRoot "D:/ApacheNET/MyWebSiteName.com"
            ServerName www.MyWebSiteName.com
            ServerAlias MyWebSiteName.com
            AspNetMount / "D:/ApacheNET/MyWebSiteName.com"
            # Other directives here

            <Directory "D:/ApacheNET/MyWebSiteName.com">
              Options FollowSymlinks ExecCGI
              AspNet All
              #AspNet Virtual Files Directory
              Order allow,deny
              Allow from all
              # default the index page to .htm and .aspx
              DirectoryIndex default.aspx index.aspx index.html
            </Directory>
          </VirtualHost>

          # For all virtual ASP.NET webs, we need the aspnet_client files
          # to serve the client-side helper scripts.
          AliasMatch /aspnet_client/system_web/(\d+)_(\d+)_(\d+)_(\d+)/(.*) "C:/Windows/Microsoft.NET/Framework/v$1.$2.$3/ASP.NETClientFiles/$4"

          <Directory "C:/Windows/Microsoft.NET/Framework/v*/ASP.NETClientFiles">
            Options FollowSymlinks
            Order allow,deny
            Allow from all
          </Directory>
        </IfModule>

    Has anyone successfully run MVC2 (or the first version of MVC) on Apache with the mod_aspdotnet module? Thanks!

    Read the article

  • Help A Hacker: Give ‘Em The Windows Source Code

    - by Ken Cox [MVP]
    The announcement of another Windows megapatch reminded me of a WikiLeaks story about Microsoft Windows that hasn’t attracted much attention. Alarmingly, we learn that the hackers have the Windows source code to study and test for vulnerabilities. Chinese hackers used the knowledge to breach Google’s accounts and servers: “In 2003, the CNITSEC signed a Government Security Program (GSP) international agreement with Microsoft that allowed select companies such as TOPSEC access to Microsoft source code in order to secure the Windows platform” “CNITSEC enterprises has recruited Chinese hackers in support of nationally-funded "network attack scientific research projects." From June 2002 to March 2003, TOPSEC employed a known Chinese hacker, Lin Yong (a.k.a. Lion and owner of the Honker Union of China), as senior security service engineer…” Windows is widely seen as unsecurable. It doesn’t help that Chinese government-funded hackers are probing the source code for vulnerabilities. It seems odd that people who didn’t write the code can find vulnerabilities faster than the owners of the code. Perhaps the U.S. government should hire its own hackers to go over the same Windows source code and then tell Microsoft how to secure its product?

    Read the article

  • Including additional DLL’s in an MSBuild script for Module Packaging

    - by Chris Hammond
    Late last year I created a blog post and video about a new version of the module development template that I released on Codeplex. This new template uses MSBuild scripts instead of NANT scripts to automate the packaging process for the modules built with the template. The MSBuild script works well out of the box; to package your module you simply change into RELEASE mode and then execute the build. If your project contains references to DLLs (in the website’s BIN folder) that you also need to package...(read more)
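    As a sketch of the general idea (the item and target names here are illustrative, not the template's actual ones - see the full post for those), pulling extra DLLs into an MSBuild packaging step usually comes down to adding them to the item group the package target consumes:

        <!-- Illustrative only: add extra assemblies to the set of files being packaged. -->
        <ItemGroup>
          <PackageFiles Include="$(WebsiteRoot)\bin\SomeDependency.dll" />
          <PackageFiles Include="$(WebsiteRoot)\bin\AnotherHelper.dll" />
        </ItemGroup>

        <Target Name="PackageModule" Condition="'$(Configuration)' == 'Release'">
          <!-- Copy the accumulated files into the folder that gets zipped. -->
          <Copy SourceFiles="@(PackageFiles)" DestinationFolder="$(PackageDir)" />
        </Target>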

    Read the article

  • Basic Spatial Data with SQL Server and Entity Framework 5.0

    - by Rick Strahl
    In my most recent project we needed to do a bit of geo-spatial referencing. While spatial features have been in SQL Server for a while using those features inside of .NET applications hasn't been as straight forward as could be, because .NET natively doesn't support spatial types. There are workarounds for this with a few custom project like SharpMap or a hack using the Sql Server specific Geo types found in the Microsoft.SqlTypes assembly that ships with SQL server. While these approaches work for manipulating spatial data from .NET code, they didn't work with database access if you're using Entity Framework. Other ORM vendors have been rolling their own versions of spatial integration. In Entity Framework 5.0 running on .NET 4.5 the Microsoft ORM finally adds support for spatial types as well. In this post I'll describe basic geography features that deal with single location and distance calculations which is probably the most common usage scenario. SQL Server Transact-SQL Syntax for Spatial Data Before we look at how things work with Entity framework, lets take a look at how SQL Server allows you to use spatial data to get an understanding of the underlying semantics. The following SQL examples should work with SQL 2008 and forward. Let's start by creating a test table that includes a Geography field and also a pair of Long/Lat fields that demonstrate how you can work with the geography functions even if you don't have geography/geometry fields in the database. Here's the CREATE command:CREATE TABLE [dbo].[Geo]( [id] [int] IDENTITY(1,1) NOT NULL, [Location] [geography] NULL, [Long] [float] NOT NULL, [Lat] [float] NOT NULL ) Now using plain SQL you can insert data into the table using geography::STGeoFromText SQL CLR function:insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.527200 45.712113)', 4326), -121.527200, 45.712113 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.517265 45.714240)', 4326), -121.517265, 45.714240 ) insert into Geo( Location , long, lat ) values ( geography::STGeomFromText ('POINT(-121.511536 45.714825)', 4326), -121.511536, 45.714825) The STGeomFromText function accepts a string that points to a geometric item (a point here but can also be a line or path or polygon and many others). You also need to provide an SRID (Spatial Reference System Identifier) which is an integer value that determines the rules for how geography/geometry values are calculated and returned. For mapping/distance functionality you typically want to use 4326 as this is the format used by most mapping software and geo-location libraries like Google and Bing. The spatial data in the Location field is stored in binary format which looks something like this: Once the location data is in the database you can query the data and do simple distance computations very easily. For example to calculate the distance of each of the values in the database to another spatial point is very easy to calculate. Distance calculations compare two points in space using a direct line calculation. For our example I'll compare a new point to all the points in the database. 
Using the Location field the SQL looks like this:-- create a source point DECLARE @s geography SET @s = geography:: STGeomFromText('POINT(-121.527200 45.712113)' , 4326); --- return the ids select ID, Location as Geo , Location .ToString() as Point , @s.STDistance( Location) as distance from Geo order by distance The code defines a new point which is the base point to compare each of the values to. You can also compare values from the database directly, but typically you'll want to match a location to another location and determine the difference for which you can use the geography::STDistance function. This query produces the following output: The STDistance function returns the straight line distance between the passed in point and the point in the database field. The result for SRID 4326 is always in meters. Notice that the first value passed was the same point so the difference is 0. The other two points are two points here in town in Hood River a little ways away - 808 and 1256 meters respectively. Notice also that you can order the result by the resulting distance, which effectively gives you results that are ordered radially out from closer to further away. This is great for searches of points of interest near a central location (YOU typically!). These geolocation functions are also available to you if you don't use the Geography/Geometry types, but plain float values. It's a little more work, as each point has to be created in the query using the string syntax, but the following code doesn't use a geography field but produces the same result as the previous query.--- using float fields select ID, geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326), geography::STGeomFromText ('POINT(' + STR (long, 15,7 ) + ' ' + Str(lat ,15, 7) + ')' , 4326). ToString(), @s.STDistance( geography::STGeomFromText ('POINT(' + STR(long ,15, 7) + ' ' + Str(lat ,15, 7) + ')' , 4326)) as distance from geo order by distance Spatial Data in the Entity Framework Prior to Entity Framework 5.0 on .NET 4.5 consuming of the data above required using stored procedures or raw SQL commands to access the spatial data. In Entity Framework 5 however, Microsoft introduced the new DbGeometry and DbGeography types. These immutable location types provide a bunch of functionality for manipulating spatial points using geometry functions which in turn can be used to do common spatial queries like I described in the SQL syntax above. The DbGeography/DbGeometry types are immutable, meaning that you can't write to them once they've been created. They are a bit odd in that you need to use factory methods in order to instantiate them - they have no constructor() and you can't assign to properties like Latitude and Longitude. Creating a Model with Spatial Data Let's start by creating a simple Entity Framework model that includes a Location property of type DbGeography: public class GeoLocationContext : DbContext { public DbSet<GeoLocation> Locations { get; set; } } public class GeoLocation { public int Id { get; set; } public DbGeography Location { get; set; } public string Address { get; set; } } That's all there's to it. When you run this now against SQL Server, you get a Geography field for the Location property, which looks the same as the Location field in the SQL examples earlier. Adding Spatial Data to the Database Next let's add some data to the table that includes some latitude and longitude data. 
An easy way to find lat/long locations is to use Google Maps to pinpoint your location, then right click and click on What's Here. Click on the green marker to get the GPS coordinates. To add the actual geolocation data create an instance of the GeoLocation type and use the DbGeography.PointFromText() factory method to create a new point to assign to the Location property:[TestMethod] public void AddLocationsToDataBase() { var context = new GeoLocationContext(); // remove all context.Locations.ToList().ForEach( loc => context.Locations.Remove(loc)); context.SaveChanges(); var location = new GeoLocation() { // Create a point using native DbGeography Factory method Location = DbGeography.PointFromText( string.Format("POINT({0} {1})", -121.527200,45.712113) ,4326), Address = "301 15th Street, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.714240, -121.517265), Address = "The Hatchery, Bingen" }; context.Locations.Add(location); location = new GeoLocation() { // Create a point using a helper function (lat/long) Location = CreatePoint(45.708457, -121.514432), Address = "Kaze Sushi, Hood River" }; context.Locations.Add(location); location = new GeoLocation() { Location = CreatePoint(45.722780, -120.209227), Address = "Arlington, OR" }; context.Locations.Add(location); context.SaveChanges(); } As promised, a DbGeography object has to be created with one of the static factory methods provided on the type as the Location.Longitude and Location.Latitude properties are read only. Here I'm using PointFromText() which uses a "Well Known Text" format to specify spatial data. In the first example I'm specifying to create a Point from a longitude and latitude value, using an SRID of 4326 (just like earlier in the SQL examples). You'll probably want to create a helper method to make the creation of Points easier to avoid that string format and instead just pass in a couple of double values. Here's my helper called CreatePoint that's used for all but the first point creation in the sample above:public static DbGeography CreatePoint(double latitude, double longitude) { var text = string.Format(CultureInfo.InvariantCulture.NumberFormat, "POINT({0} {1})", longitude, latitude); // 4326 is most common coordinate system used by GPS/Maps return DbGeography.PointFromText(text, 4326); } Using the helper the syntax becomes a bit cleaner, requiring only a latitude and longitude respectively. Note that my method intentionally swaps the parameters around because Latitude and Longitude is the common format I've seen with mapping libraries (especially Google Mapping/Geolocation APIs with their LatLng type). When the context is changed the data is written into the database using the SQL Geography type which looks the same as in the earlier SQL examples shown. Querying Once you have some location data in the database it's now super easy to query the data and find out the distance between locations. A common query is to ask for a number of locations that are near a fixed point - typically your current location and order it by distance. 
Using LINQ to Entities, a query like this is easy to construct:

[TestMethod]
public void QueryLocationsTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // find any locations within 5 kilometers, ordered by distance
    var matches = context.Locations
        .Where(loc => loc.Location.Distance(sourcePoint) < 5000)
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .Select(loc => new
        {
            Address = loc.Address,
            Distance = loc.Location.Distance(sourcePoint)
        });

    Assert.IsTrue(matches.Count() > 0);

    foreach (var location in matches)
    {
        Console.WriteLine("{0} ({1:n0} meters)", location.Address, location.Distance);
    }
}

This example produces:

301 15th Street, Hood River (0 meters)
The Hatchery, Bingen (809 meters)
Kaze Sushi, Hood River (1,074 meters)

The first point in the database is the same as my source point, so the distance is 0. The other two are within the 5 km radius, while the Arlington location - 65 miles or so out - is not returned. The result is ordered by distance from closest to furthest away.

In the code, I first create a source point that is the basis for comparison. The LINQ query then selects all locations that are within 5 km of the source point using the Location.Distance() function, which takes a comparison point as a parameter. You can either use a pre-defined value as I'm doing here, or compare against another DbGeography property in the database (say, when you have two points in the same database for things like routes). What's nice about this query syntax is that it's very clean and easy to read and understand. You can calculate the distance and also easily order by it to produce a result that lists locations from closest to furthest away, which is a common scenario for any application that places a user in the context of several locations. It's now super easy to accomplish this.

Meters vs. Miles

As with the SQL Server functions, the Distance() method returns data in meters, so if you need to work with miles or feet you need to do some conversion. Here are a couple of helpers that might be useful (they can be found in GeoUtils.cs of the sample project):

/// <summary>
/// Convert meters to miles
/// </summary>
public static double MetersToMiles(double? meters)
{
    if (meters == null)
        return 0F;
    return meters.Value * 0.000621371192;
}

/// <summary>
/// Convert miles to meters
/// </summary>
public static double MilesToMeters(double? miles)
{
    if (miles == null)
        return 0;
    return miles.Value * 1609.344;
}

Using these two helpers you can query on miles like this:

[TestMethod]
public void QueryLocationsMilesTest()
{
    var sourcePoint = CreatePoint(45.712113, -121.527200);
    var context = new GeoLocationContext();

    // find any locations within 5 miles, ordered by distance
    var fiveMiles = GeoUtils.MilesToMeters(5);
    var matches = context.Locations
        .Where(loc => loc.Location.Distance(sourcePoint) <= fiveMiles)
        .OrderBy(loc => loc.Location.Distance(sourcePoint))
        .Select(loc => new
        {
            Address = loc.Address,
            Distance = loc.Location.Distance(sourcePoint)
        });

    Assert.IsTrue(matches.Count() > 0);

    foreach (var location in matches)
    {
        Console.WriteLine("{0} ({1:n1} miles)", location.Address,
                          GeoUtils.MetersToMiles(location.Distance));
    }
}

which produces:

301 15th Street, Hood River (0.0 miles)
The Hatchery, Bingen (0.5 miles)
Kaze Sushi, Hood River (0.7 miles)

Nice 'n simple. 
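Because Distance() is an instance method on DbGeography itself, the same calculation also works purely in memory, without a database or a LINQ query involved. Here's a minimal sketch (assuming the System.Data.Spatial namespace from EF 5 on .NET 4.5, and the same WKT factory call used by the CreatePoint helper above):

using System;
using System.Data.Spatial;

public static class InMemoryDistance
{
    public static void Run()
    {
        // two of the sample points, built with the same WKT factory as before
        DbGeography home = DbGeography.PointFromText("POINT(-121.527200 45.712113)", 4326);
        DbGeography hatchery = DbGeography.PointFromText("POINT(-121.517265 45.714240)", 4326);

        // Distance() returns meters for SRID 4326, just like STDistance in SQL
        double? meters = home.Distance(hatchery);
        Console.WriteLine("{0:n0} meters apart", meters);
    }
}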
.NET 4.5 Only

Note that DbGeography and DbGeometry are exclusive to Entity Framework 5.0 (not 4.4, which ships in the same NuGet package or installer) and require .NET 4.5. That's because the new DbGeometry and DbGeography (and related) types are defined in the 4.5 version of System.Data.Entity, which is a CLR assembly and is only updated by major versions of .NET. Why the decision was made to add these types to System.Data.Entity, rather than to the frequently updated EntityFramework assembly - which would have possibly made this work in .NET 4.0 - is beyond me, especially given that there are no native .NET Framework spatial types to begin with.

I also find it odd that there is no native CLR spatial type. The DbGeography and DbGeometry types are specific to Entity Framework and live in those assemblies. They will also work for general-purpose, non-database spatial data manipulation, but then you are forced into having a dependency on System.Data.Entity, which seems a bit silly. There's also a System.Spatial assembly that's apparently part of WCF Data Services, which in turn doesn't work with Entity Framework. Another example of multiple teams at Microsoft not communicating and implementing the same functionality (differently) in several different places. Perplexed as I may be, for EF-specific code the Entity Framework types are easy to use and work well.

Working with pre-.NET 4.5 Entity Framework and Spatial Data

If you can't go to .NET 4.5 just yet, you can still use spatial features in Entity Framework, but it's a lot more work as you can't use the DbContext directly to manipulate the location data. You can still run raw SQL statements to write data into the database and retrieve results using the same T-SQL syntax I showed earlier, via Context.Database.ExecuteSqlCommand(). Here's code that you can use to add location data into the database:

[TestMethod]
public void RawSqlEfAddTest()
{
    string sqlFormat =
        @"insert into GeoLocations (Location, Address)
          values (geography::STGeomFromText('POINT({0} {1})', 4326), @p0)";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);
    Console.WriteLine(sql);

    var context = new GeoLocationContext();
    Assert.IsTrue(context.Database.ExecuteSqlCommand(sql, "301 N. 15th Street") > 0);
}

Here I'm using the STGeomFromText() function to add the location data. Note that I'm using string.Format, which would usually be a bad practice but is required here. I was unable to use ExecuteSqlCommand() and its named parameter syntax, as the longitude and latitude parameters are embedded into a string. Rest assured it's required - the following does not work:

string sqlFormat =
    @"insert into GeoLocations (Location, Address)
      values (geography::STGeomFromText('POINT(@p0 @p1)', 4326), @p2)";

context.Database.ExecuteSqlCommand(sql, -121.527200, 45.712113, "301 N. 15th Street");

Explicitly assigning the point value with string.Format works, however.

There are a number of ways to query location data. You can't get the location data directly, but you can retrieve the point string - which can then be parsed to get latitude and longitude, as sketched below - and you can return calculated values like distance. 
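As a rough illustration of that parsing step, here's a minimal sketch that pulls latitude and longitude back out of the WKT string returned by Location.ToString(). It assumes the plain two-dimensional "POINT(longitude latitude)" form with no Z/M values and does no validation:

using System;
using System.Globalization;

public static class WktPoint
{
    // Parses e.g. "POINT (-121.5272 45.712113)" into (latitude, longitude).
    // A sketch only: simple 2D points assumed, no error handling.
    public static Tuple<double, double> Parse(string wkt)
    {
        int open = wkt.IndexOf('(');
        int close = wkt.IndexOf(')');
        string[] parts = wkt.Substring(open + 1, close - open - 1)
                            .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

        // WKT stores longitude first, latitude second
        double longitude = double.Parse(parts[0], CultureInfo.InvariantCulture);
        double latitude = double.Parse(parts[1], CultureInfo.InvariantCulture);
        return Tuple.Create(latitude, longitude);
    }
}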
Here's an example of how to retrieve some geo data into a result set using EF's SqlQuery method:

[TestMethod]
public void RawSqlEfQueryTest()
{
    var sqlFormat = @"
        DECLARE @s geography
        SET @s = geography::STGeomFromText('POINT({0} {1})', 4326);

        SELECT Address,
               Location.ToString() as GeoString,
               @s.STDistance(Location) as Distance
        FROM GeoLocations
        ORDER BY Distance";

    var sql = string.Format(sqlFormat, -121.527200, 45.712113);

    var context = new GeoLocationContext();
    var locations = context.Database.SqlQuery<ResultData>(sql);

    Assert.IsTrue(locations.Count() > 0);

    foreach (var location in locations)
    {
        Console.WriteLine(location.Address + " " + location.GeoString + " " + location.Distance);
    }
}

public class ResultData
{
    public string GeoString { get; set; }
    public double Distance { get; set; }
    public string Address { get; set; }
}

Hopefully you don't have to resort to this approach, as it's fairly limited. Using the new DbGeography/DbGeometry types makes this sort of thing so much easier. When I had to use code like this before, I typically ended up retrieving only the PKs and then running another query with just those PKs to retrieve the actual underlying DbContext entities. That was very inefficient and tedious, but it did work.

Summary

For the current project I'm working on, we actually made the switch to .NET 4.5 purely for the spatial features in EF 5.0. This app relies heavily on spatial queries, and it was worth taking a chance with pre-release code to get this ease of integration, as opposed to manually falling back to stored procedures or raw SQL string queries for spatial-specific work. Using native Entity Framework code makes life a lot easier than the alternatives. It might be a late addition to Entity Framework, but it sure makes location calculations and storage easy. Where do you want to go today? ;-)

Resources

Download Sample Project

© Rick Strahl, West Wind Technologies, 2005-2012. Posted in ADO.NET, SQL Server, .NET

    Read the article

  • Tulsa Azure Boot Camp

    - by dmccollough
Windows Azure Boot Camp presented by HyperVize & TulsaTech

When: Thursday July 1st and Friday July 2nd
Registration: Click here.
Where: TulsaTech Riverside Campus, 801 East 91st Street, Tulsa, OK 74132-4008. Click here for a map.

Summary: Tulsa Windows Azure Boot Camp is a comprehensive 2-day training program for members of the development community in Tulsa, Oklahoma. At the conclusion of this program, attendees should have a deep understanding of Azure, BPOS, and advanced development techniques for both platforms.

Who should attend: Web developers, backend developers, SQL DBAs, consultants, and IT leaders who are interested in using Azure for development, data storage, or processing. Attending both days is suggested, but if you can't attend both, contact us for a special one-day pass.

Schedule: Day one of the training sessions will be held on July 1st 2010 between 9AM and 4:30PM. Topics covered on day 1: Azure basics, web development, and data storage. Day two will be held on July 2nd 2010 between 9AM and 4:30PM. Topics covered on day 2: architecture, business value, SOA development, SQL Azure, and advanced development.

Pre-requisites: If you want to stay up to speed on the Windows Azure labs, you will need to install the tools and updates listed on the Windows Azure Boot Camp website: http://windowsazurebootcamp.com/whattobring

Boot Camp Agenda

Day 1 - July 1st 2010:
· 8:30 – 9:00 - Registration
· 9:00 – 10:00 - Module 1: Intro to Azure & Cloud Computing
· 10:00 – 11:00 - Module 2: Using Web Roles
· 11:00 – Noon - Lab 1 & workstation configuration
· Noon – 1:00 - Lunch
· 1:00 – 2:00 - Module 3: Blobs
· 2:00 – 3:00 - Module 4: Tables
· 3:00 – 4:00 - Module 5: Queues
· 4:00 – ? - Q&A / Open Discussion

Day 2 - July 2nd 2010:
· 9:00 – 10:00 - Module 6: Building a business with Azure
· 10:00 – 11:00 - Module 7: Cloud Scenarios
· 11:00 – Noon - Module 8: SQL Azure
· Noon – 1:00 - Lunch
· 1:00 – 2:00 - Module 9: Basic Worker Roles
· 2:00 – 3:00 - Module 10: Advanced Worker Roles
· 3:00 – 4:00 - Module 11: Azure Diagnostics
· 4:00 – ??? - Module 12: App Fabric

    Read the article

  • Loading a Template From a User Control

    - by Ricardo Peres
What if you wanted to load a template (ITemplate property) from an external user control (.ascx) file? Yes, it is possible; there are a number of ways to do this, and the one I'll talk about here is through a type converter. You need to apply a TypeConverterAttribute to your ITemplate property, where you specify a custom type converter that does the job. This type converter relies on InstanceDescriptor. Here is the code for it:

public class TemplateTypeConverter : TypeConverter
{
    public override Boolean CanConvertFrom(ITypeDescriptorContext context, Type sourceType)
    {
        return ((sourceType == typeof(String)) || (base.CanConvertFrom(context, sourceType) == true));
    }

    public override Boolean CanConvertTo(ITypeDescriptorContext context, Type destinationType)
    {
        return ((destinationType == typeof(InstanceDescriptor)) || (base.CanConvertTo(context, destinationType) == true));
    }

    public override Object ConvertTo(ITypeDescriptorContext context, CultureInfo culture, Object value, Type destinationType)
    {
        if (destinationType == typeof(InstanceDescriptor))
        {
            Object objectFactory = value.GetType().GetField("_objectFactory", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(value);
            Object builtType = objectFactory.GetType().BaseType.GetField("_builtType", BindingFlags.NonPublic | BindingFlags.Instance).GetValue(objectFactory);
            MethodInfo loadTemplate = typeof(TemplateTypeConverter).GetMethod("LoadTemplate");

            return (new InstanceDescriptor(loadTemplate, new Object[] { "~/" + (builtType as Type).Name.Replace('_', '/').Replace("/ascx", ".ascx") }));
        }

        return base.ConvertTo(context, culture, value, destinationType);
    }

    public static ITemplate LoadTemplate(String virtualPath)
    {
        using (Page page = new Page())
        {
            return (page.LoadTemplate(virtualPath));
        }
    }
}

And, on your control:

public class MyControl : Control
{
    [Browsable(false)]
    [TypeConverter(typeof(TemplateTypeConverter))]
    public ITemplate Template { get; set; }
}

This allows the template to be declared in markup, or assigned in code as sketched below. Hope this helps!
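A minimal code-side sketch of the same idea, using the converter's public LoadTemplate helper directly - the .ascx path and control IDs here are illustrative assumptions, not from the original post:

// e.g. in a page's OnInit: load the template from a user control file
// and assign it to the control's ITemplate property
MyControl ctl = new MyControl();
ctl.Template = TemplateTypeConverter.LoadTemplate("~/Templates/MyTemplate.ascx");
this.form1.Controls.Add(ctl); // form1: the page's server-side form (hypothetical ID)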

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post.

Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

Shaping the feed

The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.

Splitting the Entity

To shape our data according to the requirements above, we are going to split our Product entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other entity in our query interceptor to pre-filter the data so that discontinued products are not returned.

Go to the design surface for the Entity Model and make a copy of the Product entity. A "Product1" entity gets created. Rename Product1 to ProductDetail. Right-click on the Product entity and select "Add Association", then make a one-to-one association between Product and ProductDetail.

Keep only the properties we wish to expose on the Product entity and delete all other properties on it. You delete a property on an entity by right-clicking on the property and selecting "delete". Keep the ProductID on ProductDetail, and delete any other property on the ProductDetail entity that is already present in the Product entity.

Mapping the Entity to Database Tables

Right-click on "ProductDetail" and go to "Table Mapping". Add a mapping to the "Products" table in the Mapping Details.

Add a referential constraint

Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double-click on the association between the entities and add the constraint with "Principal" set to "Product".

Let's review what we did so far:
- We made a copy of the Product entity and called it ProductDetail.
- We created a one-to-one association between these entities.
- Excluding the ProductID, we made sure properties were not duplicated between the entities.
- We added a ProductDetail entity to Products table mapping (Entity to Database).
- We added a referential constraint between the entities.

Let's build our project. We get the following error:

'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found …

The reason for this error is that our Product entity no longer has a "Discontinued" property - we "moved" it to the ProductDetail entity, since we want our Product entity to contain only properties that will be exposed by our feed. Because we have a one-to-one association between the entities, we can easily rewrite our query interceptor like so:

[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnReadProducts()
{
    return o => o.ProductDetail.Discontinued == false;
}

Similarly, all "hidden" properties of the Products table are available to us internally (through the ProductDetail entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement. 
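To sanity-check the shaped feed from the client side, a plain HTTP GET is enough. Here's a minimal sketch - the service URL is an assumption based on the sample project's dev server, and $filter/$orderby are standard OData query options:

using System;
using System.Net;

public static class FeedSmokeTest
{
    public static void Run()
    {
        using (var client = new WebClient())
        {
            // the default response is an Atom feed containing only the shaped properties
            string atom = client.DownloadString(
                "http://localhost.:2576/DataService.svc/Products" +
                "?$filter=UnitsInStock gt 0&$orderby=ProductName");
            Console.WriteLine(atom);
        }
    }
}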
To see the data in JSON format, you have to create a request with the following request header:

Accept: application/json, text/javascript, */*

This is easy to do in jQuery (a C# sketch follows at the end of this post). The result should look like this:

{ "d" : { "results": [
    {
        "__metadata": {
            "uri": "http://localhost.:2576/DataService.svc/Products(1)",
            "type": "NorthwindModel.Product"
        },
        "ProductID": 1,
        "ProductName": "Chai",
        "QuantityPerUnit": "10 boxes x 20 bags",
        "UnitPrice": "18.0000",
        "UnitsInStock": 39
    },
    {
        "__metadata": {
            "uri": "http://localhost.:2576/DataService.svc/Products(2)",
            "type": "NorthwindModel.Product"
        },
        "ProductID": 2,
        "ProductName": "Chang",
        "QuantityPerUnit": "24 - 12 oz bottles",
        "UnitPrice": "19.0000",
        "UnitsInStock": 17
    },
    ...
    ...

If anyone has the $format operation working, please post a comment. It was not working for me at the time of writing this.

We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped it so that only certain properties of the entity are exposed. Note that there are several other ways you could implement this, like creating a QueryView, a stored procedure, or a DefiningQuery. You have seen how easy it is to create an OData feed, shape the data, and pre-filter it while hardly writing any code of your own.

For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro - the architect of Astoria (WCF Data Services). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here.

Download Sample Project for VS 2010 RTM: NortwindODataFeed.zip
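If jQuery isn't in the picture, the same Accept header can be sent from C# as well - a sketch, with the service URL again assumed from the sample project:

using System;
using System.IO;
using System.Net;

public static class JsonFeedRequest
{
    public static void Run()
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://localhost.:2576/DataService.svc/Products");
        // the Accept header is what switches the response from Atom to JSON
        request.Accept = "application/json, text/javascript, */*";

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // the "d"-wrapped JSON shown above
        }
    }
}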

    Read the article

  • Initially Unselected DropDownList

    - by Ricardo Peres
One of the most annoying things (IMHO) about the DropDownList is its inability to show an initially unselected value at load time, which is something that plain HTML does permit. I decided to extend the DropDownList to add this behavior. All that was needed was some JavaScript and reflection. See the result for yourself:

public class CustomDropDownList : DropDownList
{
    public CustomDropDownList()
    {
        this.InitiallyUnselected = true;
    }

    [DefaultValue(true)]
    public Boolean InitiallyUnselected { get; set; }

    protected override void OnInit(EventArgs e)
    {
        this.Page.RegisterRequiresControlState(this);
        this.Page.PreRenderComplete += this.OnPreRenderComplete;

        base.OnInit(e);
    }

    protected virtual void OnPreRenderComplete(Object sender, EventArgs args)
    {
        FieldInfo cachedSelectedValue = typeof(ListControl).GetField("cachedSelectedValue", BindingFlags.NonPublic | BindingFlags.Instance);

        if (String.IsNullOrEmpty(cachedSelectedValue.GetValue(this) as String) == true)
        {
            if (this.InitiallyUnselected == true)
            {
                if ((ScriptManager.GetCurrent(this.Page) != null) && (ScriptManager.GetCurrent(this.Page).IsInAsyncPostBack == true))
                {
                    ScriptManager.RegisterStartupScript(this, this.GetType(), "unselect" + this.ClientID, "$get('" + this.ClientID + "').selectedIndex = -1;", true);
                }
                else
                {
                    this.Page.ClientScript.RegisterStartupScript(this.GetType(), "unselect" + this.ClientID, "$get('" + this.ClientID + "').selectedIndex = -1;", true);
                }
            }
        }
    }
}
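A quick usage sketch - the control ID and list items are hypothetical - showing the list rendering with no item selected on first load:

// in a page's code-behind; IDs and items are illustrative only
protected void Page_Init(object sender, EventArgs e)
{
    var list = new CustomDropDownList { ID = "colorList" };
    list.Items.Add(new ListItem("Red"));
    list.Items.Add(new ListItem("Green"));
    list.Items.Add(new ListItem("Blue"));
    form1.Controls.Add(list); // form1: the page's server-side form

    // on first render the registered startup script sets selectedIndex = -1,
    // so no item appears selected until the user picks one
}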

    Read the article

  • Certificate error with Web Platform Installer

    - by findleyd
A friend of mine was having an issue getting the Web Platform Installer to work on his Windows Server 2008 R2 box. He said there was some sort of certificate error and asked me to try https://go.microsoft.com/fwlink/?LinkId=158722 on my local machine to see if I got the same error. I tried it, and I did get a certificate error on Windows 7 64-bit. I happened to notice that the URL simply redirects to https://www.microsoft.com/web/webpi/2.0/WebProductList.xml. Out of curiosity, I dropped to a command line and ran .\WebPlatformInstaller.exe /? to see if there were any command-line options. It gave an error that said "invalid URI". So we tried running it with the product list URL directly, like: WebPlatformInstaller.exe https://www.microsoft.com/web/webpi/2.0/WebProductList.xml. This seems to get around the expired certificate on go.microsoft.com. Here's a screen shot of the error:

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >