Search Results

Search found 6185 results on 248 pages for 'where is searchadministration aspx'.


  • WCF RIA Services feedback

    - by pluginbaby
      If you use or plan to use WCF RIA Services, here is your chance to shape the future of this product: vote for or propose features for vNext on this page: http://dotnet.uservoice.com/forums/57026-wcf-ria-services You can find help and ask questions on the current release of RIA Services on the official forum: http://forums.silverlight.net/forums/53.aspx

    Read the article

  • RequestValidation Changes in ASP.NET 4.0

    - by Rick Strahl
    There’s been a change in the way the ValidateRequest attribute on WebForms works in ASP.NET 4.0. I noticed this today while updating a post on my WebLog, all of which contain raw HTML and so pretty much all trigger request validation. I recently upgraded this app from ASP.NET 2.0 to 4.0 and it’s now failing to update posts. At first this was difficult to track down because of custom error handling in my app – the custom error handler traps the exception and logs it with only basic error information, so the full detail of the error was initially hidden. After some more experimentation in development mode, the error that occurs is the typical ASP.NET request validation error (‘A potentially dangerous Request.Form value was detected…’), which looks like this in ASP.NET 4.0:

    At first when I got this I was really perplexed, as I didn’t read the entire error message and because my page does have:

        <%@ Page Language="C#" AutoEventWireup="true" CodeBehind="NewEntry.aspx.cs"
            Inherits="Westwind.WebLog.NewEntry"
            MasterPageFile="~/App_Templates/Standard/AdminMaster.master"
            ValidateRequest="false" EnableEventValidation="false" EnableViewState="false" %>

    WTF? ValidateRequest would seem like it should be enough, but alas in ASP.NET 4.0 that setting alone is no longer enough. Reading the fine print in the error explains that you need to explicitly set the requestValidationMode for the application back to V2.0 in web.config:

        <httpRuntime executionTimeout="300" requestValidationMode="2.0" />

    Kudos to the ASP.NET team for putting up a nice error message that tells me how to fix this problem, but excuse me, why the heck would you change this behavior to require an explicit override to an optional and by default disabled page-level switch? You’ve just turned a relatively simple fix into a nasty morass of hard-to-discover configuration settings??? The original way this worked was perfectly discoverable via attributes in the page. Now you can set this setting in the page and get completely unexpected behavior, and you are required to set what effectively amounts to a backwards compatibility flag in the configuration file.

    It turns out the real reason for the .config flag is that the request validation behavior has moved from the WebForms pipeline down into the entire ASP.NET/IIS request pipeline and is now applied against all requests. Here’s what the breaking changes page from Microsoft says about it:

    The request validation feature in ASP.NET provides a certain level of default protection against cross-site scripting (XSS) attacks. In previous versions of ASP.NET, request validation was enabled by default. However, it applied only to ASP.NET pages (.aspx files and their class files) and only when those pages were executing. In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request. As a result, request validation errors might now occur for requests that previously did not trigger errors. To revert to the behavior of the ASP.NET 2.0 request validation feature, add the following setting in the Web.config file: <httpRuntime requestValidationMode="2.0" /> However, we recommend that you analyze any request validation errors to determine whether existing handlers, modules, or other custom code accesses potentially unsafe HTTP inputs that could be XSS attack vectors.

    Ok, so ValidateRequest on the form still works as it always has, but it’s actually the ASP.NET event pipeline, not WebForms, that’s throwing the above exception, as request validation is applied to every request that hits the pipeline. Creating the runtime override removes the HttpRuntime checking and restores the WebForms-only behavior. That fixes my immediate problem but still leaves me wondering, especially given the vague wording of the above explanation.

    One thing that’s missing in the description above is one important detail: request validation is applied only to application/x-www-form-urlencoded POST content, not to all inbound POST data. When I first read this it freaked me out, because it sounds like literally ANY request hitting the pipeline is affected. To make sure this is not really so I created a quick handler:

        public class Handler1 : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/plain";
                context.Response.Write("Hello World <hr>" + context.Request.Form.ToString());
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }

    and called it with Fiddler by posting some XML to the handler using a default form-urlencoded POST content type: and sure enough – hitting the handler also causes the request validation error and a 500 server response. Changing the content type to text/xml effectively fixes the problem, however, bypassing the request validation filter, so Web Services/AJAX handlers and custom modules/handlers that implement custom protocols aren’t affected as long as they work with special input content types. It also looks like multipart encoding does not trigger request validation in the runtime either, so this request also works fine:

        POST http://rasnote/weblog/handler1.ashx HTTP/1.1
        Content-Type: multipart/form-data; boundary=------7cf2a327f01ae
        User-Agent: West Wind Internet Protocols 5.53
        Host: rasnote
        Content-Length: 40
        Pragma: no-cache

        <xml>asdasd</xml>--------7cf2a327f01ae

    *That* probably should trigger request validation – since it is a potential HTML form submission – but it doesn’t.

    New Runtime Feature, Global Scope Only?

    Ok, so request validation is now a runtime feature, but sadly it’s a feature that’s scoped to the ASP.NET runtime – effectively scoped to the entire running application/app domain. You can still manually force validation using Request.ValidateInput(), which gives you the option to do this in code, but that realistically will only work with the requestValidationMode set to V2.0 as well, since the 4.0 mode auto-fires before code ever gets a chance to intercept the call. Given all that, the new setting in ASP.NET 4.0 seems to limit options and makes things more difficult and less flexible. Of course Microsoft gets to say ASP.NET is more secure by default because of it, but what good is that if you have to turn off this flag the very first time you need to allow one single request that bypasses request validation??? This is really shortsighted design… <sigh>

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
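    As a rough illustration of the Request.ValidateInput() option mentioned above, here is a minimal sketch (not from the original post – the handler name is made up, and it assumes requestValidationMode="2.0" is set so the runtime doesn’t validate the request before this code runs):

        using System.Web;

        public class SelectiveValidationHandler : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Opt in to request validation manually; the check actually fires
                // the first time Form/QueryString are read after this call.
                context.Request.ValidateInput();

                context.Response.ContentType = "text/plain";
                context.Response.Write("Posted fields: " + context.Request.Form.Count);
            }

            public bool IsReusable
            {
                get { return false; }
            }
        }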

    Read the article

  • Microsoft F#

    - by Aamir Hasan
    F# brings you a type-safe, succinct, efficient and expressive functional programming language on the .NET platform. It is a simple and pragmatic language, and has particular strengths in data-oriented programming, parallel I/O programming, parallel CPU programming, scripting and algorithmic development. F# can solve any problem C# could. F# is a statically typed functional language that also supports object-oriented programming. References: http://msdn.microsoft.com/en-us/fsharp/cc835246.aspx http://research.microsoft.com/en-us/um/cambridge/projects/fsharp/

    Read the article

  • Using a WCF Message Inspector to extend AppFabric Monitoring

    - by Shawn Cicoria
    I read through Ron Jacobs’ post on Monitoring WCF Data Services with AppFabric: http://blogs.msdn.com/b/endpoint/archive/2010/06/09/tracking-wcf-data-services-with-windows-server-appfabric.aspx

    Two things are immediately striking: it’s so easy to get monitoring data into a viewer (the AppFabric Dashboard) with very little work, and, secondly, why can’t this be a WCF message inspector on the dispatch side? So, I took the base class WCFUserEventProvider that’s located in the WCF/WF samples [1] in the following path, \WF_WCF_Samples\WCF\Basic\Management\AnalyticTraceExtensibility\CS\WCFAnalyticTracingExtensibility\, and then created a few classes that project the injection as an IEndpointBehavior. There are just 3 classes to drive injection of the inspector at runtime via config:

    IDispatchMessageInspector implementation
    BehaviorExtensionElement implementation
    IEndpointBehavior implementation

    The full source code is below with a link to the solution file here: [Solution File]

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.ServiceModel.Dispatcher;
        using System.ServiceModel.Channels;
        using System.ServiceModel;
        using System.ServiceModel.Configuration;
        using System.ServiceModel.Description;
        using Microsoft.Samples.WCFAnalyticTracingExtensibility;

        namespace Fabrikam.Services
        {
            public class AppFabricE2EInspector : IDispatchMessageInspector
            {
                static WCFUserEventProvider evntProvider = null;

                static AppFabricE2EInspector()
                {
                    evntProvider = new WCFUserEventProvider();
                }

                public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
                {
                    OperationContext ctx = OperationContext.Current;
                    var opName = ctx.IncomingMessageHeaders.Action;
                    evntProvider.WriteInformationEvent("start",
                        string.Format("operation: {0} at address {1}", opName, ctx.EndpointDispatcher.EndpointAddress));
                    return null;
                }

                public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
                {
                    OperationContext ctx = OperationContext.Current;
                    var opName = ctx.IncomingMessageHeaders.Action;
                    evntProvider.WriteInformationEvent("end",
                        string.Format("operation: {0} at address {1}", opName, ctx.EndpointDispatcher.EndpointAddress));
                }
            }

            public class AppFabricE2EBehaviorElement : BehaviorExtensionElement
            {
                #region BehaviorExtensionElement

                /// <summary>
                /// Gets the type of behavior.
                /// </summary>
                /// <returns>The type that implements the endpoint behavior <see cref="T:System.Type"/>.</returns>
                public override Type BehaviorType
                {
                    get { return typeof(AppFabricE2EEndpointBehavior); }
                }

                /// <summary>
                /// Creates a behavior extension based on the current configuration settings.
                /// </summary>
                /// <returns>The behavior extension.</returns>
                protected override object CreateBehavior()
                {
                    return new AppFabricE2EEndpointBehavior();
                }

                #endregion BehaviorExtensionElement
            }

            public class AppFabricE2EEndpointBehavior : IEndpointBehavior //, IServiceBehavior
            {
                #region IEndpointBehavior

                public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }

                public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
                {
                    throw new NotImplementedException();
                }

                public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
                {
                    endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new AppFabricE2EInspector());
                }

                public void Validate(ServiceEndpoint endpoint) { }

                #endregion IEndpointBehavior
            }
        }

    [1] http://www.microsoft.com/downloads/details.aspx?FamilyID=35ec8682-d5fd-4bc3-a51a-d8ad115a8792&displaylang=en
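    For reference, wiring the behavior in at runtime via config (as the post describes) would look roughly like the following. This is a minimal sketch rather than the original solution’s config: the extension name, behavior name and assembly name are placeholders and need to match your own service and compiled assembly.

        <system.serviceModel>
          <extensions>
            <behaviorExtensions>
              <!-- Type must resolve to the assembly containing AppFabricE2EBehaviorElement. -->
              <add name="appFabricE2ETracking"
                   type="Fabrikam.Services.AppFabricE2EBehaviorElement, Fabrikam.Services" />
            </behaviorExtensions>
          </extensions>
          <behaviors>
            <endpointBehaviors>
              <behavior name="trackingBehavior">
                <appFabricE2ETracking />
              </behavior>
            </endpointBehaviors>
          </behaviors>
        </system.serviceModel>

    Any endpoint that references behaviorConfiguration="trackingBehavior" then gets the message inspector attached at dispatch time.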

    Read the article

  • Don’t forget the usergroup meeting in London on Tuesday

    - by simonsabin
    It’s not too late to register for the SQLSocial event in London on Tuesday. This is a must-attend event for anyone that wants to know what’s coming with SQL Server in the next release or is considering SQL Azure. You can register here: http://sqlsocial20110607.eventbrite.com/ For full details of the event go to http://www.sqlsocial.com/Events/11-05-09/An_evening_with_the_SQL_Server_Leadership_Team.aspx...(read more)

    Read the article

  • SO-Aware sessions in Dallas and Houston

    - by gsusx
    Our WCF registry, SO-Aware, keeps being evangelized throughout the world. This week Tellago Studios’ Dwight Goins will be speaking at Microsoft events in Dallas and Houston (https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032469800&IO=ycqB%2bGJQr78fJBMJTye1oA%3d%3d) about WCF management best practices using SO-Aware. If you are in the area and passionate about WCF you should definitely swing by and give Dwight a hard time ;)...(read more)

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don’t hear about so many working on their log files.

    How can a log file get fragmented? I’m glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that’s off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one as the log file is written to in a ‘wrap-around’ method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let’s see how many VLFs we have in a brand new database.

        USE master
        GO
        CREATE DATABASE VLF_Test
        ON
        (
            NAME = VLF_Test,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
            SIZE = 100,
            MAXSIZE = 500,
            FILEGROWTH = 50
        )
        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5MB,
            MAXSIZE = 250MB,
            FILEGROWTH = 5MB
        );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    The results of this are, firstly, a new database created with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let’s first note that there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB.

    Let’s alter the CREATE DATABASE script to create a log file that’s a bit bigger and see what happens. Alter the code above so that the log file details are replaced by

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 1GB,
            MAXSIZE = 25GB,
            FILEGROWTH = 1GB
        );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        (
            NAME = VLF_Test_Log,
            FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
            SIZE = 5GB,
            MAXSIZE = 250GB,
            FILEGROWTH = 5GB
        );

    This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 VLFs of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

    If the file growth is lower than 64MB then 4 VLFs are created.
    If the growth is between 64MB and 1GB then 8 VLFs are created.
    If the growth is greater than 1GB then 16 VLFs are created.

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let’s add some data and see what happens.

        USE vlf_test
        GO
        -- we need somewhere to put the data so, a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        (
            Col_A int IDENTITY,
            Col_B CHAR(8000)
        )
        GO
        -- Let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO
        -- We can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO
        -- Let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it’s well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don’t affect system users. The steps are:

    1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) *

    * – If you don’t know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the list above.

    Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it’s been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this is going to catch any sneaky auto grows that take place and let you know about them right away.

    Hassle-free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.
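    As a quick way of keeping an eye on the VLF count, something along these lines can be run against the database in question. This is a rough sketch only: the temp table layout matches the DBCC LOGINFO output of SQL Server 2008, and later versions add a RecoveryUnitId column, so the table would need adjusting there.

        USE VLF_Test;
        GO
        -- Capture the DBCC LOGINFO output so the rows (one per VLF) can be counted.
        CREATE TABLE #vlf_info
        (
            FileId      int,
            FileSize    bigint,
            StartOffset bigint,
            FSeqNo      int,
            Status      int,
            Parity      tinyint,
            CreateLSN   numeric(25, 0)
        );

        INSERT #vlf_info
        EXECUTE ('DBCC LOGINFO');

        SELECT COUNT(*) AS VLF_Count
        FROM #vlf_info;

        DROP TABLE #vlf_info;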

    Read the article

  • Edinburgh this Thurs (25th) - Rob Carrol talks about how to build a high performance, scalable reporting platform

    - by tonyrogerson
    Scottish Area SQL Server User Group Meeting, Edinburgh – Thursday 25th March. An evening of SQL Server 2008 Reporting Services scalability and performance with Rob Carrol: see how to build a high performance, scalable reporting platform and the tuning techniques required to ensure that report performance remains optimal as your platform grows. Pizza and drinks will be provided! Register at http://www.sqlserverfaq.com/events/221/SQL-Server-2008-Reporting-Services-Scalability-and-Performance.aspx...(read more)

    Read the article

  • SQL Windowing screencast session for Cuppa Corner - rolling totals, data cleansing

    - by tonyrogerson
    In this 10 minute screencast I go through the basics of what I term windowing, which is basically the technique of filtering to a set of rows given a specific value, for instance a Sub-Query that aggregates or a join that returns more than just one row (for instance on a one to one relationship). http://sqlserverfaq.com/content/SQL-Basic-Windowing-using-Joins.aspx SQL below... USE tempdb go CREATE TABLE RollingTotals_Nesting ( client_id int not null, transaction_date date not null, transaction_amount...(read more)

    Read the article

  • Tip of the day: Don’t misuse the Link button control

    - by anas
    Misuse? Yes it is! I have seen a lot of developers who are using the LinkButton to do redirection only! They are handling its Click event just to write Response.Redirect("url"), like this:

        protected void LinkButton1_Click(object sender, EventArgs e)
        {
            Response.Redirect("~/ForgotPassword.aspx");
        }

    Ok, so to understand why it’s not a good practice, let’s discuss the redirection steps involved when using the mentioned method: User submits the page by clicking on the LinkButton control...(read more)
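    For comparison, when all you need is plain navigation, a HyperLink control (or a plain anchor tag) avoids the postback entirely. A minimal sketch, not taken from the original post:

        <asp:HyperLink ID="ForgotPasswordLink" runat="server"
                       NavigateUrl="~/ForgotPassword.aspx"
                       Text="Forgot your password?" />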

    Read the article

  • Updated slide decks from SSMS presentation at SNESSUG

    - by AaronBertrand
    Tonight I spoke at the SNESSUG user group meeting in Warwick, RI. You can download the slide deck here (this is a 3.5 MB PDF with presenter notes): http://sqlblog.com/files/folders/23423/download.aspx If you attended the talk, please feel free to provide feedback at speakerrate.com: http://speakerrate.com/talks/2849-management-studio-tips-tricks Today also happened to be a birthday celebration for Grant Fritchey ( blog | twitter ). He blogged about the meeting and also took a picture of the cake...(read more)

    Read the article

  • MS SQL Server 2008 Developer Training Kit Released

    - by Aamir Hasan
    The SQL Server 2008 Developer Training Kit will help you understand how to build web applications which deeply exploit the rich data types, programming models and new development paradigms in SQL Server 2008.  http://www.microsoft.com/downloads/details.aspx?FamilyID=E9C68E1B-1E0E-4299-B498-6AB3CA72A6D7&displaylang=en

    Read the article

  • Different types of Session state management options available with ASP.NET

    - by Aamir Hasan
    ASP.NET provides In-Process and Out-of-Process state management. In-Process stores the session in memory on the web server. This requires a "sticky server" (or no load-balancing) so that the user is always reconnected to the same web server. Out-of-Process session state management stores data in an external data source. The external data source may be either a SQL Server or a State Server service. Out-of-Process state management requires that all objects stored in session are serializable. Link: http://msdn.microsoft.com/en-us/library/ms178586%28VS.80%29.aspx
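    As a rough illustration (a sketch only – use exactly one element, and treat the connection strings as placeholders), the mode is chosen via the <sessionState> element in web.config:

        <!-- In-Process (default): session lives in the worker process memory. -->
        <sessionState mode="InProc" timeout="20" />

        <!-- State Server: session objects are serialized to the ASP.NET State Service. -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=localhost:42424"
                      timeout="20" />

        <!-- SQL Server: session objects are serialized to a SQL Server database. -->
        <sessionState mode="SQLServer"
                      sqlConnectionString="Data Source=.;Integrated Security=SSPI"
                      timeout="20" />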

    Read the article

  • Simon Sabin has a great discount for the SQL Server Masterclass

    - by Testas
    Check out Simon’s blog post to get a discount of £100 for this event: http://sqlblogcasts.com/blogs/simons/archive/2010/05/14/paul-and-kimberly-are-coming-the-uk.aspx Remember as well: pencil the 17th June in your diary and send an email to [email protected] with the title of Masterclass in the subject line. On Friday 25th May we will draw out a name and the winner will have free entrance to a must-see seminar on SQL Server from two of the industry’s leading experts. Thanks, Chris

    Read the article

  • CODE PROJECT VIRTUAL TECH SUMMIT ON MOBILE DEVELOPMENT – ON DEMAND

    - by Tiago Salgado
    If you have not seen Code Project's Virtual Tech Summit on Mobile Development, you can now watch all the sessions on demand. The sessions are:

    The Mobile Development Landscape
    Android Push Notifications
    Beginning Android Flash Development
    Android for .NET/C# Developers Using MonoDroid
    iPhone 101: Introduction to iPhone and iOS Development
    Building Rich Mobile Apps with HTML5, CSS3 and JavaScript
    Building MVVM apps for Windows Phone 7
    Using Panorama and Pivot Controls for WP7 apps
    Building Data Visualization Applications for Windows Phone 7

    To access the sessions, you need to register at the following link: http://www.virtualtechsummits.com/Register.aspx?EventID=11

    Read the article

  • Customizing CreateUserWizard control to show only Sign Up step

    - by bipinjoshi
    Recently a reader asked: Can the CreateUserWizard control be customized to show a predefined security question instead of allowing the user to enter his own question? Can the CreateUserWizard control be configured such that it shows only one step (Sign Up)? Can the completion step be skipped altogether? This short post is an attempt to answer these questions. http://www.bipinjoshi.net/articles/6439dc7c-08c7-4eec-b196-d1590699224c.aspx

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I’m feeling inspired and I’d like to share a technique we’ve implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and having to create a manual mapping for those could be really tedious. The URLs have the format http://www.littelfuse.com/series/series-name.html, for instance http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy.

    I took a solution that is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Secondly, we need to implement a handler that will take care of the actual lookup of the series. It will be amazing after we’ve gone over the details.

    Let’s start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts to talk about are the Pattern and the Action groups. The Pattern is nothing but regex – basically, I’m telling it to match the regex I have defined. In the Action group, I am telling it what to do; in this case, rewrite to the redirect.aspx webform. In this implementation, I will be using Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform will redirect to. Let me explain one small thing: the (\w+) in my Pattern group’s regex will actually translate to {R:1} in my Action group. This is where the magic begins.

    Now let’s see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing that you will probably ask about is the LFSearch class. It’s our own implementation for finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want an exact search or not. If eureka, then redirect to that item’s Path (URL). If not, tell the user tough luck, here’s the 404 page as a consolation. Amazing, ain’t it?
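    A rewrite rule along the lines described above would look roughly like this in web.config. This is a sketch under assumptions: the rule name is arbitrary and the pattern assumes the /series/name.html format mentioned above.

        <rewrite>
          <rules>
            <rule name="SeriesRewrite" stopProcessing="true">
              <!-- (\w+) captures the series name and becomes {R:1} below. -->
              <match url="^series/(\w+)\.html$" />
              <!-- Rewrite (not Redirect) so the original URL stays in the browser. -->
              <action type="Rewrite" url="redirect.aspx?s={R:1}" />
            </rule>
          </rules>
        </rewrite>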

    Read the article

  • Type of Blobs

    - by kaleidoscope
    With the release of the Windows Azure November 2009 CTP, we now have two types of blobs.

    Block Blob – this blob type has been in place since PDC 2008 and is optimized for streaming workloads. [Max size allowed: 200GB]
    Page Blob – added with the November 2009 CTP release, this new blob type is optimized for random reads/writes. [Max size allowed: 1TB]

    More details can be found at: http://geekswithblogs.net/IUnknown/archive/2009/11/16/azure-november-ctp-announced.aspx

    Amit, S
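    To illustrate the difference, a minimal sketch using the StorageClient library that shipped with the Azure SDK of that era (the container and blob names and the sizes are arbitrary, and it runs against the local development storage):

        using Microsoft.WindowsAzure;
        using Microsoft.WindowsAzure.StorageClient;

        class BlobTypesDemo
        {
            static void Main()
            {
                CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
                CloudBlobClient client = account.CreateCloudBlobClient();
                CloudBlobContainer container = client.GetContainerReference("demo");
                container.CreateIfNotExist();

                // Block blob: content is uploaded as blocks, good for streaming whole files.
                CloudBlockBlob blockBlob = container.GetBlockBlobReference("streamed.txt");
                blockBlob.UploadText("hello block blob");

                // Page blob: a fixed-size blob of 512-byte pages, good for random read/write.
                CloudPageBlob pageBlob = container.GetPageBlobReference("random.dat");
                pageBlob.Create(512 * 1024); // size must be a multiple of 512 bytes
            }
        }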

    Read the article

  • ASP.NET vNext in Visual Studio “14” CTP

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/05/asp.net-vnext-in-visual-studio-ldquo14rdquo-ctp.aspx

    Microsoft have issued a long article about the Visual Studio “14” CTP at http://blogs.msdn.com/b/webdev/archive/2014/06/03/asp-net-vnext-in-visual-studio-14-ctp.aspx. Please remember that the “14” CTP does NOT have “side-by-side” support and thus should be installed on a PC or virtual machine with no other version of Visual Studio installed.

    Read the article

  • Introduction to Developing Mobile Web Applications in ASP.NET MVC 4

    - by bipinjoshi
    As mobile devices are becoming more and more popular, web developers are also finding it necessary to target mobile devices while building their web sites. While developing a mobile web site is challenging due to the complexity in terms of device detection, screen size and browser support, ASP.NET MVC 4 makes a developer's life easy by providing easy ways to develop mobile web applications. To that end, this article introduces you to the basics of developing web sites using ASP.NET MVC 4 targeted at mobile devices. http://www.binaryintellect.net/articles/7a33d6fa-1dec-49fe-9487-30675d0a09f0.aspx

    Read the article

  • User Group Meeting Summary - April 2010

    - by Michael Stephenson
    Thanks to everyone who could make it to what turned out to be an excellent SBUG event. First, some thanks to: Speakers: Anthony Ross and Elton Stoneman. Host: the various people at Hitachi who helped to organise and arrange the venue.

    Session 1 – Getting up and running with Windows Mobile and the Windows Azure Service Bus. In this session Anthony discussed some considerations for using Windows Mobile and the Windows Azure Service Bus from a real-world project which Hitachi have been working on with EasyJet. Anthony also walked through a simplified demo of the concepts which applied on the project. In addition to the slides and demo, it was also very interesting to talk with the guys involved on this project and hear about their real experiences developing with the Azure Service Bus and some of the limitations they have had to work around in Windows Mobile's ability to interact with the service bus. On the back of this session we will look to do some further activities around this topic, and the guys offered to share their wish list of features for both Windows Mobile and Windows Azure, which we will look to share for user group discussion. Another interesting point was the cost aspect of using the ISB, which was very low.

    Session 2 – The Enterprise Cache. In the second session Elton used a few slides based around one of his customer scenarios where they are looking into the concept of an Enterprise Cache within the organisation. Elton discussed this concept and also a CodePlex project he is putting together which allows you to take advantage of a cache with various providers such as Memcached, AppFabric Caching and NCache. Following the presentation it was interesting to hear people's thoughts on various aspects, such as the enterprise cache versus an out-of-process application cache. Also there was interesting discussion around how people would like to search the cache in the future. We will again look to put together some follow-up activity on this.

    Meeting Summary: following the meeting all slide decks are saved in the SkyDrive location where we keep content from all meetings: http://cid-40015ea59a1307c8.skydrive.live.com/browse.aspx/.Public/SBUG/SBUG%20Meetings/2010%20April Remember that the details of all previous events are on the following page: http://uksoabpm.org/Events.aspx

    Competition: we had three copies of the Windows Identity Foundation Patterns and Practices book that were raffled on the night; it would be great to hear any feedback on the book from those who won it.

    Recording: the user group meeting was recorded and we will look to make this available online sometime soon.

    UG Business: the following things were discussed as general UG topics. We will change the name of the user group to the UK Connected Systems User Group so we are more in line with other user groups who cover similar topics, and we believe this will help us to attract more members. The content or focus of the user group is not expected to change. The next meeting is 26th May and can be registered at the following link: http://sbugmay2010.eventbrite.com/

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework’s String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should be, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories:

    Culture-sensitive comparison using a specific culture, defaulting to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
    Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
    Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)

    There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you’re at all interested in the topic these two MSDN articles are worth a read:

    Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
    How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx

    Those articles cover the technical details of string comparison well enough that I’m not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you’re comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The “Best Practices For Using Strings in the .NET Framework” article has the following to say:

    “On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting.”

    I don’t know about you, but I feel like that paragraph is a bit lacking. Are there really any “real world” reasons to use the invariant culture comparison? I think the answer to this question is, “yes”, but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed.

    Example: Prototype Names

    Let’s say that you work for a large multi-national toy company with branch offices in 10 different countries. Each year the company would work on 15-25 new toy prototypes, each of which is assigned a “code name” while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke:

    Each new prototype will get a code name that begins with the letter following the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year prototype names start back at “A”.
    The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese.)
    To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location’s company intranet site.

    Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year’s list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed?

    Sorting the list with a culture-sensitive comparison using the default configured culture on each country’s intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there’s no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn’t always be consistent between different users.

    Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let’s say that the current year’s prototype name list looks like this:

    Antílope (Spanish)
    Babouin (French)
    Čahoun (Czech)
    Diamond (English)
    Flosse (German)

    If you were to sort this list using ordinal rules you’d end up with:

    Antílope
    Babouin
    Diamond
    Flosse
    Čahoun

    This sort is no good because the entry for “C” appears at the bottom of the list after “F”. This is because the Czech entry for the letter “C” makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the “C” with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list.

    As it turns out, this situation is actually well-suited for the invariant culture comparison. The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we’ve walked through this example, the following line from “Best Practices For Using Strings in the .NET Framework” makes a lot more sense: “One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display.” That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
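    To make the sorting difference concrete, here is a small sketch (my own illustration, not code from the post) comparing an ordinal sort with an invariant-culture sort of the prototype names above:

        using System;
        using System.Collections.Generic;

        class PrototypeNameSort
        {
            static void Main()
            {
                var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

                // Ordinal: compares raw code points, so "Čahoun" drops to the bottom.
                var ordinal = new List<string>(names);
                ordinal.Sort(StringComparer.Ordinal);
                Console.WriteLine(string.Join(", ", ordinal));
                // Antílope, Babouin, Diamond, Flosse, Čahoun

                // Invariant culture: linguistically aware and identical on every machine,
                // so "Čahoun" sorts between "Babouin" and "Diamond".
                var invariant = new List<string>(names);
                invariant.Sort(StringComparer.InvariantCulture);
                Console.WriteLine(string.Join(", ", invariant));
                // Antílope, Babouin, Čahoun, Diamond, Flosse
            }
        }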

    Read the article

  • Visual Studio 2012 RC and Windows 8 Release Preview are available for download

    - by Fredrik N
    Today Visual Studio 2012 RC is available for download at: http://www.microsoft.com/visualstudio/11/en-us/downloads#express-win8

    EF 5, MVC 4, Web API and much more are in the RC release.

    Windows 8 Release Preview! http://blogs.msdn.com/b/b8/archive/2012/05/31/delivering-the-windows-8-release-preview.aspx

    ASP.NET MVC 4 RC for Visual Studio 2010 SP1: http://www.microsoft.com/en-us/download/details.aspx?id=29935

    Happy coding!!

    Read the article

  • New T-SQL Features in SQL Server 2011

    - by Divya Agrawal
    The SQL Server 2011 (or Denali) CTP is now available and can be downloaded at http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6a04f16f-f6be-4f92-9c92-f7e5677d91f9&displaylang=en SQL Server 2011 has several major enhancements, including a new look for SSMS. SSMS is now similar to Visual Studio, with greatly improved IntelliSense support. In this article we will focus on the T-SQL enhancements in SQL Server 2011. The main [...]

    Read the article
