Search Results

Search found 52798 results on 2112 pages for 'net reflector'.


  • How do you Send More than 20 Parameters to a Stored Procedure Using ODP.Net?

    - by discwiz
    Switching from Microsoft's Oracle driver to ODP.NET version 10.2.0.100. After changing the data types to OracleDbType in a procedure that worked perfectly using System.Data.OracleClient, the procedure fails if we try to pass in more than 20 parameters. The error returned is:

        ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'ADD_TARP_EVENT'
        ORA-06550: line 1, column 7: PL/SQL: Statement ignored

    If we reduce the number of parameters to fewer than 20 it works. Is this a known issue? Thanks, Dave.

    Here is the code for creating the parameters:

        Shared Function CreateTarpEventCommand(ByVal aTarpEvent As TARPEventType) As OracleCommand
            Dim cmd As New OracleCommand
            With aTarpEvent
                cmd.Parameters.Add(New OracleParameter("I_facID_C", OracleDbType.Char)).Value = .FacilityShortName
                cmd.Parameters.Add(New OracleParameter("I_facName_VC", OracleDbType.Varchar2)).Value = .FacilityLongName
                cmd.Parameters.Add(New OracleParameter("I_client_VC", OracleDbType.Varchar2)).Value = .ComputerNameTarpIsRunningOn
                cmd.Parameters.Add(New OracleParameter("I_TARP_Version_VC", OracleDbType.Varchar2)).Value = .TarpVersionNumber
                cmd.Parameters.Add(New OracleParameter("I_NAS_Type_VC", OracleDbType.Varchar2)).Value = .FacilityNASSystemType
                cmd.Parameters.Add(New OracleParameter("I_Aircraft1_Callsign_VC", OracleDbType.Varchar2)).Value = .Aircraft1Callsign
                If .Aircraft1Type Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Aircraft1_Type_VC", OracleDbType.Varchar2)).Value = .Aircraft1Type
                End If
                If .Aircraft1Category Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Aircraft1_Cat_VC", OracleDbType.Varchar2)).Value = .Aircraft1Category
                End If
                cmd.Parameters.Add(New OracleParameter("I_Aircraft2_Callsign_VC", OracleDbType.Varchar2)).Value = .Aircraft2Callsign
                If .Aircraft2Type Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Aircraft2_Type_VC", OracleDbType.Varchar2)).Value = .Aircraft2Type
                End If
                If .Aircraft2Category Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Aircraft2_Cat_VC", OracleDbType.Varchar2)).Value = .Aircraft2Category
                End If
                If .SensorShortName Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Sensor_Name_VC", OracleDbType.Varchar2)).Value = .SensorShortName
                End If
                If .TarpConfigurationName Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_TARP_Config_Name_VC", OracleDbType.Varchar2)).Value = .TarpConfigurationName
                End If
                If .EntryCreatorID Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Create_VC", OracleDbType.Varchar2)).Value = .EntryCreatorID
                End If
                If .LogAction Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Log_Action_VC", OracleDbType.Varchar2)).Value = .LogAction
                End If
                cmd.Parameters.Add(New OracleParameter("I_TARP_Mode_VC", OracleDbType.Varchar2)).Value = .TarpOperatingMode
                cmd.Parameters.Add(New OracleParameter("I_Min_Loss_N", OracleDbType.Decimal)).Value = .ClosestMeasureOfLoSS
                If .MapName Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_MAP_NAME_VC", OracleDbType.Varchar2)).Value = .MapName
                End If
                If .TarpConfigurationFileHash Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_CONFIG_HASH_VC", OracleDbType.Varchar2)).Value = .TarpConfigurationFileHash
                End If
                Dim aDate As OracleDate = CType(.LossEventsMessages(0).LossEventTime, System.DateTime)
                cmd.Parameters.Add(New OracleParameter("I_FIRST_LOSS_EVENT_DATE", OracleDbType.Date)).Value = aDate
                cmd.Parameters.Add(New OracleParameter("I_FIRST_LOSS_EVENT_MS_N", OracleDbType.Int32)).Value = .LossEventsMessages(0).LossEventMilliSeconds
                If .ZippedMapFiles Is Nothing Then
                    cmd.Parameters.Add(New OracleParameter("I_Map_File_BL", OracleDbType.Blob)).Value = .ZippedMapFiles
                End If
                cmd.Parameters.Add(New OracleParameter("I_TARP_Package_BL", OracleDbType.Blob)).Value = .ZippedTarpPackageWithoutMaps
                cmd.Parameters.Add(New OracleParameter("rs_RESULTS", OracleDbType.RefCursor)).Direction = ParameterDirection.Output
            End With
            Return cmd
        End Function

    And here is the code for executing the procedure:

        Dim workingDataSet As New DataSet
        Dim oracleConnection As New OracleConnection
        Dim cmd As New OracleCommand
        Dim oracleDataAdapter As New OracleDataAdapter
        Try
            Using oracleConnection
                oracleConnection.ConnectionString = System.Configuration.ConfigurationManager.AppSettings("MasterConnectionODT")
                cmd = HelperDB.CreateTarpEventCommand(TarpEvent)
                cmd.Connection = oracleConnection
                cmd.CommandText = "LOADER.ADD_TARP_EVENT"
                cmd.CommandType = CommandType.StoredProcedure
                Using oracleConnection
                    oracleConnection.Open()
                    Dim aTransation As OracleTransaction = oracleConnection.BeginTransaction(IsolationLevel.ReadCommitted)
                    Try
                        Using oracleDataAdapter
                            oracleDataAdapter = New OracleDataAdapter(cmd)
                            oracleDataAdapter.TableMappings.Add("Results", "rs_Max")
                            oracleDataAdapter.Fill(workingDataSet)
                            ....

    Read the article

  • Is .NET Compact a perfect subset of .NET?

    - by Andrew J. Brehm
    Is .NET Compact a perfect subset of .NET? Can I write a Windows Forms application and run it on .NET Compact, assuming that I take into account screen size and other limitations and avoid classes and methods not supported by .NET Compact? Or is .NET Compact a different and incompatible GUI framework?

    Read the article

  • Reuse another ASP.NET session (set Session ID)

    - by queen3
    My problem is that when I open a web application from Outlook in a separate IE window, the ASP.NET session is lost. This is (as described in several places) because the in-memory session cookie is lost. So it goes like this:
    1. The user works with the ASP.NET web application in Outlook, and this stores some info in the ASP.NET session.
    2. The user clicks Print to open a new IE window with print-ready data.
    3. The new window has a different ASP.NET session ID and can't access the old data.
    I think, maybe, if I pass the ASP.NET session ID to the new IE window, I can somehow "attach" to that session? Tell ASP.NET that this is the one that I need to be current?
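    For illustration, here is a minimal sketch of the "attach to an existing session" idea: pass the session ID in the print URL's query string and inject it back as the session cookie before session state is acquired. The "sid" parameter name is made up, and "ASP.NET_SessionId" is only ASP.NET's default cookie name (it can be changed in web.config), so treat this as a sketch rather than a vetted solution:

        // Global.asax.cs - a sketch; assumes the print window is opened with a URL
        // like Print.aspx?sid=<session id> ("sid" is a hypothetical parameter name).
        using System;
        using System.Web;

        public class Global : HttpApplication
        {
            protected void Application_BeginRequest(object sender, EventArgs e)
            {
                string sid = Request.QueryString["sid"];
                if (!String.IsNullOrEmpty(sid) && Request.Cookies["ASP.NET_SessionId"] == null)
                {
                    // Inject the old session's ID as the request cookie so that
                    // SessionStateModule attaches this request to the existing
                    // session instead of starting a new one.
                    Request.Cookies.Add(new HttpCookie("ASP.NET_SessionId", sid));
                }
            }
        }

    Note that carrying a session ID in a URL is effectively deliberate session fixation, so anything along these lines needs validation and a short lifetime before it is safe to rely on.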

    Read the article

  • .NET Framework 4 RTM on Windows server 2008 R2

    - by mare
    I've just installed .NET 4 on Windows Server 2008 R2 x64 and I am getting a 500 Internal Server Error with an ASP.NET MVC application which was previously running fine on 3.5. The application was upgraded from targeting 3.5 to targeting 4, and I personally built it today on my development machine (changed in VS - Properties to .NET Framework 4). On the server I installed both the .NET Framework 4 Client Profile and the full framework, each automatically through the Web Platform Installer. ASP.NET MVC 2 was also installed through the Platform Installer. I created a new .NET 4 application pool in IIS and placed the web app in it. Also, I have custom errors turned Off in web.config, but even so no detailed error is displayed - just the plain IIS 7.5 500 Internal Server Error. Any suggestions?
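    One thing worth noting for situations like this: customErrors only controls ASP.NET's own error pages, while IIS 7.5 has a separate httpErrors section that can swallow the detailed error and substitute its generic 500 page. A minimal sketch of the web.config settings that surface the real error (errorMode="Detailed" exposes internals, so it is a temporary debugging aid, not a production setting):

        <configuration>
            <system.web>
                <customErrors mode="Off" />
            </system.web>
            <system.webServer>
                <!-- Let the detailed error reach the browser instead of the generic IIS 500 page. -->
                <httpErrors errorMode="Detailed" />
            </system.webServer>
        </configuration>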

    Read the article

  • Why is licensed code packed, yet reviewable using a disassembler at the same time?

    - by Asad Butt
    Is it legal / ethical to copy code for any reason, or utilize it (like code review) from the .NET Framework or any other .NET based API, using Reflector or similar tools? If it is, what advantage do Microsoft and other licence-based software vendors gain from packing their code? If it is not, why can we use ILDasm and Reflector? Another way of saying this is: why pack it up if it is fine to review it? Probably I am missing some bits in the question; anyone who feels they could ask this question in a better way is most welcome to edit. Thanks

    Read the article

  • Profile service is not available in code-behind (Page_Load) within ASP.NET Web Projects

    - by afsharm
    The Profile object used by the ASP.NET Profile service is not available in page code-behind files, e.g. in Page_Load. It may just be a problem with my Visual Studio installation/configuration, but there is another symptom: classes placed in App_Code are not visible in page code either. Even when I add a new ASP.NET folder to my project, "App_Code" is not available as an option. I tested the entire scenario with an ASP.NET Web Project and an Empty ASP.NET Web Project; the problem does not occur when creating an ASP.NET Web Site. Environment: Visual Studio 2010 Ultimate x64, ASP.NET 4.0, Windows Server 2008 R2 x64. What may be the problem and how can it be solved?
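    For context, this matches a known difference between project types: the strongly-typed ProfileCommon proxy (like App_Code compilation) is generated only for Web Site projects, not Web Application Projects. In a Web Application Project the profile is still reachable through the untyped ProfileBase API. A minimal sketch, assuming a profile property named "Theme" is declared in web.config (the page and property names are made up for illustration):

        using System;
        using System.Web;
        using System.Web.Profile;

        public partial class MyPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // HttpContext.Profile returns the ProfileBase for the current user.
                ProfileBase profile = HttpContext.Current.Profile;
                string theme = (string)profile.GetPropertyValue("Theme"); // "Theme" is assumed
                profile.SetPropertyValue("Theme", "Dark");
                profile.Save();
            }
        }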

    Read the article

  • Comparing ASP.Net Framework to Cakephp, Zend , Ruby on Rails

    - by numerical25
    I am a PHP developer migrating to the C# ASP.NET Framework. As of right now, I am experienced in using PHP for developing sites, and I use CakePHP and the Zend Framework as my RAD tools to help me produce better applications. As I move over to ASP.NET, I have this view that the C# ASP.NET framework itself is already a RAD tool, equivalent to using CakePHP, Zend, or even Ruby on Rails, so I really shouldn't have any concerns about finding a separate library for ASP.NET to help me produce better applications. To me, ASP.NET is in a sense already like an MVC framework, because it separates the model from the view, and the methods are almost like controllers. So as far as having the best tools is concerned, should I be satisfied with just using ASP.NET as my RAD tool?

    Read the article

  • Qt vs. .NET - REAL comparisons for R.A.D. projects

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people argue about the dumbest crap. I'm trying to get a real comparison chart here, because I know a little about both frameworks but I don't know everything. I believe Qt and .NET both have strengths and weaknesses. This is meant to be a comparison that highlights those, so people can make more informed decisions before embarking on a project, in the spirit of R.A.D.

    Event Handling
    In Qt the event handling system is very simple. You just emit signals when something cool happens and then catch them in slots, e.g. // run some calculations, then emit valueChanged(30, false, 20.2); and then, catching it, any object can make a slot to receive that message easily: void MyObj::valueChanged(int percent, bool ok, float timeRemaining). It's easy to "block" an event or "disconnect" when needed, and it works seamlessly across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, void valueChanged(object sender, CustomEventArgs e)). And I'm not just talking about syntax, because in the end the .NET anonymous delegates are the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down for the simplest yet still flexible event handling system ever, IMO.

    Plugins and such
    I do love some of the ease of C# compared to C++, as well as .NET's assembly architecture, even though it leads to a bunch of .dlls (there are ways to combine everything into a single exe though). That is a big bonus for modular projects, which are a PITA to import stuff into in C++ as far as RAD is concerned.

    Database Ease of Doing Crap
    Also, what about datasets and database manipulations? I think .NET wins here, but I'm not sure.

    Threading/Concurrency
    How do you guys think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things. QtConcurrent is the simplest threading mechanism I've ever played with.

    Memory Usage
    Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? Doesn't the just-in-time compiler make native code that is pretty good, and that only happens the first time the program is run? However, I am a n00b who doesn't know what I'm talking about; please school me on the subject.
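    For readers unfamiliar with the .NET side being compared, here is a minimal sketch of the event pattern referenced above; all type and member names are made up for illustration:

        // A sketch of the .NET event pattern the post compares against Qt signals/slots.
        using System;

        class ValueChangedEventArgs : EventArgs
        {
            public int Percent;
            public bool Ok;
            public float TimeRemaining;
        }

        class Calculator
        {
            // Rough equivalent of declaring a Qt signal.
            public event EventHandler<ValueChangedEventArgs> ValueChanged;

            public void Run()
            {
                // ... run some calculations, then raise the event (Qt's 'emit').
                EventHandler<ValueChangedEventArgs> handler = ValueChanged;
                if (handler != null)
                    handler(this, new ValueChangedEventArgs { Percent = 30, Ok = false, TimeRemaining = 20.2f });
            }
        }

        // Subscribing is the rough equivalent of connecting a slot:
        //   calc.ValueChanged += (sender, e) => Console.WriteLine(e.Percent);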

    Read the article

  • Spring.NET & Immediacy CMS (or how to inject into server-side controls without using PageHandlerFactory)

    - by Simon Rice
    Is there any way to inject dependencies into an Immediacy CMS control using Spring.NET, ideally without having to use ContextRegistry when initialising the control?

    Update, with my own answer
    The issue here is that Immediacy already has a handler defined in web.config that deals with all aspx pages, so it's not possible to add an entry for Spring.NET's PageHandlerFactory in web.config as per a normal WebForms app. That rules out making the control implement ISupportsWebDependencyInjection. Furthermore, most of Immediacy's generated pages are aspx pages that don't physically exist on the drive. I have changed the title of the question to reflect this. What I have done to get dependency injection working is:

    Add the usual entries to web.config for Spring.NET as outlined in the documentation, except for adding the entry to the <httpHandlers> section. In this case I've got my object definitions in Spring.config.

    Create the following abstract base class that will deal with all of the dependency injection work:

    DIControl.cs

        public abstract class DIControl : ImmediacyControl
        {
            protected virtual string DIName
            {
                get { return this.GetType().Name; }
            }

            protected override void OnInit(EventArgs e)
            {
                if (ContextRegistry.GetContext().GetObject(DIName, this.GetType()) != null)
                    ContextRegistry.GetContext().ConfigureObject(this, DIName);
                base.OnInit(e);
            }
        }

    For non-Immediacy controls, you can make this server-side control inherit from Control or whatever subclass of that you like.

    For any control you wish to use with Spring.NET's Inversion of Control container, define it to inherit from DIControl and add the relevant entry to Spring.config, for example:

    SampleControl.cs

        public class SampleControl : DIControl, INamingContainer
        {
            public string Text { get; set; }
            protected string InjectedText { get; set; }

            public SampleControl() : base()
            {
                Text = "Hello world";
            }

            protected override void RenderContents(HtmlTextWriter output)
            {
                output.Write(string.Format("{0} {1}", Text, InjectedText));
            }
        }

    Spring.config

        <objects xmlns="http://www.springframework.net">
            <object id="SampleControl" type="MyProject.SampleControl, MyAssembly">
                <property name="InjectedText" value="from Spring.NET" />
            </object>
        </objects>

    You can optionally override DIName if you wish to name your entry in Spring.config differently from the name of your class. Provided everything's done correctly, you will have the control writing out "Hello world from Spring.NET" when used in a page.

    This solution uses Spring.NET's ContextRegistry from within the control, but I would be surprised if there's no way around that for Immediacy at least, since the page objects themselves aren't accessible. However, can this be improved at all from a Spring.NET perspective? Is there maybe an Immediacy plugin that already does this that I'm completely unaware of? Or is there an approach that does this in a more elegant way? I'm open to suggestions.

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch.

    Architecture

    One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation.

    Consuming the service

    Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much, but both are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

        svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

    I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way.
    If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

        <system.serviceModel>
            <client>
                <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />
            </client>
        </system.serviceModel>

    By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior.
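    The post doesn't show the partial-class fix itself, so here is a hedged sketch of the general shape such a fix can take; the client name is a stand-in for the SvcUtil-generated one, and this is not the author's actual code. The idea is to re-implement IDisposable on the generated client so Dispose closes cleanly and falls back to Abort on failure:

        // Sketch only: the other half of this partial class is the SvcUtil-generated
        // proxy (public partial class MyServiceClient : ClientBase<IMyService>, IMyService).
        using System;
        using System.ServiceModel;

        public partial class MyServiceClient : IDisposable
        {
            void IDisposable.Dispose()
            {
                if (State == CommunicationState.Faulted)
                {
                    Abort(); // Close() throws on a faulted channel
                }
                else
                {
                    try { Close(); }
                    catch { Abort(); throw; }
                }
            }
        }

    Because the hand-written partial re-lists IDisposable, its explicit implementation replaces the problematic one inherited from ClientBase, and controller code can keep using a plain using() block.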
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
    One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown there will result in ASP.NET's Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

        if (filterContext.HttpContext.Request.IsAjaxRequest())
        {
            filterContext.Result = Json(new ErrorModel(...));
            filterContext.ExceptionHandled = true;
        }

    My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

    Summary

    As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
    Search engine optimization (SEO) is important for any publicly facing web-site. A large % of traffic to sites now comes directly from search engines, and improving your site's search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site.

    This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even work with non-ASP.NET content).

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Measuring the SEO of your website with the Microsoft SEO Toolkit

    A few months ago I blogged about the free SEO Toolkit that we've shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I'll cover later in this blog post.

    Search Relevancy and URL Splitting

    Two of the important things that search engines evaluate when assessing your site's "search relevancy" are:

    1. How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.
    2. The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.

    One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site. Doing so will hurt with both of the situations above. In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL). Not allowing external sites to link to you in different ways sounds easy in theory - but you might wonder what exactly this means in practice and how you avoid it.

    4 Really Common SEO Problems Your Sites Might Have

    Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens, external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve.

    SEO Problem #1: Default Document

    IIS (and other web servers) supports the concept of a "default document". This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient - but it means that by default this content is available via two different publicly exposed URLs (which is bad). For example:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    SEO Problem #2: Different URL Casings

    Web developers often don't realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    SEO Problem #3: Trailing Slashes

    Consider the below two URLs - they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

    http://scottgu.com
    http://scottgu.com/

    SEO Problem #4: Canonical Host Names

    Sometimes sites serve the same web-site with both a leading "www" hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search ranking:

    http://scottgu.com/albums.aspx/
    http://www.scottgu.com/albums.aspx/

    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

    If you haven't been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site. The "good news" is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.

    You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green "Install Now" button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine. Once installed, you'll find that a new "URL Rewrite" icon is available within the IIS 7 Admin Tool. Double-clicking the icon will open up the URL Rewrite admin panel - which will display the list of URL Rewrite rules configured for a particular application or site. Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension). We can click the "Add Rule..." link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

    Scenario 1: Handling Default Document Scenarios

    One of the SEO problems I discussed earlier in this post was the scenario where the "default document" feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. Let's look at how we can create such a rule. We'll begin by clicking the "Add Rule" link in the screenshot above.
    This will cause the "Add Rule" dialog to display. We'll select the "Blank Rule" template within the "Inbound rules" section to create a new custom URL Rewriting rule, which will display an empty rule-editor pane. Don't worry - setting up the above rule is easy. The following 4 steps explain how to do so:

    Step 1: Name the Rule

    Our first step will be to name the rule we are creating. Naming it with a descriptive name will make it easier to find and understand later. Let's name this rule our "Default Document URL Rewrite" rule.

    Step 2: Setup the Regular Expression that Matches this Rule

    Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern. Don't worry if you aren't good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site. Below we are going to specify the following regular expression as our pattern rule:

    (.*?)/?Default\.aspx$

    This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding character zero or more times. The "/?" part says to match the slash symbol zero or one times. The "$" symbol at the end will ensure that the pattern will only match strings that end with Default.aspx. Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx). Because the "ignore case" checkbox is selected, it will match both "Default.aspx" as well as "default.aspx" within the URL.

    One nice feature built into the rule editor is a "Test pattern" button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring. Above I've added a "products/default.aspx" URL and clicked the "Test" button, which gives me immediate feedback on whether the rule will execute for it.

    Step 3: Setup a Permanent Redirect Action

    We'll then setup an action to occur when our regular expression pattern matches the incoming URL. In the dialog above I've changed the "Action Type" drop down to be a "Redirect" action. The "Redirect Type" will be a HTTP 301 Permanent redirect - which means search engines will follow it. I've also set the "Redirect URL" property to be:

    {R:1}/

    This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it. For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/. The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products". We are going to use this {R:1}/ value to be the URL we redirect users to.
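    As a quick aside, you can sanity-check this pattern and its back-references outside IIS with .NET's Regex class, whose syntax is compatible with patterns like this one. A minimal sketch:

        // Verifies the rule's pattern and the {R:0}/{R:1} back-references it produces.
        using System;
        using System.Text.RegularExpressions;

        class PatternTest
        {
            static void Main()
            {
                Regex rule = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);
                Match m = rule.Match("products/Default.aspx");
                Console.WriteLine(m.Success);          // True
                Console.WriteLine(m.Groups[0].Value);  // "products/Default.aspx" - {R:0}
                Console.WriteLine(m.Groups[1].Value);  // "products" - {R:1}
            }
        }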
    Step 4: Apply and Save the Rule

    Our final step is to click the "Apply" button in the top right hand of the IIS admin tool - which will cause the tool to persist the URL Rewrite rule into our application's root web.config file (under a <system.webServer/rewrite> configuration section):

        <configuration>
            <system.webServer>
                <rewrite>
                    <rules>
                        <rule name="Default Document" stopProcessing="true">
                            <match url="(.*?)/?Default\.aspx$" />
                            <action type="Redirect" url="{R:1}/" />
                        </rule>
                    </rules>
                </rewrite>
            </system.webServer>
        </configuration>

    Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely. This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

    Step 5: Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    Notice that the second URL automatically redirects to the first one. Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

    Scenario 2: Different URL Casing

    Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again, causing the "Add Rule" dialog to appear. Unlike the previous scenario (where we created a "Blank Rule"), with this scenario we can take advantage of a built-in "Enforce lowercase URLs" rule template. When we click the "ok" button we'll see a dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs. When we click the "Yes" button we'll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it - and automatically sends users to a lower-case version of the URL. We can click the "Apply" button to use this rule "as-is" and have it apply to all incoming URLs to our site.

    Because my www.scottgu.com site uses ASP.NET Web Forms, I'm going to make one small change to the rule we generated above - which is to add a condition that will ensure that URLs to ASP.NET's built-in "WebResource.axd" handler are excluded from our case-sensitivity URL Rewrite logic. URLs to the WebResource.axd handler will only come from server-controls emitted from my pages - and will never be linked to from external sites. While my site will continue to function fine if we redirect these URLs to automatically be lower-case - doing so isn't necessary and will add an extra HTTP redirect to many of my pages.
    The good news is that adding a condition that prevents my URL Rewriting rule from happening with certain URLs is easy. We simply need to expand the "Conditions" section of the form above, then click the "Add" button to add a condition clause, which brings up the "Add Condition" dialog. There I've entered {URL} as the Condition input - and said that this rule should only execute if the URL does not match a regex pattern which contains the string "WebResource.axd". This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

    Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters, you'll probably want to add additional condition filter clauses so that URLs to them also don't get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc). Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won't break) - but it will cause an extra HTTP redirect to happen on your site for URLs that don't need to be redirected for SEO reasons. So setting up a condition clause makes sense to add.

    When I click the "ok" button above and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

        <configuration>
            <system.webServer>
                <rewrite>
                    <rules>
                        <rule name="Default Document" stopProcessing="true">
                            <match url="(.*?)/?Default\.aspx$" />
                            <action type="Redirect" url="{R:1}/" />
                        </rule>
                        <rule name="Lower Case URLs" stopProcessing="true">
                            <match url="[A-Z]" ignoreCase="false" />
                            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                                <add input="{URL}" pattern="WebResource.axd" negate="true" />
                            </conditions>
                            <action type="Redirect" url="{ToLower:{URL}}" />
                        </rule>
                    </rules>
                </rewrite>
            </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has a capital "A") automatically does a redirect to a lower-case version of the URL.

    Scenario 3: Trailing Slashes

    Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

    http://scottgu.com
    http://scottgu.com/

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again, and this time use the built-in "Append or remove the trailing slash symbol" rule template.
    When we select it and click the "ok" button, we'll see a dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn't present. Like within our previous lower-casing rewrite rule, we'll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule. This will avoid an unnecessary redirect for those URLs.

    When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL doesn't have a trailing slash - and if the URL is not processed by either a directory or a file. This will save the following additional rule to our web.config file:

        <configuration>
            <system.webServer>
                <rewrite>
                    <rules>
                        <rule name="Default Document" stopProcessing="true">
                            <match url="(.*?)/?Default\.aspx$" />
                            <action type="Redirect" url="{R:1}/" />
                        </rule>
                        <rule name="Lower Case URLs" stopProcessing="true">
                            <match url="[A-Z]" ignoreCase="false" />
                            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                                <add input="{URL}" pattern="WebResource.axd" negate="true" />
                            </conditions>
                            <action type="Redirect" url="{ToLower:{URL}}" />
                        </rule>
                        <rule name="Trailing Slash" stopProcessing="true">
                            <match url="(.*[^/])$" />
                            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                                <add input="{URL}" pattern="WebResource.axd" negate="true" />
                            </conditions>
                            <action type="Redirect" url="{R:1}/" />
                        </rule>
                    </rules>
                </rewrite>
            </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

    http://scottgu.com
    http://scottgu.com/

    Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    Scenario 4: Canonical Host Names

    The final SEO problem I discussed earlier is the scenario where a site works with both a leading "www" hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search ranking:

    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL. Like before, we will setup the HTTP redirect to be a "permanent redirect" - which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve. To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again, and this time use the built-in "Canonical domain name" rule template.
    When we select it and click the "ok" button, we'll see a dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL. There I'm entering the primary URL address I want to expose to the web: scottgu.com. When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix. This will save the following additional rule to our web.config file:

        <configuration>
            <system.webServer>
                <rewrite>
                    <rules>
                        <rule name="Cannonical Hostname">
                            <match url="(.*)" />
                            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                                <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
                            </conditions>
                            <action type="Redirect" url="http://scottgu.com/{R:1}" />
                        </rule>
                        <rule name="Default Document" stopProcessing="true">
                            <match url="(.*?)/?Default\.aspx$" />
                            <action type="Redirect" url="{R:1}/" />
                        </rule>
                        <rule name="Lower Case URLs" stopProcessing="true">
                            <match url="[A-Z]" ignoreCase="false" />
                            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                                <add input="{URL}" pattern="WebResource.axd" negate="true" />
                            </conditions>
                            <action type="Redirect" url="{ToLower:{URL}}" />
                        </rule>
                        <rule name="Trailing Slash" stopProcessing="true">
                            <match url="(.*[^/])$" />
                            <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                                <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                                <add input="{URL}" pattern="WebResource.axd" negate="true" />
                            </conditions>
                            <action type="Redirect" url="{R:1}/" />
                        </rule>
                    </rules>
                </rewrite>
            </system.webServer>
        </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site. Try the following two URLs on my site:

    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has the "www" prefix) now automatically does a redirect to the second URL which does not have the www prefix. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    4 Simple Rules for Improved SEO

    The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have. The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site - and without having to break any existing links already pointing at your site. Users who follow existing links will be automatically redirected to the new URLs you wish to publish. And search engines will start to give your site a higher search relevancy ranking - which will list your site higher in search results and drive more traffic to it.
    Customizing your URL Rewriting rules further is easy to do, either by editing the web.config file directly or, alternatively, by double-clicking the URL Rewrite icon within the IIS 7.x admin tool, which will list all the active rules for your web-site or application. Clicking any of the rules will open the rules editor back up and allow you to tweak/customize/save them further.

    Summary

    Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven't already, download and use the SEO Toolkit to analyze the SEO of your sites today. New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published. Tools like the URL Rewrite Extension that I've talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today - without requiring you to change a lot of code. The URL Rewrite Extension provides a bunch of additional great capabilities - far beyond just SEO - as well. I'll be covering these additional capabilities more in future blog posts.

    Hope this helps,

    Scott

    Read the article

  • Crazy idea: Connect .NET and SAP with SAP JCo using IKVM.NET

    - by Kottan
    Because the SAP Connector for .NET is no longer maintained by SAP, I am now looking for an alternative to connect the Microsoft world with the SAP world. I know there are third-party products like ERPConnect, but I want to do this with tools from SAP. So the crazy idea arose to use the SAP Java Connector in combination with the tool IKVM.NET (www.ikvm.net/devguide/net2java.html). IKVM.NET provides the IKVMC tool, which converts Java bytecode to .NET dlls and exes. "No sooner said than done!" I converted the SAP JCo to .NET dlls and created a new Visual Studio solution. I put all the JCo files into a subdirectory of my solution and set two references, to the generated IKVM.OpenJDK.Core.dll and sapjco.dll. Great - all JCo classes were now available as .NET classes. Full of optimism, I wrote some little code to connect to a SAP system:

        JCO.Client client = null;
        client = JCO.createClient(...)

    The compilation of my test code had no errors. "Wonderful!" I thought. Then I started my test application. Unfortunately, I got an exception calling JCO.createClient:

        Could not load middleware layer 'com.sap.mw.jco.rfc.MiddlewareRFC'\r\nno sapjcorfc in java.library.path

    I have 2 questions on this topic:
    1) Do you think my idea of using the SAP Java Connector to connect .NET with SAP is a good idea, or is it nonsense? Perhaps someone has already had the same idea ;-)
    2) How can the above exception be solved?
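    For context, the conversion step mentioned above would look something like the following sketch; the exact jar name depends on the JCo distribution being used:

        ikvmc -target:library -out:sapjco.dll sapjco.jar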

    Read the article

  • Anatomy of a .NET Assembly - PE Headers

    - by Simon Cooper
    Today, I'll be starting a look at what exactly is inside a .NET assembly - how the metadata and IL are stored, how Windows knows how to load it, and what all those bytes are actually doing. First of all, we need to understand the PE file format.

    PE files

    .NET assemblies are built on top of the PE (Portable Executable) file format that is used for all Windows executables and dlls, which itself is built on top of the MSDOS executable file format. The reason for this is that when .NET 1 was released, it wasn't a built-in part of the operating system like it is nowadays. Prior to Windows XP, .NET executables had to load like any other executable, and had to execute native code to start the CLR to read & execute the rest of the file. However, starting with Windows XP, the operating system loader knows natively how to deal with .NET assemblies, rendering most of this legacy code & structure unnecessary. It still is part of the spec, and so is part of every .NET assembly. The result of this is that there are a lot of structure values in the assembly that simply aren't meaningful in a .NET assembly, as they refer to features that aren't needed. These are either set to zero or to certain pre-defined values, specified in the CLR spec. There are also several fields that specify the size of other data structures in the file, which I will generally be glossing over in this initial post.

    Structure of a PE file

    Most of a PE file is split up into separate sections; each section stores different types of data. For instance, the .text section stores all the executable code, .rsrc stores unmanaged resources, .debug contains debugging information, and so on. Each section has a section header associated with it; this specifies whether the section is executable, read-only or read/write, whether it can be cached... When an exe or dll is loaded, each section can be mapped into a different location in memory as the OS loader sees fit. In order to reliably address a particular location within a file, most file offsets are specified using a Relative Virtual Address (RVA). This specifies the offset from the start of each section, rather than the offset within the executable file on disk, so the various sections can be moved around in memory without breaking anything. The mapping from RVA to file offset is done using the section headers, which specify the range of RVAs which are valid within that section. For example, if the .rsrc section header specifies that the base RVA is 0x4000, and the section starts at file offset 0xa00, then an RVA of 0x401d (offset 0x1d within the .rsrc section) corresponds to a file offset of 0xa1d. Because each section has its own base RVA, each valid RVA has a one-to-one mapping with a particular file offset.
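    To make the RVA arithmetic concrete, here is a small illustrative sketch (the Section type and its values are simplified stand-ins for real section headers, which store separate raw and virtual sizes):

        // Maps an RVA to a file offset using the section headers, as described above.
        using System;

        struct Section
        {
            public string Name;
            public uint BaseRva;    // RVA at which the section is loaded
            public uint FileOffset; // where the section's raw data starts on disk
            public uint Size;       // simplified: one size, rather than raw + virtual
        }

        static class Rva
        {
            public static uint ToFileOffset(uint rva, Section[] sections)
            {
                foreach (Section s in sections)
                    if (rva >= s.BaseRva && rva < s.BaseRva + s.Size)
                        return s.FileOffset + (rva - s.BaseRva);
                throw new ArgumentException("RVA is not within any section");
            }
        }

        // With .rsrc at base RVA 0x4000 and file offset 0xa00, an RVA of 0x401d
        // maps to file offset 0xa1d - matching the example above.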
Between the DOS header & PE signature is the DOS stub - this is a stub program that simply prints out 'This program cannot be run in DOS mode.\r\n' to the console. I will be having a closer look at this stub later on. The PE signature starts at offset 0x80, with the magic number 'PE\0\0' (0x50, 0x45, 0x00, 0x00), identifying this file as a PE executable, followed by the PE file header (also known as the COFF header). The relevant field in this header is in the last two bytes, and it specifies whether the file is an executable or a dll; bit 0x2000 is set for a dll. Next up is the PE standard fields, which start with a magic number of 0x010b for x86 and AnyCPU assemblies, and 0x20b for x64 assemblies. Most of the rest of the fields are to do with the CLR loader stub, which I will be covering in a later post. After the PE standard fields comes the NT-specific fields; again, most of these are not relevant for .NET assemblies. The one that is is the highlighted Subsystem field, and specifies if this is a GUI or console app - 0x20 for a GUI app, 0x30 for a console app. Data directories & section headers After the PE and COFF headers come the data directories; each directory specifies the RVA (first 4 bytes) and size (next 4 bytes) of various important parts of the executable. The only relevant ones are the 2nd (Import table), 13th (Import Address table), and 15th (CLI header). The Import and Import Address table are only used by the startup stub, so we will look at those later on. The 15th points to the CLI header, where the CLR-specific metadata begins. After the data directories comes the section headers; one for each section in the file. Each header starts with the section's ASCII name, null-padded to 8 bytes. Again, most of each header is irrelevant, but I've highlighted the base RVA and file offset in each header. In the diagram, you can see the following sections: .text: base RVA 0x2000, file offset 0x200 .rsrc: base RVA 0x4000, file offset 0xa00 .reloc: base RVA 0x6000, file offset 0x1000 The .text section contains all the CLR metadata and code, and so is by far the largest in .NET assemblies. The .rsrc section contains the data you see in the Details page in the right-click file properties page, but is otherwise unused. The .reloc section contains address relocations, which we will look at when we study the CLR startup stub. What about the CLR? As you can see, most of the first 512 bytes of an assembly are largely irrelevant to the CLR, and only a few bytes specify needed things like the bitness (AnyCPU/x86 or x64), whether this is an exe or dll, and the type of app this is. There are some bytes that I haven't covered that affect the layout of the file (eg. the file alignment, which determines where in a file each section can start). These values are pretty much constant in most .NET assemblies, and don't affect the CLR data directly. Conclusion To summarize, the important data in the first 512 bytes of a file is: DOS header. This contains a pointer to the PE signature. DOS stub, which we'll be looking at in a later post. PE signature PE file header (aka COFF header). This specifies whether the file is an exe or a dll. PE standard fields. This specifies whether the file is AnyCPU/32bit or 64bit. PE NT-specific fields. This specifies what type of app this is, if it is an app. Data directories. The 15th entry (at offset 0x168) contains the RVA and size of the CLI header inside the .text section. Section headers. These are used to map between RVA and file offset. 
The important one is .text, which is where all the CLR data is stored. In my next post, we'll start looking at the metadata used by the CLR directly, which is all inside the .text section.
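    As an editorial aside, the RVA-to-file-offset mapping described above is easy to express in code. Below is a minimal C# sketch, not part of the original post: it reads the DOS header field at offset 0x3C to reach the headers, then walks the section headers to translate an RVA. The offsets follow the PE spec; error handling is omitted and the file name is illustrative.

        using System;
        using System.IO;

        static class PePeek
        {
            static void Main()
            {
                byte[] pe = File.ReadAllBytes("MyApp.exe"); // illustrative path

                // With the example section headers above, RVA 0x401d in .rsrc
                // should print file offset 0xa1d.
                Console.WriteLine("0x{0:x}", RvaToFileOffset(pe, 0x401d));
            }

            static long RvaToFileOffset(byte[] pe, uint rva)
            {
                // The DOS header field at offset 0x3C holds the file offset
                // of the 'PE\0\0' signature.
                int peSig = BitConverter.ToInt32(pe, 0x3C);

                // The COFF header follows the 4-byte signature; NumberOfSections
                // is 2 bytes in, SizeOfOptionalHeader is 16 bytes in.
                int coff = peSig + 4;
                ushort numSections = BitConverter.ToUInt16(pe, coff + 2);
                ushort optSize = BitConverter.ToUInt16(pe, coff + 16);

                // Section headers (40 bytes each) follow the 20-byte COFF
                // header and the optional header.
                int sec = coff + 20 + optSize;
                for (int i = 0; i < numSections; i++, sec += 40)
                {
                    uint virtualSize = BitConverter.ToUInt32(pe, sec + 8);
                    uint baseRva = BitConverter.ToUInt32(pe, sec + 12);
                    uint rawPtr = BitConverter.ToUInt32(pe, sec + 20);

                    // An RVA inside this section maps to the section's file
                    // offset plus the offset of the RVA within the section.
                    if (rva >= baseRva && rva < baseRva + virtualSize)
                        return rawPtr + (rva - baseRva);
                }
                return -1; // not mapped by any section
            }
        }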

    Read the article

  • Browser Specific Extensions of HttpClient

    - by imran_ku07
    Introduction:

    REpresentational State Transfer (REST) is having a great impact on service/API development because it offers a way to access a service without requiring any specific library, by embracing HTTP and its features. ASP.NET Web API makes it very easy to quickly build RESTful HTTP services. These HTTP services can be consumed by a variety of clients, including browsers, devices, machines, etc. With .NET Framework 4.5, we can use the HttpClient class to consume/send/receive RESTful HTTP services (for .NET Framework 4.0, the HttpClient class ships as part of ASP.NET Web API). The HttpClient class provides a bunch of helper methods (for example, DeleteAsync, PostAsync, GetStringAsync, etc.) to consume an HTTP service very easily. ASP.NET Web API adds some more extension methods (for example, PutAsJsonAsync, PutAsXmlAsync, etc.) to the HttpClient class to simplify its usage further. In addition, HttpClient is an ideal choice for writing integration tests for a RESTful HTTP service. Since a browser is a main client of any RESTful API, it is also important to test the HTTP service on a variety of browsers. A RESTful service embraces HTTP headers, and different browsers send different HTTP headers. So, I have created a package that adds overloads (with an additional Browser parameter) for almost all the helper methods of the HttpClient class. In this article, I will show you how to use this package.

    Description:

    Create/open your test project and install the ImranB.SystemNetHttp.HttpClientExtensions NuGet package. Then, add this using statement to your class:

        using ImranB.SystemNetHttp;

    Then, you can start using any HttpClient helper method that includes the additional Browser parameter. For example:

        var client = new HttpClient(myserver);
        var task = client.GetAsync("http://domain/myapi", Browser.Chrome);
        task.Wait();
        var response = task.Result;

    Here is the definition of Browser:

        public enum Browser
        {
            Firefox = 0,
            Chrome = 1,
            IE10 = 2,
            IE9 = 3,
            IE8 = 4,
            IE7 = 5,
            IE6 = 6,
            Safari = 7,
            Opera = 8,
            Maxthon = 9,
        }

    These extension methods make it very easy to write browser-specific integration tests. They also help an HTTP service consumer mimic the request-sending behavior of a browser. The package source is available on github, so you can grab the source and add some additional behavior on top of these extensions.

    Summary:

    Testing a REST API is an important aspect of service development and, today, testing with a browser is crucial. In this article, I showed how to write integration tests that mimic the browser's request-sending behavior, along with an example. Hopefully you will enjoy this article too.
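    As an editorial aside, here is a hypothetical C# sketch of what such an overload might do internally - this is not the package's actual code - stamping the request with a browser-specific User-Agent before sending it. The header value is illustrative only; the real package may mimic a fuller header set per browser.

        using System.Net.Http;
        using System.Threading.Tasks;

        public static class BrowserHttpClientExtensionsSketch
        {
            // Hypothetical overload: send a GET carrying headers the
            // selected browser would send.
            public static Task<HttpResponseMessage> GetAsync(
                this HttpClient client, string url, Browser browser)
            {
                var request = new HttpRequestMessage(HttpMethod.Get, url);

                // Illustrative User-Agent only; a real implementation would
                // also mimic Accept, Accept-Language, etc.
                string userAgent = browser == Browser.Chrome
                    ? "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/24.0 Safari/537.36"
                    : "Mozilla/5.0 (compatible)";
                request.Headers.TryAddWithoutValidation("User-Agent", userAgent);

                return client.SendAsync(request);
            }
        }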

    Read the article

  • SQL Server connection string Asynchronous Processing=true

    - by George2
    Hello everyone, I am using .NET 2.0 + SQL Server 2005 Enterprise + VSTS 2008 + C# + ADO.NET to develop an ASP.NET web application. My question is: if I am using Asynchronous Processing=true with SQL Server authentication mode (not Windows authentication mode, i.e. using the sa account and password in the connection string in web.config), will Asynchronous Processing=true impact the performance of my web application, or does that depend on my ADO.NET code implementation pattern/scenario? And why? Thanks in advance, George
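    For context, the Asynchronous Processing=true keyword mainly matters when the code actually uses ADO.NET's asynchronous command methods, which is why it is off by default. A minimal sketch of the asynchronous pattern the keyword enables (server, database, and credentials are placeholders):

        using System;
        using System.Data.SqlClient;

        class AsyncCommandSketch
        {
            static void Main()
            {
                // Asynchronous Processing=true is required for the
                // Begin/End command methods to work.
                string connStr = "Data Source=myServer;Initial Catalog=myDb;" +
                                 "User ID=sa;Password=...;Asynchronous Processing=true";

                using (SqlConnection conn = new SqlConnection(connStr))
                using (SqlCommand cmd = new SqlCommand("SELECT * FROM Orders", conn))
                {
                    conn.Open();

                    // APM pattern: the query runs without blocking this thread.
                    IAsyncResult ar = cmd.BeginExecuteReader();

                    // ... do other useful work here ...

                    using (SqlDataReader reader = cmd.EndExecuteReader(ar))
                    {
                        while (reader.Read())
                        {
                            // consume rows
                        }
                    }
                }
            }
        }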

    Read the article

  • Top 10 posts of 2010

    - by nmarun
    I quote one of my professors when I say: “We Share – We Improve”. It is through blogging that I’ve learned quite a bit. The ‘R&D’ done to learn and perfect a technology and the comments by other experts add towards skill-set building. Below are some of the articles that I’m glad I blogged about:

    ASP.NET MVC 2 Model Binding for a Collection
    MVC 3 - first look
    To ref or not to ref
    Xap Reflector – Silverlight 4
    Beware of const members
    LINQ to JS
    COM Automation with OpenOffice – Silverlight 4
    VS 2010 Productivity Power Tools
    Using Unity Application Block – from basics to generics
    ASP.NET MVC Model Binding

    Wishing you all a happy 2011 and keep/start blogging!

    Read the article

  • Microsoft .NET Web Programming: Web Sites versus Web Applications

    - by SAMIR BHOGAYTA
    In .NET 2.0, Microsoft introduced the Web Site. This was the default way to create a web project in Visual Studio 2005. In Visual Studio 2008, the Web Application was restored as the default web project type in Visual Studio/.NET 3.x.

    The Web Site is a file/folder-based project structure. It is designed such that pages are not compiled until they are requested ("on demand"). The advantages of the Web Site are:
    1) It is designed to accommodate non-.NET applications
    2) Deployment is as simple as copying files to the target server
    3) Any portion of the Web Site can be updated without requiring recompilation of the entire site

    The Web Application is a .dll-based project structure. ASP.NET pages and supporting files are compiled into assemblies that are then deployed to the target server. Advantages of the Web Application are:
    1) Precompiled files do not expose code to an attacker
    2) Precompiled files run faster because they have already been compiled to binary form (the Microsoft Intermediate Language, or MSIL) executed by the CLR (Common Language Runtime)
    3) References, assemblies, and other project dependencies are built into the compiled site and automatically managed. They do not need to be manually deployed and/or registered in the Global Assembly Cache: deployment does this for you

    If you are planning on using automated build and deployment, such as the Team Foundation Server Team Build engine, you will need to have your code in the form of a Web Application. If you have a Web Site, it will not compile properly as a Web Application would. However, all is not lost: it is possible to work around the issue by adding a Web Deployment Project to your solution and then: a) configuring the Web Deployment Project to precompile your code; and b) configuring your Team Build definition to use the Web Deployment Project as its source for compilation.

    https://msevents.microsoft.com/cui/WebCastEventDetails.aspx?culture=en-US&EventID=1032380764&CountryCode=US
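    As a side note, the precompilation step that a Web Deployment Project performs can also be run by hand with the aspnet_compiler command-line tool that ships with the .NET Framework; a sketch, with illustrative paths:

        aspnet_compiler -v /MySite -p C:\Source\MySite C:\Build\MySite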

    Read the article

  • ASP.NET MVC AND TOOLBOX

    - by imran_ku07
    Introduction:

    ASP.NET MVC's popularity is not hidden from today's world of web applications. One of the great things in ASP.NET MVC is the separation of concerns, in which presentation views are separate from the business or model layer. In these views, ASP.NET MVC provides some very good helpers which generate commonly used HTML markup fragments using a shorter syntax. These presentation views are familiar to Web Forms developers. But a pain point for developers using these helpers is that they need to type them every time they need a control, because they are more familiar with dragging and dropping controls from the ToolBox. So in this article I will use a cool feature of Visual Studio that allows you to add these controls to the ToolBox once and then, when needed, just drag and drop them from the ToolBox, very much like in Web Forms.

    Description:

    The Visual Studio ToolBox is rich enough to allow you to store code and HTML snippets in it. All you need to do is select the HTML helper markup and then simply drag and drop it into the ToolBox. Repeat this procedure for every HTML helper in ASP.NET MVC. When you need to use an HTML helper, you can drag and drop it from the ToolBox and enjoy drag-and-drop programming.

    Summary:

    In this article you saw how Visual Studio helps you drag and drop HTML snippets from the Design view to the ToolBox. This is one of the coolest features in Visual Studio.
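    For example, the kind of helper fragments you might store in the ToolBox look like this (ASPX view syntax; the field and action names are illustrative):

        <%= Html.TextBox("Name") %>
        <%= Html.ValidationMessage("Name") %>
        <%= Html.ActionLink("Back to List", "Index") %>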

    Read the article

  • How to disable an ASP.NET linkbutton when clicked

    - by Jeff Widmer
    Scenario: User clicks a LinkButton in your ASP.NET page and you want to disable it immediately using javascript so that the user cannot accidentally click it again. I wrote about disabling a regular submit button here: How to disable an ASP.NET button when clicked. But the method described in the other blog post does not work for disabling a LinkButton. This is because the Post Back Event Reference is called using a snippet of javascript from within the href of the anchor tag:

        <a id="MyContrl_MyButton" href="javascript:__doPostBack('MyContrl$MyButton','')">My Button</a>

    If you try to add an onclick event to disable the button, even though the button will become disabled, the href will still be allowed to be clicked multiple times (causing duplicate form submissions). To get around this, in addition to disabling the button in the onclick javascript, you can set the href to “#” to prevent it from doing anything on the page. You can add this to the LinkButton from your code-behind like this:

        MyButton.Attributes.Add("onclick", "this.href='#';this.disabled=true;" + Page.ClientScript.GetPostBackEventReference(MyButton, "").ToString());

    This code adds javascript to set the href to “#” and then disable the button in the onclick event of the LinkButton, by appending to the Attributes collection of the ASP.NET LinkButton control. Then the Post Back Event Reference for the button is called right after disabling the button. Make sure you add the Post Back Event Reference to the onclick because, now that you are changing the anchor href, the button still needs to perform the original postback. With the code above, the button onclick event will now look something like this:

        onclick="this.href='#';this.disabled=true;__doPostBack('MyContrl$MyButton','');"

    The anchor href is set to “#”, the LinkButton is disabled, AND then the button post back method is called.

    Technorati Tags: ASP.NET LinkButton
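    For completeness, here is a minimal sketch of where this call might live, assuming a LinkButton with ID MyButton declared on the page (the page class name is illustrative):

        using System;

        public partial class MyPage : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Rewrite the href, disable the anchor, then fire the
                // original postback.
                MyButton.Attributes.Add("onclick",
                    "this.href='#';this.disabled=true;" +
                    Page.ClientScript.GetPostBackEventReference(MyButton, ""));
            }
        }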

    Read the article

  • Creating vCard action result

    - by DigiMortal
    I added support for vCards to one of my ASP.NET MVC applications. I worked vCard support out as a very simple solution that fits perfectly into ASP.NET MVC applications. In this posting I will show you how to send vCards out as a response to an ASP.NET MVC request. We need three things: a vCard class, a vCard action result, and a controller method to test the vCard action result. Everything is very simple, let's get hands-on.

    vCard class

    As the first thing we need a vCard class. Last year I introduced a vCard class that also supports images. Let's take this class because it is easy to use and some dirty work is already done for us. NB! Take a look at the ASP.NET example in the blog posting referred to above; we will need it later when we close the topic. Now think about how useful blogging and information sharing with others can be: with this class publicly available, I saved pretty much time now. :)

    vCardResult

    As we have a vCard, it is now time to write the action result that we can use in our controllers. Here's the code:

        public class vCardResult : ActionResult
        {
            private vCard _card;

            protected vCardResult() { }

            public vCardResult(vCard card)
            {
                _card = card;
            }

            public override void ExecuteResult(ControllerContext context)
            {
                var response = context.HttpContext.Response;
                response.ContentType = "text/vcard";
                response.AddHeader("Content-Disposition", "attachment; fileName=" + _card.FirstName + " " + _card.LastName + ".vcf");

                var cardString = _card.ToString();
                var inputEncoding = Encoding.Default;
                var outputEncoding = Encoding.GetEncoding("windows-1257");
                var cardBytes = inputEncoding.GetBytes(cardString);

                var outputBytes = Encoding.Convert(inputEncoding, outputEncoding, cardBytes);

                response.OutputStream.Write(outputBytes, 0, outputBytes.Length);
            }
        }

    And we are done. Some notes:
    1) The vCard is sent to the browser as a downloadable file (the user can save it or open it with Outlook or any other e-mail client that supports vCards).
    2) The file name is made of the first and last name of the contact.
    3) Encoding is important, because Outlook may not understand vCards otherwise (I don't know if this problem is solved in Outlook 2010).

    Using vCardResult in a controller

    Now let's take a look at a simple controller method that accepts a person ID and returns a vCardResult:

        public class ContactsController : Controller
        {
            // ... other controller methods ...

            public vCardResult vCard(int id)
            {
                var person = _partyRepository.GetPersonById(id);
                var card = new vCard
                {
                    FirstName = person.FirstName,
                    LastName = person.LastName,
                    StreetAddress = person.StreetAddress,
                    City = person.City,
                    CountryName = person.Country.Name,
                    Mobile = person.Mobile,
                    Phone = person.Phone,
                    Email = person.Email,
                };

                return new vCardResult(card);
            }
        }

    Now you can run Visual Studio and check out how your vCard moves from your web application to your e-mail client.

    Conclusion

    We took old code that worked well with ASP.NET Web Forms and divided it into an action result and a controller method that uses vCard as a bridge between our controller and the action result. All functionality is located where it should be, and we did nothing complex. We wrote only a couple of lines of very easy code to achieve our goal.
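    To offer the download from a view, a plain action link pointing at the controller method above is enough; a sketch assuming a model that exposes the person's ID:

        <%= Html.ActionLink("Download vCard", "vCard", "Contacts", new { id = Model.Id }, null) %>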
Do you understand now why I love ASP.NET MVC? :)

    Read the article
