Search Results

Search found 5963 results on 239 pages for 'rest wing'.


  • Problem removing a dynamic form field in jquery

    - by rshivers
    I'm trying to remove a dynamic form field by clicking a button. Removing the field should also subtract whatever value it held from the running total in my calculation. This is the code: function removeFormField(id) { var id = $(id).attr("name"); $('#target1').text($("#total" + id).map(function() { var currentValue = parseFloat(document.getElementById("currentTotal").value); var newValue = parseFloat($("#total" + id).text()); var newTotal = currentValue - newValue; document.getElementById("currentTotal").value = newTotal; return newTotal; }).get().join()); $(id).remove(); } The subtraction portion of the code works with no problem; the issue is with the last line, which should remove the field. If I comment out the rest of the code the removal works, but not with the rest of the code in place. I know this is something simple, yet I can't seem to wrap my mind around it. Can someone please help?
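
    The last line fails because id has been reassigned to the field's name attribute, so $(id) is handed a bare name string instead of the element. A minimal sketch of one possible fix, keeping the original behaviour (the "#total" + suffix, "#currentTotal" and "#target1" ids come from the question; the parameter name is illustrative):

        function removeFormField(field) {
            var $field = $(field);                      // keep a reference to the element itself
            var suffix = $field.attr("name");           // numeric suffix shared with "#total" + suffix
            var current = parseFloat($("#currentTotal").val()) || 0;
            var removed = parseFloat($("#total" + suffix).text()) || 0;
            var newTotal = current - removed;
            $("#currentTotal").val(newTotal);
            $("#target1").text(newTotal);
            $field.remove();                            // remove the element, not a selector built from its name
        }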

    Read the article

  • How to hide classes to external namespaces? Something like the package-protected modifier in Java

    - by devoured elysium
    In Java it is easy to "hide" classes from outside your package (namespace), as you can give them package-private (default) access. There seems to be no equivalent modifier in C#. Is there any way I could mimic that behaviour in C#? I have a couple of classes that I really wouldn't like the rest of the assembly to know about. It is OK for classes in the same namespace to know about them, but I'd like them to be hidden from the rest of the library/application. I know of the internal keyword, but that only hides classes from code outside your assembly. That doesn't really help in my case, as I'd like to keep everything glued together in just one .DLL/.EXE. Thanks
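
    C# has no namespace-level visibility, so the closest equivalents are nesting or a separate assembly. A minimal sketch of both workarounds (all type and member names are illustrative):

        // Option 1: nest the type inside the only class that needs it.
        // A private nested class is invisible to everything else, even in the same assembly.
        public class OrderProcessor
        {
            private class PricingRules
            {
                public decimal Apply(decimal total) { return total * 0.95m; }
            }

            public decimal Process(decimal total)
            {
                return new PricingRules().Apply(total);
            }
        }

        // Option 2: move the hidden classes into their own class library and mark them
        // internal there; the main assembly cannot see them. This conflicts with the
        // single .DLL/.EXE requirement, so Option 1 is usually the closer fit here.
        internal class HiddenHelper
        {
        }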

    Read the article

  • Combining two PNG files in Android

    - by John
    I have two PNG image files that I would like my Android app to combine programmatically into one PNG image file, and I am wondering if it is possible to do so. If so, what I would like to do is just overlay them on each other to create one file. The idea behind this is that I have a handful of PNG files, some with a portion of the image on the left and the rest transparent, and others with the image on the right and the rest transparent. Based on user input the app will combine two of them to make one file to display. (I can't just display the two images side by side; they need to be one file.) Is this possible, and how?
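
    A minimal sketch of the usual approach with Bitmap and Canvas, assuming the code runs inside an Activity, both PNGs share the same dimensions, and the resource ids and output file name are illustrative (imports come from android.graphics, android.content and java.io):

        private Bitmap combineAndSave(int leftResId, int rightResId) throws IOException {
            Bitmap left  = BitmapFactory.decodeResource(getResources(), leftResId);
            Bitmap right = BitmapFactory.decodeResource(getResources(), rightResId);

            // Draw both PNGs onto one ARGB bitmap; transparent areas let the other image show through.
            Bitmap combined = Bitmap.createBitmap(left.getWidth(), left.getHeight(),
                                                  Bitmap.Config.ARGB_8888);
            Canvas canvas = new Canvas(combined);
            canvas.drawBitmap(left, 0f, 0f, null);
            canvas.drawBitmap(right, 0f, 0f, null);

            // Persist as a single PNG file (or just hand `combined` to an ImageView).
            FileOutputStream out = openFileOutput("combined.png", Context.MODE_PRIVATE);
            try {
                combined.compress(Bitmap.CompressFormat.PNG, 100, out);
            } finally {
                out.close();
            }
            return combined;
        }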

    Read the article

  • Remove item from array in JavaScript

    - by Red
    I was trying to remove some items from an array: Array.prototype.remove = function(from, to) { var rest = this.slice((to || from) + 1 || this.length); this.length = from < 0 ? this.length + from : from; return this.push.apply(this, rest); }; var BOM = [0,1,0,1,0,1,1]; var bomlength = BOM.length; for(var i = 0; i < IDLEN ;++i) { if( BOM[i] == 1) { BOM.remove(i); //IDLEN--; } } The result is BOM = [0,0,0,1]; the expected result is BOM = [0,0,0]. It looks like I am doing something wrong. Please help me. Thanks.
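
    The custom remove() shifts every later element one slot to the left, so a forward loop skips the element that slides into the removed index; that is why one 1 survives. A minimal sketch of two fixes, using the built-in splice() and filter() rather than the prototype method from the question:

        // Fix 1: walk the array backwards so removals never affect indexes still to be visited.
        var BOM = [0, 1, 0, 1, 0, 1, 1];
        for (var i = BOM.length - 1; i >= 0; i--) {
            if (BOM[i] === 1) {
                BOM.splice(i, 1);
            }
        }
        // BOM is now [0, 0, 0]

        // Fix 2: build a new array instead of mutating while iterating (needs ES5's filter).
        var filtered = [0, 1, 0, 1, 0, 1, 1].filter(function (v) { return v !== 1; });
        // filtered is [0, 0, 0]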

    Read the article

  • How to make content take up 100% of height and width

    - by Hiro2k
    I'm so close, but I can't get this to work the way I want. I'm trying to keep the header and the menu always visible and have the content take up the rest of the viewport, with its own scrollbar when it overflows. The problem is that the content's width isn't being stretched to the right, and I get a scrollbar in the middle of my page. I also can't get it to take up the remaining window height; if I set the height to 100% it uses the whole window height instead of what is left. I'm only working with IE7 or better, and if it takes JavaScript I'm not averse to using jQuery to solve this problem! http://pastebin.com/x31mGtXr
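
    A minimal sketch of a pure-CSS layout that works in IE7 and later, assuming the header and menu together occupy a fixed 120px (adjust the pixel values to the real heights; the element ids are illustrative). The content pane is pinned to the remaining viewport area with opposing offsets and scrolls on its own:

        html, body { height: 100%; margin: 0; overflow: hidden; }
        #header  { position: absolute; top: 0;     left: 0; right: 0; height: 80px; }
        #menu    { position: absolute; top: 80px;  left: 0; right: 0; height: 40px; }
        #content { position: absolute; top: 120px; left: 0; right: 0; bottom: 0;
                   overflow: auto; }  /* fills whatever height and width remain */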

    Read the article

  • Why is this writing part of the text to a new line? (Python)

    - by whatsherface
    I'm adding some new bits to one of the lines in a text file and then writing it, along with the rest of the lines in the file, to a new file. Referring to the if statement, I want that to be all on the same line: x = 13.55553e9 y = 14.55553e9 z = 15.55553e9 infname = 'afilename' outfname = 'anotherone' oldfile = open(infname) lnum=1 for line in oldfile: if (lnum==18): line = "{0:.2e}".format(x)+' '+line+' '+"{0:.2e}".format(y)+' '+"{0:.2e}".format(z) newfile = open(outfname,'w') newfile.write(line) lnum=lnum+1 oldfile.close() newfile.close() but y and z are being written on the line below the rest of it. What am I missing here?
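
    The culprit is the trailing newline: iterating over a file yields each line with its '\n' still attached, so anything concatenated after line starts on the next line. A minimal sketch of the fix, stripping the newline before building the combined line and adding it back when writing (same placeholder filenames as above):

        x, y, z = 13.55553e9, 14.55553e9, 15.55553e9
        infname, outfname = 'afilename', 'anotherone'

        with open(infname) as oldfile, open(outfname, 'w') as newfile:
            for lnum, line in enumerate(oldfile, start=1):
                if lnum == 18:
                    # strip the newline from the original line, then add one back at the end
                    line = "{0:.2e} {1} {2:.2e} {3:.2e}\n".format(x, line.rstrip('\n'), y, z)
                newfile.write(line)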

    Read the article

  • Accessing current class through $this-> from a function called statically. [PHP]

    - by MQA
    This feels a bit messy, but I'd like to be able to call a member function statically, yet have the rest of the class behave normally... Example: <?php class Email { private $username = 'user'; private $password = 'password'; private $from = '[email protected]'; public $to; public function SendMsg($to, $body) { if (isset($this)) $email &= $this; else $email = new Email(); $email->to = $to; // Rest of function... } } Email::SendMsg('[email protected]'); How best do I allow the static function call in this example? Thanks!
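
    Note that $email &= $this in the posted code is a bitwise AND assignment; an assignment by reference would be $email =& $this. A cleaner pattern is to keep SendMsg as an instance method and add a small static wrapper, rather than testing isset($this) inside a method that may be called statically (later PHP versions deprecate static calls to instance methods). A minimal sketch; names and addresses are illustrative:

        <?php
        class Email {
            private $username = 'user';
            private $password = 'password';
            private $from     = 'sender@example.com';
            public  $to;

            public function SendMsg($to, $body) {
                $this->to = $to;
                // Rest of function...
                return true;
            }

            // Static entry point that delegates to a real instance.
            public static function Send($to, $body) {
                $email = new self();
                return $email->SendMsg($to, $body);
            }
        }

        Email::Send('someone@example.com', 'Hello');     // static call
        $email = new Email();
        $email->SendMsg('someone@example.com', 'Hello'); // normal instance call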

    Read the article

  • Installing the Elgg Social Networking CMS

    One Open Source CMS that stands out above the rest for small businesses and personal networks is the Elgg open source "social engine," which can be used to create social applications. Scott Clark shows you how to install and get started with this social networking CMS.

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
Current Status Currently West Wind WebSurge is still in Beta status. I’m still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won’t be free. There’s a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there’s an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there’s a lot of interest I don’t foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I’m very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it already has provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section. Microsoft MVPs and Insiders get a free License If you’re a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I’ll send you a not-for-resale license. Send any messages to [email protected]. Resources For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home Download West Wind WebSurge Getting Started with West Wind WebSurge Video © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • Working with Silverlight DataGrid RowDetailsTemplate

    - by mohanbrij
    In this post I am going to show how we can use the Silverlight DataGrid RowDetails Template, Before I start I assume that you know basics of Silverlight and also know how you create a Silverlight Projects. I have started with the Silverlight Application, and kept all the default options before I created a Silverlight Project. After this I added a Silverlight DataGrid control to my MainForm.xaml page, using the DragDrop feature of Visual Studio IDE, this will help me to add the default namespace and references automatically. Just to give you a quick look of what exactly I am going to do, I will show you in the screen below my final target, before I start explaining rest of my codes. Before I start with the real code, first I have to do some ground work, as I am not getting the data from the DB, so I am creating a class where I will populate the dummy data. EmployeeData.cs public class EmployeeData { public string FirstName { get; set; } public string LastName { get; set; } public string Address { get; set; } public string City { get; set; } public string State { get; set; } public string Country { get; set; } public EmployeeData() { } public List<EmployeeData> GetEmployeeData() { List<EmployeeData> employees = new List<EmployeeData>(); employees.Add ( new EmployeeData { Address = "#407, PH1, Foyer Appartment", City = "Bangalore", Country = "India", FirstName = "Brij", LastName = "Mohan", State = "Karnataka" }); employees.Add ( new EmployeeData { Address = "#332, Dayal Niketan", City = "Jamshedpur", Country = "India", FirstName = "Arun", LastName = "Dayal", State = "Jharkhand" }); employees.Add ( new EmployeeData { Address = "#77, MSR Nagar", City = "Bangalore", Country = "India", FirstName = "Sunita", LastName = "Mohan", State = "Karnataka" }); return employees; } } The above class will give me some sample data, I think this will be good enough to start with the actual code. now I am giving below the XAML code from my MainForm.xaml First I will put the Silverlight DataGrid, <data:DataGrid x:Name="gridEmployee" CanUserReorderColumns="False" CanUserSortColumns="False" RowDetailsVisibilityMode="VisibleWhenSelected" HorizontalAlignment="Center" ScrollViewer.VerticalScrollBarVisibility="Auto" Height="200" AutoGenerateColumns="False" Width="350" VerticalAlignment="Center"> Here, the most important property which I am going to set is RowDetailsVisibilityMode="VisibleWhenSelected" This will display the RowDetails only when we select the desired Row. Other option we have in this is Collapsed and Visible. Which will either make the row details always Visible or Always Collapsed. but to get the real effect I have selected VisibleWhenSelected. Now I am going to put the rest of my XAML code. <data:DataGrid.Columns> <!--Begin FirstName Column--> <data:DataGridTextColumn Width="150" Header="First Name" Binding="{Binding FirstName}"/> <!--End FirstName Column--> <!--Begin LastName Column--> <data:DataGridTextColumn Width="150" Header="Last Name" Binding="{Binding LastName}"/> <!--End LastName Column--> </data:DataGrid.Columns> <data:DataGrid.RowDetailsTemplate> <!-- Begin row details section. --> <DataTemplate> <Border BorderBrush="Black" BorderThickness="1" Background="White"> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="0.2*" /> <ColumnDefinition Width="0.8*" /> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition /> <RowDefinition /> <RowDefinition /> <RowDefinition /> </Grid.RowDefinitions> <!-- Controls are bound to FullAddress properties. 
--> <TextBlock Text="Address : " Grid.Column="0" Grid.Row="0" /> <TextBlock Text="{Binding Address}" Grid.Column="1" Grid.Row="0" /> <TextBlock Text="City : " Grid.Column="0" Grid.Row="1" /> <TextBlock Text="{Binding City}" Grid.Column="1" Grid.Row="1" /> <TextBlock Text="State : " Grid.Column="0" Grid.Row="2" /> <TextBlock Text="{Binding State}" Grid.Column="1" Grid.Row="2" /> <TextBlock Text="Country : " Grid.Column="0" Grid.Row="3" /> <TextBlock Text="{Binding Country}" Grid.Column="1" Grid.Row="3" /> </Grid> </Border> </DataTemplate> <!-- End row details section. --> </data:DataGrid.RowDetailsTemplate>   In the code above, first I am declaring the simple dataGridTextColumn for FirstName and LastName, and after this I am creating the RowDetailTemplate, where we are just putting the code what we usually do to design the Grid. I mean nothing very much RowDetailTemplate Specific, most of the code which you will see inside the RowDetailsTemplate is plain and simple, where I am binding rest of the Address Column. And that,s it. Once we will bind the DataGrid, you are ready to go. In the code below from MainForm.xaml.cs, I am just binding the DataGrid public partial class MainPage : UserControl { public MainPage() { InitializeComponent(); BindControls(); } private void BindControls() { EmployeeData employees = new EmployeeData(); gridEmployee.ItemsSource = employees.GetEmployeeData(); } } Once you will run, you can see the output I have given in the screenshot above. In this example I have just shown the very basic example, now it up to your creativity and requirement, you can put some other controls like checkbox, Images, even other DataGrid, etc inside this RowDetailsTemplate column. I am attaching my sample source code with this post. I have used Silverlight 3 and Visual Studio 2008, but this is fully compatible with you Silverlight 4 and Visual Studio 2010. you may just need to Upgrade the attached Sample. You can download from here.

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or PageMethods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However if you want to use an ASP.NET AJAX service (or Page Methods) with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are: Capture form variables into an array on the client with jQuery’s .serializeArray() function Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array On the server create a custom type that matches the .serializeArray() name/value structure Create extension methods on NameValue[] to easily extract form variables Create a [WebMethod] that accepts this name/value type as an array (NameValue[]) This seems like a lot of work but realize that steps 3 and 4 are a one time setup step that can be reused in your entire site or multiple applications. Let’s look at a short example that looks like this as a base form of fields to ship to the server: The HTML for this form looks something like this: <div id="divMessage" class="errordisplay" style="display: none"> </div> <div> <div class="label">Name:</div> <div><asp:TextBox runat="server" ID="txtName" /></div> </div> <div> <div class="label">Company:</div> <div><asp:TextBox runat="server" ID="txtCompany"/></div> </div> <div> <div class="label" ></div> <div> <asp:DropDownList runat="server" ID="lstAttending"> <asp:ListItem Text="Attending" Value="Attending"/> <asp:ListItem Text="Not Attending" Value="NotAttending" /> <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" /> <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" /> </asp:DropDownList> </div> </div> <div> <div class="label">Special Needs:<br /> <small>(check all that apply)</small></div> <div> <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple"> <asp:ListItem Text="Vegitarian" Value="Vegitarian" /> <asp:ListItem Text="Vegan" Value="Vegan" /> <asp:ListItem Text="Kosher" Value="Kosher" /> <asp:ListItem Text="Special Access" Value="SpecialAccess" /> <asp:ListItem Text="No Binder" Value="NoBinder" /> </asp:ListBox> </div> </div> <div> <div class="label"></div> <div> <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" /> </div> </div> <hr /> <input type="button" id="btnSubmit" value="Send Registration" /> The form includes a few different kinds of form fields including a multi-selection listbox to demonstrate retrieving multiple values. Setting up the Server Side [WebMethod] The [WebMethod] on the server we’re going to call is going to be very simple and just capture the content of these values and echo then back as a formatted HTML string. 
Obviously this is overly simplistic but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback. public class PageMethodsService : System.Web.Services.WebService { [WebMethod] public string SendRegistration(NameValue[] formVars) { StringBuilder sb = new StringBuilder(); sb.AppendFormat("Thank you {0}, <br/><br/>", HttpUtility.HtmlEncode(formVars.Form("txtName"))); sb.AppendLine("You've entered the following: <hr/>"); foreach (NameValue nv in formVars) { // strip out ASP.NET form vars like _ViewState/_EventValidation if (!nv.name.StartsWith("__")) { if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk")) sb.Append(nv.name.Substring(3)); else sb.Append(nv.name); sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>"); } } sb.AppendLine("<hr/>"); string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs == null) sb.AppendLine("No Special Needs"); else { sb.AppendLine("Special Needs: <br/>"); foreach (string need in needs) { sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>"); } } return sb.ToString(); } } The key feature of this method is that it receives a custom type called NameValue[] which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: The actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray(): public class NameValue { public string name { get; set; } public string value { get; set; } } The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array: /// <summary> /// Simple NameValue class that maps name and value /// properties that can be used with jQuery's /// $.serializeArray() function and JSON requests /// </summary> public static class NameValueExtensionMethods { /// <summary> /// Retrieves a single form variable from the list of /// form variables stored /// </summary> /// <param name="formVars"></param> /// <param name="name">formvar to retrieve</param> /// <returns>value or string.Empty if not found</returns> public static string Form(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault(); if (matches != null) return matches.value; return string.Empty; } /// <summary> /// Retrieves multiple selection form variables from the list of /// form variables stored. 
/// </summary> /// <param name="formVars"></param> /// <param name="name">The name of the form var to retrieve</param> /// <returns>values as string[] or null if no match is found</returns> public static string[] FormMultiple(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray(); if (matches.Length == 0) return null; return matches; } } Using these extension methods it’s easy to retrieve individual values from the array: string name = formVars.Form("txtName"); or multiple values: string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs != null) { // do something with matches } Using these functions in the SendRegistration method it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in typical app you’d probably want to validate the input data and save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code. POSTing Form Variables from the Client to the AJAX Service To call the AJAX service method on the client is straight forward and requires only use of little native jQuery plus JSON serialization functionality. To start add jQuery and the json2.js library to your page: <script src="Scripts/jquery.min.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document let’s hookup the button click handler and call the service: $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); $.ajax({ url: "PageMethodsService.asmx/SendRegistration", type: "POST", contentType: "application/json", data: JSON.stringify({ formVars: arForm }), dataType: "json", success: function (result) { var jEl = $("#divMessage"); jEl.html(result.d).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, error: function (xhr, status) { alert("An error occurred: " + status); } }); } The key feature in this code is the $("#form1").serializeArray();  call which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with: JSON.stringify({ formVars: arForm }) The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case its a single parameter called formVars and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return the success callback receives the result from the AJAX callback which in this case is the formatted string which is simply assigned to an element in the form and displayed. 
Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object so the result has a ‘d’ property that holds the actual response: jEl.html(result.d).fadeIn(1000); Slightly simpler: Using ServiceProxy.js If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly: Automatic JSON encoding Automatic fix up of ‘d’ wrapper property Automatic Date conversion on the client Simplified error handling Reusable and abstracted To add the service proxy add: <script src="Scripts/ServiceProxy.js" type="text/javascript"></script> and then change the code to this slightly simpler version: <script type="text/javascript"> proxy = new ServiceProxy("PageMethodsService.asmx/"); $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); proxy.invoke("SendRegistration", { formVars: arForm }, function (result) { var jEl = $("#divMessage"); jEl.html(result).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, function (error) { alert(error.message); } ); } The code is not very different but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling in that the call always returns an error object regardless of a server error or a communication error unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls and especially if you need to deal with date values returned from the server  on the client. Summary Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code and without having to manually map form vars to properties and do string conversions. Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data and the built in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values. All that’s needed is a Method parameter to query/form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type so that you could more easily create open access service endpoints that can take advantage of urlencoded data that is everywhere in existing pages. 
It would make it much easier to create basic REST endpoints without complicated service configuration. Ah one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well. Additional Resources ServiceProxy.js A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion. Making jQuery Calls to WCF/ASMX with a ServiceProxy Client More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written. ww.jquery.js The West Wind West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded json encoding/decoding based on json2.js.© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  AJAX  

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
Interpret

After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything wrong, which parts actually take part and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. which don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are the ones to look at. First look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or the O/R mapper / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data.

Interpreting means that you follow the path from beginning to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time is spent in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that, when someone else looks at the application in the future and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix

After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (trading memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing a partial analysis: do the full analysis again, both .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
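The caching compromise mentioned above can be as simple as keeping the results of a repeated lookup in memory for the lifetime of a request or screen. The sketch below is a minimal, hand-rolled illustration of that trade-off; the FetchCustomerNamesFromDatabase method is a placeholder for whatever data-access call your application actually performs, and concerns like invalidation and thread-safety are deliberately left out.

using System.Collections.Generic;

// Minimal sketch of trading memory for fewer queries: the result of a lookup-style fetch is
// kept in a dictionary so repeated calls within the same scope don't hit the database again.
// FetchCustomerNamesFromDatabase is a placeholder for the real data-access call.
class CustomerNameLookupCache
{
    private readonly Dictionary<string, List<string>> cache = new Dictionary<string, List<string>>();

    public List<string> GetCustomerNames(string country)
    {
        List<string> names;
        if (!cache.TryGetValue(country, out names))
        {
            names = FetchCustomerNamesFromDatabase(country);  // the expensive call we want to avoid repeating
            cache[country] = names;
        }
        return names;
    }

    private List<string> FetchCustomerNamesFromDatabase(string country)
    {
        // Placeholder: in a real application this would issue a query through the O/R mapper.
        return new List<string> { "Customer A of " + country, "Customer B of " + country };
    }
}

Whether a cache like this is worth the extra memory is exactly what the interpret step should tell you: only add it if the analysis shows the same data is re-queried over and over.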
Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to step away from the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to the choice made. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you fix the hotspots you found in the interpretation step.

SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each order row. This means that for the single list you'll get not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers together with the orders; a small illustration of the difference follows after this list. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem.

Prefetch paths using many path nodes, sorting or limiting. An eager loading problem. Prefetch paths can help with performance, but as 1 query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that the data in every node is merged on the client into the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will in many cases join subtype and supertype tables, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to working through large sets of data. Use paging on the RDBMS if possible (so the query executed returns only the rows in the requested page). When using paging in a web application, be sure to switch server-side paging on in the datasource control used. Paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g.
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do the DISTINCT filtering on the client. This is a little slower, but it still performs the paging on the datareader, so it won't fetch all rows even if the query suggests it does.

Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see that there's a big difference between the query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

Doing client-side aggregates / scalar calculations by consuming a lot of data. If possible, try to formulate a scalar or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is consumed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.

Using .ToList() constructs inside Linq queries. It might be that you use .ToList() somewhere in a Linq query, which causes the query to run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T> and not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run; a corrected version of this query is shown after this list.

Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query directly on the database to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper, however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache.

Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop and then saving them. It might however be a compromise you don't want to take, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware that there's an RDBMS somewhere.
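To make the first and the Linq-related items above more concrete, here are two small sketches. The first simulates the SELECT N+1 effect without tying itself to any specific O/R mapper API: the Fetch... methods are stand-ins for the queries a mapper would issue, and the counter shows why displaying a lazily loaded column in a grid multiplies the number of queries.

using System;
using System.Collections.Generic;
using System.Linq;

// Minimal, mapper-agnostic sketch: lazy loading turns a grid of N orders into 1 + N queries,
// while eager loading (a prefetch path) needs only 2. The Fetch... methods are placeholders;
// each call stands for one round trip to the database.
class SelectNPlusOneDemo
{
    static int queriesExecuted = 0;

    static List<int> FetchOrderIds()
    {
        queriesExecuted++;                              // SELECT ... FROM [Order]
        return Enumerable.Range(1, 50).ToList();
    }

    static string FetchCustomerNameForOrder(int orderId)
    {
        queriesExecuted++;                              // SELECT ... FROM [Customer] WHERE ... (one per order)
        return "Customer of order " + orderId;
    }

    static Dictionary<int, string> FetchCustomerNamesForOrders(List<int> orderIds)
    {
        queriesExecuted++;                              // one query fetching the customers for all shown orders
        return orderIds.ToDictionary(id => id, id => "Customer of order " + id);
    }

    static void Main()
    {
        // Lazy loading: the CompanyName column triggers one extra query per row.
        var orders = FetchOrderIds();
        foreach (var id in orders)
        {
            var companyName = FetchCustomerNameForOrder(id);
        }
        Console.WriteLine("Lazy loading: {0} queries", queriesExecuted);    // 51

        // Eager loading: the customers are fetched in one extra query and merged client-side.
        queriesExecuted = 0;
        var pagedOrders = FetchOrderIds();
        var customers = FetchCustomerNamesForOrders(pagedOrders);
        Console.WriteLine("Eager loading: {0} queries", queriesExecuted);   // 2
    }
}

The second sketch is the corrected form of the .ToList() query shown in the list: keep the query on the IQueryable<T> source so the provider can translate the filter into a WHERE clause, and materialize only the filtered result.

// Corrected version of the earlier example; metaData.Customers is assumed to be the same
// IQueryable<T> source used in the list above. The filter now runs in the database, not in memory.
var q = from c in metaData.Customers
        where c.Country == "Norway"
        select c;
var norwegianCustomers = q.ToList();   // materializes only the filtered rows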
Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built. I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • I’m a dev is this relevant to me?

    - by simonsabin
    I was asked the question today whether the master class that Paul and Kimberley are running next month (http://www.regonline.co.uk/builder/site/tab1.aspx?EventID=860887 ) is relevant for someone who is a developer. Yes, yes, yes, yes. Consider it like your favourite album: there might be some songs that you hate, but the rest you love, and a couple in particular you will listen to all the time. If you are a developer then you will find that some of the stuff around backups and recovery might...(read more)

    Read the article

  • AspNetCompatibility in WCF Services &ndash; easy to trip up

    - by Rick Strahl
    This isn't the first time I've hit this particular wall: I'm creating a WCF REST service for AJAX callbacks and using the WebScriptServiceHostFactory host factory in the service: <%@ ServiceHost Language="C#" Service="WcfAjax.BasicWcfService" CodeBehind="BasicWcfService.cs" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %>   to avoid all configuration. Because of the Factory that creates the ASP.NET Ajax compatible format via the custom factory implementation, I can then remove all of the configuration settings that typically get dumped into the web.config file. However, I do want ASP.NET compatibility so I still leave in: <system.serviceModel> <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/> </system.serviceModel> in the web.config file. This option gives you access to the HttpContext.Current object and thereby to most of the standard ASP.NET request and response features. This is not recommended as a primary practice, but it can be useful in some scenarios and in backwards compatibility scenarios with ASP.NET AJAX Web Services. Now, here's where things get funky. Assuming you have the setting in web.config, if you now declare a service like this: [ServiceContract(Namespace = "DevConnections")] #if DEBUG [ServiceBehavior(IncludeExceptionDetailInFaults = true)] #endif public class BasicWcfService (or by using an interface that defines the service contract) you'll find that the service will not work when an AJAX call is made against it. You'll get a 500 error and a System.ServiceModel.ServiceActivationException. Worse, even with IncludeExceptionDetailInFaults enabled you get absolutely no indication from WCF of what the problem is. So what's the problem? The issue is that once you specify aspNetCompatibilityEnabled="true" in the configuration you *have to* specify the AspNetCompatibilityRequirements attribute with one of the modes that enables, or at least allows, it. You need either Required or Allowed: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)] Without it the service will simply fail without further warning. It will also fail if you set the attribute value to NotAllowed. The following also causes the service to fail as above: [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.NotAllowed)] This is not totally unreasonable, but it's a difficult issue to debug, especially since the configuration setting is global – if you have more than one service and one requires traditional ASP.NET access and one doesn't, then both must have the attribute specified. This is one reason why you'd want to avoid using this functionality unless absolutely necessary. WCF REST provides some basic access to some of the HTTP features after all, although what's there is severely limited. I also wish that ServiceActivation errors would provide more error information. Getting an Activation error without further info on what actually is wrong is pretty worthless, especially when the cause is a technicality like a mismatched configuration/attribute setting like this. © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, WCF, AJAX
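    Putting the pieces together, a service declaration that works with aspNetCompatibilityEnabled="true" looks roughly like the sketch below. The class and namespace names are taken from the snippet above; the GetServerTime operation is just a placeholder to show where HttpContext.Current becomes available.

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Activation;
    using System.Web;

    // Minimal sketch of a WCF AJAX service that opts into ASP.NET compatibility mode.
    // With aspNetCompatibilityEnabled="true" in web.config, the AspNetCompatibilityRequirements
    // attribute (Required or Allowed) must be present, otherwise activation fails as described above.
    // The operation shown is a placeholder, not part of the original service.
    [ServiceContract(Namespace = "DevConnections")]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
    public class BasicWcfService
    {
        [OperationContract]
        public string GetServerTime()
        {
            // ASP.NET compatibility gives access to HttpContext.Current and the ASP.NET pipeline.
            HttpContext.Current.Response.AppendHeader("X-Example-Header", "true");
            return DateTime.Now.ToString();
        }
    }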

    Read the article

  • Find a Hash Collision, Win $100

    - by Mike C
    Margarity Kerns recently published a very nice article at SQL Server Central on using hash functions to detect changes in rows during the data warehouse load ETL process. On the discussion page for the article I noticed a lot of the same old arguments against using hash functions to detect change. After having this same discussion several times over the past several months in public and private forums, I've decided to see if we can't put this argument to rest for a while. To that end I'm going to...(read more)

    Read the article

  • TechDays 2011 Sweden Videos

    - by Your DisplayName here!
    All the videos from the excellent Örebro event are now online. Dominick Baier: A Technical Introduction to the Windows Identity Foundation (watch) Dominick Baier & Christian Weyer: Securing REST-Services and Web APIs on the Windows Azure Platform (watch) Christian Weyer: Real World Azure - Elasticity from on-premise to the cloud and back (watch) Our interview with Robert (watch)

    Read the article

  • UV texture mapping with perspective correct interpolation

    - by Twodordan
    I am working on a software rasterizer for educational purposes and I am having issues with the texturing. The problem is, only one face of the cube gets correctly textured. The rest are stretched edges: You can see the running program online here. I have used cartesian coordinates, and all I do is interpolate the uv values along the scanlines. The general formula I use for interpolating the uv coordinates is pretty much the one I use for the z-buffering interpolation and looks like this (in this case for horizontal scanlines): u_Slope = (right.u - left.u) / (triangleRight_x - triangleLeft_x); v_Slope = (right.v - left.v) / (triangleRight_x - triangleLeft_x); //[...] new_u = left.u + ((currentX_onScanLine - triangleLeft_x) * u_Slope); new_v = left.v + ((currentX_onScanLine - triangleLeft_x) * v_Slope); Then, when I add each point to the pixel buffer, I restore z and uv: z = (1/z); uv.u = Math.round(uv.u * z *100);//*100 because my texture is 100x100px uv.v = Math.round(uv.v * z *100); Then I turn the u v indexes into one index in order to fetch the correct pixel from the image data (which is a 1 dimensional px array): var index = texture.width * uv.u + uv.v; //and the rest is unimportant imagedata[index].RGBA bla bla The interpolation formula is correct considering the consistency of the texture (including the straight stripes). However, I seem to get quite a lot of 0 values for either u or v. Which is probably why I only get one face right. Furthermore, why is the texture flipped horizontally? (the "1" is flipped) I must get some sleep now, but before I get into further dissecting of every single value to see what goes wrong, Can someone more experienced guess why might this be happening, just by looking at the cube? "I have no idea what I'm doing" (it's my first time implementing a rasterizer). Did I miss an important stage? Thanks for any insight. PS: My UV values are as follows: { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }, { u:0, v:0 }, { u:0, v:0.5 }, { u:0.5, v:0.5 }, { u:0.5, v:0 }

    Read the article

  • How do I fix “Ubuntu is running in low-graphics mode?” for NVIDIA GeForce GT555M

    - by David Chen
    As the title says, I'm using Ubuntu 10.04, and my Ubuntu install keeps showing the message “Ubuntu is running in low-graphics mode”. I've read another question on the same topic (http://askubuntu.com/questions/10664/how-do-i-fix-ubuntu-is-running-in-low-graphics-mode ), but that one concerns an ATI Radeon X1200. How can I fix the problem? I'm running Ubuntu on a 200GB partition, and the rest of my computer is Windows 7. My graphics card is an NVIDIA GeForce GT 555M, and my computer is an ACER ASPIRE 5951G.

    Read the article

  • What's the best way to re-install Ubuntu on my triple-boot system?

    - by TheX
    I have a triple boot system with Windows, Ubuntu 10.10 and 11.04, installed in that order, plus a "Data" Partition that has all my data, what I want to do is remove the 11.04, because I haven't booted into it for over a month, and I am afraid if I try to do any updates it will bork the system... I would like to install a fresh copy of 11.04 and leave the rest of the system intact, what is the best way to do this?

    Read the article

  • Up in the Air: Team Oracle Play-by-Play

    - by Aaron Lazenby
    Yesterday, I had the amazing opportunity to fly along with Sean D. Tucker and Team Oracle. Leaving from the San Carlos airport, we did a 30-minute flight over the Pacific just south of the coastal town of Half Moon Bay. In that half hour, I rode through a massive 4G loop, survived a crushing hammerhead, and took control of the plane to perform a basic wing over (you can learn what the heck I'm talking about by visiting this website). I have lots of great video, but it's going to take me some time to make sense of it. For now, here's my Twitter-based play-by-play of yesterday's events. Many thanks to Sean D. Tucker and the whole crew (Ben and Ian, especially) for this great opportunity to fly with Team Oracle.
    Live tweets from @OracleProfit:
    I will be spending the afternoon in a stunt plane, upside down above the San Francisco bay. http://bit.ly/cwkrkI
    At the San Carlos airport. More than slightly freaked out. Shaking hands diminish texting ability. Slightly reassuring. http://yfrog.com/1qt61nj
    There go the doors to the photo plane... #teamoracle http://yfrog.com/58ywlj
    Sean D Tucker assures me: "The sky is a great place to be." Helpful, but I'm still nervous. #teamoracle
    "You get a parachute. He gets a harness." How was this decision made? #teamoracle
    The plane with @radu43 has returned. I'm up next...
    Couldn't help myself...drank a soda before flying. Mistake? We'll see... #teamoracle
    Advice of the day "If you pull with two hands, you improve the chances of the chute deploying on the first try." Lovely. #teamoracle
    I feel so strange. But I flew a high performance airplane. And did an aerobatics move. Wild. #teamoracle
    "Flying ten feet off the ground, upside-down at 250 miles per hour isn't exciting to me." Sean D. Tucker #teamoracle
    "What is exciting to me is flying that perfect pattern, just like I imagined it in my head." Sean D. Tucker #teamoracle
    "You're going to sleep well tonight. You just carried four times your body weight." #teamoracle #gforce
    Just watched the #teamoracle plane take off for its flight home. I'm waiting for Caltrain. #undignifiedanticlimax
    Enough with the #teamoracle. Check http://blogs.oracle.com/profit for the video. Coming soon!

    Read the article

  • What is the usage of Splay Trees in the real world?

    - by Meena
    I decided to learn about balanced search trees, so I picked 2-3-4 and splay trees. What are some examples of splay tree usage in the real world? In this Cornell recitation (http://www.cs.cornell.edu/courses/cs3110/2009fa/recitations/rec-splay.html) I read that 'A good example is a network router'. But from the rest of the explanation it seems like network routers use hash tables and not splay trees, since the lookup time is constant instead of O(log n).

    Read the article

  • How open is open core and is that open enough

    ZDNet: "The idea is that you make the center of your product open source, but put the rest under a paid license. This is supposed to make your venture capital backers happy. You gain the benefits of open source but customers aren't “stealing” the software."

    Read the article

  • My thoughts on the future of the web with respect to flash, plugins, etc…

    - by joelvarty
    More than 10 years ago I was coding Java applets.  They were great at the time because I could reasonably expect them to run the same way in Netscape and Internet Explorer.  I could also reliably do asynchronous networking back to the server.  But then, Microsoft pulled their native Java runtime from Windows and Internet Explorer.  It got a lot harder to get applets running in people’s browsers. So I started writing ActiveX controls for IE and Java applets for Netscape. Then I switched to Flash, not for too long, but it was enough for me to see that it was a capable and curious implementation of animation, multimedia and script. I even wrote a few Silverlight controls, but then I stopped. I stepped back from all of the “richness” and “interactivity” and I thought about things like accessibility and SEO.  I wondered how my apps and sites might appear to the greater world.  I wondered how the developers I am working with, or who might be inheriting my code down the road, might interact with it. And I thought to myself, What the hell was I thinking? Those embedded controls are not what the web is about, and they run contrary to nearly all of the things that makes the web exciting and fosters innovation within and around.   Those plugins or controls, or whatever you want to refer to them as, are only stop-gaps that fill a hole in the basic HTML/Script/CSS specifications, and that’s all they should ever be used for.  Full stop.  Period.  For instance, I still make use of a nifty little flash control called SWFUpload because it lets me check file size before an upload starts.  I can do the same thing from a Silverlight control.  But rest assured, if I could do this from native javascript, I would in a second.  In fact, the only reason I chose SWFUpload over a ton of other alternatives is that it has a great javascript API so I can do (nearly) all of the UI in regular HTML.  And I ALWAYS provide a non-flash alternative for uploading, and for the rest of any website where the designer has insisted on some piece of creativity that requires flash (usually because the designer is also the flash developer, but that’s an aside…). The web is about openness, and about exposing that openness in such a way that it can be taken advantage of as a small part of a greater whole.  Sure we need security and authentication and ssl and all that stuff, but for me, its something more profound.  For me, the majority of what the web is, is about exposing something that delivers meaning.  What meaning can we derive from an <object> tag?   more later - joel

    Read the article

  • Cloud Application Management for Platforms

    - by user756764
    Today Oracle, along with CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG, published the Cloud Application Management for Platforms (CAMP) specification. This spec deals with application management in the context of PaaS. It defines a model (consisting of a set of resources and their relationships), a REST-based API for manipulating that model, and a packaging format for getting applications (and their attendant metadata) into and out of the platform. My colleague, Mark Carlson, has already provided an excellent writeup on the spec here. The following additional points bear emphasizing:
    CAMP is language, framework and platform neutral; it should be equally applicable to the task of deploying and managing Ruby on Rails applications as to Java/Spring applications (or Node.js applications, etc.).
    CAMP only covers the interactions between a Cloud Consumer and a Cloud Provider (using the definitions of these terms provided in the NIST Cloud Computing Reference Architecture). The internal APIs used by the Cloud Provider to, for example, deploy additional platform services (e.g. a new message queuing service) are out of CAMP's scope.
    CAMP supports the management of the entire lifecycle of the application (e.g. start/stop, suspend/resume, etc.), not just the deployment of the components that make up the application.
    Complexity is the antithesis of interoperability. One of CAMP's goals is to be as broadly interoperable as possible. To this end, the authors of CAMP tried to "make things as simple as possible, but no simpler". For example, JSON is the only serialization format used in the spec (although Providers can extend this to support additional serialization formats such as XML). It remains to be seen whether we can preserve this simplicity as the spec is processed by OASIS. So far, those who have indicated an interest in collaborating on the spec seem to be of a like mind with regard to the need for simplicity.
    The flip side of simplicity is the knowledge that you undoubtedly missed something that is important to someone. To make up for this, CAMP is designed to be extensible. The idea is to ship what we know will work, allow implementers to extend the spec, then re-factor the spec to incorporate the most popular extensions. Anyone interested in this effort, particularly those of you using PaaS-level services, is encouraged to join the forthcoming OASIS TC. As you may have noticed, CAMP is a bit of a departure from some of the more monolithic management standards that have preceded it. The idea is to develop simple, discrete standards targeted at specific interoperability and portability problems and to tie these standards together with common patterns based on REST and HATEOAS. I'm excited to see how this idea plays out.

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >