Search Results

Search found 10850 results on 434 pages for 'shihab returns'.


  • Perl help dereferencing a reference to an array of hash references, containing record set data

    - by user1724150
    I'm using an Amazon Perl module that returns a reference to an array of hash references as $record_sets, containing record set data, and I'm having a hard time dereferencing it. I can print the data using Data::Dumper, but I need to be able to manipulate it. Below is the documentation provided for the module. Thanks in advance:

        # list_resource_record_sets
        # Lists resource record sets for a hosted zone.
        # Called in scalar context:
        $record_sets = $r53->list_resource_record_sets(zone_id => '123ZONEID');

        # Returns: a reference to an array of hash references,
        # containing record set data. Example:
        $record_sets = [
            {
                name    => 'example.com.',
                type    => 'MX',
                ttl     => 86400,
                records => [ '10 mail.example.com' ]
            },
            {
                name    => 'example.com.',
                type    => 'NS',
                ttl     => 172800,
                records => [
                    'ns-001.awsdns-01.net.',
                    'ns-002.awsdns-02.net.',
                    'ns-003.awsdns-03.net.',
                    'ns-004.awsdns-04.net.'
                ]
            }
        ];
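
    For illustration, a minimal sketch of walking that structure once you have it (the hash keys come from the documentation above):

        for my $rs (@{ $record_sets }) {               # dereference the outer array ref
            print "$rs->{name} ($rs->{type}), TTL $rs->{ttl}\n";
            for my $record (@{ $rs->{records} }) {     # 'records' holds an array ref
                print "    $record\n";
            }
        }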

    Read the article

  • How to calculate Content-Length for a file download within Kohana PHP?

    - by moritzd
    I'm trying to send a file from within a Kohana model to the browser, but as soon as I add a Content-Length header, the file doesn't start downloading right away. The problem seems to be that Kohana is already filling output buffers. An ob_clean() at the beginning of the script doesn't help, and adding ob_get_length() to the Content-Length doesn't help either, since it just returns 0. The getFileSize() function returns the right number: if I run the script outside of Kohana, it works. I read that exit() still calls all destructors and that Kohana might output something afterwards, but I can't find out what exactly. Hope someone can help me out here. This is the piece of code I'm using:

        public function download()
        {
            header("Expires: ".gmdate("D, d M Y H:i:s", time() + (3600 * 7))." GMT");
            header("Content-Type: ".$this->getFileType());
            header("Content-Transfer-Encoding: binary");
            header("Last-Modified: ".gmdate("D, d M Y H:i:s", $this->getCreateTime())." GMT");
            header("Content-Length: ".($this->getFileSize() + ob_get_length()));
            header('Content-Disposition: attachment; filename="'.basename($this->getFileName()).'"');
            ob_end_flush();
            readfile($this->getFilePath());
            exit();
        }
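
    One thing worth trying (a generic PHP sketch, not Kohana-specific): discard every open output buffer before sending the headers, so nothing the framework has already buffered is sent ahead of the file or counted into the length:

        // drop all nested output buffers instead of flushing them to the client
        while (ob_get_level() > 0) {
            ob_end_clean();
        }
        header('Content-Length: '.$this->getFileSize());
        readfile($this->getFilePath());
        exit();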

    Read the article

  • JS: Storing dynamic variables across pages?

    - by user2467599
    I've been looking into local storage options and plugins like Persist.js, sessvars.js, and even Sisyphus.js, but I am unsure if any is the best fit (though I'm fairly certain I need to use one). Page one is a form with input fields for data like names, phones, and email. I have a button that replicates a wrapper div (and its inputs) for as long as more inputs are needed. When the form is filled, the user hits submit, which takes them to a 'confirmation'-type PHP page. I need to give the user an 'edit' button on page 2 that takes them back to page 1 with all the info left alone. For the most part everything returns fine, but if the user had hit the 'replicate' button before submission and then hits edit afterwards, all the inputs that were dynamically generated come back empty and the div no longer exists. Someone suggested that my variables are not persistent (when the replicate button is hit, an input with id="name1" becomes "name2" and so on), which is when I found out about the plugins mentioned before. Is there a way I can implement one of those plugins (or any other method) so that when the user returns to page one the div and its input values remain unchanged? And if I'm on the right track, are there any examples?
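
    For illustration, a minimal sketch of doing this with plain HTML5 localStorage and no plugin; the #wrapper id and the text-input markup are assumptions standing in for the real form:

        // page 1, just before submit: snapshot every input inside the wrapper
        var state = $('#wrapper :input').map(function () {
            return { id: this.id, value: this.value };
        }).get();
        localStorage.setItem('formState', JSON.stringify(state));

        // page 1, on load (i.e. after 'edit'): rebuild and refill the inputs
        $.each(JSON.parse(localStorage.getItem('formState') || '[]'), function (i, field) {
            if (!document.getElementById(field.id)) {
                // recreate the dynamic input, mirroring what the replicate button does
                $('#wrapper').append('<input type="text" id="' + field.id + '">');
            }
            $('#' + field.id).val(field.value);
        });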

    Read the article

  • Extending abstract classes in C#

    - by ng
    I am a Java developer and I have noticed some differences when extending abstract classes in C# as opposed to Java. I was wondering how a C# developer would achieve the following.

    1) Covariance

        public abstract class A
        {
            public abstract List<B> List();
        }

        public class BList : List<B>
        {
        }

        public abstract class C : A
        {
            public abstract BList List();
        }

    So in the above hierarchy there is covariance in C, where it returns a type compatible with what A returns. However, this gives me an error in Visual Studio. Is there a way to specify a covariant return type in C#?

    2) Adding a setter to a property

        public abstract class A
        {
            public abstract String Name { get; }
        }

        public abstract class B : A
        {
            public abstract String Name { get; set; }
        }

    Here the compiler complains of hiding. Any suggestions? Please do not suggest using interfaces unless that is the ONLY way to do this.
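
    For the covariance case, a sketch of one common workaround (one approach among several, not the only one): push the return type into a generic parameter on the base class, so each subtree of the hierarchy pins it down without hiding anything:

        public abstract class A<TList> where TList : List<B>
        {
            public abstract TList List();
        }

        public class BList : List<B>
        {
        }

        // callers of C now see List() typed as BList directly
        public abstract class C : A<BList>
        {
        }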

    Read the article

  • Delphi static method of a class returning property value

    - by mitko.berbatov
    I'm making a Delphi VCL application. There is a class TStudent in which I have two static functions: one that returns the last name of a student from an array of TStudent, and another that returns the first name. Their code is something like:

        class function TStudent.FirstNameOf(aLastName: string): string;
        var
          i: Integer;
        begin
          for i := 0 to Length(studentsArray) - 1 do
          begin
            if studentsArray[i].LastName = aLastName then
            begin
              Result := studentsArray[i].FirstName;
              Exit;
            end;
          end;
          Result := 'no match was found';
        end;

        class function TStudent.LastNameOf(aFirstName: string): string;
        var
          i: Integer;
        begin
          for i := 0 to Length(studentsArray) - 1 do
          begin
            if studentsArray[i].FirstName = aFirstName then
            begin
              Result := studentsArray[i].LastName;
              Exit;
            end;
          end;
          Result := 'no match was found';
        end;

    My question is how I can avoid writing almost the same code twice. Is there any way to pass the property as a parameter to the function?
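
    You can't pass a property itself, but here is a sketch of one way to collapse the duplication, assuming a Delphi version with anonymous methods (2009 or later); FieldOf is a hypothetical name:

        uses SysUtils; // for TFunc<T,TResult>

        class function TStudent.FieldOf(const aValue: string;
          matchField, resultField: TFunc<TStudent, string>): string;
        var
          i: Integer;
        begin
          for i := 0 to Length(studentsArray) - 1 do
            if matchField(studentsArray[i]) = aValue then
              Exit(resultField(studentsArray[i]));
          Result := 'no match was found';
        end;

        // FirstNameOf(aLastName) then becomes:
        //   TStudent.FieldOf(aLastName,
        //     function(s: TStudent): string begin Result := s.LastName; end,
        //     function(s: TStudent): string begin Result := s.FirstName; end);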

    Read the article

  • XNA, C# - Check if a Vector2 path crosses another Vector2 path.

    - by Nick
    Hello all, I have an XNA question for those with more experience in these matters than myself (maths). Background: I have a game that implements a boundary class, which simply holds two Vector2 objects, a start and an end point. The current implementation crudely handles collision detection by assuming boundaries are always vertical or horizontal, i.e. if start.X and end.X are the same, it checks that I am not trying to cross that X value, and so on. Ideally what I would like to implement is a method that accepts two Vector2 parameters: the first being the current location, the second being a requested location (where I would like to move to, assuming no objections). The method would also accept a boundary object. The method should then tell me if I am going to cross the boundary in this move. This could be a bool, or ideally something representing how far I can actually move. This empty method might explain it better than I can in words:

        /// <summary>
        /// Checks the move.
        /// </summary>
        /// <param name="current">The current.</param>
        /// <param name="requested">The requested.</param>
        /// <param name="boundry">The boundry.</param>
        /// <returns></returns>
        public bool CheckMove(Vector2 current, Vector2 requested, Boundry boundry)
        {
            // return a bool that indicates if the suggested move will cross the boundry.
            return true;
        }
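
    For illustration, a sketch of a standard 2D segment-segment intersection test using the cross product; it treats the move (current to requested) and the boundary (start to end) as two segments, and ignores the collinear-overlap edge case:

        private static float Cross(Vector2 a, Vector2 b)
        {
            return a.X * b.Y - a.Y * b.X;
        }

        // true if moving from current to requested would cross the boundary segment
        public static bool CrossesBoundary(Vector2 current, Vector2 requested,
                                           Vector2 boundaryStart, Vector2 boundaryEnd)
        {
            Vector2 r = requested - current;
            Vector2 s = boundaryEnd - boundaryStart;
            float rxs = Cross(r, s);
            if (rxs == 0f)
                return false;                 // parallel (or collinear) segments

            // t: how far along the move the crossing happens (0..1)
            // u: how far along the boundary it happens (0..1)
            float t = Cross(boundaryStart - current, s) / rxs;
            float u = Cross(boundaryStart - current, r) / rxs;
            return t >= 0f && t <= 1f && u >= 0f && u <= 1f;
        }

    Clamping the move to the boundary then falls out of t: the furthest safe position is roughly current + r * t.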

    Read the article

  • Variable won't echo

    - by jonnnnnnnnnie
    I have the following code, where the variable $username doesn't echo when you type in a value:

        // TODO: set auth token as a random hash, save in session
        $auth_token = rand();

        if (isset($_POST['action']) && $_POST['action'] == 'Login') {
            $errors = array(); // used to build up an array of errors which are then echoed
            $username = $_POST['username'];
            if ($username = '') {
                $errors['username'] = 'Username is required';
            }
            echo $username; // var_dump($username) returns string 0
        }

        require_once 'login_form.html.php';
        ?>

    login_form is this:

        <form method="POST" action="">
            <input type="hidden" name="auth_token" value="<?php echo $auth_token ?>">
            Username: <input type="text" name="username">
            Password: <input type="password" name="password1">
            <input type="submit" name="action" value="Login">
        </form>

    The auth token part isn't important; it's just that when I type a value in the username textbox and press the login button, the username won't echo, var_dump() returns string(0) and print_r() is just blank.
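
    For what it's worth, note the single = in the check: $username = '' assigns the empty string to $username, which matches these symptoms exactly. A sketch of the line as a comparison instead:

        if ($username == '') {          // or === '' for a strict comparison
            $errors['username'] = 'Username is required';
        }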

    Read the article

  • Obtaining XML from U.S. Postal Service (USPS) rate calculator API with PHP

    - by Chris F
    Hoping somebody here can help me. I'm attempting to pull an XML page from the U.S. Postal Service (USPS) rate calculator using PHP. Here is the code I am using (with my API login and password replaced, of course):

        <?php
        $api = "http://production.shippingapis.com/ShippingAPI.dll?API=RateV4&XML=<RateV4Request ".
               "USERID=\"MYUSERID\" PASSWORD=\"MYPASSWORD\"><Revision/><Package ID=\"1ST\">".
               "<Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType>".
               "<ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination>".
               "<Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size></Package></RateV4Request>";
        $xml_string = file_get_contents($api);
        $xml = simplexml_load_string($xml_string);
        ?>

    Pretty straightforward; however, it never returns anything. I can paste the URL directly into my browser's address bar:

        http://production.shippingapis.com/ShippingAPI.dll?API=RateV4&XML=<RateV4Request USERID="MYUSERID" PASSWORD="MYPASSWORD"><Revision/><Package ID="1ST"><Service>FIRST CLASS</Service><FirstClassMailType>PARCEL</FirstClassMailType><ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination><Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size></Package></RateV4Request>

    and it returns the XML I need, so I know the URL is valid. But I cannot seem to capture it using PHP. Any help would be tremendously appreciated. Thanks in advance.
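
    For illustration, a sketch of the same request with the XML query parameter properly URL-encoded, which is a common reason file_get_contents() fails on a URL that works in a browser (the browser encodes the spaces and quotes for you):

        <?php
        $xml = '<RateV4Request USERID="MYUSERID" PASSWORD="MYPASSWORD"><Revision/>'.
               '<Package ID="1ST"><Service>FIRST CLASS</Service>'.
               '<FirstClassMailType>PARCEL</FirstClassMailType>'.
               '<ZipOrigination>12345</ZipOrigination><ZipDestination>54321</ZipDestination>'.
               '<Pounds>0</Pounds><Ounces>9</Ounces><Container/><Size>REGULAR</Size>'.
               '</Package></RateV4Request>';

        $api = 'http://production.shippingapis.com/ShippingAPI.dll?'.
               http_build_query(array('API' => 'RateV4', 'XML' => $xml));

        $xml_string = file_get_contents($api);
        $result = simplexml_load_string($xml_string);
        ?>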

    Read the article

  • Displaying Query Results Horizontally

    - by AndyD273
    I am wondering if it is possible to take the results of a query and return them as a CSV string instead of as a column of cells. Basically, we have a table called Customers and a table called CustomerTypeLines, and each Customer can have multiple CustomerTypeLines. When I run a query against it, I run into problems when I want to check multiple types. For instance:

        SELECT *
        FROM Customers a
        INNER JOIN CustomerTypeLines b ON a.CustomerID = b.CustomerID
        WHERE b.CustomerTypeID = 14 AND b.CustomerTypeID = 66

    returns nothing, because a customer can't have both on the same line, obviously. In order to make it work, I had to add a field to Customers called CustomerTypes that looks like ,14,66,67, so I can do:

        WHERE a.CustomerTypes LIKE '%,14,%' AND a.CustomerTypes LIKE '%,66,%'

    which returns 85 rows. Of course this is a pain, because my program has to rebuild this field for the customer each time the CustomerTypeLines table is changed. It would be nice if I could do a subquery in my WHERE clause that would do the work for me, so instead of returning the results like:

        14
        66
        67

    it would return them like:

        ,14,66,67,

    Is this possible?
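
    For illustration, two sketches assuming SQL Server (MySQL has GROUP_CONCAT() for the second). The first answers the "has both types" question without the denormalized column; the second builds the ,14,66,67, string on the fly with the FOR XML PATH trick:

        -- customers that have BOTH type 14 and type 66
        SELECT a.CustomerID
        FROM Customers a
        INNER JOIN CustomerTypeLines b ON a.CustomerID = b.CustomerID
        WHERE b.CustomerTypeID IN (14, 66)
        GROUP BY a.CustomerID
        HAVING COUNT(DISTINCT b.CustomerTypeID) = 2;

        -- the CSV string, computed per customer instead of stored
        SELECT a.CustomerID,
               (SELECT ',' + CAST(b.CustomerTypeID AS varchar(10))
                FROM CustomerTypeLines b
                WHERE b.CustomerID = a.CustomerID
                FOR XML PATH('')) + ',' AS CustomerTypes
        FROM Customers a;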

    Read the article

  • Moq basic questions

    - by devoured elysium
    I made the following test for my class:

        var mock = new Mock<IRandomNumberGenerator>();
        mock.Setup(framework => framework.Generate(0, 50))
            .Returns(7.0);

        var rnac = new RandomNumberAverageCounter(mock.Object, 1, 100);
        rnac.Run();
        double result = rnac.GetAverage();
        Assert.AreEqual(result, 7.0, 0.1);

    The problem here was that I changed my mind about what range of values Generate(int min, int max) would use. In Mock.Setup() I defined the range as 0 to 50, while later I actually called Generate() with a range from 1 to 100. I ran the test and it failed. I know that is what's supposed to happen, but I was left wondering whether there is a way to throw an exception or emit a message when the method is run with the wrong parameters. Also, if I want to run this Generate() method 10 times with different values (say, from 1 to 10), do I have to write 10 mock setups, or is there a special method for it? The best I could think of is this (which isn't bad, I'm just asking if there is a better way):

        for (int i = 1; i < 10; ++i) {
            mock.Setup(framework => framework.Generate(1, 100))
                .Returns((double)i);
        }
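
    For illustration, a sketch of two Moq features that address both questions:

        // 1) Strict mocks throw a MockException on any call that has no matching
        //    setup, so Generate(1, 100) fails loudly instead of returning a default.
        var mock = new Mock<IRandomNumberGenerator>(MockBehavior.Strict);

        // 2) A single setup can match any arguments and compute the return value
        //    from them, instead of ten separate setups.
        mock.Setup(f => f.Generate(It.IsAny<int>(), It.IsAny<int>()))
            .Returns((int min, int max) => (double)min);

    Depending on your Moq version, SetupSequence() is also available to return a different value on each successive call.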

    Read the article

  • How do I use jquery to both download & delete files dynamically from servlet

    - by Adam
    Is it possible to use a jQuery $.get() to call a servlet and use it either to download a file or to update the page, without reloading the page? (Or more basically: can I download a file without reloading the page?) For example, I am using a servlet that either returns a file to download of mimetype "application/octet-stream", or returns text to be updated in the page of type "text/html". I can write a form with a submit, but then it reloads the page, so I've been trying to use $.get()... but the download doesn't work.

        <script type="text/javascript">
        jQuery(document).ready(function () {
            $("#handleFileOptions button").button();
        });

        function handleFilesSubmit(requestType) {
            $.get('FileServlet', {
                filename: $('#radioFileList input:radio:checked').button("widget").text(),
                requestType: requestType
            }, function (data) {
                // ...?...
            });
        }
        </script>

    In the HTML:

        <div id="handleFiles">
            <div id="radioFileList">
                <input value="file0.txt" type="radio" id="fileitem0"><label for="fileitem0">file0.txt</label>
                <input value="file1.txt" type="radio" id="fileitem1"><label for="fileitem1">file1.txt</label>
            </div>
            <div id="handleFileOptions">
                <button id="handleFileOption0" onclick="handleFilesSubmit('Download')">Download</button>
                <button id="handleFileOption1" onclick="handleFilesSubmit('Delete')">Delete</button>
            </div>
        </div>
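
    An Ajax response can't be saved as a file by the browser, but navigating to the URL doesn't unload the page when the response carries a Content-Disposition: attachment header. For illustration, a sketch that downloads by navigation and deletes via $.get():

        function handleFilesSubmit(requestType) {
            var params = {
                filename: $('#radioFileList input:radio:checked').val(),
                requestType: requestType
            };
            if (requestType === 'Download') {
                // triggers the save-as dialog; the current page stays put
                window.location.href = 'FileServlet?' + $.param(params);
            } else {
                $.get('FileServlet', params, function (data) {
                    // update the file list with the returned HTML
                    $('#radioFileList').html(data);
                });
            }
        }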

    Read the article

  • Color difference between Vista and Win7

    - by MSGrimpeur
    I have an indicator in the form of an image which is displayed in a graphics viewport. The indicator can be any colour the user selects, so we created a single image with a palette and change a specific color in the palette to the one the user picks, using the following code:

        /// <summary>
        /// Copies the image and sets transparency and fill colour of the copy. The image is
        /// intended to be a simple filled shape, such as a square, with the inside all in one colour.
        /// </summary>
        /// <remarks>Assumes the fill colour to be changed is Red, black is the boundary colour,
        /// and off white (RGB 233,233,233) is the colour to be made transparent.</remarks>
        /// <param name="image"></param>
        /// <param name="fillColour"></param>
        /// <returns></returns>
        protected Bitmap CopyWithStyle(Bitmap image, Color fillColour)
        {
            ColorPalette selectionIndicatorPalette = image.Palette;
            int fillColourIndex = selectionIndicatorPalette.IndexOf(Color.Red);
            selectionIndicatorPalette.Entries[fillColourIndex] = fillColour;
            image.Palette = selectionIndicatorPalette;

            Bitmap tempImage = image;
            tempImage.MakeTransparent(transparentColour);
            return tempImage;
        }

    To be honest, I'm not sure if this is a bit cludgy or whether there is some smarter approach, so any thoughts there would help. However, the main issue is that this appears to work fine on Win7, but on Vista and XP the color does not change. Has anyone seen this before? I've found one or two articles that suggest there are some differences in ARGB handling between them, but nothing particularly concrete. Any help gratefully accepted.
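
    As an alternative worth considering (a sketch, not a diagnosis of the Vista behaviour): skip palette mutation entirely and remap the colour at draw time with ImageAttributes, which behaves the same for indexed and non-indexed images; target is a hypothetical destination bitmap:

        using System.Drawing;
        using System.Drawing.Imaging;

        // remap Red -> user colour while drawing, leaving the source image untouched
        var attributes = new ImageAttributes();
        attributes.SetRemapTable(new[]
        {
            new ColorMap { OldColor = Color.Red, NewColor = fillColour }
        });

        using (Graphics g = Graphics.FromImage(target))
        {
            g.DrawImage(image,
                new Rectangle(0, 0, image.Width, image.Height),
                0, 0, image.Width, image.Height,
                GraphicsUnit.Pixel, attributes);
        }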

    Read the article

  • How can I filter using the current value of a text box in a jQuery selector?

    - by spudly
    I have a validation script that checks for data-* attributes to determine which fields are required. If an element has the data-required-if attribute (which has a jQuery selector as its value), the script checks to see if any elements are found that match that selector. If any are found, the field is required. It does something similar to the following:

        $('[data-required-if]').each(function () {
            var selector = $(this).attr('data-required-if'),
                required = false;
            if (!$(selector).length) {
                required = true;
                // do something if this element is empty
            }
        });

    This works great, but there's a problem: when you use the attribute selector to filter on the current value of a text field, it really filters on the initial value of the text field.

        <input type='text' id='myinput' value='initial text' />
        <input type='text' id='dependent_input' value='' data-required-if="#myinput[value='']" />

        // Step 1: type "foobar" in #myinput
        // Step 2: run these lines of code:
        <script>
        $('#myinput').val()                   // => returns "foobar"
        $('#myinput[value="foobar"]').length  // => returns 0
        </script>

    I understand why it's doing that; jQuery probably uses getAttribute() in the background. Is there any other way to filter on the current value of an input box using purely jQuery selectors?
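
    For illustration, a sketch of sidestepping the attribute/property mismatch with .filter(), which tests the live value rather than the value attribute:

        // matches while the box is currently empty, regardless of its initial markup
        var isEmpty = $('#myinput').filter(function () {
            return this.value === '';
        }).length > 0;

    A pure selector string can't see the live property, so one option is to keep the data-required-if value as a plain element selector and apply a check like the one above in the validation loop.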

    Read the article

  • MVC2 AJAX - determining UpdateTargetId based on the returned data

    - by DanielJW
    The scenario: I'm creating a login form for an MVC2 application.

    How I'm doing it: The form submits to an MVC2 action which validates the username/password. If validation fails, the action returns the form (a partial view) for the user to try again. If it passes, the action returns the page the user was visiting before they logged in (a view).

    What I want to happen:
    1 - When the form is submitted and the user validates successfully, the returned result should replace the current page (like what happens if you don't set an UpdateTargetId).
    2 - When the form is submitted and the user fails validation, the returned result should replace the form (like what happens if you set the UpdateTargetId to the form's containing element).

    The problem: I can make both of those things work, but not at the same time. I can either have it always replace the current page, or always replace just the contents of the UpdateTargetId element. But I need it to do either, depending on whether the user validated successfully or not.

    What I need: The ideal solution would be to examine the result of the Ajax request and determine whether to use the UpdateTargetId (replacing just the form) or not (replacing the whole page). I expect it would involve some work with jQuery (assuming it's possible), but I'm not yet good enough with jQuery to figure out how to do it myself. If it can't be done this way, I'm also open to other methods/solutions that work in a similar fashion. Thanks in advance.
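
    For illustration, a sketch of taking over the submit with jQuery and branching on a marker in the response; the #login-form and #form-container ids and the marker convention are assumptions, not part of the MVC helpers:

        $('#login-form').submit(function () {
            $.post(this.action, $(this).serialize(), function (html) {
                if (html.indexOf('id="login-form"') !== -1) {
                    // validation failed: the action returned the partial view again
                    $('#form-container').html(html);
                } else {
                    // success: the action returned a whole page, so replace the document
                    document.open();
                    document.write(html);
                    document.close();
                }
            });
            return false; // stop the normal (full-page) post
        });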

    Read the article

  • Minimizing MySQL output with Compress() and by concatenating results?

    - by johnrl
    Hi all. It is crucial that I transfer the least amount of data possible between server and client, so I thought of using the MySQL COMPRESS() function. To get the maximum compression I also want to concatenate all my results into one large string (or several, up to the maximum length allowed by MySQL), to allow similar results to compress well, and then compress that string.

    1st problem (concatenating MySQL results):

        SELECT name, age FROM users

    returns 10 results. I want to concatenate all these results into one string of the form name,age,name,age,name,age... and so on. Is this possible?

    2nd problem (compressing the results from above): When I have constructed the concatenated string as above, I want to compress it. If I do:

        SELECT COMPRESS('myname');

    then it just gives me as output the character '-', and sometimes it even returns unprintable characters. How do I get COMPRESS() to return a compressed printable string that I can transfer in e.g. ASCII encoding?
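
    For illustration, a sketch of both steps in MySQL; note that GROUP_CONCAT() is capped by the group_concat_max_len variable (1024 bytes by default), and COMPRESS() returns binary data by design, so it has to be wrapped to become printable:

        -- 1st problem: one CSV string for the whole result set
        SELECT GROUP_CONCAT(name, ',', age SEPARATOR ',') FROM users;

        -- raise the cap if the concatenated string can get long
        SET SESSION group_concat_max_len = 1000000;

        -- 2nd problem: hex-encode the compressed bytes to get printable output
        SELECT HEX(COMPRESS(
            (SELECT GROUP_CONCAT(name, ',', age SEPARATOR ',') FROM users)
        ));

    Hex encoding doubles the size, which can defeat the point; compressing at the transport layer or in the application usually beats compressing inside the query.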

    Read the article

  • Very weird jquery/json problem...

    - by Scarface
    Hey guys, I have finally located the cause of this problem... I just have no idea how to fix it or why it is happening. I have a jQuery getJSON function and it returns 0 results every 2-5 clicks on new topics or refreshes. For some reason, if I change my query to sort results ASC it always returns results, and much more quickly, but this poses a problem since I need the results sorted DESC. If anyone has any ideas, I would greatly appreciate it, because I am dumbfounded at this point. Here is my query, but I need it by DESC:

        SELECT time, user, message FROM comments
        WHERE topic_id='$topic_id'
        ORDER BY time ASC LIMIT 10

        $.getJSON(files + "comments.php?action=view&load=initial&topic_id=" + topic_id + "&t=" + (new Date()), function (json) {
            if (json.length) {
                for (i = 0; i < json.length; i++) {
                    $('#comment-list').prepend(prepare(json[i]));
                    $('#list-' + count).fadeIn(1500);
                }
            }
        });

    I return results like so:

        while ($row = mysql_fetch_array($res)) {
            $data[] = $row;
        }
        $out = json_encode($data);
        print $out;
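
    For illustration, a sketch of one way to keep the cheap ASC scan but still deliver the newest rows: select the latest 10 in a DESC subquery, then re-sort them in the outer query:

        SELECT time, user, message
        FROM (
            SELECT time, user, message
            FROM comments
            WHERE topic_id = '$topic_id'
            ORDER BY time DESC
            LIMIT 10
        ) AS latest
        ORDER BY time ASC;

    An index on (topic_id, time) would let MySQL serve either sort direction efficiently.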

    Read the article

  • How to change a recursive function to count files and catalogues?

    - by user661999
        <?php
        function scan_dir($dirname)
        {
            $file_count = 0;
            $dir_count = 0;
            $dir = opendir($dirname);
            while (($file = readdir($dir)) !== false) {
                if ($file != "." && $file != "..") {
                    if (is_file($dirname."/".$file))
                        ++$file_count;
                    if (is_dir($dirname."/".$file)) {
                        ++$dir_count;
                        scan_dir($dirname."/".$file);
                    }
                }
            }
            closedir($dir);
            echo "There are $dir_count catalogues and $file_count files.<br>";
        }

        $dirname = "/home/user/path";
        scan_dir($dirname);
        ?>

    Hello, I have a recursive function for counting files and catalogues, but it reports a result for each catalogue separately and I need a combined total. How do I change the script? It currently returns:

        There are 0 catalogues and 3 files.
        There are 0 catalogues and 1 files.
        There are 2 catalogues and 14 files.

    I want:

        There are 2 catalogues and 18 files.
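
    The counters reset on every recursive call because they are local to it. For illustration, a sketch that returns the counts and accumulates them in the caller:

        <?php
        function scan_dir($dirname)
        {
            $counts = array('dirs' => 0, 'files' => 0);
            foreach (scandir($dirname) as $file) {
                if ($file === '.' || $file === '..') continue;
                $path = $dirname.'/'.$file;
                if (is_file($path)) {
                    ++$counts['files'];
                } elseif (is_dir($path)) {
                    ++$counts['dirs'];
                    $sub = scan_dir($path);               // fold the subtree's totals in
                    $counts['dirs']  += $sub['dirs'];
                    $counts['files'] += $sub['files'];
                }
            }
            return $counts;
        }

        $c = scan_dir('/home/user/path');
        echo "There are {$c['dirs']} catalogues and {$c['files']} files.<br>";
        ?>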

    Read the article

  • jQuery variable and object caching

    - by niksy
    This is something that has been bugging me for some time, and every time I have found myself using a different solution for it. I have a link in my document which, on click, creates a new element with some ID:

        <a href="#" id="test-link">Link</a>

    For easier reuse, I would like to store that new element's ID in a variable which is a jQuery object:

        var test = $('#test');

    On click I append that new element, a DIV, to the body:

        $('body').append('<div id="test"/>');

    And here is the main "problem": if I test this new element's length with test.length, it first returns 0 and only later 1. But when I test it with $('#test').length, it returns 1 from the start. I suppose it is some caching mechanism, and I was wondering whether there is a better, all-around solution that allows storing elements in variables at the start for later reuse while still working with dynamically created elements. live(), delegate(), something else? What I do sometimes is create the string and add it to a jQuery object, but I think this just avoids the real issue. The same goes for using .find() inside another jQuery object. Thanks in advance.
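
    For illustration: a jQuery object is a snapshot of the elements that matched at the moment you built it, not a live query, so the usual pattern is to re-query (or store only the selector string) once the element exists:

        var test = $('#test');            // before the append: test.length === 0, and it stays 0
        $('body').append('<div id="test"/>');
        test = $('#test');                // re-query after the DOM changes: length is now 1

        // for events on elements that don't exist yet, delegate instead
        // (jQuery 1.7+: .on(); older versions: .delegate() or .live())
        $(document).on('click', '#test', function () { /* ... */ });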

    Read the article

  • Android Service Testing with messages

    - by Sandeep Dhull
    I have a service which does its work (performing a network operation) depending on the type of message (message.what). It then returns the response, also as a message, to the requesting component (using message.replyTo). Now I am trying to write the test cases, but how? The architecture of my service is like this:

    1) A component (e.g. an Activity) binds to the service.
    2) The component sends a message to the service (using a Messenger).
    3) The service has a nested class that handles the message, executes the network call, and returns a response as a message to the sender who initially sent the message (via its replyTo property).

    To test this I am using JUnit test cases, where:

    1) in setUp() I am binding to the service,
    2) in testBusinessLogic() I am sending the message to the service.

    Now the problem is where to get the response message.
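
    For illustration, a sketch using the framework's ServiceTestCase, with a HandlerThread backing the reply Messenger and a latch to wait for the response; MyService and its MSG_* constants are hypothetical stand-ins for your service:

        import android.content.Intent;
        import android.os.*;
        import android.test.ServiceTestCase;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;

        public class MyServiceTest extends ServiceTestCase<MyService> {

            public MyServiceTest() { super(MyService.class); }

            public void testBusinessLogic() throws Exception {
                IBinder binder = bindService(new Intent(getContext(), MyService.class));

                final CountDownLatch latch = new CountDownLatch(1);
                final Message[] reply = new Message[1];

                // the reply handler needs a looper of its own inside a test
                HandlerThread replyThread = new HandlerThread("reply");
                replyThread.start();
                Messenger replyTo = new Messenger(new Handler(replyThread.getLooper()) {
                    @Override public void handleMessage(Message msg) {
                        reply[0] = Message.obtain(msg);   // copy before it is recycled
                        latch.countDown();
                    }
                });

                Message request = Message.obtain(null, MyService.MSG_REQUEST);
                request.replyTo = replyTo;
                new Messenger(binder).send(request);

                assertTrue("no reply within 5s", latch.await(5, TimeUnit.SECONDS));
                assertEquals(MyService.MSG_RESPONSE, reply[0].what);
            }
        }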

    Read the article

  • How to use strtok in C properly so there is no memory leak?

    - by user246392
    I am somewhat confused by what happens when you call strtok on a char pointer in C. I know that it modifies the contents of the string, so if I call strtok on a variable named 'line', its content will change. Assume I follow the approach below:

        void myFunc(char* line)
        {
            // get a pointer to the original memory block
            char* garbageLine = line;

            // Do some work
            // Call strtok on 'line' multiple times until it returns NULL
            // Do more work

            free(garbageLine);
        }

    Further assume that 'line' is malloc'ed before it is passed to myFunc. Am I supposed to free the original string after using strtok, or does it do the job for us? Also, what happens if 'line' is not malloc'ed and I attempt to use the function above? Is it safer to do the following instead? (Assume the programmer won't call free if he knows the line is not malloc'ed.)

    Invocation:

        char* garbageLine = line;
        myFunc(line);
        free(garbageLine);

    Function definition:

        void myFunc(char* line)
        {
            // Do some work
            // Call strtok on 'line' multiple times until it returns NULL
            // Do more work
        }
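
    For illustration, a sketch of the key point: strtok() allocates nothing, so it has nothing of its own to leak or free; every token it returns points into the buffer you gave it. You free the buffer exactly when you would have freed it anyway, and only if it came from malloc:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            char *line = strdup("a,b,c");       /* heap copy: we own it */
            for (char *tok = strtok(line, ","); tok != NULL; tok = strtok(NULL, ","))
                printf("%s\n", tok);            /* tokens live inside line's buffer */
            free(line);                         /* free once, after tokenizing */

            char stack_line[] = "a,b,c";        /* automatic storage: never free this */
            strtok(stack_line, ",");
            return 0;
        }

    Note that strtok() keeps static state between calls, so don't free the buffer in the middle of tokenizing it, and avoid interleaving two tokenizations.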

    Read the article

  • How do I create a self referential association (self join) in a single class using ActiveRecord in Rails?

    - by Daniel Chang
    I am trying to create a self-join table that represents a list of customers who can refer each other (perhaps to a product or a program). I am trying to limit my model to just one class, "Customer". The schema is:

        create_table "customers", force: true do |t|
          t.string   "name"
          t.integer  "referring_customer_id"
          t.datetime "created_at"
          t.datetime "updated_at"
        end

        add_index "customers", ["referring_customer_id"], name: "index_customers_on_referring_customer_id"

    My model is:

        class Customer < ActiveRecord::Base
          has_many :referrals, class_name: "Customer",
                               foreign_key: "referring_customer_id",
                               conditions: { :referring_customer_id => :id }
          belongs_to :referring_customer, class_name: "Customer",
                                          foreign_key: "referring_customer_id"
        end

    I have no problem accessing a customer's referring_customer:

        @customer.referring_customer.name

    returns the name of the customer that referred @customer. However, I keep getting an empty array when accessing referrals:

        @customer.referrals

    returns []. I ran binding.pry to see what SQL was being run, given a customer who has a referrer and should have several referrals. This is the SQL being executed:

        Customer Load (0.3ms)  SELECT "customers".* FROM "customers" WHERE "customers"."id" = ? ORDER BY "customers"."id" ASC LIMIT 1  [["id", 2]]
        Customer Exists (0.2ms)  SELECT 1 AS one FROM "customers" WHERE "customers"."referring_customer_id" = ? AND "customers"."referring_customer_id" = 'id' LIMIT 1  [["referring_customer_id", 3]]

    I'm a bit lost and am unsure where my problem lies. I don't think my query is correct: @customer.referrals should return an array of all the customers who have @customer.id as their referring_customer_id.
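
    For illustration, note the generated SQL compares referring_customer_id to the literal string 'id': that comes from the conditions hash, which the foreign_key option already makes redundant. A sketch of the association without it:

        class Customer < ActiveRecord::Base
          has_many :referrals, class_name: "Customer",
                               foreign_key: "referring_customer_id"
          belongs_to :referring_customer, class_name: "Customer"
        end

    With only foreign_key set, @customer.referrals generates WHERE "customers"."referring_customer_id" = @customer.id, which is the intended query.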

    Read the article

  • Loading jQuery Consistently in a .NET Web App

    - by Rick Strahl
    One thing that frequently comes up in discussions when using jQuery is how to best load the jQuery library (as well as other commonly used and updated libraries) in a Web application. Specifically the issue is the one of versioning and making sure that you can easily update and switch versions of script files with application wide settings in one place and having your script usage reflect those settings in the entire application on all pages that use the script. Although I use jQuery as an example here, the same concepts can be applied to any script library - for example in my Web libraries I use the same approach for jQuery.ui and my own internal jQuery support library. The concepts used here can be applied both in WebForms and MVC. Loading jQuery Properly From CDN Before we look at a generic way to load jQuery via some server logic, let me first point out my preferred way to embed jQuery into the page. I use the Google CDN to load jQuery and then use a fallback URL to handle the offline or no Internet connection scenario. Why use a CDN? CDN links tend to be loaded more quickly since they are very likely to be cached in user's browsers already as jQuery CDN is used by many, many sites on the Web. Using a CDN also removes load from your Web server and puts the load bearing on the CDN provider - in this case Google - rather than on your Web site. On the downside, CDN links gives the provider (Google, Microsoft) yet another way to track users through their Web usage. Here's how I use jQuery CDN plus a fallback link on my WebLog for example: <!DOCTYPE HTML> <html> <head> <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> <script> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript " + "src='/Weblog/wwSC.axd?r=Westwind.Web.Controls.Resources.jquery.js' %3E%3C/script%3E")); </script> <title>Rick Strahl's Web Log</title> ... </head>   You can see that the CDN is referenced first, followed by a small script block that checks to see whether jQuery was loaded (jQuery object exists). If it didn't load another script reference is added to the document dynamically pointing to a backup URL. In this case my backup URL points at a WebResource in my Westwind.Web  assembly, but the URL can also be local script like src="/scripts/jquery.min.js". Important: Use the proper Protocol/Scheme for  for CDN Urls [updated based on comments] If you're using a CDN to load an external script resource you should always make sure that the script is loaded with the same protocol as the parent page to avoid mixed content warnings by the browser. You don't want to load a script link to an http:// resource when you're on an https:// page. The easiest way to use this is by using a protocol relative URL: <script src="//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script> which is an easy way to load resources from other domains. This URL syntax will automatically use the parent page's protocol (or more correctly scheme). As long as the remote domains support both http:// and https:// access this should work. BTW this also works in CSS (with some limitations) and links. BTW, I didn't know about this until it was pointed out in the comments. This is a very useful feature for many things - ah the benefits of my blog to myself :-) Version Numbers When you use a CDN you notice that you have to reference a specific version of jQuery. 
When using local files you may not have to do this as you can rename your private copy of jQuery.js, but for CDN the references are always versioned. The version number is of course very important to ensure you getting the version you have tested with, but it's also important to the provider because it ensures that cached content is always correct. If an existing file was updated the updates might take a very long time to get past the locally cached content and won't refresh properly. The version number ensures you get the right version and not some cached content that has been changed but not updated in your cache. On the other hand version numbers also mean that once you decide to use a new version of the script you now have to change all your script references in your pages. Depending on whether you use some sort of master/layout page or not this may or may not be easy in your application. Even if you do use master/layout pages, chances are that you probably have a few of them and at the very least all of those have to be updated for the scripts. If you use individual pages for all content this issue then spreads to all of your pages. Search and Replace in Files will do the trick, but it's still something that's easy to forget and worry about. Personaly I think it makes sense to have a single place where you can specify common script libraries that you want to load and more importantly which versions thereof and where they are loaded from. Loading Scripts via Server Code Script loading has always been important to me and as long as I can remember I've always built some custom script loading routines into my Web frameworks. WebForms makes this fairly easy because it has a reasonably useful script manager (ClientScriptManager and the ScriptManager) which allow injecting script into the page easily from anywhere in the Page cycle. What's nice about these components is that they allow scripts to be injected by controls so components can wrap up complex script/resource dependencies more easily without having to require long lists of CSS/Scripts/Image includes. In MVC or pure script driven applications like Razor WebPages  the process is more raw, requiring you to embed script references in the right place. But its also more immediate - it lets you know exactly which versions of scripts to use because you have to manually embed them. In WebForms with different controls loading resources this often can get confusing because it's quite possible to load multiple versions of the same script library into a page, the results of which are less than optimal… In this post I look a simple routine that embeds jQuery into the page based on a few application wide configuration settings. It returns only a string of the script tags that can be manually embedded into a Page template. It's a small function that merely a string of the script tags shown at the begging of this post along with some options on how that string is comprised. You'll be able to specify in one place which version loads and then all places where the help function is used will automatically reflect this selection. Options allow specification of the jQuery CDN Url, the fallback Url and where jQuery should be loaded from (script folder, Resource or CDN in my case). While this is specific to jQuery you can apply this to other resources as well. For example I use a similar approach with jQuery.ui as well using practically the same semantics. 
Providing Resources in ControlResources In my Westwind.Web Web utility library I have a class called ControlResources which is responsible for holding resource Urls, resource IDs and string contants that reference those resource IDs. The library also provides a few helper methods for loading common scriptscripts into a Web page. There are specific versions for WebForms which use the ClientScriptManager/ScriptManager and script link methods that can be used in any .NET technology that can embed an expression into the output template (or code for that matter). The ControlResources class contains mostly static content - references to resources mostly. But it also contains a few static properties that configure script loading: A Script LoadMode (CDN, Resource, or script url) A default CDN Url A fallback url They are  static properties in the ControlResources class: public class ControlResources { /// <summary> /// Determines what location jQuery is loaded from /// </summary> public static JQueryLoadModes jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryCdnUrl = "//ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"; /// <summary> /// jQuery CDN Url on Google /// </summary> public static string jQueryUiCdnUrl = "//ajax.googleapis.com/ajax/libs/jqueryui/1.8.16/jquery-ui.min.js"; /// <summary> /// jQuery UI fallback Url if CDN is unavailable or WebResource is used /// Note: The file needs to exist and hold the minimized version of jQuery ui /// </summary> public static string jQueryUiLocalFallbackUrl = "~/scripts/jquery-ui.min.js"; } These static properties are fixed values that can be changed at application startup to reflect your preferences. Since they're static they are application wide settings and respected across the entire Web application running. It's best to set these default in Application_Init or similar startup code if you need to change them for your application: protected void Application_Start(object sender, EventArgs e) { // Force jQuery to be loaded off Google Content Network ControlResources.jQueryLoadMode = JQueryLoadModes.ContentDeliveryNetwork; // Allow overriding of the Cdn url ControlResources.jQueryCdnUrl = "http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js"; // Route to our own internal handler App.OnApplicationStart(); } With these basic settings in place you can then embed expressions into a page easily. In WebForms use: <!DOCTYPE html> <html> <head runat="server"> <%= ControlResources.jQueryLink() %> <script src="scripts/ww.jquery.min.js"></script> </head> In Razor use: <!DOCTYPE html> <html> <head> @Html.Raw(ControlResources.jQueryLink()) <script src="scripts/ww.jquery.min.js"></script> </head> Note that in Razor you need to use @Html.Raw() to force the string NOT to escape. Razor by default escapes string results and this ensures that the HTML content is properly expanded as raw HTML text. 
Both the WebForms and Razor output produce: <!DOCTYPE html> <html> <head> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> if (typeof (jQuery) == 'undefined') document.write(unescape("%3Cscript src='/WestWindWebToolkitWeb/WebResource.axd?d=-b6oWzgbpGb8uTaHDrCMv59VSmGhilZP5_T_B8anpGx7X-PmW_1eu1KoHDvox-XHqA1EEb-Tl2YAP3bBeebGN65tv-7-yAimtG4ZnoWH633pExpJor8Qp1aKbk-KQWSoNfRC7rQJHXVP4tC0reYzVw2&t=634535391996872492' type='text/javascript'%3E%3C/script%3E"));</script> <script src="scripts/ww.jquery.min.js"></script> </head> which produces the desired effect for both CDN load and fallback URL. The implementation of jQueryLink is pretty basic of course: /// <summary> /// Inserts a script link to load jQuery into the page based on the jQueryLoadModes settings /// of this class. Default load is by CDN plus WebResource fallback /// </summary> /// <param name="url"> /// An optional explicit URL to load jQuery from. Url is resolved. /// When specified no fallback is applied /// </param> /// <returns>full script tag and fallback script for jQuery to load</returns> public static string jQueryLink(JQueryLoadModes jQueryLoadMode = JQueryLoadModes.Default, string url = null) { string jQueryUrl = string.Empty; string fallbackScript = string.Empty; if (jQueryLoadMode == JQueryLoadModes.Default) jQueryLoadMode = ControlResources.jQueryLoadMode; if (!string.IsNullOrEmpty(url)) jQueryUrl = WebUtils.ResolveUrl(url); else if (jQueryLoadMode == JQueryLoadModes.WebResource) { Page page = new Page(); jQueryUrl = page.ClientScript.GetWebResourceUrl(typeof(ControlResources), ControlResources.JQUERY_SCRIPT_RESOURCE); } else if (jQueryLoadMode == JQueryLoadModes.ContentDeliveryNetwork) { jQueryUrl = ControlResources.jQueryCdnUrl; if (!string.IsNullOrEmpty(jQueryCdnUrl)) { // check if jquery loaded - if it didn't we're not online and use WebResource fallbackScript = @"<script type=""text/javascript"">if (typeof(jQuery) == 'undefined') document.write(unescape(""%3Cscript src='{0}' type='text/javascript'%3E%3C/script%3E""));</script>"; fallbackScript = string.Format(fallbackScript, WebUtils.ResolveUrl(ControlResources.jQueryCdnFallbackUrl)); } } string output = "<script src=\"" + jQueryUrl + "\" type=\"text/javascript\"></script>"; // add in the CDN fallback script code if (!string.IsNullOrEmpty(fallbackScript)) output += "\r\n" + fallbackScript + "\r\n"; return output; } There's one dependency here on WebUtils.ResolveUrl() which resolves Urls without access to a Page/Control (another one of those features that should be in the runtime, not in the WebForms or MVC engine). You can see there's only a little bit of logic in this code that deals with potentially different load modes. I can load scripts from a Url, WebResources or - my preferred way - from CDN. Based on the static settings the scripts to embed are composed to be returned as simple string <script> tag(s). I find this extremely useful especially when I'm not connected to the internet so that I can quickly swap in a local jQuery resource instead of loading from CDN. While CDN loading with the fallback works it can be a bit slow as the CDN is probed first before the fallback kicks in. Switching quickly in one place makes this trivial. It also makes it very easy once a new version of jQuery rolls around to move up to the new version and ensure that all pages are using the new version immediately. 
I'm not trying to make this out as 'the' definitive way to load your resources, but rather provide it here as a pointer so you can maybe apply your own logic to determine where scripts come from and how they load. You could even automate this some more by using configuration settings or by reading the locations/preferences out of some sort of data/metadata store that can be updated dynamically instead of via recompilation. FWIW, I use a very similar approach for loading jQuery UI and my own ww.jquery library - the same concept can be applied to any kind of script you might be loading from different locations. Hopefully some of you find this a useful addition to your toolset. Resources: Google CDN for jQuery; Full ControlResources Source Code; ControlResource Documentation; Westwind.Web NuGet. This method is part of the Westwind.Web library of the West Wind Web Toolkit, or you can grab the Web library from NuGet and add it to your Visual Studio project. The package includes a host of Web-related utilities and script support features. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in ASP.NET, jQuery.

    Read the article

  • ASP.NET and HTML5 Local Storage

    - by Stephen Walther
    My favorite feature of HTML5, hands-down, is HTML5 local storage (aka DOM storage). By taking advantage of HTML5 local storage, you can dramatically improve the performance of your data-driven ASP.NET applications by caching data in the browser persistently. Think of HTML5 local storage like browser cookies, but much better. Like cookies, local storage is persistent. When you add something to browser local storage, it remains there when the user returns to the website (possibly days or months later). Importantly, unlike the cookie storage limitation of 4KB, you can store up to 10 megabytes in HTML5 local storage. Because HTML5 local storage works with the latest versions of all modern browsers (IE, Firefox, Chrome, Safari), you can start taking advantage of this HTML5 feature in your applications right now. Why use HTML5 Local Storage? I use HTML5 Local Storage in the JavaScript Reference application: http://Superexpert.com/JavaScriptReference The JavaScript Reference application is an HTML5 app that provides an interactive reference for all of the syntax elements of JavaScript (You can read more about the application and download the source code for the application here). When you open the application for the first time, all of the entries are transferred from the server to the browser (all 300+ entries). All of the entries are stored in local storage. When you open the application in the future, only changes are transferred from the server to the browser. The benefit of this approach is that the application performs extremely fast. When you click the details link to view details on a particular entry, the entry details appear instantly because all of the entries are stored on the client machine. When you perform key-up searches, by typing in the filter textbox, matching entries are displayed very quickly because the entries are being filtered on the local machine. This approach can have a dramatic effect on the performance of any interactive data-driven web application. Interacting with data on the client is almost always faster than interacting with the same data on the server. Retrieving Data from the Server In the JavaScript Reference application, I use Microsoft WCF Data Services to expose data to the browser. WCF Data Services generates a REST interface for your data automatically. Here are the steps: Create your database tables in Microsoft SQL Server. For example, I created a database named ReferenceDB and a database table named Entities. Use the Entity Framework to generate your data model. For example, I used the Entity Framework to generate a class named ReferenceDBEntities and a class named Entities. Expose your data through WCF Data Services. I added a WCF Data Service to my project and modified the data service class to look like this:   using System.Data.Services; using System.Data.Services.Common; using System.Web; using JavaScriptReference.Models; namespace JavaScriptReference.Services { [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class EntryService : DataService<ReferenceDBEntities> { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { config.UseVerboseErrors = true; config.SetEntitySetAccessRule("*", EntitySetRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } // Define a change interceptor for the Products entity set. 
[ChangeInterceptor("Entries")] public void OnChangeEntries(Entry entry, UpdateOperations operations) { if (!HttpContext.Current.Request.IsAuthenticated) { throw new DataServiceException("Cannot update reference unless authenticated."); } } } }     The WCF data service is named EntryService. Notice that it derives from DataService<ReferenceEntitites>. Because it derives from DataService<ReferenceEntities>, the data service exposes the contents of the ReferenceEntitiesDB database. In the code above, I defined a ChangeInterceptor to prevent un-authenticated users from making changes to the database. Anyone can retrieve data through the service, but only authenticated users are allowed to make changes. After you expose data through a WCF Data Service, you can use jQuery to retrieve the data by performing an Ajax call. For example, I am using an Ajax call that looks something like this to retrieve the JavaScript entries from the EntryService.svc data service: $.ajax({ dataType: "json", url: “/Services/EntryService.svc/Entries”, success: function (result) { var data = callback(result["d"]); } });     Notice that you must unwrap the data using result[“d”]. After you unwrap the data, you have a JavaScript array of the entries. I’m transferring all 300+ entries from the server to the client when the application is opened for the first time. In other words, I transfer the entire database from the server to the client, once and only once, when the application is opened for the first time. The data is transferred using JSON. Here is a fragment: { "d" : [ { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(1)", "type": "ReferenceDBModel.Entry" }, "Id": 1, "Name": "Global", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "object", "ShortDescription": "Contains global variables and functions", "FullDescription": "<p>\nThe Global object is determined by the host environment. In web browsers, the Global object is the same as the windows object.\n</p>\n<p>\nYou can use the keyword <code>this</code> to refer to the Global object when in the global context (outside of any function).\n</p>\n<p>\nThe Global object holds all global variables and functions. For example, the following code demonstrates that the global <code>movieTitle</code> variable refers to the same thing as <code>window.movieTitle</code> and <code>this.movieTitle</code>.\n</p>\n<pre>\nvar movieTitle = \"Star Wars\";\nconsole.log(movieTitle === this.movieTitle); // true\nconsole.log(movieTitle === window.movieTitle); // true\n</pre>\n", "LastUpdated": "634298578273756641", "IsDeleted": false, "OwnerId": null }, { "__metadata": { "uri": "http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries(2)", "type": "ReferenceDBModel.Entry" }, "Id": 2, "Name": "eval(string)", "Browsers": "ff3_6,ie8,ie9,c8,sf5,es3,es5", "Syntax": "function", "ShortDescription": "Evaluates and executes JavaScript code dynamically", "FullDescription": "<p>\nThe following code evaluates and executes the string \"3+5\" at runtime.\n</p>\n<pre>\nvar result = eval(\"3+5\");\nconsole.log(result); // returns 8\n</pre>\n<p>\nYou can rewrite the code above like this:\n</p>\n<pre>\nvar result;\neval(\"result = 3+5\");\nconsole.log(result);\n</pre>", "LastUpdated": "634298580913817644", "IsDeleted": false, "OwnerId": 1 } … ]} I worried about the amount of time that it would take to transfer the records. 
According to Google Chome, it takes about 5 seconds to retrieve all 300+ records on a broadband connection over the Internet. 5 seconds is a small price to pay to avoid performing any server fetches of the data in the future. And here are the estimated times using different types of connections using Fiddler: Notice that using a modem, it takes 33 seconds to download the database. 33 seconds is a significant chunk of time. So, I would not use the approach of transferring the entire database up front if you expect a significant portion of your website audience to connect to your website with a modem. Adding Data to HTML5 Local Storage After the JavaScript entries are retrieved from the server, the entries are stored in HTML5 local storage. Here’s the reference documentation for HTML5 storage for Internet Explorer: http://msdn.microsoft.com/en-us/library/cc197062(VS.85).aspx You access local storage by accessing the windows.localStorage object in JavaScript. This object contains key/value pairs. For example, you can use the following JavaScript code to add a new item to local storage: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You can use the Google Chrome Storage tab in the Developer Tools (hit CTRL-SHIFT I in Chrome) to view items added to local storage: After you add an item to local storage, you can read it at any time in the future by using the window.localStorage.getItem() method: <script type="text/javascript"> window.localStorage.setItem("message", "Hello World!"); </script>   You only can add strings to local storage and not JavaScript objects such as arrays. Therefore, before adding a JavaScript object to local storage, you need to convert it into a JSON string. In the JavaScript Reference application, I use a wrapper around local storage that looks something like this: function Storage() { this.get = function (name) { return JSON.parse(window.localStorage.getItem(name)); }; this.set = function (name, value) { window.localStorage.setItem(name, JSON.stringify(value)); }; this.clear = function () { window.localStorage.clear(); }; }   If you use the wrapper above, then you can add arbitrary JavaScript objects to local storage like this: var store = new Storage(); // Add array to storage var products = [ {name:"Fish", price:2.33}, {name:"Bacon", price:1.33} ]; store.set("products", products); // Retrieve items from storage var products = store.get("products");   Modern browsers support the JSON object natively. If you need the script above to work with older browsers then you should download the JSON2.js library from: https://github.com/douglascrockford/JSON-js The JSON2 library will use the native JSON object if a browser already supports JSON. Merging Server Changes with Browser Local Storage When you first open the JavaScript Reference application, the entire database of JavaScript entries is transferred from the server to the browser. Two items are added to local storage: entries and entriesLastUpdated. The first item contains the entire entries database (a big JSON string of entries). The second item, a timestamp, represents the version of the entries. Whenever you open the JavaScript Reference in the future, the entriesLastUpdated timestamp is passed to the server. Only records that have been deleted, updated, or added since entriesLastUpdated are transferred to the browser. 
The OData query to get the latest updates looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated%20gt%20634301199890494792L) If you remove URL encoding, the query looks like this: http://superexpert.com/javascriptreference/Services/EntryService.svc/Entries?$filter=(LastUpdated gt 634301199890494792L) This query returns only those entries where the value of LastUpdated > 634301199890494792 (the version timestamp). The changes – new JavaScript entries, deleted entries, and updated entries – are merged with the existing entries in local storage. The JavaScript code for performing the merge is contained in the EntriesHelper.js file. The merge() method looks like this:   merge: function (oldEntries, newEntries) { // concat (this performs the add) oldEntries = oldEntries || []; var mergedEntries = oldEntries.concat(newEntries); // sort this.sortByIdThenLastUpdated(mergedEntries); // prune duplicates (this performs the update) mergedEntries = this.pruneDuplicates(mergedEntries); // delete mergedEntries = this.removeIsDeleted(mergedEntries); // Sort this.sortByName(mergedEntries); return mergedEntries; },   The contents of local storage are then updated with the merged entries. I spent several hours writing the merge() method (much longer than I expected). I found two resources to be extremely useful. First, I wrote extensive unit tests for the merge() method. I wrote the unit tests using server-side JavaScript. I describe this approach to writing unit tests in this blog entry. The unit tests are included in the JavaScript Reference source code. Second, I found the following blog entry to be super useful (thanks Nick!): http://nicksnettravels.builttoroam.com/post/2010/08/03/OData-Synchronization-with-WCF-Data-Services.aspx One big challenge that I encountered involved timestamps. I originally tried to store an actual UTC time as the value of the entriesLastUpdated item. I quickly discovered that trying to work with dates in JSON turned out to be a big can of worms that I did not want to open. Next, I tried to use a SQL timestamp column. However, I learned that OData cannot handle the timestamp data type when doing a filter query. Therefore, I ended up using a bigint column in SQL and manually creating the value when a record is updated. I overrode the SaveChanges() method to look something like this: public override int SaveChanges(SaveOptions options) { var changes = this.ObjectStateManager.GetObjectStateEntries( EntityState.Modified | EntityState.Added | EntityState.Deleted); foreach (var change in changes) { var entity = change.Entity as IEntityTracking; if (entity != null) { entity.LastUpdated = DateTime.Now.Ticks; } } return base.SaveChanges(options); }   Notice that I assign Date.Now.Ticks to the entity.LastUpdated property whenever an entry is modified, added, or deleted. Summary After building the JavaScript Reference application, I am convinced that HTML5 local storage can have a dramatic impact on the performance of any data-driven web application. If you are building a web application that involves extensive interaction with data then I recommend that you take advantage of this new feature included in the HTML5 standard.

    Read the article

  • Code Contracts: Unit testing contracted code

    - by DigiMortal
    Code contracts and unit tests are not replacements for each other. They both have different purpose and different nature. It does not matter if you are using code contracts or not – you still have to write tests for your code. In this posting I will show you how to unit test code with contracts. In my previous posting about code contracts I showed how to avoid ContractExceptions that are defined in code contracts runtime and that are not accessible for us in design time. This was one step further to make my randomizer testable. In this posting I will complete the mission. Problems with current code This is my current code. public class Randomizer {     public static int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );           Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );           var rnd = new Random();         return rnd.Next(min, max);     } } As you can see this code has some problems: randomizer class is static and cannot be instantiated. We cannot move this class between components if we need to, GetRandomFromRangeContracted() is not fully testable because we cannot currently affect random number generator output and therefore we cannot test post-contract. Now let’s solve these problems. Making randomizer testable As a first thing I made Randomizer to be class that must be instantiated. This is simple thing to do. Now let’s solve the problem with Random class. To make Randomizer testable I define IRandomGenerator interface and RandomGenerator class. The public constructor of Randomizer accepts IRandomGenerator as argument. public interface IRandomGenerator {     int Next(int min, int max); }   public class RandomGenerator : IRandomGenerator {     private Random _random = new Random();       public int Next(int min, int max)     {         return _random.Next(min, max);     } } And here is our Randomizer after total make-over. public class Randomizer {     private IRandomGenerator _generator;       private Randomizer()     {         _generator = new RandomGenerator();     }       public Randomizer(IRandomGenerator generator)     {         _generator = generator;     }       public int GetRandomFromRangeContracted(int min, int max)     {         Contract.Requires<ArgumentOutOfRangeException>(             min < max,             "Min must be less than max"         );           Contract.Ensures(             Contract.Result<int>() >= min &&             Contract.Result<int>() <= max,             "Return value is out of range"         );           return _generator.Next(min, max);     } } It seems to be inconvenient to instantiate Randomizer now but you can always use DI/IoC containers and break compiled dependencies between the components of your system. Writing tests for randomizer IRandomGenerator solved problem with testing post-condition. Now it is time to write tests for Randomizer class. Writing tests for contracted code is not easy. The main problem is still ContractException that we are not able to access. Still it is the main exception we get as soon as contracts fail. Although pre-conditions are able to throw exceptions with type we want we cannot do much when post-conditions will fail. We have to use Contract.ContractFailed event and this event is called for every contract failure. 
    Writing tests for randomizer

    IRandomGenerator solved the problem of testing the post-condition. Now it is time to write tests for the Randomizer class. Writing tests for contracted code is not easy. The main problem is still ContractException, which we are not able to access even though it is the exception we get as soon as a contract fails. Pre-conditions can throw an exception type of our choosing, but we cannot do much when a post-condition fails: we have to use the Contract.ContractFailed event, which is raised on every contract failure. This puts us in a situation where supporting the input interface well makes it impossible to support the output interface well, and vice versa. ContractFailed is a nasty hack and works in a pretty weird way. Although the documentation says that ContractFailed is a good choice for testing contracts, it is still pretty painful. As a last resort I got the tests working almost normally by wrapping the calls. Can you remember a similar solution from the times of Visual Studio 2008 unit tests? I cannot understand how Microsoft managed to mess up testing again. (Note that all of this assumes runtime contract checking is enabled in the project's Code Contracts settings; without the binary rewriter the post-condition checks never run at all.)

        [TestClass]
        public class RandomizerTest
        {
            private Mock<IRandomGenerator> _randomMock;
            private Randomizer _randomizer;
            private string _lastContractError;

            public TestContext TestContext { get; set; }

            public RandomizerTest()
            {
                // Rethrow every contract failure as a plain Exception so the
                // tests never have to reference the inaccessible ContractException.
                Contract.ContractFailed += (sender, e) =>
                {
                    e.SetHandled();
                    e.SetUnwind();

                    throw new Exception(e.FailureKind + ": " + e.Message);
                };
            }

            [TestInitialize()]
            public void RandomizerTestInitialize()
            {
                _randomMock = new Mock<IRandomGenerator>();
                _randomizer = new Randomizer(_randomMock.Object);
                _lastContractError = string.Empty;
            }

            #region InputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_not_less_than_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(100, 10);
                }
                catch (Exception ex)
                {
                    // Wrap and rethrow - part of the wrapping hack described above.
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_min_is_equal_to_max()
            {
                try
                {
                    _randomizer.GetRandomFromRangeContracted(10, 10);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }
            }

            [TestMethod]
            public void GetRandomFromRangeContracted_should_work_when_min_is_less_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 50;

                _randomMock.Setup(r => r.Next(minValue, maxValue))
                    .Returns(returnValue)
                    .Verifiable();

                var result = _randomizer.GetRandomFromRangeContracted(minValue, maxValue);

                _randomMock.Verify();
                Assert.AreEqual<int>(returnValue, result);
            }
            #endregion

            #region OutputInterfaceTests
            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_less_than_min()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 7;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }

            [TestMethod]
            [ExpectedException(typeof(Exception))]
            public void GetRandomFromRangeContracted_should_throw_exception_when_return_value_is_more_than_max()
            {
                int minValue = 10;
                int maxValue = 100;
                int returnValue = 102;

                _randomMock.Setup(r => r.Next(10, 100))
                    .Returns(returnValue)
                    .Verifiable();

                try
                {
                    _randomizer.GetRandomFromRangeContracted(minValue, maxValue);
                }
                catch (Exception ex)
                {
                    throw new Exception(string.Empty, ex);
                }

                _randomMock.Verify();
            }
            #endregion
        }
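    As an aside, the mock isn't essential here - the seam is the interface, and a hand-rolled stub drives the post-condition just as well. FixedRandomGenerator is a hypothetical name of my own, not part of the code above.

        // Hypothetical test double: always returns the same value, making it
        // trivial to force the post-condition to pass or fail on demand.
        public class FixedRandomGenerator : IRandomGenerator
        {
            private readonly int _value;

            public FixedRandomGenerator(int value)
            {
                _value = value;
            }

            public int Next(int min, int max)
            {
                return _value;
            }
        }

        // Usage inside a test body (with the same wrapping hack as above):
        // var randomizer = new Randomizer(new FixedRandomGenerator(7));
        // randomizer.GetRandomFromRangeContracted(10, 100); // post-condition fails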
    Although these tests are pretty awful and contain hacks, we are at least able now to make sure that our code works as expected. Here is the test list after running these tests.

    Conclusion

    Code contracts are very new stuff in the Visual Studio world, and like all young technology they have some problems - like all other new bits and bytes in the world. As you saw, making contracted code testable is easy only as far as pre-conditions are concerned. When we start dealing with post-conditions we end up with hacked tests. I hope that future versions of code contracts will solve the error handling issues so that testing contracted code will be easier than it is right now.

    Read the article

  • Stored Procedures with SSRS? Hmm… not so much

    - by Rob Farley
    Little Bobby Tables' mother says you should always sanitise your data input. Except that I think she's wrong. The SQL Injection aspect is for another post, where I'll show you why I think SQL Injection is the same kind of attack as many other attacks, such as the old buffer overflow, but here I want to have a bit of a whinge about the way that some people sanitise data input, and even have a whinge about people who insist on using stored procedures for SSRS reports.

    Let me say that again, in case you missed it the first time: I want to have a whinge about people who insist on using stored procedures for SSRS reports.

    Let's look at the data input sanitisation aspect - except that I'm going to call it 'parameter validation'. I'm talking about code that looks like this:

        create procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        as
        begin
            /* First check that @eomdate is a valid date */
            if isdate(@eomdate) != 1
            begin
                select 'Please enter a valid date' as ErrorMessage;
                return;
            end

            /* Then check that time has passed since @eomdate */
            if datediff(day,@eomdate,sysdatetime()) < 5
            begin
                select 'Sorry - EOM is not complete yet' as ErrorMessage;
                return;
            end

            /* If those checks have succeeded, return the data */
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales
            from Sales.SalesOrderHeader
            where OrderDate >= dateadd(month,-1,@eomdate)
                and OrderDate < @eomdate
            group by SalesPersonID
            order by SalesPersonID;
        end

    Notice that the code checks that a date has been entered. Seriously??!! This must only be to check for NULL values being passed in, because anything else would have to be a valid datetime to avoid an error. The other check is maybe fair enough, but I still don't like it.
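    To see why that isdate() check can only ever catch NULL, here is a quick sketch (mine, for illustration - the exact conversion error text varies between versions): anything that isn't a valid datetime fails parameter conversion before the procedure body even runs.

        -- Fails at parameter conversion time; the isdate() check never executes.
        exec dbo.GetMonthSummaryPerSalesPerson 'not a date';

        -- NULL converts happily, and isdate(NULL) returns 0 - the only case
        -- the validation branch can actually catch.
        exec dbo.GetMonthSummaryPerSalesPerson NULL;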
    The two problems I have with this stored procedure are the result sets and the small fact that the stored procedure even exists in the first place. Let's consider the first of these problems for starters; I'll get to the second one in a moment.

    If you read Jes Borland (@grrl_geek)'s recent post about returning multiple result sets in Reporting Services, you'll be aware that Reporting Services doesn't support multiple result sets from a single query. And when it says 'single query', it includes 'stored procedure call'. It'll only handle the first result set that comes back. But that's okay - we have RETURN statements, so our stored procedure will only ever return a single result set. Sometimes that result set might contain a single field called ErrorMessage, but it's still only one result set.

    Except that it's not okay, because Reporting Services needs to know what fields to expect. Your report needs to hook into your fields, so SSRS needs a way to get that information. For stored procs, it uses an option called FMTONLY. When Reporting Services tries to figure out what fields are going to be returned by a query (or stored procedure call), it doesn't want to have to run the whole thing. That could take ages. (Maybe it's seen some of the stored procedures I've had to deal with over the years!) So it turns on FMTONLY before it makes the call (and turns it off again afterwards). FMTONLY is designed to figure out the shape of the output without actually running the contents. It's very useful, you might think.

        set fmtonly on
        exec dbo.GetMonthSummaryPerSalesPerson '20030401';
        set fmtonly off

    Without the FMTONLY lines, this stored procedure returns a result set that has three columns and fourteen rows. With FMTONLY turned on, those rows don't come back - but what I do get back hurts Reporting Services. It doesn't run the stored procedure at all. It just looks for anything that could be returned and pushes out a result set in that shape. Despite the fact that I've made sure the logic will only ever return a single result set, the FMTONLY option kills me by returning three of them.

    It would have been much better to push these checks down into the query itself.

        alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        as
        begin
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales
            from Sales.SalesOrderHeader
            where
            /* Make sure that @eomdate is valid */
                isdate(@eomdate) = 1
            /* And that it's sufficiently past */
            and datediff(day,@eomdate,sysdatetime()) >= 5
            /* And now use it in the filter as appropriate */
            and OrderDate >= dateadd(month,-1,@eomdate)
            and OrderDate < @eomdate
            group by SalesPersonID
            order by SalesPersonID;
        end

    Now if we run it with FMTONLY turned on, we get the single result set back. But let's consider the execution plan when we pass in an invalid date.

    First let's look at one that returns data. I've got a semi-useful index in place on OrderDate, which includes the SalesPersonID and TotalDue fields. It does the job, despite a hefty Sort operation. Compare that with the plan for a future date: the estimated costs are similar - the Index Seek is still 28%, the Sort is still 71% - but the arrow coming out of the Index Seek is a whole bunch smaller.

    The coolest thing here is what's going on with that Index Seek. Let's look at some of its properties. Glance down it with me: estimated CPU cost of 0.0005728, 387 estimated rows, estimated subtree cost of 0.0044385, ForceSeek false, Number of Executions 0. That's right - it doesn't run. So much for reading plans right-to-left...

    The key is the Filter on the left of it. It has a Startup Expression Predicate in it, which means that it doesn't call anything further down the plan (to the right) if the predicate evaluates to false. Using this method, we can make sure that our stored procedure contains a single query, and therefore avoid any problems with multiple result sets. If we wanted, we could always use UNION ALL to make sure that we can return an appropriate error message.

        alter procedure dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        as
        begin
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, /* Placeholder: */ '' as ErrorMessage
            from Sales.SalesOrderHeader
            where
            /* Make sure that @eomdate is valid */
                isdate(@eomdate) = 1
            /* And that it's sufficiently past */
            and datediff(day,@eomdate,sysdatetime()) >= 5
            /* And now use it in the filter as appropriate */
            and OrderDate >= dateadd(month,-1,@eomdate)
            and OrderDate < @eomdate
            group by SalesPersonID
            /* Now include the error messages */
            union all
            select 0, 0, 0, 'Please enter a valid date' as ErrorMessage
            where isdate(@eomdate) != 1
            union all
            select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage
            where datediff(day,@eomdate,sysdatetime()) < 5
            order by SalesPersonID;
        end

    But still I don't like it, because it's now a stored procedure with a single query. And I don't like stored procedures that should be functions.

    That's right - I think this should be a function, and SSRS should call the function. And I apologise to those of you who are now planning a bonfire for me. Guy Fawkes' night has already passed this year, so I think you miss out. (And I'm not going to remind you about when the PASS Summit is in 2012.)

        create function dbo.GetMonthSummaryPerSalesPerson(@eomdate datetime)
        returns table
        as
        return
        (
            select SalesPersonID, count(*) as NumSales, sum(TotalDue) as TotalSales, '' as ErrorMessage
            from Sales.SalesOrderHeader
            where
            /* Make sure that @eomdate is valid */
                isdate(@eomdate) = 1
            /* And that it's sufficiently past */
            and datediff(day,@eomdate,sysdatetime()) >= 5
            /* And now use it in the filter as appropriate */
            and OrderDate >= dateadd(month,-1,@eomdate)
            and OrderDate < @eomdate
            group by SalesPersonID
            union all
            select 0, 0, 0, 'Please enter a valid date' as ErrorMessage
            where isdate(@eomdate) != 1
            union all
            select 0, 0, 0, 'Sorry - EOM is not complete yet' as ErrorMessage
            where datediff(day,@eomdate,sysdatetime()) < 5
        );

    We've had to lose the ORDER BY - but that's fine, as that's a client thing anyway. We can still have our reports leverage this stored query, but we're recognising that it's a query, not a procedure. A procedure is designed to DO stuff, not just return data. We even get entries in sys.columns that confirm what the shape of the result set actually is, which makes sense, because a table-valued function is the right mechanism to return data.
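    Two quick sketches of consuming the function, both for illustration. The first is what an SSRS dataset query could look like - the mapping of a report parameter to @eomonth here is an assumption, not a prescription. The second confirms the declared shape from the catalog views.

        -- Hypothetical SSRS dataset query; the report parameter supplies @eomonth.
        select SalesPersonID, NumSales, TotalSales, ErrorMessage
        from dbo.GetMonthSummaryPerSalesPerson(@eomonth)
        order by SalesPersonID;

        -- Checking the declared shape of the function's result set.
        select c.name, t.name as type_name
        from sys.columns c
        join sys.types t on t.user_type_id = c.user_type_id
        where c.object_id = object_id('dbo.GetMonthSummaryPerSalesPerson')
        order by c.column_id;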
    And we get so much more flexibility with this. If you haven't seen the simplification stuff that I've preached on before, jump over to http://bit.ly/SimpleRob and watch the video of when I broke a microphone and nearly fell off the stage in Wales. You'll see the impact of being able to have a simplifiable query. You can also read the procedural functions post I wrote recently, if you didn't follow the link from a few paragraphs ago.

    So if we want the list of SalesPeople that made any kind of sales in a given month, we can do something like:

        select SalesPersonID
        from dbo.GetMonthSummaryPerSalesPerson(@eomonth)
        order by SalesPersonID;

    This doesn't need to look up the TotalDue field, which makes for a simpler plan.

        select *
        from dbo.GetMonthSummaryPerSalesPerson(@eomonth)
        where SalesPersonID is not null
        order by SalesPersonID;

    This one can avoid doing the work on the rows that don't have a SalesPersonID value, pushing the predicate into the Index Seek rather than filtering the results that come back to the report. If we had joins involved, we might see some of those being simplified out. We also get the ability to include query hints in individual reports. We shift from having a single-use stored procedure to having a reusable stored query - and isn't that one of the main points of modularisation?

    Stored procedures in Reporting Services are just a bit limited for my liking. They're useful in plenty of ways, but if you insist on using stored procedures all the time rather than queries that use functions - that's rubbish.

    @rob_farley

    Read the article
