Search Results

Search found 24132 results on 966 pages for 'non clustered index'.


  • How to develop an English .com domain value rating algorithm?

    - by Tom
    I've been thinking about an algorithm that should roughly be able to guess the value of an English .com domain in most cases. For this to work I want to perform tests that consider the strengths and weaknesses of an English .com domain. A simple point-based system is what I had in mind, where each domain property can be given a certain weight to factor in its importance. I had these properties in mind:

    Domain character length. E.g. initially 20 points are added. If the domain has 4 or fewer characters, no points are subtracted. For each extra character, one or more points are subtracted on an exponential basis (the more characters, the higher the penalty).

    Domain characters. E.g. initially 20 points are added. If the domain is only alphabetic, no points are subtracted. For each non-alphabetic character, X points are subtracted (exponential increase again).

    Domain name words. Scans through a big offline English database, including non-formal speech; e.g. words like "tweet" should be recognized. Question 1: where can I get a modern list of English words for use in such an application? Are these lists available for free? Are there lists like these with non-formal words? The more words are found per character, the more points are added, so a domain with a lot of characters will still not get a lot of points.

    Word hype level. I believe this is a tricky one, but this should be what differentiates perfect-but-boring domains from perfect-and-interesting domains. For example, the following domain is probably not that valuable: www.peanutgalaxy.com. The algorithm should identify that peanuts and galaxies are not very popular topics on the web. This is just an example. On the other side, a domain like www.shopdeals.com should ring a bell for the hype test, as shops and deals are quite popular on the web. My initial thought would be to see how often these keywords are referenced on the web, preferably with some database. Question 2: is this logic flawed, or does this hype-level test have merit? Question 3: are such "hype databases" available? Or is there anything else that could work offline? The problem with e.g. a query to Google is that it requires a lot of requests due to the many domains to be tested.

    Domain name spelling mistakes. Domains like "freemoneyz.com" etc. are generally (notice I am making a lot of assumptions in this post, but that's necessary I believe) not valuable due to the spelling mistakes. Question 4: are there any offline APIs available to check for spelling mistakes, preferably in JavaScript, or some database that I can interact with myself? Or should a word list help here as well?

    Use of consonants, vowels etc. A domain that is easy to pronounce (e.g. Google) is usually much more valuable than one that is not (e.g. Gkyld). Question 5: how does one test for such pronounceability? Do you check for consonants, vowels, etc.? What does a valuable domain have? Has there been any work in this field, and where should I look?

    That is what I came up with, which leads me to my final two questions. Question 6: can you think of any more English .com domain strengths or weaknesses? Which? How would you implement these? Question 7: do you believe this idea has any merit at all, or am I too naive? Anything I should know, read or hear about? Suggestions/comments? Thanks!
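    A minimal JavaScript sketch of the point-based idea described above (the weights, penalty bases and the tiny word list are illustrative assumptions, not recommendations):

        // Rough domain scorer: length penalty + character penalty + word bonus.
        function scoreDomain(domain, wordList) {
            var label = domain.replace(/\.com$/i, "").toLowerCase();
            var score = 0;

            // Length: start at 20, exponential penalty for each character past 4.
            var extra = Math.max(0, label.length - 4);
            score += Math.max(0, 20 - (Math.pow(1.5, extra) - 1));

            // Characters: start at 20, exponential penalty per non-alphabetic character.
            var nonAlpha = (label.match(/[^a-z]/g) || []).length;
            score += Math.max(0, 20 - (Math.pow(2, nonAlpha) - 1));

            // Words: flat bonus for each dictionary word contained in the label.
            wordList.forEach(function (word) {
                if (label.indexOf(word) !== -1) score += 5;
            });
            return score;
        }

        // Example with a tiny hypothetical word list:
        // scoreDomain("shopdeals.com", ["shop", "deal", "deals"]);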

  • mobile browsers can't log in to my site

    - by imin
    I've tested my site on 2 phone models using the 'generic' browser that came with the phone, but sadly, every time I try to log in, it returns me to my index page. Here's my login code:

        <form name='login' method='POST' action='authentication.php'>
        <table border=0 cellpadding=2>
        <tr><td>Login:</td><td></td></tr>
        <tr><td>E-mail: </td><td><input type=text name='email' id='email' size=20 maxlength="200"></td></tr>
        <tr><td>Password: </td><td><input type=password name='password' id='password' size=20 maxlength="100"></td></tr>
        <tr><td></td><td><input type=submit value='Login'></td></tr>
        </table></form>

    and here's authentication.php (snippet):

        $currentUserEmail = $_POST["email"];
        $currentUserPwd = md5($_POST["password"]);
        $stmt = $dbi->prepare("select status from users where email=? and pwd=?");
        $stmt->bind_param('ss', $currentUserEmail, $currentUserPwd);
        mysqli_stmt_execute($stmt);
        mysqli_stmt_store_result($stmt);
        $isUserAvailable = mysqli_stmt_num_rows($stmt);
        $stmt->bind_result($getUserStatus);
        $stmt->execute() or die (mysqli_error());
        $stmt->store_result();
        $stmt->fetch();
        $stmt->close();
        if ($isUserAvailable > 0) {
            if ($getUserStatus == "PENDING") {
                $userIsLoggedIn = "NO";
                $registeredUser = "NO";
                unset($userIsLoggedIn);
                setcookie("currentMobileUserName", "", time()-3600);
                setcookie("currentMobileUserEmail", "", time()-3600);
                setcookie("currentMobileSessionID", "", time()-3600);
                setcookie("currentMobileUID", "", time()-3600);
                header('Location: '.$config['MOBILE_URL'].'/index.php?error=2&email='.$currentUserEmail);
            } elseif (($getUserStatus == "ACTIVE") || ($getUserStatus == "active")) {
                // means successfully logged in
                // set the cookies
                setcookie("currentMobileUserName", $currentUserName, $expire);
                setcookie("currentMobileUserEmail", $currentUserEmail, $expire);
                setcookie("currentMobileSessionID", $getGeneratedMobileUSID, $expire);
                setcookie("currentMobileUID", $currentUID, $expire);
                $userIsLoggedIn = "YES";
                $registeredUser = "YES";
                $result = $stmt->execute() or die (mysqli_error($dbi));
                if ($caller == "indexLoginForm") {
                    header('Location: '.$config['MOBILE_URL'].'/home.php');
                } else {
                    header('Location: '.$config['MOBILE_URL'].'/home.php');
                }
            }
        } else {
            $userIsLoggedIn = "NO";
            $registeredUser = "NO";
            unset($userIsLoggedIn);
            setcookie("currentMobileUserName", "", time()-3600);
            setcookie("currentMobileUserEmail", "", time()-3600);
            setcookie("currentMobileSessionID", "", time()-3600);
            setcookie("currentMobileUID", "", time()-3600);
            header('Location: '.$config['MOBILE_URL'].'/index.php?error=1');
        }

    The only way I can access my mobile site is by using Opera Mini. Just FYI, both of the 'generic browsers' I tested my site with support cookies (at least that is what the browser settings said). Thanks

  • Why is FubuMVC new()ing up my view model in PartialForEach?

    - by Jon M
    I'm getting started with FubuMVC and I have a simple Customer - Order relationship I'm trying to display using nested partials. My domain objects are as follows:

        public class Customer
        {
            private readonly IList<Order> orders = new List<Order>();
            public string Name { get; set; }
            public IEnumerable<Order> Orders { get { return orders; } }
            public void AddOrder(Order order) { orders.Add(order); }
        }

        public class Order
        {
            public string Reference { get; set; }
        }

    I have the following controller classes:

        public class CustomersController
        {
            public IndexViewModel Index(IndexInputModel inputModel)
            {
                var customer1 = new Customer { Name = "John Smith" };
                customer1.AddOrder(new Order { Reference = "ABC123" });
                return new IndexViewModel { Customers = new[] { customer1 } };
            }
        }

        public class IndexInputModel { }

        public class IndexViewModel
        {
            public IEnumerable<Customer> Customers { get; set; }
        }

        public class IndexView : FubuPage<IndexViewModel> { }
        public class CustomerPartial : FubuControl<Customer> { }
        public class OrderPartial : FubuControl<Order> { }

    IndexView.aspx (standard html stuff trimmed):

        <div>
            <%= this.PartialForEach(x => x.Customers).Using<CustomerPartial>() %>
        </div>

    CustomerPartial.ascx:

        <%@ Control Language="C#" Inherits="FubuDemo.Controllers.Customers.CustomerPartial" %>
        <div>
            Customer Name: <%= this.DisplayFor(x => x.Name) %> <br />
            Orders: (<%= Model.Orders.Count() %>) <br />
            <%= this.PartialForEach(x => x.Orders) %>
        </div>

    OrderPartial.ascx:

        <%@ Control Language="C#" Inherits="FubuDemo.Controllers.Customers.OrderPartial" %>
        <div>
            Order <br />
            Ref: <%= this.DisplayFor(x => x.Reference) %>
        </div>

    When I view Customers/Index, I see the following:

        Customers
        Customer Name: John Smith
        Orders: (1)

    It seems that in CustomerPartial.ascx, doing Model.Orders.Count() correctly picks up that 1 order exists. However PartialForEach(x => x.Orders) does not, as nothing is rendered for the order. If I set a breakpoint on the Order constructor, I see that it initially gets called by the Index method on CustomersController, but then it gets called by FubuMVC.Core.Models.StandardModelBinder.Bind, so it is getting re-instantiated by FubuMVC and losing the content of the Orders collection. This isn't quite what I'd expect; I would think that PartialForEach would just pass the domain object directly into the partial. Am I missing the point somewhere? What is the 'correct' way to achieve this kind of result in Fubu?

  • jQuery - Need help stopping animation on click command.

    - by iamtheratio
    With a bit of your help I was able to get the jQuery I wanted to work flawlessly, except for one thing: the animation doesn't stop when I click on the buttons. Scenario: I have an image, and 3 buttons underneath labeled "1", "2" and "3". The jQuery automates the click function every 5500ms and switches from 1 to 2, then 2 to 3, and loops continuously. However, the problem is that if I manually click on a 1/2/3 button, the animation does not stop. Any ideas how I could accomplish this?

    jQuery:

        var tabs;
        var len;
        var index = 1;
        var robot;

        function automate() {
            tabs.eq((index % len)).trigger('click');
            index++;
        }
        robot = setInterval(automate, 5500);

        jQuery(document).ready(function () {
            jQuery(".imgs").hide();
            jQuery(".img_selection a").click(function () {
                stringref = this.href.split('#')[1];
                $(".img_selection a[id$=_on]").removeAttr('id');
                this.id = this.className + "_on";
                jQuery('.imgs').hide();
                if (jQuery.browser.msie && jQuery.browser.version.substr(0, 3) == "6.0") {
                    jQuery('.imgs#' + stringref).show();
                } else jQuery('.imgs#' + stringref).fadeIn();
                return false;
            });
            $('.img_selection a').removeAttr('id').eq(0).trigger('click');
            tabs = jQuery(".img_selection a");
            len = tabs.size();
        });

    I tried adding the code below, with a lot of help from this website, but to no avail:

        jQuery(document).ready(function () {
            jQuery(".imgs").hide().click(function () {
                clearInterval(robot);
            });
        });

    HTML:

        <!-- TOP IMAGE ROTATION -->
        <div id="upper_image">
            <div id="img1" class="imgs">
                <p><img src="images/top_image.jpg" width="900" height="250" alt="The Ratio - Print Projects!" border="0" /></p>
            </div>
            <div id="img2" class="imgs">
                <p><img src="images/top_image2.jpg" width="900" height="250" alt="The Ratio - In The Works!" border="0" /></p>
            </div>
            <div id="img3" class="imgs">
                <p><img src="images/top_image3.jpg" width="900" height="250" alt="The Ratio!" border="0" /></p>
            </div>
        </div>
        <!-- / TOP IMAGE ROTATION -->

        <!-- TOP IMAGE SELECTION -->
        <ul class="img_selection">
            <li><a id="img1_on" class="img1" href="#img1">1</a></li>
            <li><a class="img2" href="#img2">2</a></li>
            <li><a class="img3" href="#img3">3</a></li>
        </ul>
        <!-- / TOP IMAGE SELECTION -->
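    A minimal sketch of one likely fix (an assumption based only on the code above): the clearInterval was bound to the images (".imgs") rather than the buttons, and a plain click handler on the buttons would also fire for the rotator's own .trigger('click') calls. Binding to the links and checking event.originalEvent, which jQuery only sets for real browser events, stops the rotation on a manual click only:

        jQuery(document).ready(function () {
            jQuery(".img_selection a").click(function (e) {
                // e.originalEvent is undefined for clicks fired via .trigger('click'),
                // so only the user's own click cancels the rotation.
                if (e.originalEvent !== undefined) {
                    clearInterval(robot);
                }
            });
        });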

  • Using my custom colormap in Java for images

    - by John
    Hi everyone! I've got a question concerning colormapping via an index. I tried this code found at http://www.podgoretsky.pri.ee/ftp/Docs/Java/Tricks%20of%20the%20Java%20Programming%20Gurus/ch12.htm

        // Gradient.java
        import java.applet.Applet;
        import java.awt.*;
        import java.awt.image.*;

        public class Gradient extends Applet {
            final int colors = 32;
            final int width = 200;
            final int height = 200;
            Image img;

            public void init() {
                // Create the color map
                byte[] rbmap = new byte[colors];
                byte[] gmap = new byte[colors];
                for (int i = 0; i < colors; i++)
                    gmap[i] = (byte)((i * 255) / (colors - 1));

                // Create the color model
                int bits = (int)Math.ceil(Math.log(colors) / Math.log(2));
                IndexColorModel model = new IndexColorModel(bits, colors, rbmap, gmap, rbmap);

                // Create the pixels
                int pixels[] = new int[width * height];
                int index = 0;
                for (int y = 0; y < height; y++)
                    for (int x = 0; x < width; x++)
                        pixels[index++] = (x * colors) / width;

                // Create the image
                img = createImage(new MemoryImageSource(width, height, model, pixels, 0, width));
            }

            public void paint(Graphics g) {
                g.drawImage(img, 0, 0, this);
            }
        }

    It worked great, but when I tried to load a custom JPEG image mapped onto my own colormap it didn't work right: I saw only a bunch of green and blue pixels drawn on a white background. My custom colormap method is here:

        public void inintByteArrays() {
            double[][] c = // basic color map
            {
                { 0.0000, 0.0000, 0.5625 }, { 0.0000, 0.0000, 0.6250 }, { 0.0000, 0.0000, 0.6875 },
                { 0.0000, 0.0000, 0.6875 }, { 0.0000, 0.0000, 0.7500 }, { 0.0000, 0.0000, 0.8125 },
                { 0.0000, 0.0000, 0.8750 }, { 0.0000, 0.0000, 0.9375 }, { 0.0000, 0.0000, 1.0000 },
                { 0.0000, 0.0625, 1.0000 }, { 0.0000, 0.1250, 1.0000 }, { 0.0000, 0.1875, 1.0000 },
                { 0.0000, 0.2500, 1.0000 }, { 0.0000, 0.3125, 1.0000 }, { 0.0000, 0.3750, 1.0000 },
                { 0.0000, 0.4375, 1.0000 }, { 0.0000, 0.5000, 1.0000 }, { 0.0000, 0.5625, 1.0000 },
                { 0.0000, 0.6250, 1.0000 }, { 0.0000, 0.6875, 1.0000 }, { 0.0000, 0.7500, 1.0000 },
                { 0.0000, 0.8125, 1.0000 }, { 0.0000, 0.8750, 1.0000 }, { 0.0000, 0.9375, 1.0000 },
                { 0.0000, 1.0000, 1.0000 }, { 0.0625, 1.0000, 0.9375 }, { 0.1250, 1.0000, 0.8750 },
                { 0.1875, 1.0000, 0.8125 }, { 0.2500, 1.0000, 0.7500 }, { 0.3125, 1.0000, 0.6875 },
                { 0.3750, 1.0000, 0.6250 }, { 0.4375, 1.0000, 0.5625 }, { 0.5000, 1.0000, 0.5000 },
                { 0.5625, 1.0000, 0.4375 }, { 0.6250, 1.0000, 0.3750 }, { 0.6875, 1.0000, 0.3125 },
                { 0.7500, 1.0000, 0.2500 }, { 0.8125, 1.0000, 0.1875 }, { 0.8750, 1.0000, 0.1250 },
                { 0.9375, 1.0000, 0.0625 }, { 1.0000, 1.0000, 0.0000 }, { 1.0000, 0.9375, 0.0000 },
                { 1.0000, 0.8750, 0.0000 }, { 1.0000, 0.8125, 0.0000 }, { 1.0000, 0.7500, 0.0000 },
                { 1.0000, 0.6875, 0.0000 }, { 1.0000, 0.6250, 0.0000 }, { 1.0000, 0.5625, 0.0000 },
                { 1.0000, 0.5000, 0.0000 }, { 1.0000, 0.4375, 0.0000 }, { 1.0000, 0.3750, 0.0000 },
                { 1.0000, 0.3125, 0.0000 }, { 1.0000, 0.2500, 0.0000 }, { 1.0000, 0.1875, 0.0000 },
                { 1.0000, 0.1250, 0.0000 }, { 1.0000, 0.0625, 0.0000 }, { 1.0000, 0.0000, 0.0000 },
                { 0.9375, 0.0000, 0.0000 }, { 0.8750, 0.0000, 0.0000 }, { 0.8125, 0.0000, 0.0000 },
                { 0.7500, 0.0000, 0.0000 }, { 0.6875, 0.0000, 0.0000 }, { 0.6250, 0.0000, 0.0000 },
                { 0.5625, 0.0000, 0.0000 }, { 0.5000, 0.0000, 0.0000 }
            };
            for (int i = 0; i < c.length; i++) {
                for (int j = 0; j < c[i].length; j++) {
                    if (j == 0) r[i] = (byte) ((byte) c[i][j]*255);
                    if (j == 1) g[i] = (byte) ((byte) c[i][j]*255);
                    if (j == 2) b[i] = (byte) ((byte) c[i][j]*255);
                }
            }
        }

    My question is how I can use my colormap for any image I want to load and map it in the right way. Thank you very much! Greetings, protein1.

  • Is this an SEO-safe anchor link?

    - by Mayhem
    So... is this a safe way to use internal links on your site? By doing this, the index page generates the usual PHP content section and hands it to the div element. THE MAIN QUESTION: will Google still index the pages using this method? Common sense tells me it does, but I'm just double-checking, and leaving this here as a base example as well if it is. (EXAMPLE ONLY, PEOPLE.)

    The server side:

        if (isset($_REQUEST['page'])) { $pageID = $_REQUEST['page']; }
        else { $pageID = "home"; }
        if (isset($_REQUEST['pageMode']) && $_REQUEST['pageMode'] == "js") {
            require "content/".$pageID.".php";
            exit;
        }
        // ELSE - REST OF WEBSITE WILL BE GENERATED USING THE page VARIABLE

    The links:

        <a class='btnMenu' href='?page=home'>Home Page</a>
        <a class='btnMenu' href='?page=about'>About</a>
        <a class='btnMenu' href='?page=Services'>Services</a>
        <a class='btnMenu' href='?page=contact'>Contact</a>

    The JavaScript:

        $(function () {
            $(".btnMenu").click(function () { return doNav(this); });
        });

        function doNav(objCaller) {
            var sPage = $(objCaller).attr("href").substring(6, 255);
            $.get("index.php", { page: sPage, pageMode: 'js' }, function (data) {
                $("#siteContent").html(data).scrollTop(0);
            });
            return false;
        }

    Forgive me if there are any errors; I just copied and pasted this from my script and removed a bunch of junk to simplify it, as I'm still prototyping/whiteboarding the project it's in. So yes, it does look a little nasty at the moment. REASONS WHY: the main reason is bandwidth and speed. This will allow other scripts to run and control the site/application a little better, and yes, it will need to be locked down with some coding.

    -- FURTHER EXAMPLE -- INSERT PHP AT TOP

        <?php // PHP CODE HERE ?>
        <html>
        <head>
            <link rel="stylesheet" type="text/css" href="style.css" />
            <script type="text/javascript" src="jquery.js"></script>
            <script type="text/javascript" src="scripts.js"></script>
        </head>
        <body>
            <div class='siteBody'>
                <div class='siteHeader'>
                    <?php
                    foreach ($pageList as $key => $value) {
                        if ($pageID == $key) { $btnClass = "btnMenuSel"; }
                        else { $btnClass = "btnMenu"; }
                        echo "<a class='$btnClass' href='?page=".$key."'>".$pageList[$key]."</a>";
                    }
                    ?>
                </div><div id="siteContent" style='margin-top:10px;'>
                    <?php require "content/".$pageID.".php"; ?>
                </div><div class='siteFooter'>
                </div>
            </div>
        </body>
        </html>
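    One hedged note on the indexing question (an addition, not from the original post): because each href is a real ?page= URL, a crawler that ignores JavaScript can still follow and index every page. A sketch that additionally updates the address bar with history.pushState, so AJAX-loaded views keep crawlable, bookmarkable URLs:

        function doNav(objCaller) {
            var sPage = $(objCaller).attr("href").substring(6, 255);
            $.get("index.php", { page: sPage, pageMode: 'js' }, function (data) {
                $("#siteContent").html(data).scrollTop(0);
                // Give the AJAX-loaded view a real, shareable URL.
                if (window.history && history.pushState) {
                    history.pushState({ page: sPage }, "", "?page=" + sPage);
                }
            });
            return false;
        }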

  • JQuery Slide Onclick

    - by everreadyeddy
    I am using the code from the example at http://www.faridesign.net/2012/05/create-a-awesome-vertical-tabbed-content-area-using-css3-jquery/

    I am trying to slide the div tags on a button click on the list, so the current tab-content slides out and the content for the tab just clicked slides in. I currently have the working example where I can switch between divs fine, but I need to slide in and out between divs. Is there any script that can do this with the current code? Using .slide or .effect instead of .show() looks to display two divs at the same time. I'm not sure what I am doing wrong.

        <div id="v-nav">
            <ul>
                <li tab="tab1" class="first current">Main Screen</li>
                <li tab="tab2">Div 1</li>
                <li tab="tab3">Div 2</li>
                <li tab="tab4">Div 3</li>
                <li tab="tab5">Div 4</li>
                <li tab="tab6">Div 5</li>
                <li tab="tab7">Div 6</li>
                <li tab="tab8" class="last">Div 7</li>
            </ul>
            <div class="tab-content"><h4>Main Screen</h4></div>
            <div class="tab-content"><h4>Div 1</h4></div>
            <div class="tab-content"><h4>Div 2</h4></div>
            <div class="tab-content"><h4>Div 3</h4></div>
            <div class="tab-content"><h4>Div 4</h4></div>
            <div class="tab-content"><h4>Div 5</h4></div>
            <div class="tab-content"><h4>Div 6</h4></div>
            <div class="tab-content"><h4>Div 7</h4></div>
        </div>

    My script looks like:

        $(function () {
            var items = $('#v-nav>ul>li').each(function () {
                $(this).click(function () {
                    // remove previous class and add it to clicked tab
                    items.removeClass('current');
                    $(this).addClass('current');
                    // hide all content divs and show current one
                    //$('#v-nav>div.tab-content').hide().eq(items.index($(this))).show();
                    //$('#v-nav>div.tab-content').hide().eq(items.index($(this))).fadeIn(100);
                    $('#v-nav>div.tab-content').hide().eq(items.index($(this))).slideToggle();
                    window.location.hash = $(this).attr('tab');
                });
            });

            if (location.hash) {
                showTab(location.hash);
            } else {
                showTab("tab1");
            }

            function showTab(tab) {
                $("#v-nav ul li:[tab*=" + tab + "]").click();
            }

            // Bind the event hashchange, using jquery-hashchange-plugin
            $(window).hashchange(function () {
                showTab(location.hash.replace("#", ""));
            });

            // Trigger the event hashchange on page load, using jquery-hashchange-plugin
            $(window).hashchange();
        });
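    A minimal sketch (assuming the markup and the items/index bookkeeping above) of sliding the old panel away before the new one appears, by chaining the slideDown inside slideUp's completion callback so two panels are never animating open at the same time:

        var panels = $('#v-nav > div.tab-content');

        function slideToPanel(newIndex) {
            var visible = panels.filter(':visible');
            if (visible.length === 0) {
                panels.eq(newIndex).slideDown(200); // nothing open yet
                return;
            }
            // Close the current panel first, then open the clicked one.
            visible.slideUp(200, function () {
                panels.eq(newIndex).slideDown(200);
            });
        }

        // Inside the li click handler, replace the hide()/slideToggle() line with:
        // slideToPanel(items.index($(this)));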

  • Get Image Source URLs from a Different Page Using JS

    - by SDD
    Everyone: I'm trying to grab the source URLs of images from one page and use them in some JavaScript on another page. I know how to pull in images using jQuery .load(). However, rather than load all the images and display them on the page, I want to just grab the source URLs so I can use them in a JS array.

    Page 1 is just a page with images:

        <html>
        <head>
        </head>
        <body>
            <img id="image0" src="image0.jpg" />
            <img id="image1" src="image1.jpg" />
            <img id="image2" src="image2.jpg" />
            <img id="image3" src="image3.jpg" />
        </body>
        </html>

    Page 2 contains my JS. (Please note that the end goal is to load images into an array, randomize them, and, using cookies, show a new image on page load every 10 seconds. All this is working. However, rather than hard-code the image paths into my JavaScript as shown below, I'd prefer to take the paths from Page 1 based on their IDs. This way, the images won't always need to be titled "image1.jpg", etc.)

        <script type="text/javascript">
        var days = 730;
        var rotator = new Object();
        var currentTime = new Date();
        var currentMilli = currentTime.getTime();
        var images = [], index = 0;
        images[0] = "image0.jpg";
        images[1] = "image1.jpg";
        images[2] = "image2.jpg";
        images[3] = "image3.jpg";

        rotator.getCookie = function (Name) {
            var re = new RegExp(Name + "=[^;]+", "i");
            if (document.cookie.match(re))
                return document.cookie.match(re)[0].split("=")[1];
            return '';
        }

        rotator.setCookie = function (name, value, days) {
            var expireDate = new Date();
            var expstring = expireDate.setDate(expireDate.getDate() + parseInt(days));
            document.cookie = name + "=" + value + "; expires=" + expireDate.toGMTString() + "; path=/";
        }

        rotator.randomize = function () {
            index = Math.floor(Math.random() * images.length);
            randomImageSrc = images[index];
        }

        rotator.check = function () {
            if (rotator.getCookie("randomImage") == "") {
                rotator.randomize();
                document.write("<img src=" + randomImageSrc + ">");
                rotator.setCookie("randomImage", randomImageSrc, days);
                rotator.setCookie("timeClock", currentMilli, days);
            } else {
                var writtenTime = parseInt(rotator.getCookie("timeClock"), 10);
                if (currentMilli > writtenTime + 10000) {
                    rotator.randomize();
                    var writtenImage = rotator.getCookie("randomImage");
                    while (randomImageSrc == writtenImage) {
                        rotator.randomize();
                    }
                    document.write("<img src=" + randomImageSrc + ">");
                    rotator.setCookie("randomImage", randomImageSrc, days);
                    rotator.setCookie("timeClock", currentMilli, days);
                } else {
                    var writtenImage = rotator.getCookie("randomImage");
                    document.write("<img src=" + writtenImage + ">");
                }
            }
        }

        rotator.check()
        </script>

    Can anyone point me in the right direction? My hunch is to use jQuery .get(), but I've been unsuccessful so far. Please let me know if I can clarify!
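    A minimal sketch of the $.get() hunch (assuming Page 1 is served from the same origin at a hypothetical page1.html): fetch the markup, parse it off-DOM, and map the <img> elements to their src attributes:

        $.get("page1.html", function (html) {
            // Wrap the fetched markup in a detached <div> so .find() sees all of
            // its <img> tags, then collect each src without touching the page.
            var images = $("<div>").append(html).find("img")
                .map(function () { return $(this).attr("src"); })
                .get();
            // images is now e.g. ["image0.jpg", "image1.jpg", "image2.jpg", "image3.jpg"]
        });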

  • NEED your opinion on .net Profile class VS session vars

    - by Ted
    To save trips to the SQL db in my older apps, I store dozens of data points about the current user in an array and then store the array in a session. For example, info that might be used repeatedly during a user's session might be stored like this:

        Dim a(7) as string
        a(0) = "FirstName"
        a(1) = "LastName"
        a(2) = "Address"
        a(3) = "Address2"
        a(4) = "City"
        a(5) = "State"
        a(6) = "Zip"
        session.add("s_a", a)

    Some apps have an array 100 in size. That is something I learned in my ASP classic days. Referencing the correct index can be laborious, and I find it difficult to go back and add another data point in the array grouped with like data. For example, suppose I need to add Middle Initial to the array as a design alteration. Unless I redo the whole index mapping, I have to stick Middle Initial in the next open slot, which might be in the 50s.

    NOW, I am considering doing something easier to reference each time (eliminating the need to know the index of the value wanted). So I am looking to do this:

        session.add("Firstname", "FirstName")
        session.add("Lastname", "LastName")
        session.add("Address", "Address")

    etc. BUT, before I do this, I would like some guidance. I am afraid this might be less efficient, even though it is easier to use. I don't know if a new session object is created for each data point or if there is only one session object and I am adding a name/value pair to that object. If I am adding a name/value pair to a single object, that seems like a good idea. Does anyone know? Or is there a more preferred way? The built-in Profile class?

    Re: the Profile class, I have an internal debate about scope. It seems that the .NET Profile class is good for storing app-SPECIFIC user settings (i.e. style theme, object display properties, user role, etc.). The examples I give are information whose values are selected/edited by the user to customize the application experience. This information is not typically stored/edited elsewhere in the app db. But when you have data that 1) is stored already in the app db and 2) can be altered by other users (in this case: company reps may update a client's status, address, etc.), then the persistence of the Profile data may be an issue. In this case, the Profile would need to be reset at the beginning and dropped, like a session.abandon, at the end of each user's session, to prevent reloading info that had since been edited by someone. I believe this is possible, but I'm not sure.

    Currently, I use the session array to store both scopes, app-specific and user-specific data. If my session plan is good, I think I will create a class to set/get values from the session also. I appreciate your thoughts. I would like to know how others have handled this type of situation. Thanks.

  • Partial view is not rendering within the main view it's contained in (instead it's rendered on its own page)?

    - by JaJ
    I have a partial view that is contained in a simple index view. When I try to add a new object to my model and update my partial view to display that new object along with the existing objects, the partial view is rendered outside the page that contains it. I'm using AJAX to update the partial view, but what is wrong with the following code?

    Model:

        public class Product
        {
            public int ID { get; set; }
            public string Name { get; set; }
            [DataType(DataType.Currency)]
            public decimal Price { get; set; }
        }

        public class BoringStoreContext
        {
            List<Product> results = new List<Product>();
            public BoringStoreContext()
            {
                Products = new List<Product>();
                Products.Add(new Product() { ID = 1, Name = "Sure", Price = (decimal)(1.10) });
                Products.Add(new Product() { ID = 2, Name = "Sure2", Price = (decimal)(2.10) });
            }
            public List<Product> Products { get; set; }
        }

        public class ProductIndexViewModel
        {
            public Product NewProduct { get; set; }
            public IEnumerable<Product> Products { get; set; }
        }

    Index.cshtml view:

        @model AjaxPartialPageUpdates.Models.ProductIndexViewModel

        @using (Ajax.BeginForm("Index_AddItem", new AjaxOptions { UpdateTargetId = "productList" }))
        {
            <div>
                @Html.LabelFor(model => model.NewProduct.Name)
                @Html.EditorFor(model => model.NewProduct.Name)
            </div>
            <div>
                @Html.LabelFor(model => model.NewProduct.Price)
                @Html.EditorFor(model => model.NewProduct.Price)
            </div>
            <div>
                <input type="submit" value="Add Product" />
            </div>
        }

        <div id='productList'>
            @{ Html.RenderPartial("ProductListControl", Model.Products); }
        </div>

    ProductListControl.cshtml partial view:

        @model IEnumerable<AjaxPartialPageUpdates.Models.Product>

        <table>
            <!-- Render the table headers. -->
            <tr>
                <th>Name</th>
                <th>Price</th>
            </tr>
            <!-- Render the name and price of each product. -->
            @foreach (var item in Model)
            {
                <tr>
                    <td>@Html.DisplayFor(model => item.Name)</td>
                    <td>@Html.DisplayFor(model => item.Price)</td>
                </tr>
            }
        </table>

    Controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                BoringStoreContext db = new BoringStoreContext();
                ProductIndexViewModel viewModel = new ProductIndexViewModel
                {
                    NewProduct = new Product(),
                    Products = db.Products
                };
                return View(viewModel);
            }

            public ActionResult Index_AddItem(ProductIndexViewModel viewModel)
            {
                BoringStoreContext db = new BoringStoreContext();
                db.Products.Add(viewModel.NewProduct);
                return PartialView("ProductListControl", db.Products);
            }
        }

  • JSF - Random Number using Beans (JAVA)

    - by Alex Encore Tr
    I am trying to create a JSF application which, upon page refresh, increments a hit counter and generates two random numbers. What should be displayed in the window may look something like this: "On your roll x you have thrown x and x". For this program I decided to create two beans, one to hold the page refresh counter and one to generate a random number. Those look like this for the moment:

    CounterBean.java:

        package diceroll;

        public class CounterBean {
            int count = 0;

            public CounterBean() {
            }

            public void setCount(int count) {
                this.count = count;
            }

            public int getCount() {
                count++;
                return count;
            }
        }

    RandomNumberBean.java:

        package diceroll;

        import java.util.Random;

        public class RandomNumberBean {
            int rand = 0;
            Random r = new Random();

            public RandomNumberBean() {
                rand = r.nextInt(6);
            }

            public void setNextInt(int rand) {
                this.rand = rand;
            }

            public int getNextInt() {
                return rand;
            }
        }

    I have then created an index.jsp to display the above message:

        <html>
        <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f"%>
        <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h"%>
        <f:view>
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
            <title>Roll the Dice</title>
        </head>
        <body>
            <h:form>
                <p>
                    On your roll # <h:outputText value="#{CounterBean.count} " />
                    you have thrown <h:outputText value="#{RandomNumberBean.rand}" /> and
                    <h:outputText value="#{RandomNumberBean.rand} " />
                </p>
            </h:form>
        </body>
        </f:view>
        </html>

    However, when I run the application, I get the following message:

        org.apache.jasper.el.JspPropertyNotFoundException: /index.jsp(14,20)
        '#{RandomNumberBean.rand}' Property 'rand' not found on type diceroll.RandomNumberBean
        Caused by: org.apache.jasper.el.JspPropertyNotFoundException - /index.jsp(14,20)
        '#{RandomNumberBean.rand}' Property 'rand' not found on type diceroll.RandomNumberBean

    I suppose there's a mistake in my faces-config.xml file, so I will post this here as well, in case somebody can provide some help.

    faces-config.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <faces-config xmlns="http://java.sun.com/xml/ns/javaee"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd"
            version="2.0">
            <managed-bean>
                <managed-bean-name>CounterBean</managed-bean-name>
                <managed-bean-class>diceroll.CounterBean</managed-bean-class>
                <managed-bean-scope>session</managed-bean-scope>
            </managed-bean>
            <managed-bean>
                <managed-bean-name>RandomNumberBean</managed-bean-name>
                <managed-bean-class>diceroll.RandomNumberBean</managed-bean-class>
                <managed-bean-scope>session</managed-bean-scope>
            </managed-bean>
        </faces-config>

  • Why is my emit not getting called?

    - by cRaZiRiCaN
    The client and server connect just fine. For some reason the emit from my client is not firing correctly. I am trying to get testEmit and testEmit2 working. This is my server:

        express = require 'express'
        mongo = require 'mongodb'

        app = express()
        server = (require 'http').createServer(app)
        io = (require 'socket.io').listen(server)
        server.listen(8080)

        app.use(express.static(__dirname + '/public'))

        # db = new mongo.Db("documentsdb", new mongo.Server("localhost", 27017, auto_reconnect: true), {safe:true})

        io.sockets.on 'connection', (socket) ->
          console.log 'Socket.io is connected!'

          # This returns an array of documents sorted by date in decreasing order. (Most recent documents first.)
          socket.on 'loadRecentDocuments', ->
            console.log 'Loading most recent documents.'
            db.collection 'documents', (err, collection) ->
              collection.find().sort(dateAdded: -1).toArray (err, documents) ->
                # This emit is received at index.html where a javascript function sendDocuments manages the documents.
                socket.emit 'sendDocuments', documents
            return

        # The index.html provides the code data from the search box via a javascript.
        io.sockets.on 'findDocuments', (code) ->
          # Returns an array of documents with the corresponding class code.
          documentCodeToSearch = code
          console.log 'Retreaving documents with code: ' + documentCodeToSearch
          db.collection 'documents', (err, collection) ->
            collection.find(code: documentCodeToSearch).toArray (err, documents) ->
              socket.emit 'sendDocuments', documents
          return

        # Uploads a document to the server. documentData is sent via javascript from submit.html
        io.sockets.on 'addDocument', (documentData) ->
          console.log 'Adding document: ' + documentData
          db.collection 'documents', (err, collection) ->
            collection.insert documentData, safe: true
          return

        # Test socket.io
        io.sockets.on 'testEmit', ->
          console.log('Emit recieved.')
          socket.emit 'testEmit2', 'caca'
          return

        app.listen 1337
        console.log "Listening on port 1337..."

    This is my client:

        <!doctype HTML>
        <html>
        <head>
            <title>ProjectShare</title>
            <script src="http://localhost:8080/socket.io/socket.io.js"></script>
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
            <script>
                // Make sure DOM is ready before mucking around.
                $(document).ready(function () {
                    console.log('jQuery entered!');
                    var socket = io.connect('http://localhost:8080');
                    socket.emit('testEmit');
                    socket.on('testEmit2', function (data) {
                        console.log('Emit recieved at browser.');
                        console.log(data);
                    });
                    console.log('jQuery exit.');
                });
            </script>
        </head>
        <body>
            <ol>
                <li><a href="index.html">ProjectShare</a></li>
                <li><a href="guidelines.html">Guidelines</a></li>
                <li><a href="upload.html">Upload</a></li>
                <li>
                    <form>
                        <input type="search" placeholder="enter class code" />
                        <input type="submit" value="Go" />
                    </form>
                </li>
            </ol>
            <ol id="documentList">
            </ol>
        </body>
        </html>
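    A minimal JavaScript sketch of the likely fix (hedged, based only on the code shown): custom events such as 'testEmit' arrive on each connected socket, so they must be registered with socket.on inside the 'connection' handler; io.sockets.on is only ever triggered for 'connection' itself:

        io.sockets.on('connection', function (socket) {
            // Per-socket events belong here, not on io.sockets.
            socket.on('testEmit', function () {
                console.log('Emit received.');
                socket.emit('testEmit2', 'caca');
            });
        });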

  • JQuery works on JsFiddle but not in project

    - by DanielNova
    Here is the jsFiddle I created: http://jsfiddle.net/ShHVy/

    Everything works fine there: different data is displayed in the column row to the right, as I wish. However, even having this exact code in my project won't make it work. The page in question is a popup view and it looks like this:

        <style type="text/css">
            .highlighted { background-color: Orange; color: White; }
        </style>

        <script>
            var chosen = [];
            $("td").click(function () {
                var idx = $(this).index() + 1;
                $("td:nth-child(" + idx + ")").removeClass("highlighted");
                $(this).addClass("highlighted");
                chosen[idx] = $(this).parent("tr").index();
            });

            var data = {
                "Differdange": ["Differdange 1", "Differdange 2", "Differdange 3", "Differdange 4"],
                "Dippach": ["Dippach 1", "Dippach 2", "Dippach 3", "Dippach 4"]
            };

            function pushData(id, col) {
                $("#datachange table td:nth-child(" + 2 + ")").each(function (i, v) {
                    $(this).html(data[id][i]);
                });
            }

            $(function () {
                $("#datachange td").click(function () {
                    var idx = $(this).index() + 1;
                    $("td:nth-child(" + idx + ")").removeClass("highlighted");
                    $(this).addClass("highlighted");
                    pushData($(".highlighted").html(), 2);
                });
            });
        </script>

        <html>
        <head><title>Table Data Change</title></head>
        <body id="datachange" class="demo">
            <table>
                <thead>
                    <tr>
                        <th>ID</th>
                        <th>DATA</th>
                    </tr>
                </thead>
                <tbody>
                    <tr><td>Differdange</td><td></td></tr>
                    <tr><td>Dippach</td><td></td></tr>
                    <tr><td>Dippach</td><td></td></tr>
                    <tr><td>Differdange</td><td></td></tr>
                </tbody>
            </table>
        </body>
        </html>

    Can anyone tell me why this small piece of jQuery doesn't work on mine? (It's nothing to do with libraries, as the top "td" function works 100% fine.)
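    A minimal sketch of one common culprit in "works in the fiddle, not in the project" cases (an assumption, not a confirmed diagnosis): jsFiddle wraps code in an onload handler by default, while in the project the first $("td") binding runs before the <table> has been parsed and so attaches to nothing. Moving all the bindings inside one ready handler reproduces the fiddle's setup:

        $(function () {
            var chosen = [];
            // Bind after the DOM exists, so $("...td") actually finds the cells.
            $("#datachange td").click(function () {
                var idx = $(this).index() + 1;
                $("td:nth-child(" + idx + ")").removeClass("highlighted");
                $(this).addClass("highlighted");
                chosen[idx] = $(this).parent("tr").index();
                pushData($(".highlighted").html(), 2);
            });
        });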

  • How to maintain GridPane's fixed-size after adding elements dynamically

    - by Eviatar G.
    I need to create a board game that can change dynamically. Its size can be 5x5, 6x6, 7x7 or 8x8. I am using JavaFX with NetBeans and Scene Builder for the GUI. When the user chooses a board size greater than 5x5, this is what happens: [screenshot]. This is the template in Scene Builder before adding cells dynamically: [screenshot].

    To every cell in the GridPane I am adding a StackPane plus a label with the cell number:

        @FXML GridPane boardGame;

        public void CreateBoard() {
            int boardSize = m_Engine.GetBoard().GetBoardSize();
            int num = boardSize * boardSize;
            int maxColumns = m_Engine.GetNumOfCols();
            int maxRows = m_Engine.GetNumOfRows();

            for (int row = 0; row < maxRows; row++) {
                for (int col = maxColumns - 1; col >= 0; col--) {
                    StackPane stackPane = new StackPane();
                    stackPane.setPrefSize(150.0, 200.0);
                    stackPane.getChildren().add(new Label(String.valueOf(num)));
                    boardGame.add(stackPane, col, row);
                    num--;
                }
            }
            boardGame.setGridLinesVisible(true);
            boardGame.autosize();
        }

    The problem is that the stack panes on the GridPane are getting smaller. I tried to set equal minimum and maximum sizes on them, but it didn't help; they still get smaller. I searched the web but didn't really find the same problem as mine. The only similar problem I found is here: Dynamically add elements to a fixed-size GridPane in JavaFX. But the suggestion there is to use a TilePane, and I need to use a GridPane, because this is a board game and it is easier with a GridPane when I need to do tasks such as getting the cell at row = 1 and column = 2, for example.

    EDIT: I removed the GridPane from the FXML and created it manually in the controller, but now it prints a blank board:

        @FXML GridPane boardGame;

        public void CreateBoard() {
            int boardSize = m_Engine.GetBoard().GetBoardSize();
            int num = boardSize * boardSize;
            int maxColumns = m_Engine.GetNumOfCols();
            int maxRows = m_Engine.GetNumOfRows();

            boardGame = new GridPane();
            boardGame.setAlignment(Pos.CENTER);
            Collection<StackPane> stackPanes = new ArrayList<StackPane>();

            for (int row = 0; row < maxRows; row++) {
                for (int col = maxColumns - 1; col >= 0; col--) {
                    StackPane stackPane = new StackPane();
                    stackPane.setPrefSize(150.0, 200.0);
                    stackPane.getChildren().add(new Label(String.valueOf(num)));
                    boardGame.add(stackPane, col, row);
                    stackPanes.add(stackPane);
                    num--;
                }
            }

            this.buildGridPane(boardSize);
            boardGame.setGridLinesVisible(true);
            boardGame.autosize();
            boardGamePane.getChildren().addAll(stackPanes);
        }

        public void buildGridPane(int i_NumOfRowsAndColumns) {
            RowConstraints rowConstraint;
            ColumnConstraints columnConstraint;
            for (int index = 0; index < i_NumOfRowsAndColumns; index++) {
                rowConstraint = new RowConstraints(3, Control.USE_COMPUTED_SIZE,
                        Double.POSITIVE_INFINITY, Priority.ALWAYS, VPos.CENTER, true);
                boardGame.getRowConstraints().add(rowConstraint);
                columnConstraint = new ColumnConstraints(3, Control.USE_COMPUTED_SIZE,
                        Double.POSITIVE_INFINITY, Priority.ALWAYS, HPos.CENTER, true);
                boardGame.getColumnConstraints().add(columnConstraint);
            }
        }

  • Change columns order in bootstrap 3

    - by TNK
    I am trying to change the Bootstrap column order between desktop and mobile screens.

        <div class="container">
            <div class="row">
                <section class="content-secondary col-sm-4">
                    <h4>Don't Miss!</h4>
                    <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p>
                    <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p>
                    <h4>Check it out</h4>
                    <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p>
                    <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p>
                    <h4>Finally</h4>
                    <p>Suspendisse et arcu felis, ac gravida turpis. Suspendisse potenti. Ut porta rhoncus ligula, sed fringilla felis feugiat eget. In non purus quis elit iaculis tincidunt. Donec at ultrices est.</p>
                    <p><a class="btn btn-primary pull-right" href="#">Read more <span class="icon fa fa-arrow-circle-right"></span></a></p>
                </section>
                <section class="content-primary col-sm-8">
                    <div class="main-content"
                    ...........
                    ...........
                    </div>
                </section>
            </div><!-- /.row -->
        </div><!-- /.container -->

    At viewport >= SM, the column order should be:

        | content-secondary | content-primary |

    At viewport < SM, the column order should be:

        | content-primary   |
        | content-secondary |

    I tried something like this, using the 'pull' and 'push' classes:

        <div class="row">
            <section class="content-secondary col-sm-4 col-sm-push-4">
                Content ......
            </section>
            <section class="content-primary col-sm-8 col-sm-pull-8">
                Content ......
            </section>
        </div>

    But this is not working for me. Hope someone will help me out. Thank you.

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why

    Goal 1 = Help others: have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same-day response. Goal 2 = Help projects: not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares; you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'.

    B. DON'T: Leave unread email lurking around

    Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See the later section on how to achieve this.

    Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder.

    Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely, so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not superhuman; you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email-related action (e.g. create a calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference, etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved.

    Sometimes the thing that needs to be done based on receiving the email, you can (and want to) do immediately after reading it. That is fine: you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See the later section for how.

    C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to

    Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed):

    "personal" – Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules.

    "External" and "ViaBlog" – The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft), and they are filed accordingly.

    "invites" – I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check), i.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: this folder is the only one that shows the total number of items in it, instead of the total unread.

    "Inbox" – The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt.

    "ToMe++" – Email where I am on the TO line, but there are other recipients as well (on the TO or CC line).

    "CC" – Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me, so they are not as urgent (and if they are and someone follows up with me, they'll receive a link to this).

    "@ XYZ" – Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year.

    "Z Mass", with subfolders under it per distribution list (DL) – Emails to aliases that are about topics I am interested in, but that I don't formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain.

    "Admin" folder, which resides under the "Z Mass" folder – Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of.

    "BCC" folder, which resides under "Z Mass" – Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post).

    When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with) can we move on to the @XYZ and then the CC folders. Only when those are read can we go on to the remaining folders. Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

    D. DO: Use Outlook search folders in combination with categories

    As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane), the email becomes read and stays read, and you have to decide whether it can take 2 minutes to deal with for good, right now, or it will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:

    ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.

    Action – no email is required, but I need to do something.

    ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.

    WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track.

    SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.

    For all these 'next steps' use Outlook categories (right-click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action.

    E. DO: Get into a routine

    Now that you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes); spend around 30 minutes at the end of each day processing the most urgent items in the search folders; spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

    F. Other resources

    Official Outlook help on: creating custom actions rules, managing e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to Folder. If you've read "Getting Things Done", it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading. He also described how he sets up 2 Outlook rules ('invites' and 'external'), which I also use – worth reading that too. Comments about this post are welcome at the original blog.

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign a category, or use a shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now that you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing the most urgent items in the search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up two Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • NoSQL with MongoDB, NoRM and ASP.NET MVC

    - by shiju
     In this post, I will give an introduction to working with NoSQL and document databases using MongoDB, NoRM and ASP.NET MVC 2. NoSQL and Document Database The NoSQL movement is getting big attention this year, and people are widely talking about document databases and NoSQL along with web application scalability. According to Wikipedia, "NoSQL is a movement promoting a loosely defined class of non-relational data stores that break with a long history of relational databases. These data stores may not require fixed table schemas, usually avoid join operations and typically scale horizontally. Academics and papers typically refer to these databases as structured storage". Document databases are schema free, so you can focus on the problem domain and don't have to worry about updating the schema when your domain is evolving. This truly enables domain-driven development. One key pain point of a relational database is keeping the database schema in sync with your domain entities as the domain evolves. There are lots of NoSQL implementations available, and both CouchDB and MongoDB got my attention. While evaluating both CouchDB and MongoDB, I found that CouchDB can’t perform dynamic queries, so I picked MongoDB over CouchDB. There are many .NET drivers available for the MongoDB document database. MongoDB MongoDB is an open source, scalable, high-performance, schema-free, document-oriented database written in the C++ programming language. It has been developed since October 2007 by 10gen. MongoDB stores your data in binary JSON (BSON) format. MongoDB has been getting a lot of attention and you can see some of the production deployments here - http://www.mongodb.org/display/DOCS/Production+Deployments NoRM – C# driver for MongoDB NoRM is a C# driver for MongoDB with LINQ support. The NoRM project is available on GitHub at http://github.com/atheken/NoRM. Demo with ASP.NET MVC I will show a simple demo with MongoDB, NoRM and ASP.NET MVC. To work with MongoDB and NoRM, do the following steps: Download the MongoDB database. For Windows 32-bit, download from http://downloads.mongodb.org/win32/mongodb-win32-i386-1.4.1.zip and for Windows 64-bit, download from http://downloads.mongodb.org/win32/mongodb-win32-x86_64-1.4.1.zip. The zip contains mongod.exe to run the server and mongo.exe for the client. Download the NoRM driver for MongoDB at http://github.com/atheken/NoRM Create a directory called C:\data\db. This is the default location of the MongoDB database; you can override this behavior. Run C:\Mongo\bin\mongod.exe. This will start the MongoDB server. Now I am going to demonstrate how to program with MongoDB and NoRM in an ASP.NET MVC application. Let’s write a domain class public class Category {            [MongoIdentifier]public ObjectId Id { get; set; } [Required(ErrorMessage = "Name Required")][StringLength(25, ErrorMessage = "Must be less than 25 characters")]public string Name { get; set;}public string Description { get; set; }}  ObjectId is a NoRM type that represents a MongoDB ObjectId. NoRM will automatically update the Id because it is decorated with the MongoIdentifier attribute. The next step is to create a MongoSession class, which will handle all interactions with MongoDB. 
internal class MongoSession<TEntity> : IDisposable{    private readonly MongoQueryProvider provider;     public MongoSession()    {        this.provider = new MongoQueryProvider("Expense");    }     public IQueryable<TEntity> Queryable    {        get { return new MongoQuery<TEntity>(this.provider); }    }     public MongoQueryProvider Provider    {        get { return this.provider; }    }     public void Add<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Insert(item);    }     public void Dispose()    {        this.provider.Server.Dispose();     }    public void Delete<T>(T item) where T : class, new()    {        this.provider.DB.GetCollection<T>().Delete(item);    }     public void Drop<T>()    {        this.provider.DB.DropCollection(typeof(T).Name);    }     public void Save<T>(T item) where T : class,new()    {        this.provider.DB.GetCollection<T>().Save(item);                }  }    The MongoSession constructor creates an instance of MongoQueryProvider, which supports LINQ expressions, against a database named "Expense". If the database exists it will be used; otherwise a new database named "Expense" will be created. The Save method can be used for both Insert and Update operations. If the object is a new one, it will insert a new document; otherwise it will update the document with the given ObjectId.  Let’s create ASP.NET MVC controller actions for CRUD operations for the domain class Category public class CategoryController : Controller{ //Index - Get the category listpublic ActionResult Index(){    using (var session = new MongoSession<Category>())    {        var categories = session.Queryable.AsEnumerable<Category>();        return View(categories);    }} //edit a single category[HttpGet]public ActionResult Edit(ObjectId id) {     using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == id)              .FirstOrDefault();         return View("Save",category);    } }// GET: /Category/Create[HttpGet]public ActionResult Create(){    var category = new Category();    return View("Save", category);}//insert or update a category[HttpPost]public ActionResult Save(Category category){    if (!ModelState.IsValid)    {        return View("Save", category);    }    using (var session = new MongoSession<Category>())    {        session.Save(category);        return RedirectToAction("Index");    } }//Delete category[HttpPost]public ActionResult Delete(ObjectId Id){    using (var session = new MongoSession<Category>())    {        var category = session.Queryable              .Where(c => c.Id == Id)              .FirstOrDefault();        session.Delete(category);        var categories = session.Queryable.AsEnumerable<Category>();        return PartialView("CategoryList", categories);    } }        }  You can easily work with MongoDB through NoRM and use it in ASP.NET MVC applications. I have created a repository on CodePlex at http://mongomvc.codeplex.com and you can download the source code of the ASP.NET MVC application from here
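As a quick aside (not part of the original post), here is a minimal usage sketch of querying through the MongoSession<TEntity> wrapper above with NoRM's LINQ provider; the name filter value is just an illustration:

using System;
using System.Linq;

public static class CategoryQueries
{
    public static void PrintMatchingCategories(string name)
    {
        // MongoSession<T> opens a connection to the "Expense" database (see above)
        using (var session = new MongoSession<Category>())
        {
            // NoRM's LINQ provider translates this expression into a MongoDB query
            var matches = session.Queryable
                .Where(c => c.Name == name)
                .ToList();

            foreach (var category in matches)
            {
                Console.WriteLine("{0}: {1}", category.Name, category.Description);
            }
        }
    }
}

Because MongoSession<TEntity> implements IDisposable, the using block ensures the underlying server connection is released.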

    Read the article

  • Mapping UrlEncoded POST Values in ASP.NET Web API

    - by Rick Strahl
    If there's one thing that's a bit unexpected in ASP.NET Web API, it's the limited support for mapping url encoded POST data values to simple parameters of ApiController methods. When I first looked at this I thought I was doing something wrong, because it seems mighty odd that you can bind query string values to parameters by name, but can't bind POST values to parameters in the same way. To demonstrate here's a simple example. If you have a Web API method like this:[HttpGet] public HttpResponseMessage Authenticate(string username, string password) { …} and then hit with a URL like this: http://localhost:88/samples/authenticate?Username=ricks&Password=sekrit it works just fine. The query string values are mapped to the username and password parameters of our API method. But if you now change the method to work with [HttpPost] instead like this:[HttpPost] public HttpResponseMessage Authenticate(string username, string password) { …} and hit it with a POST HTTP Request like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Content-type: application/x-www-form-urlencoded Content-Length: 30 Username=ricks&Password=sekrit you'll find that while the request works, it doesn't actually receive the two string parameters. The username and password parameters are null and so the method is definitely going to fail. When I mentioned this over Twitter a few days ago I got a lot of responses back of why I'd want to do this in the first place - after all HTML Form submissions are the domain of MVC and not WebAPI which is a valid point. However, the more common use case is using POST Variables with AJAX calls. The following is quite common for passing simple values:$.post(url,{ Username: "Rick", Password: "sekrit" },function(result) {…}); but alas that doesn't work. How ASP.NET Web API handles Content Bodies Web API supports parsing content data in a variety of ways, but it does not deal with multiple posted content values. In effect you can only post a single content value to a Web API Action method. That one parameter can be very complex and you can bind it in a variety of ways, but ultimately you're tied to a single POST content value in your parameter definition. While it's possible to support multiple parameters on a POST/PUT operation, only one parameter can be mapped to the actual content - the rest have to be mapped to route values or the query string. Web API treats the whole request body as one big chunk of data that is sent to a Media Type Formatter that's responsible for de-serializing the content into whatever value the method requires. The restriction comes from async nature of Web API where the request data is read only once inside of the formatter that retrieves and deserializes it. Because it's read once, checking for content (like individual POST variables) first is not possible. However, Web API does provide a couple of ways to access the form POST data: Model Binding - object property mapping to bind POST values FormDataCollection - collection of POST keys/values ModelBinding POST Values - Binding POST data to Object Properties The recommended way to handle POST values in Web API is to use Model Binding, which maps individual urlencoded POST values to properties of a model object provided as the parameter. 
Model binding requires a single object as input to be bound to the POST data, with each POST key that matches a property name (including nested properties like Address.Street) being mapped and updated, including automatic type conversion of simple types. This is a very nice feature - and a familiar one from MVC - that makes it very easy to have model objects mapped directly from inbound data. The obvious drawback with Model Binding is that you need a model for it to work: You have to provide a strongly typed object that can receive the data and this object has to map the inbound data. To rewrite the example above to use ModelBinding I have to create a class that maps the properties that I need as parameters:public class LoginData { public string Username { get; set; } public string Password { get; set; } } and then accept the data like this in the API method:[HttpPost] public HttpResponseMessage Authenticate(LoginData login) { string username = login.Username; string password = login.Password; … } This works fine, mapping the POST values to the properties of the login object. As a side benefit of this method definition, the method now also allows posting of JSON or XML to the same endpoint. If I change my request to send JSON like this: POST http://localhost:88/samples/authenticate HTTP/1.1 Host: localhost:88 Accept: application/json Content-type: application/json Content-Length: 40 {"Username":"ricks","Password":"sekrit"} it works as well and transparently, courtesy of the nice Content Negotiation features of Web API. There's nothing wrong with using Model binding and in fact it's a common practice to use (view) model objects for inputs coming back from the client and mapping them into these models. But it can be kind of a hassle if you have AJAX applications with a ton of backend hits, especially if many methods are very atomic and focused and don't effectively require a model or view. You don't always have to pass structured data; sometimes there are just a couple of simple values that need to be passed. If all you need is to pass a couple of operational parameters, creating a view model object just for parameter purposes seems like overkill. Maybe you can use the query string instead (if that makes sense), but if you can't then you can often end up with a plethora of 'message objects' that serve no purpose other than to make Model Binding work. Note that you can accept multiple parameters with ModelBinding so the following would still work:[HttpPost] public HttpResponseMessage Authenticate(LoginData login, string loginDomain) but only the object will be bound to POST data. As long as loginDomain comes from the querystring or route data this will work. Collecting POST values with FormDataCollection Another more dynamic approach to handling POST values is to collect POST data into a FormDataCollection. FormDataCollection is a very basic key/value collection (like FormCollection in MVC and Request.Form in ASP.NET in general) from which you read the values out individually by querying each key. [HttpPost] public HttpResponseMessage Authenticate(FormDataCollection form) { var username = form.Get("Username"); var password = form.Get("Password"); …} The downside to this approach is that it's not strongly typed, you have to handle type conversions on non-string parameters, and it gets a bit more complicated to test such a setup, as you have to seed a FormDataCollection with data. On the other hand it's flexible and easy to use, and string parameters in particular are easy to deal with. 
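To make that type-conversion caveat concrete, here is a hedged sketch (the Timeout and RememberMe field names are invented for illustration, not part of the original article) of pulling non-string values out of a FormDataCollection:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Formatting;
using System.Web.Http;

public class SessionController : ApiController
{
    [HttpPost]
    public HttpResponseMessage Login(FormDataCollection form)
    {
        // Strings come straight out of the collection...
        string username = form.Get("Username");

        // ...anything else needs hand-rolled conversion, plus guarding
        // against missing or malformed values.
        int timeout;
        if (!int.TryParse(form.Get("Timeout"), out timeout))
            timeout = 30; // fall back to a default

        bool rememberMe = string.Equals(form.Get("RememberMe"), "true",
                                        StringComparison.OrdinalIgnoreCase);

        // ...authenticate here (omitted)...
        return Request.CreateResponse(HttpStatusCode.OK,
            string.Format("user={0}, timeout={1}, remember={2}",
                          username, timeout, rememberMe));
    }
}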
It's also dynamic, so if the client sends you a variety of combinations of values on which you make operating decisions, this is much easier to work with than a strongly typed object that would have to account for all possible values up front. The downside is that the code looks old school and isn't as self-documenting as a parameter list or object parameter would be. Nevertheless it's totally functional and a viable choice for collecting POST values. What about [FromBody]? Web API also has a [FromBody] attribute that can be assigned to parameters. If you have multiple parameters on a Web API method signature you can use [FromBody] to specify which one will be parsed from the POST content. Unfortunately it's not terribly useful, as it only returns content in raw format and requires a totally non-standard format ("=content") to specify your content. For more info on how FromBody works and several related issues to how POST data is mapped, you can check out Mike Stall's post: How WebAPI does Parameter Binding It's not really clear where the Web API team thought [FromBody] would be a good fit, other than as a down and dirty way to send a full string buffer. Extending Web API to make multiple POST Vars work? Don't think so Clearly there's no native support for multiple POST variables being mapped to parameters, which is a bit of a bummer. I know in my own work on one project my customer actually found this to be a real sticking point in their AJAX backend work, and we ended up not using Web API and using MVC JSON features instead. That's kind of sad because Web API is supposed to be the proper solution for AJAX backends. With all of ASP.NET Web API's extensibility you'd think there would be some way to build this functionality on our own, but after spending a bit of time digging and asking some of the experts from the team and Web API community I didn't hear anything that even suggests that this is possible. From what I could find I'd say it's not possible, primarily because Web API's routing engine does not account for POST variable mapping. This means [HttpPost] methods with url encoded POST buffers are not mapped to the parameters of the endpoint, and so the routes would never even trigger a request that could be intercepted. Once the routing doesn't work there's not much that can be done. If somebody has an idea how this could be accomplished I would love to hear about it. Do we really need multi-value POST mapping? I think that POST value mapping is a feature that one would expect any API tool to have. If you look at common APIs out there like Flickr and Google Maps etc. they all work with POST data. POST data is very prominent, much more so than JSON inputs, and so supporting as many input options as possible would seem to be crucial. All that aside, Web API does provide very nice features with Model Binding that allow you to capture many POST variables easily enough, and logistically this will let you build whatever you need with POST data of all shapes as long as you map objects. But having to have an object for every operation that receives a data input is going to take its toll in heavy AJAX applications, with a lot of types created that do nothing more than act as parameter containers. I also think that POST variable mapping is an expected behavior, and Web API's non-support will likely result in many, many questions like this one: How do I bind a simple POST value in ASP.NET WebAPI RC? with no clear answer. 
I hope for V.next of WebAPI Microsoft will consider this a feature that's worth adding. Related Articles Passing multiple POST parameters to Web API Controller Methods Mike Stall's post: How Web API does Parameter Binding Where does ASP.NET Web API Fit? © Rick Strahl, West Wind Technologies, 2005-2012 Posted in Web Api

    Read the article

  • SQL Server and Hyper-V Dynamic Memory Part 2

    - by SQLOS Team
    Part 1 of this series was an introduction and overview of Hyper-V Dynamic Memory. This part looks at SQL Server memory management and how the SQL engine responds to changing OS memory conditions.   Part 2: SQL Server Memory Management As with any Windows process, sqlserver.exe has a virtual address space (VAS) of 4GB on 32-bit and 8TB in 64-bit editions. Pages in its VAS are mapped to pages in physical memory when the memory is committed and referenced for the first time. The collection of VAS pages that have been recently referenced is known as the Working Set. How and when SQL Server allocates virtual memory and grows its working set depends on the memory model it uses. SQL Server supports three basic memory models:   1. Conventional Memory Model   The Conventional model is the default SQL Server memory model and has the following properties: - Dynamic - can grow or shrink its working set in response to load and external (operating system) memory conditions. - OS uses 4K pages – (not to be confused with SQL Server “pages” which are 8K regions of committed memory).- Pageable - Can be paged out to disk by the operating system.   2. Locked Page Model The locked page memory model is set when SQL Server is started with "Lock Pages in Memory" privilege*. It has the following characteristics: - Dynamic - can grow or shrink its working set in the same way as the Conventional model.- OS uses 4K pages - Non-Pageable – When memory is committed it is locked in memory, meaning that it will remain backed by physical memory and will not be paged out by the operating system. A common misconception is to interpret "locked" as non-dynamic. A SQL Server instance using the locked page memory model will grow and shrink (allocate memory and release memory) in response to changing workload and OS memory conditions in the same way as it does with the conventional model.   This is an important consideration when we look at Hyper-V Dynamic Memory – “locked” memory works perfectly well with “dynamic” memory.   * Note in “Denali” (Standard Edition and above), and in SQL 2008 R2 64-bit (Enterprise and above editions) the Lock Pages in Memory privilege is all that is required to set this model. In 2008 R2 64-Bit standard edition it also requires trace flag 845 to be set, in 2008 R2 32-bit editions it requires sp_configure 'awe enabled' 1.   3. Large Page Model The Large page model is set using trace flag 834 and potentially offers a small performance boost for systems that are configured with large pages. It is characterized by: - Static - memory is allocated at startup and does not change. - OS uses large (>2MB) pages - Non-Pageable The large page model is supported with Hyper-V Dynamic Memory (and Hyper-V also supports large pages), but you get no benefit from using Dynamic Memory with this model since SQL Server memory does not grow or shrink. The rest of this article will focus on the locked and conventional SQL Server memory models.   When does SQL Server grow? For “dynamic” configurations (Conventional and Locked memory models), the sqlservr.exe process grows – allocates and commits memory from the OS – in response to a workload. As much memory is allocated as is required to optimally run the query and buffer data for future queries, subject to limitations imposed by:   - SQL Server max server memory setting. If this configuration option is set, the buffer pool is not allowed to grow to more than this value. 
In SQL Server 2008 this value represents single page allocations, and in “Denali” it represents any size page allocations and also managed CLR procedure allocations.   - Memory signals from OS. The operating system sets a signal on memory resource notification objects to indicate whether it has memory available or whether it is low on available memory. If there is only 32MB free for every 4GB of memory a low memory signal is set, which continues until 64MB/4GB is free. If there is 96MB/4GB free the operating system sets a high memory signal. SQL Server only allocates memory when the high memory signal is set.   To summarize, for SQL Server to grow you need three conditions: a workload, max server memory setting higher than the current allocation, high memory signals from the OS.    When does SQL Server shrink caches? SQL Server as a rule does not like to return memory to the OS, but it will shrink its caches in response to memory pressure. Memory pressure can be divided into “internal” and “external”.   - External memory pressure occurs when the operating system is running low on memory and low memory signals are set. The SQL Server Resource Monitor checks for low memory signals approximately every 5 seconds and it will attempt to free memory until the signals stop.   To free memory SQL Server does the following: ·         Frees unused memory. ·         Notifies Memory Manager Clients to release memory o   Caches – Free unreferenced cache objects. o   Buffer pool - Based on oldest access times.   The freed memory is released back to the operating system. This process continues until the low memory resource notifications stop.    - Internal memory pressure occurs when the size of different caches and allocations increase but the SQL Server process needs to keep its total memory within a target value. For example if max server memory is set and certain caches are growing large, it will cause SQL to free memory for re-use internally, but not to release memory back to the OS. If you lower the value of max server memory you will generate internal memory pressure that will cause SQL to release memory back to the OS.    Memory pressure handling has not changed much since SQL 2005 and it was described in detail in a blog post by Slava Oks.   Note that SQL Server Express is an exception to the above behavior. Unlike other editions it does not assume it is the most important process running on the system but tries to be more “desktop” friendly. It will empty its working set after a period of inactivity.   How does SQL Server respond to changing OS memory?    In SQL Server 2005 support for Hot-Add memory was introduced. This feature, available in Enterprise and above editions, allows the server to make use of any extra physical memory that was added after SQL Server started. Being able to add physical memory when the system is running is limited to specialized hardware, but with the Hyper-V Dynamic Memory feature, when new memory is allocated to a guest virtual machine, it looks like hot-add physical memory to the guest. What this means is that thanks to the hot-add memory feature, SQL Server 2005 and higher can dynamically grow if more “physical” memory is granted to a guest VM by Hyper-V dynamic memory.   SQL Server checks OS memory every second and dynamically adjusts its “target” (based on available OS memory and max server memory) accordingly.   In “Denali” Standard Edition will also have sqlserver.exe support for hot-add memory when running virtualized (i.e. 
detecting and acting on Hyper-V Dynamic Memory allocations).   How does a SQL Server workload in a guest VM impact Hyper-V dynamic memory scheduling?   When a SQL workload causes the sqlserver.exe process to grow its working set, the Hyper-V memory scheduler will detect memory pressure in the guest VM and add memory to it. SQL Server will then detect the extra memory and grow according to workload demand. In our tests we have seen this feedback process cause a guest VM to grow quickly in response to SQL workload - we are still working on characterizing this ramp-up.    How does SQL Server respond when Hyper-V removes memory from a guest VM through ballooning?   If pressure from other VMs causes Hyper-V Dynamic Memory to take memory away from a VM through ballooning (allocating memory with a virtual device driver and returning it to the host OS), Windows Memory Manager will page out unlocked portions of memory and signal low resource notification events. When SQL Server detects these events it will shrink memory until the low memory notifications stop (see the cache shrinking description above).    This raises another question. Can we make SQL Server release memory more readily and hence behave more "dynamically" without compromising performance? In certain circumstances where the application workload is predictable it may be possible to have a job which varies "max server memory" according to need, lowering it when the engine is inactive and raising it before a period of activity. This would have limited applicability but it is something we're looking into.   What Memory Management changes are there in SQL Server “Denali”?   In SQL Server “Denali” (aka SQL11) the Memory Manager has been re-written to be more efficient. The main changes are summarized in this post. An important change with respect to Hyper-V Dynamic Memory support is that the max server memory setting now includes any size page allocations and managed CLR procedure allocations, so it represents a closer approximation to total sqlserver.exe memory usage. This makes it easier to calculate a value for max server memory, which becomes important when configuring virtual machines to work well with Hyper-V Dynamic Memory Startup and Maximum RAM settings.   Another important change is no more AWE or hot-add support for the 32-bit edition. This means if you're running a 32-bit edition of Denali you're limited to a 4GB address space and will not be able to take advantage of dynamically added OS memory that wasn't present when SQL Server started (though Hyper-V Dynamic Memory is still a supported configuration).   In part 3 we’ll develop some best practices for configuring and using SQL Server with Dynamic Memory. Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • ASP.NET MVC ‘Extendable-hooks’ – ControllerActionInvoker class

    - by nmarun
There’s a class ControllerActionInvoker in ASP.NET MVC. This can be used as one of the hook points to allow customization of your application. Watching Brad Wilson's Advanced MP3 from MVC Conf inspired me to write about this class. What MSDN says: “Represents a class that is responsible for invoking the action methods of a controller.” Well, if MSDN says it, I think I can place a fair amount of confidence in what the class does. But just to get to the details, I also looked into the source code for MVC. Seems like the base class Controller is where an IActionInvoker is initialized: 1: protected virtual IActionInvoker CreateActionInvoker() { 2: return new ControllerActionInvoker(); 3: } In the ControllerActionInvoker (the O-O-B behavior), there are different ‘versions’ of the InvokeActionMethod() method that actually call the action method in question and return an instance of type ActionResult. 1: protected virtual ActionResult InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary<string, object> parameters) { 2: object returnValue = actionDescriptor.Execute(controllerContext, parameters); 3: ActionResult result = CreateActionResult(controllerContext, actionDescriptor, returnValue); 4: return result; 5: } I guess that’s enough on the ‘behind-the-scenes’ of this class. Let’s see how we can use this class to hook up extensions. Say I have a requirement that the user should be able to get different renderings of the same output, like html, xml, json, csv and so on. The user will type in the output format in the url and should get the result accordingly. For example: http://site.com/RenderAs/ – renders the default way (the razor view) http://site.com/RenderAs/xml http://site.com/RenderAs/csv … and so on where RenderAs is my controller. There are many ways of doing this and I’m using a custom ControllerActionInvoker class (even though this might not be the best way to accomplish this). For this, my one and only route in the Global.asax.cs is: 1: routes.MapRoute("RenderAsRoute", "RenderAs/{outputType}", 2: new {controller = "RenderAs", action = "Index", outputType = ""}); Here the controller name is ‘RenderAsController’ and the action that’ll get called (always) is the Index action. The outputType parameter will map to the type of output requested by the user (xml, csv…). I intend to display a list of food items for this example. 1: public class Item 2: { 3: public int Id { get; set; } 4: public string Name { get; set; } 5: public Cuisine Cuisine { get; set; } 6: } 7:  8: public class Cuisine 9: { 10: public int CuisineId { get; set; } 11: public string Name { get; set; } 12: } Coming to my ‘RenderAsController’ class, I generate an IList<Item> to represent my model. 1: private static IList<Item> GetItems() 2: { 3: Cuisine cuisine = new Cuisine { CuisineId = 1, Name = "Italian" }; 4: Item item = new Item { Id = 1, Name = "Lasagna", Cuisine = cuisine }; 5: IList<Item> items = new List<Item> { item }; 6: item = new Item {Id = 2, Name = "Pasta", Cuisine = cuisine}; 7: items.Add(item); 8: //... 9: return items; 10: } My action method looks like 1: public IList<Item> Index(string outputType) 2: { 3: return GetItems(); 4: } There are two things that stand out in this action method. The first and most obvious one is that the return type is not of type ActionResult (or one of its derivatives). Instead I’m passing the type of the model itself (IList<Item> in this case). 
We’ll convert this to some type of an ActionResult in our custom controller action invoker class later. The second thing (a little subtle) is that I’m not doing anything with the outputType value that is passed on to this action method. This value will be in the RouteData dictionary and we’ll use this in our custom invoker class as well. It’s time to hook up our invoker class. First, I’ll override the Initialize() method of my RenderAsController class. 1: protected override void Initialize(RequestContext requestContext) 2: { 3: base.Initialize(requestContext); 4: string outputType = string.Empty; 5:  6: // read the outputType from the RouteData dictionary 7: if (requestContext.RouteData.Values["outputType"] != null) 8: { 9: outputType = requestContext.RouteData.Values["outputType"].ToString(); 10: } 11:  12: // my custom invoker class 13: ActionInvoker = new ContentRendererActionInvoker(outputType); 14: } Coming to the main part of the discussion – the ContentRendererActionInvoker class: 1: public class ContentRendererActionInvoker : ControllerActionInvoker 2: { 3: private readonly string _outputType; 4:  5: public ContentRendererActionInvoker(string outputType) 6: { 7: _outputType = outputType.ToLower(); 8: } 9: //... 10: } So the outputType value that was read from the RouteData, which was passed in from the url, is being set here in  a private field. Moving to the crux of this article, I now override the CreateActionResult method. 1: protected override ActionResult CreateActionResult(ControllerContext controllerContext, ActionDescriptor actionDescriptor, object actionReturnValue) 2: { 3: if (actionReturnValue == null) 4: return new EmptyResult(); 5:  6: ActionResult result = actionReturnValue as ActionResult; 7: if (result != null) 8: return result; 9:  10: // This is where the magic happens 11: // Depending on the value in the _outputType field, 12: // return an appropriate ActionResult 13: switch (_outputType) 14: { 15: case "json": 16: { 17: JavaScriptSerializer serializer = new JavaScriptSerializer(); 18: string json = serializer.Serialize(actionReturnValue); 19: return new ContentResult { Content = json, ContentType = "application/json" }; 20: } 21: case "xml": 22: { 23: XmlSerializer serializer = new XmlSerializer(actionReturnValue.GetType()); 24: using (StringWriter writer = new StringWriter()) 25: { 26: serializer.Serialize(writer, actionReturnValue); 27: return new ContentResult { Content = writer.ToString(), ContentType = "text/xml" }; 28: } 29: } 30: case "csv": 31: controllerContext.HttpContext.Response.AddHeader("Content-Disposition", "attachment; filename=items.csv"); 32: return new ContentResult 33: { 34: Content = ToCsv(actionReturnValue as IList<Item>), 35: ContentType = "application/ms-excel" 36: }; 37: case "pdf": 38: string filePath = controllerContext.HttpContext.Server.MapPath("~/items.pdf"); 39: controllerContext.HttpContext.Response.AddHeader("content-disposition", 40: "attachment; filename=items.pdf"); 41: ToPdf(actionReturnValue as IList<Item>, filePath); 42: return new FileContentResult(StreamFile(filePath), "application/pdf"); 43:  44: default: 45: controllerContext.Controller.ViewData.Model = actionReturnValue; 46: return new ViewResult 47: { 48: TempData = controllerContext.Controller.TempData, 49: ViewData = controllerContext.Controller.ViewData 50: }; 51: } 52: } A big method there! The hook I was talking about kinda above actually is here. This is where different kinds / formats of output get returned based on the output type requested in the url. 
When the _outputType is not set (string.Empty as set in the Global.asax.cs file), the razor view gets rendered (lines 45-50). This is the default behavior in most MVC applications, wherein a view (webform/razor) gets rendered in the browser. As you see here, this gets returned as a ViewResult. But then, for an outputType of json/xml/csv, a ContentResult gets returned, while for pdf, a FileContentResult is returned. Here is how the different kinds of output look (shown as screenshots in the original post). This is how we can leverage this feature of ASP.NET MVC to develop a better application. I’ve used the iTextSharp library to convert to PDF format. Mike gives quite a bit of detail regarding this library here. You can download the sample code here. (You’ll get an option to download once you open the link). Verdict: Hot chocolate: $3; Reebok shoes: $50; Your first car: $3000; Being able to extend a web application: Priceless.
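One gap worth noting: the csv branch in the invoker above calls a ToCsv helper that the post never shows. Purely as a sketch (the real implementation may differ, and this naive version doesn't escape commas or quotes), it could look something like this, given the Item and Cuisine classes defined earlier:

using System.Collections.Generic;
using System.Text;

public static class CsvHelper
{
    // Naive CSV rendering of the Item list; the column choice is an assumption
    public static string ToCsv(IList<Item> items)
    {
        var csv = new StringBuilder();
        csv.AppendLine("Id,Name,Cuisine");

        foreach (Item item in items)
        {
            csv.AppendLine(string.Format("{0},{1},{2}",
                item.Id,
                item.Name,
                item.Cuisine == null ? string.Empty : item.Cuisine.Name));
        }
        return csv.ToString();
    }
}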

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did do some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search-urls to Solr http-queries and deserializing Solr's result-XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-) What is faceted search? Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, the example from CNET (shown as a screenshot in the original post) explains it all. The SQL solution for faceted search Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns:   So if a user was searching for real estate properties in the city 'Amsterdam', our facet-query would be something like: SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...) FROM Houses WHERE city = 'Amsterdam' While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. Also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million). Enter Solr "Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr) Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. To explain the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages. Getting faceted search results from Solr is very easy; first let me show you how to send an http-query to Solr:    http://localhost:8983/solr/select?q=city:Amsterdam This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):    <response>     <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>        </doc>         <doc>             <long name="id">3205</long>             <str name="city">Amsterdam</str>             <str name="steet">Vondelstraat</str>             <int name="numberOfBathrooms">3</int>          </doc>          <doc>             <long name="id">4293</long>             <str name="city">Amsterdam</str>             <str name="steet">Wibautstraat</str>             <int name="numberOfBathrooms">2</int>          </doc>       </result>   </response> By adding a facet-querypart for the field "numberOfBathrooms", Solr will return the facets for this particular field. 
We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms.    http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms The complete XML response from Solr now looks like:    <response>      <result name="response" numFound="3" start="0">         <doc>            <long name="id">3203</long>            <str name="city">Amsterdam</str>            <str name="steet">Keizersgracht</str>            <int name="numberOfBathrooms">2</int>         </doc>         <doc>            <long name="id">3205</long>            <str name="city">Amsterdam</str>            <str name="steet">Vondelstraat</str>            <int name="numberOfBathrooms">3</int>         </doc>         <doc>            <long name="id">4293</long>            <str name="city">Amsterdam</str>            <str name="steet">Wibautstraat</str>            <int name="numberOfBathrooms">2</int>         </doc>      </result>      <lst name="facet_fields">         <lst name="numberOfBathrooms">            <int name="2">2</int>            <int name="3">1</int>         </lst>      </lst>   </response> Trying Solr for yourself To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.Some best practices for running Solr on Windows: Use the 64-bits version of Tomcat. In our tests, this doubled the req/sec we were able to handle!Use a .NET XmlReader to convert Solr's XML output-stream to .NET objects. Don't use XPath; it won't scale well.Use filter queries ("fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance)In my next post I’ll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
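As a rough illustration of the XmlReader tip above (a sketch under stated assumptions, not funda.nl's actual code), streaming the facet counts out of a Solr response like the one shown earlier could look like this:

using System.Collections.Generic;
using System.Net;
using System.Xml;

public static class SolrFacetReader
{
    // Reads the <lst name="facetField"> section of a Solr response into a
    // dictionary of facet value -> hit count, without loading the whole
    // document into memory (no XPath, as recommended above).
    public static Dictionary<string, int> ReadFacets(string url, string facetField)
    {
        var facets = new Dictionary<string, int>();

        WebRequest request = WebRequest.Create(url);
        using (WebResponse response = request.GetResponse())
        using (XmlReader reader = XmlReader.Create(response.GetResponseStream()))
        {
            while (reader.Read())
            {
                if (reader.NodeType == XmlNodeType.Element &&
                    reader.Name == "lst" &&
                    reader.GetAttribute("name") == facetField)
                {
                    // Iterate the <int name="...">count</int> children of this <lst>
                    XmlReader facetSubtree = reader.ReadSubtree();
                    while (facetSubtree.ReadToFollowing("int"))
                    {
                        string value = facetSubtree.GetAttribute("name");
                        facets[value] = facetSubtree.ReadElementContentAsInt();
                    }
                    break;
                }
            }
        }
        return facets;
    }
}

// Example (assumed local Solr instance from the tutorial):
// ReadFacets("http://localhost:8983/solr/select?q=city:Amsterdam" +
//            "&facet=true&facet.field=numberOfBathrooms", "numberOfBathrooms");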

    Read the article

  • Silverlight for Windows Embedded tutorial (step 6)

    - by Valter Minute
In this tutorial step we will develop a very simple clock application that may be used as a screensaver on our devices. It will allow us to discover a new feature of Silverlight for Windows Embedded (transforms) and how to use an “old” feature of Windows CE (timers) inside a Silverlight for Windows Embedded application. Let’s start with some XAML, as usual: <UserControl xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Width="640" Height="480" FontSize="18" x:Name="Clock">   <Canvas x:Name="LayoutRoot" Background="#FF000000"> <Grid Height="24" Width="150" Canvas.Left="320" Canvas.Top="234" x:Name="SecondsHand" Background="#FFFF0000"> <TextBlock Text="Seconds" TextWrapping="Wrap" Width="50" HorizontalAlignment="Right" VerticalAlignment="Center" x:Name="SecondsText" Foreground="#FFFFFFFF" TextAlignment="Right" Margin="2,2,2,2"/> </Grid> <Grid Height="24" x:Name="MinutesHand" Width="100" Background="#FF00FF00" Canvas.Left="320" Canvas.Top="234"> <TextBlock HorizontalAlignment="Right" x:Name="MinutesText" VerticalAlignment="Center" Width="50" Text="Minutes" TextWrapping="Wrap" Foreground="#FFFFFFFF" TextAlignment="Right" Margin="2,2,2,2"/> </Grid> <Grid Height="24" x:Name="HoursHand" Width="50" Background="#FF0000FF" Canvas.Left="320" Canvas.Top="234"> <TextBlock HorizontalAlignment="Right" x:Name="HoursText" VerticalAlignment="Center" Width="50" Text="Hours" TextWrapping="Wrap" Foreground="#FFFFFFFF" TextAlignment="Right" Margin="2,2,2,2"/> </Grid> </Canvas> </UserControl> This XAML file defines three grid panels, one for each hand of our clock (we are implementing an analog clock using one of the most advanced technologies of the digital world… how cool is that?). Inside each hand we put a TextBlock that will be used to display the current hour, minute and second inside the dial (you can’t do that on plain old analog clocks, but it looks nice). As usual we use XAML2CPP to generate the boring part of our code. We declare a class named “Clock” that derives from the TClock template that XAML2CPP has declared for us. class Clock : public TClock<Clock> { ... }; Our WinMain function is more or less the same as the one we used in all the previous samples. It initializes the XAML runtime, creates an instance of our class, initializes it and shows it as a dialog: int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow) { if (!XamlRuntimeInitialize()) return -1;   HRESULT retcode;   IXRApplicationPtr app; if (FAILED(retcode=GetXRApplicationInstance(&app))) return -1; Clock clock;   if (FAILED(clock.Init(hInstance,app))) return -1;     UINT exitcode;   if (FAILED(clock.GetVisualHost()->StartDialog(&exitcode))) return -1;   return exitcode; } Silverlight for Windows Embedded provides a lot of features to implement our UI, but it does not provide timers. How can we update our clock if we don’t have a timer feature? We just use plain old Windows timers, as we do in “regular” Windows CE applications! To use a timer in WinCE we should declare an id for it: #define IDT_CLOCKUPDATE 0x12341234 We also need an HWND that will be used to receive WM_TIMER messages. Our Silverlight for Windows Embedded page is “hosted” inside a GWES Window and we can retrieve its handle using the GetContainerHWND function of our VisualHost object. 
Let’s see how this is implemented inside our Clock class’ Init method: HRESULT Init(HINSTANCE hInstance,IXRApplication* app) { HRESULT retcode;   if (FAILED(retcode=TClock<Clock>::Init(hInstance,app))) return retcode;   // create the timer used to update the clock HWND clockhwnd;   if (FAILED(GetVisualHost()->GetContainerHWND(&clockhwnd))) return -1;   timer=SetTimer(clockhwnd,IDT_CLOCKUPDATE,1000,NULL); return 0; } We use SetTimer to create a new timer, and GWES will send a WM_TIMER to our window every second, giving us a chance to update our clock. That sounds great… but how could we handle the WM_TIMER message if we didn’t implement a window procedure for our window? We have to move a step back and look at how a visual host is created. This code is generated by XAML2CPP and is inside xaml2cppbase.h: virtual HRESULT CreateHost(HINSTANCE hInstance,IXRApplication* app) { HRESULT retcode; XRWindowCreateParams wp;   ZeroMemory(&wp, sizeof(XRWindowCreateParams)); InitWindowParms(&wp);   XRXamlSource xamlsrc;   SetXAMLSource(hInstance,&xamlsrc); if (FAILED(retcode=app->CreateHostFromXaml(&xamlsrc, &wp, &vhost))) return retcode;   if (FAILED(retcode=vhost->GetRootElement(&root))) return retcode; return S_OK; } As you can see, the CreateHostFromXaml function of IXRApplication accepts a structure named XRWindowCreateParams that controls how the “plain old” GWES Window is created by the runtime. This structure is initialized inside the InitWindowParms method: // Initializes Windows parameters, can be overridden in the user class to change its appearance virtual void InitWindowParms(XRWindowCreateParams* wp) { wp->Style = WS_OVERLAPPED; wp->pTitle = windowtitle; wp->Left = 0; wp->Top = 0; } This method sets up the window style, title and position. But XRWindowCreateParams also contains other fields and, since the function is declared as virtual, we could initialize them inside our version of InitWindowParms: // add hook procedure to the standard windows creation parms virtual void InitWindowParms(XRWindowCreateParams* wp) { TClock<Clock>::InitWindowParms(wp);   wp->pHookProc=StaticHostHookProc; wp->pvUserParam=this; } This method calls the base class implementation (useful to avoid re-writing some code; did I tell you that I’m quite lazy?) and then initializes the pHookProc and pvUserParam members of the XRWindowCreateParams structure. Those members will allow us to install a “hook” procedure that will be called each time the GWES window “hosting” our Silverlight for Windows Embedded UI receives a message. We can declare a hook procedure inside our Clock class: // static hook procedure static BOOL CALLBACK StaticHostHookProc(VOID* pv,HWND hwnd,UINT Msg,WPARAM wParam,LPARAM lParam,LRESULT* pRetVal) { ... } You should notice two things here. First, that the function is declared as static. This is required because a non-static function has a “hidden” parameter, that is the “this” pointer of our object. Having an extra parameter is not allowed for the type defined for the pHookProc member of the XRWindowCreateParams struct, and so we should implement our hook procedure as static. But in a static procedure we will not have a this pointer. How could we access the data members of our class? Here’s the second thing to notice. We also initialized the pvUserParam member of the XRWindowCreateParams struct. We set it to our this pointer. This value will be passed as the first parameter of the hook procedure. 
In this way we can retrieve our this pointer and use it to call a non-static version of our hook procedure: // static hook procedure static BOOL CALLBACK StaticHostHookProc(VOID* pv,HWND hwnd,UINT Msg,WPARAM wParam,LPARAM lParam,LRESULT* pRetVal) { return ((Clock*)pv)->HostHookProc(hwnd,Msg,wParam,lParam,pRetVal); } Inside our non-static hook procedure we will have access to our this pointer and we will be able to update our clock: // hook procedure (handles timers) BOOL HostHookProc(HWND hwnd,UINT Msg,WPARAM wParam,LPARAM lParam,LRESULT* pRetVal) { switch (Msg) { case WM_TIMER: if (wParam==IDT_CLOCKUPDATE) UpdateClock(); *pRetVal=0; return TRUE; } return FALSE; } The UpdateClock member function will update the text inside our TextBlocks and rotate the hands to reflect the current time: // updates Hands positions and labels HRESULT UpdateClock() { SYSTEMTIME time; HRESULT retcode;   GetLocalTime(&time);   //updates the text fields TCHAR timebuffer[32];   _itow(time.wSecond,timebuffer,10);   SecondsText->SetText(timebuffer);   _itow(time.wMinute,timebuffer,10);   MinutesText->SetText(timebuffer);   _itow(time.wHour,timebuffer,10);   HoursText->SetText(timebuffer);   if (FAILED(retcode=RotateHand(((float)time.wSecond)*6-90,SecondsHand))) return retcode;   if (FAILED(retcode=RotateHand(((float)time.wMinute)*6-90,MinutesHand))) return retcode;   if (FAILED(retcode=RotateHand(((float)(time.wHour%12))*30-90,HoursHand))) return retcode;   return S_OK; } The function retrieves the current time, converts hours, minutes and seconds to strings and displays those strings inside the three TextBlocks that we put inside our clock hands. Then it rotates the hands to position them at the right angle (angles are in degrees, and we have to subtract 90 degrees because 0 degrees means horizontal in Silverlight for Windows Embedded, while a clock's 0 is usually at the top position of the dial). The code of the RotateHand function uses transforms to rotate our clock hands on the screen: // rotates a Hand HRESULT RotateHand(float angle,IXRFrameworkElement* Hand) { HRESULT retcode; IXRRotateTransformPtr rotatetransform; IXRApplicationPtr app;   if (FAILED(retcode=GetXRApplicationInstance(&app))) return retcode;   if (FAILED(retcode=app->CreateObject(IID_IXRRotateTransform,&rotatetransform))) return retcode;     if (FAILED(retcode=rotatetransform->SetAngle(angle))) return retcode;   if (FAILED(retcode=rotatetransform->SetCenterX(0.0))) return retcode;   float height;   if (FAILED(retcode=Hand->GetActualHeight(&height))) return retcode;   if (FAILED(retcode=rotatetransform->SetCenterY(height/2))) return retcode; if (FAILED(retcode=Hand->SetRenderTransform(rotatetransform))) return retcode;   return S_OK; } It creates an IXRRotateTransform object and sets its rotation angle and origin (the default origin is at the top-left corner of our Grid panel; we move it to the vertical center to keep the hand rotating around a single point in a more "clock like" way). Then we can apply the transform to our UI object using SetRenderTransform. Every UI element (derived from IXRFrameworkElement) can be rotated! And using different subclasses of IXRTransform it can also be moved, scaled, skewed and distorted in many ways. You can also concatenate multiple transforms and apply them at once using an IXRTransformGroup object. The XAML engine uses vector graphics, and objects will not look "pixelated" when they are rotated or scaled. 
As usual you can download the code here: http://cid-9b7b0aefe3514dc5.skydrive.live.com/self.aspx/.Public/Clock.zip If you read up to (down to?) this point you seem to be interested in Silverlight for Windows Embedded. If you want me to discuss some specific topic, please feel free to point it out in the comments! Technorati Tags: Silverlight for Windows Embedded,Windows CE

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this. Unit Testing Entity Framework Dependent Code using TypeMock Isolator by Muhammad Mosa Introduction Unit testing data access code is, in my opinion, a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies such as a physical database connection to insert, update, delete or retrieve your data. However when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward, but the version of Entity Framework released with .NET 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult. Faking Entity Framework As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this, but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include: Not able to fake calls to non-virtual methods Not able to fake sealed classes Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls) There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I’m going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator. Mocking Entity Framework with MoQ One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing just any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we’ll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (an instance of a Blog entity) to the Blogs entity set in an ObjectContext. public override void Add(Blog blog) { if(BlogContext.Blogs.Any(b=>b.Name == blog.Name)) { throw new InvalidOperationException("Blog with same name already exists!"); } BlogContext.AddToBlogs(blog); } The method does a very simple check that the name of the new Blog entity instance doesn't already exist. This is done through the simple LINQ query above. If the blog doesn't already exist it simply adds it to the current context, to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ. [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists() { //(1) We shouldn't depend on configuration when doing unit tests! 
But, //its a workaround to fake the ObjectContext string connectionString = ConfigurationManager .ConnectionStrings["MyBlogConnString"] .ConnectionString; //(2) Arrange: Fake ObjectContext var fakeContext = new Mock<MyBlogContext>(connectionString); //(3) Next Line will pass, as ObjectContext now can be faked with proper connection string var repo = new BlogRepository(fakeContext.Object); //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object); //(5) Arrange: Set Expectations //Next line will throw an exception by MoQ: //System.ArgumentException: Invalid setup on a non-overridable member fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); //Act repo.Add(new Blog { Name = "NewBlog" }); } This test method is checking to see if the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that’s already exists. On (1) a connection string is initialized from configuration file. To retrieve the full connection string. On (2) a fake ObjectContext is being created. The ObjectContext here is MyBlogContext and its being created using this var fakeContext = new Mock<MyBlogContext>(connectionString); This way a fake context is being created using MoQ. On (3) a BlogRepository instance is created. BlogRepository has dependency on generate Entity Framework ObjectContext, MyObjectContext. And so the fake context is passed to the constructor. var repo = new BlogRepository(fakeContext.Object); On (4) a fake instance of ObjectQuery<Blog> is being created to use as a substitute to MyObjectContext.Blogs property as we will see in (5). On (5) setup an expectation for calling Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4). When you run this test it will fail with MoQ throwing an exception because of this line: fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); This happens because the generate property MyBlogContext.Blogs is not virtual/overridable. And assuming it is virtual or you managed to make it virtual it will fail at the following line throwing the same exception: fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); This time the test will fail because the Any extension method is not virtual/overridable. You won’t be able to replace ObjectQuery<Blog> with fake in memory collection to test your LINQ to Entities queries. Now lets see how replacing MoQ with TypeMock Isolator can help. 
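    Before switching tools, it is worth noting the usual workaround for exactly this MoQ limitation: hide the generated context behind a small abstraction that exposes IQueryable<Blog>, so the repository never touches the non-virtual generated members directly, and back the abstraction with a plain in-memory collection in tests. The following is a minimal illustrative sketch only; IBlogContext and the member names are hypothetical stand-ins, not part of the generated model or of the original post.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Moq;

        public class Blog { public string Name { get; set; } }

        // Hypothetical abstraction over the generated ObjectContext.
        // Interface members are always fakeable by MoQ.
        public interface IBlogContext
        {
            IQueryable<Blog> Blogs { get; }   // query surface only
            void AddToBlogs(Blog blog);       // mirrors the generated add method
        }

        public class BlogRepository
        {
            private readonly IBlogContext _context;
            public BlogRepository(IBlogContext context) { _context = context; }

            public void Add(Blog blog)
            {
                // The LINQ query now runs against whatever IQueryable the context
                // supplies, so in tests it executes in memory (LINQ to Objects).
                if (_context.Blogs.Any(b => b.Name == blog.Name))
                    throw new InvalidOperationException("Blog with same name already exists!");
                _context.AddToBlogs(blog);
            }
        }

        [TestClass]
        public class BlogRepositoryTests
        {
            [TestMethod]
            [ExpectedException(typeof(InvalidOperationException))]
            public void Add_Should_Throw_When_Blog_With_Same_Name_Already_Exists()
            {
                // Arrange: back the Blogs property with an in-memory collection.
                var blogs = new List<Blog> { new Blog { Name = "NewBlog" } };
                var fakeContext = new Mock<IBlogContext>();
                fakeContext.SetupGet(c => c.Blogs).Returns(blogs.AsQueryable());
                var repo = new BlogRepository(fakeContext.Object);

                // Act: adding a duplicate name should throw.
                repo.Add(new Blog { Name = "NewBlog" });
            }
        }

    The trade-off is exactly the one described in this post: you write extra abstraction code, and the query executes as LINQ to Objects rather than LINQ to Entities, so provider-specific query translation is never exercised.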
    Now let's see how replacing MoQ with TypeMock Isolator can help.

    Mocking Entity Framework with TypeMock Isolator
    The following is the same test method we had above for MoQ, this time implemented using TypeMock Isolator:

        [TestMethod]
        [ExpectedException(typeof(InvalidOperationException))]
        public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException()
        {
            //(1) Create fake in-memory collection of blogs
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "FakeBlog" } };
            //(2) Create fake context
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();
            //(3) Set up the expected call to the MyBlogContext.Blogs property through the fake context
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());
            //(4) Create a new blog with a name that already exists in the fake in-memory collection in (1)
            var blog = new Blog { Name = "FakeBlog" };
            //(5) Instantiate an instance of BlogRepository (the class under test)
            var repo = new BlogRepository(fakeContext);
            //(6) Act by adding the newly created blog
            repo.Add(blog);
        }

    When running the above test method it passes, as the Add method of BlogRepository throws an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. On (3) Isolator sets up a fake result for MyBlogContext.Blogs when it is called through the fake instance fakeContext created on (2). The fake result is just an in-memory collection, declared and initialized on (1). Finally, at (6), we call the Add method of BlogRepository, passing a new Blog instance whose name already exists in the fake in-memory collection set up on (1). As expected, the test passes because it throws the expected exception declared on top of the test method: InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease. A possible companion test for the success path is sketched after the links below.

    Conclusion
    We explored how to write a simple unit test using TypeMock Isolator for code which uses Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock successfully handles. There are workarounds (such as the repository abstraction sketched earlier) that you can use to overcome these limitations when using MoQ or Rhino Mocks; however, the workarounds require you to write more code, and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post. Muhammad Mosa http://mosesofegypt.net/ http://twitter.com/mosessaur
    Screencast of unit testing Entity Framework
    Related Links: GuestPost: Introduction to Mocking | GuestPost: Typemock Isolator – Much more than an Isolation framework
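    The post above only exercises the failure path. A possible companion test for the success path is sketched below; it is not from the original post, and it assumes Isolator's Isolate.Verify.WasCalledWithAnyArguments API is available in the version used (the blog name values are arbitrary).

        [TestMethod]
        public void Add_New_Blog_With_Unique_Name_Should_Add_It_To_The_Context()
        {
            // Arrange: in-memory collection that does NOT contain the new name
            var fakeInMemoryBlogs = new List<Blog> { new Blog { Name = "ExistingBlog" } };
            var fakeContext = Isolate.Fake.Instance<MyBlogContext>();
            Isolate.WhenCalled(() => fakeContext.Blogs)
                   .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable());
            var repo = new BlogRepository(fakeContext);

            // Act: a unique name should not trigger the duplicate check
            repo.Add(new Blog { Name = "BrandNewBlog" });

            // Assert: the repository handed the new blog to the context
            // (argument values are ignored by WasCalledWithAnyArguments)
            Isolate.Verify.WasCalledWithAnyArguments(() => fakeContext.AddToBlogs(null));
        }

    Verifying the AddToBlogs call, rather than asserting on state, keeps the test independent of SaveChanges and of any real database.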

    Read the article

  • top Tweets SOA Partner Community – August 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity
    Lucas Jellema Published an article about organizing Fusion Middleware Administration: http://technology.amis.nl/2012/07/31/organizing-fusion-middleware-administration-in-a-smart-and-frugal-way … - many organizations are struggling with this.
    ServiceTechSymposium Countdown to the Early Bird Registration Discount deadline. Only 4 days left! http://ow.ly/cBCiv
    demed Good chatting w Bob Rhubart, Thomas Erl & Tim Hall on SOA & Cloud Symposium https://blogs.oracle.com/archbeat/entry/podcast_show_notes_thomas_erl … @soaschool @OTNArchBeat -- CU in London!
    SOA Community top Tweets SOA Partner Community July 2012 - are you one of them? If yes please rt! https://soacommunity.wordpress.com/2012/07/30/top-tweets-soa-partner-community-july-2012/ … #soacommunity
    SOA Community Are You a facebook member - do You follow http://www.facebook.com/soacommunity ? #soacommunity #soa
    SOA Community SOA 24/7 - Home Page: http://soa247.com/#.UBJsN8n3kyk.twitter … #soacommunity
    OracleBlogs Handling Large Payloads in SOA Suite 11g http://ow.ly/1lFAih
    OracleBlogs SOA Community Newsletter July 2012 http://ow.ly/1lFx6s
    OTNArchBeat Podcast Show Notes: Thomas Erl on SOA, Cloud, and Service Technology http://bit.ly/OOHTUJ
    SOA Community SOA Community Newsletter July 2012 http://wp.me/p10C8u-s7
    OTNArchBeat OTN ArchBeat Podcast: Thomas Erl on SOA, Cloud, and Service Technology - Part 1 http://pub.vitrue.com/fMti
    OProcessAccel Just released! White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf …
    OTNArchBeat SOA, Cloud, and Service Technologies - Part 1 of 4 - A conversation with SOA, Cloud, and Service Technology Symposiu... http://ow.ly/1lDyAK
    OracleBlogs SOA Suite 11g PS5 Bundled Patch 3 (11.1.1.6.3) http://ow.ly/1lCW1S
    Simon Haslam My write-up of the virtues of the #ukoug App Server & Middleware SIG http://bit.ly/LMWdfY What's important to you for our next meeting?
    SOA Community SOA Partner Community Survey 2012 http://wp.me/p10C8u-qY
    Simone Geib RT @jswaroop: #Oracle positioned in the Leader's quadrant - Gartner Magic Quadrants for Application Infrastructure (SOA & SOA Gov)...
    ServiceTechSymposium New Supporting Organization, IBTI has joined the Symposium! http://www.servicetechsymposium.com/
    orclateamsoa A-Team Blog #ateam: BPM 11g Task Form Version Considerations http://ow.ly/1lA7XS
    OTNArchBeat Oracle content at SOA, Cloud and Service Technology Symposium (and discount code!) http://pub.vitrue.com/FPcW
    OracleBlogs BPM 11g Task Form Version Considerations http://ow.ly/1lzOrX
    OTNArchBeat BPM 11g #ADF Task Form Versioning | Christopher Karl Chan #fusionmiddleware http://pub.vitrue.com/0qP2
    OTNArchBeat Lightweight ADF Task Flow for BPM Human Tasks Overview | @AndrejusB #fusionmiddleware http://pub.vitrue.com/z7x9
    SOA Community Oracle Fusion Middleware Summer Camps in Lisbon report by Link Consulting http://middlewarebylink.wordpress.com/2012/07/20/oracle-fusion-middleware-summer-camps-in-lisbon/ … #ofmsummercamps #soa #bpm
    SOA Community Clemens Utschig-Utschig & Manas Deb The Successful Execution of the SOA and BPM Vision Using a Business Capability Framework: Concepts…
    Simone Geib RT @oprocessaccel: Just released! White Paper: Oracle Process Accelerators Best Practices http://www.oracle.com/technetwork/middleware/bpm/learnmore/processaccelbestpracticeswhitepaper-1708910.pdf …
    jornica Report from Oracle Fusion Middleware Summer Camps in Munich: SOA Suite 11g advanced training experiences @soacommunity http://bit.ly/Mw3btE
    Simone Geib Bruce Tierney: Update - SOA & BPM Customer Insights Webcast Series: | https://blogs.oracle.com/SOA/entry/update_soa_bpm_customer_insights …
    OTNArchBeat Business SOA: Thinking is Dead | @mosesjones http://pub.vitrue.com/k8mw
    esentri had 3 great days in Munich at #Oracle #soacommunity Summercamp! Special thanks to Geoffroy de Lamalle from eProseed!
    Danilo Schmiedel Used my time in train to setup the ps5 soa/bpm vbox-image.Works like a dream. Setup-Readme is perfect! Saves a lot of time!!! @soacommunity 18 Jul
    SOA Community THANKS for the excellent OFM summer camps - save trip home - share your pictures at http://www.facebook.com/soacommunity #ofmsummercamps #soacommunity
    doors BBQ-party with Oracle @soacommunity. 5Star! #lovemunich #ofmsummercamps pic.twitter.com/ztfcGn2S
    leonsmiers New #Capgemini blog post "Continuous Improvement of Business Agility" http://bit.ly/Lr0EwG #bpm #yam
    Eric Elzinga MDS Explorer utility, http://see.sc/4qdb43 #soasuite
    ServiceTechSymposium @techsymp New speaker Demed L’Her from Oracle has been added to the symposium calendar. http://ow.ly/cjnyw
    SOA Community Last day of the Fusion Middleware summer camps - we continue at 9.00 am. send us your barbecue pictures! #ofmsummercamps #soacommunity
    SOA Community Delivering SOA Governance with EAMS and Oracle Enterprise Repository by Link Consulting http://middlewarebylink.wordpress.com/2012/06/26/delivering-soa-governance-with-eams-and-oracle-enterprise-repository/ … #soacommunity #soa #oer
    OracleBlogs Process Accelerator Kit http://ow.ly/1loaCw 15 Jul
    SOA Community Sun is back in Munich! Send your pictures Middleware summer camps! #ofmsummercamps We start tomorrow 11.00 at Oracle pic.twitter.com/6FStxomk
    Walter Montantes Gracias, Obrigado, Thank you, Danke to Lisboa and to @soacommunity @wlscommunity. From the Mexican guys!! cc @mikeintoch #ofmsummercamps
    Andrejus Baranovskis Tips & Tricks How to Run Oracle BPM 11g PS5 Workspace from Custom ADF 11g Application http://fb.me/1zOf3h2K8
    JDeveloper & ADF Fusion Apps Enterprise Repository - Explained http://dlvr.it/1rpjWd
    Steve Walker Oracle #Exalogic is the logical choice for running business applications. Exalogic Software 2.0 launches 7/25. Reg at http://bit.ly/NedQ9L
    A. Chatziantoniou Landed in rainy Amsterdam after a great week in Lisbon for the #ofmsummercamps - many thanks to Jürgen for another fantastic event
    SOA Community Teams present #BPM11g POC results at #ofmsummercamps - great job! #soacommunity pic.twitter.com/0d4txkWF
    Sabine Leitner #DOAG SIG Middleware 29.08.2012 Köln on middleware, administration, monitoring http://bit.ly/P47w82 @soacommunity @OracleMW @OracleFMW 12 Jul
    philmulhall Thanks @soacommunity for a great week at the #ofmsummercamps. Hard work done so time for a few cold ones in Lisboa. pic.twitter.com/LVUUuwTh
    peter230769 RT: andrea_rocco_31: RT @soacommunity: Enjoy the networking event at #ofmsummercamps want to attend next time ... pic.twitter.com/D1HRndi4
    Niels Gorter #ofmsummercamps dinner in Lisbon. Great weather, scenery, training, people, on and on. Big THANKS @soacommunity
    JDeveloper & ADF Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA Domain http://dlvr.it/1r0c2j
    Andrea Rocco RT @soacommunity: Jamy pastry at cafe Belem - who is the ghost there?!? http://via.me/-2x33uk6
    Simon Haslam Sounds great - sorry I couldn't make it. RT @soacommunity: 6pm BPM advanced training hard work to build the POC #ofmsummercamps
    philmulhall A well earned rest after a hard days work @soacommunity #summercamps pic.twitter.com/LKK7VOVS
    philmulhall Some more hard working delegates @soacommunity #summercamps pic.twitter.com/gWpk1HZh
    SOA Community Error message at the BPM POC - will The #ace director understand the message and solve it? #ofmsummercamps pic.twitter.com/LFTEzNck
    Daniel Kleine-Albers posted on the #thecattlecrew blog: Assigning more memory to JDeveloper http://thecattlecrew.wordpress.com/2012/07/10/jdeveloper-quicktip-assigning-more-memory/ …
    OTNArchBeat BAM design pointers | Kavitha Srinivasan http://pub.vitrue.com/TOhP
    SOA Community Did you receive the July SOA community newsletter? read it! Want to become a member http://www.oracle.com/goto/emea/soa #soacommunity #soa #opn
    OracleBlogs Markus Zirn, Big Data with CEP and SOA @ SOA, Cloud & Service Technology Symposium 2012 http://ow.ly/1lcSkb
    Andrejus Baranovskis Running Oracle BPM 11g PS5 Worklist Task Flow and Human Task Form on Non-SOA
    Eric Elzinga Service Facade design pattern in OSB, http://bit.ly/NnOExN
    Eric Elzinga New BPEL Thread Pool in SOA 11g for Non-Blocking Invoke Activities from 11.1.1.6 (PS5), http://bit.ly/NnOc2G
    Gilberto Holms New Post: Siebel Connection Pool in Oracle Service Bus 11g http://wp.me/pRE8V-2z
    Oracle UPK & Tutor UPK Pre-Built Content Update: UPK pre-built content development efforts are always underway and growing. Ove... http://bit.ly/R2HeTj
    JDeveloper & ADF Troubleshooting BPMN process editor problems in 11.1.1.6 http://dlvr.it/1p0FfS
    orclateamsoa A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove... http://ow.ly/1kYqES
    SOA Community BPMN process editor problems in 11.1.1.6 by Mark Nelson http://redstack.wordpress.com/2012/06/27/bpmn-process-editor-problems-in-11-1-1-6 … #soacommunity #bpm
    OTNArchBeat SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G
    Andrejus Baranovskis ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) http://fb.me/1coX4r1X1
    OTNArchBeat A Universal JMX Client for Weblogic –Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koser http://pub.vitrue.com/mQVZ
    OTNArchBeat BPM – Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.
    Blog Twitter LinkedIn Mix Forum
    Technorati Tags: SOA Community twitter, SOA Community, Oracle SOA, Oracle BPM, BPM Community, OPN, Jürgen Kress

    Read the article
