Search Results

Search found 8177 results on 328 pages for 'lost focus'.

Page 85/328

  • What is the best practice for including jQuery ext functions?

    - by Metropolis
    Hey everyone, Currently I have a file that I named JQuery.ext.js which I am including in all of my pages. Inside this file I have numerous functions that do things like the following, (function($) { /** * Checks to see is the value inside ele is blank * @param message string The message that needs to be displayed if the element is blank * @return bool */ $.fn.isEmpty = function(message) { var b_IsEmpty = false; //Loop through all elements this.each(function() { //Check to see if an empty value is found if($(this).val().length <= 0) { //If message is not blank if(message) { alert(message); $(this).focus().select(); } b_IsEmpty = true; return false; } return true; }); //Return false if the evaluation failed, otherwise return the jquery object so we can reuse it return (b_IsEmpty) ? true : false; }; /** * Checks to see if the value inside ele is numbers only * @param message string The message that needs to be displayed if the element is not numeric * @return bool */ $.fn.isNumeric = function(message) { var expression = /^[0-9]+$/; var b_IsNumeric = true; //Loop through elements checking each one this.each( function() { //Check to see if this value is not numeric if(!$(this).val().match(expression) && $(this).val().length > 0) { //If message is not blank if(message) { alert(message); $(this).focus().select(); } b_IsNumeric = false; } return b_IsNumeric; }); return b_IsNumeric; }; })(jQuery); Is there another way to do this? or is this the way most people do it? Thanks for any help, Metropolis
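
    Wrapping the helpers in a function that receives jQuery, as above, is the standard plugin pattern, so this is a common way to do it. A hedged alternative sketch (not a drop-in replacement, just one way to trim repetition as the number of checks grows): register the helpers in a single $.fn.extend call and share the alert-and-focus handling.

        (function ($) {
            // shared failure handling: optionally alert and put the cursor back in the field
            function fail($el, message) {
                if (message) {
                    alert(message);
                    $el.focus().select();
                }
            }

            $.fn.extend({
                isEmpty: function (message) {
                    var empty = false;
                    this.each(function () {
                        if ($(this).val().length === 0) {
                            fail($(this), message);
                            empty = true;
                            return false;           // stop at the first empty field
                        }
                    });
                    return empty;
                },
                isNumeric: function (message) {
                    var numeric = true;
                    this.each(function () {
                        var v = $(this).val();
                        if (v.length > 0 && !/^[0-9]+$/.test(v)) {
                            fail($(this), message);
                            numeric = false;
                            return false;
                        }
                    });
                    return numeric;
                }
            });
        })(jQuery);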

    Read the article

  • webpage scrollbar scrolls to top when typing in input box, how to fix?

    - by derei
    I have an HTML table that is scrollable, and I'm forcing the scroll bar to the bottom. But every time I type something in an input box situated inside <thead>, it scrolls back to the top. I have no idea how to stop it from doing that... I'm sorry for not explaining it better; if anybody wants to help, I could provide a link. I cannot make it public because it is a private project. Thanks, let me know. EDIT: added a jsfiddle example (click here for the jsfiddle example). EDIT2: the issue seems to be present only in Chrome, but that's more than enough (the app is intended to be used in Chrome). EDIT3: I found a similar issue here: on webkit browsers typing into edit box causes scrolling, so the problem seems explained: the parent element gets focus on the side where the input box is. I verified this on a mockup template and it acts accordingly. *The question is:* how do I prevent this from happening? I am forced by the situation to have the input box as a child of the scrollable div, but I don't want that scroll to move (somehow avoid giving focus to the parent element when I type in the input box). Any idea?
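
    A hedged workaround sketch, assuming the scrollable container has an id such as scrollPane (a name used here only for illustration): remember the container's scrollTop when the input gains focus and restore it right afterwards, so the browser's automatic scroll-into-view does not move the pane.

        $('#scrollPane thead input').focus(function () {
            var pane = document.getElementById('scrollPane');
            var saved = pane.scrollTop;          // where the user had scrolled to
            // restore the position once the browser has finished its own scrolling
            setTimeout(function () {
                pane.scrollTop = saved;
            }, 0);
        });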

    Read the article

  • How not to lose binding source updates?

    - by Fyodor Soikin
    Suppose I have a modal dialog with a textbox and OK/Cancel buttons. And it is built on MVVM - i.e. it has a ViewModel object with a string property that the textbox is bound to. Say I enter some text in the textbox and then grab my mouse and click "OK". Everything works fine: at the moment of the click, the textbox loses focus, which causes the binding engine to update the ViewModel's property. I get my data, everybody's happy. Now suppose I don't use my mouse. Instead, I just hit Enter on the keyboard. This also causes the "OK" button to "click", since it is marked as IsDefault="True". But guess what? The textbox does not lose focus in this case, and therefore the binding engine remains innocently ignorant, and I don't get my data. Dang! Another variation of the same scenario: suppose I have a data entry form right in the main window, enter some data into it, and then hit Ctrl+S for "Save". Guess what? My latest entry doesn't get saved! This may be somewhat remedied by using UpdateSourceTrigger=PropertyChanged, but that is not always possible. One obvious case would be the use of StringFormat with binding - the text keeps jumping back into the "formatted" state as I'm trying to enter it. Another case, which I have encountered myself, is when I have some time-consuming processing in the viewmodel's property setter, and I only want to perform it when the user is "done" entering text. This seems like an eternal problem: I remember trying to solve it systematically ages ago, ever since I started working with interactive interfaces, but I've never quite succeeded. In the past, I always ended up using some sort of hacks - like, say, adding an "EnsureDataSaved" method to every "presenter" (as in "MVP") and calling it at "critical" points, or something like that... But with all the cool technologies, as well as the empty hype, of WPF, I expected they'd come up with some good solution.

    Read the article

  • How do I use cookies to store users' recent site history (PHP)?

    - by ggfan
    I decided to make a recent view box that allows users to see what links they clicked on before. Whenever they click on a posting, the posting's id gets stored in a cookie and displays it in the recent view box. In my ad.php, I have a definerecentview function that stores the posting's id (so I can call it later when trying to get the posting's information such as title, price from the database) in a cookie. How do I create a cookie array for this? **EXAMPLE:** user clicks on ad.php?posting_id='200' //this is in the ad.php function definerecentview() { $posting_id=$_GET['posting_id']; //this adds 30 days to the current time $Month = 2592000 + time(); $i=1; if (isset($posting_id)){ //lost here for($i=1,$i< ???,$i++){ setcookie("recentviewitem[$i]", $posting_id, $Month); } } } function displayrecentviews() { echo "<div class='recentviews'>"; echo "Recent Views"; if (isset($_COOKIE['recentviewitem'])) { foreach ($_COOKIE['recentviewitem'] as $name => $value) { echo "$name : $value <br />\n"; //right now just shows the posting_id } } echo "</div>"; } How do I use a for loop or foreach loop to make it that whenever a user clicks on an ad, it makes an array in the cookie? So it would be like.. 1. clicks on ad.php?posting_id=200 --- setcookie("recentviewitem[1]",200,$month); 2. clicks on ad.php?posting_id=201 --- setcookie("recentviewitem[2]",201,$month); 3. clicks on ad.php?posting_id=202 --- setcookie("recentviewitem[3]",202,$month); Then in the displayrecentitem function, I just echo however many cookies were set? I'm just totally lost in creating a for loop that sets the cookies. any help would be appreciated

    Read the article

  • HTML5 on iPhone Safari - data stored by localStorage does not always persist. Why?

    - by Aerodyne
    Hi, I'm writing a simple iPhone web app using HTML5's localStorage. Tests on a 2G device show that data stored using localStorage does not persist after the Safari process is killed, although the opened Safari windows are remembered. The data is also lost in the case where I am on a different site in a different Safari window and then switch to the window where the web app in question is shown. When Safari loads the page it automatically refreshes it, and then the data is lost. This is a simple test case: <html> <head> <meta name="viewport" content="height=device-height, width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" /> </head> <body> <script> alert("1:" + localStorage.getItem("test")); localStorage.setItem("test", "123"); alert("2:" + localStorage.getItem("test")); </script> </body> </html> As far as I understand, the data should persist! Can anyone shed some light on this behavior? What should I do to get the persistence to work? Thanks! Tom.
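
    A minimal defensive sketch, assuming a cookie fallback is acceptable (the helper names and the 30-day cookie lifetime below are illustrative assumptions, not part of the original question): write through localStorage when it works and fall back to a cookie when it is unavailable or has been purged.

        function saveValue(key, value) {
            try {
                localStorage.setItem(key, value);          // preferred storage
            } catch (e) {
                // fall back to a cookie if localStorage is unavailable
                document.cookie = key + '=' + encodeURIComponent(value) + '; max-age=2592000';
            }
        }

        function loadValue(key) {
            var v = null;
            try { v = localStorage.getItem(key); } catch (e) {}
            if (v !== null) { return v; }
            var match = document.cookie.match(new RegExp('(?:^|; )' + key + '=([^;]*)'));
            return match ? decodeURIComponent(match[1]) : null;
        }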

    Read the article

  • How to "End Task" not "Kill" or "Terminate"?

    - by Luiscencio
    Hi community. I have a 3G card that provides internet to a remote computer... I have to run a program (provided with the card) to establish the connection... Since the connection is sometimes suddenly lost, I wrote a script that kills the program and reopens it so that the connection is re-established. However, certain versions of this program don't drop the connection when killed/terminated, only when closed properly, so I am looking for a script or program that "properly closes" a window, so I can close it and reopen it in case the connection is lost. This is the code that kills the program: Option Explicit Dim objWMIService, objProcess, colProcess Dim strComputer, strProcessKill strComputer = "." strProcessKill = "'Telcel3G.exe'" Set objWMIService = GetObject("winmgmts:" _ & "{impersonationLevel=impersonate}!\\" _ & strComputer & "\root\cimv2") Set colProcess = objWMIService.ExecQuery _ ("Select * from Win32_Process Where Name = " & strProcessKill ) For Each objProcess in colProcess objProcess.Terminate() Next WSCript.Echo "Just killed process " & strProcessKill _ & " on " & strComputer WScript.Quit

    Read the article

  • How to unselect checkbox in chrome browser

    - by hudi
    In my Wicket application I have 3 checkboxes in a form: add(new CheckBox("1").setOutputMarkupId(true)); add(new CheckBox("2").setOutputMarkupId(true)); add(new CheckBox("3").setOutputMarkupId(true)); The form also contains a behavior which unselects the checkboxes: add(new AjaxEventBehavior("onclick") { private static final long serialVersionUID = 1L; @Override protected void onEvent(AjaxRequestTarget target) { List<Component> components = new ArrayList<Component>(); if (target.getLastFocusedElementId() != null) { if (target.getLastFocusedElementId().equals("1")) { components.add(get("2")); components.add(get("3")); } else if (target.getLastFocusedElementId().equals("2")) { components.add(get("1")); } else if (target.getLastFocusedElementId().equals("3")) { components.add(get("1")); } for (Component component : components) { component.setDefaultModelObject(null); target.add(component); } } } }); This works well in Mozilla, but in Chrome it doesn't work. How can I make this work in Chrome too? UPDATE: the problem is in target.getLastFocusedElementId(). In Mozilla this returns what I want, but in Chrome it always returns null, and I don't know why. UPDATE 2: Google Chrome has a bug with focused elements: http://code.google.com/p/chromium/issues/detail?id=1383&can=1&q=window.focus%20type%3aBug&colspec=ID%20Stars%20Pri%20Area%20Type%20Status%20Summary%20Modified%20Owner so I need to do this another way.
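
    One hedged client-side workaround (a sketch, not a Wicket API): record the id of the last clicked checkbox yourself instead of relying on the browser to report the last focused element, and send that value to the server with the request. The hidden field id lastClicked is a hypothetical name used only for illustration.

        // remember which checkbox the user clicked last, independent of Chrome's focus handling
        document.addEventListener('click', function (e) {
            var t = e.target || e.srcElement;
            if (t && t.type === 'checkbox') {
                document.getElementById('lastClicked').value = t.id;
            }
        }, true);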

    Read the article

  • Asynchronous javascript issue

    - by amit
    I am trying to create a function which takes values from various html elements of the page to create a string and pass on to a variable. now this works great for all browsers except IE 8 and 9. IE tends to skip the part of fetching the values and goes straight to the variable and finds nothing.. is there a way to sync it all so that it works in IE? function seturl() { var qstring = returnQString(); $('span.keyword').text($.trim($('#hdnKeyWord').attr('value'))); $('input.search_box').attr('value', $.trim($('#hdnKeyWord').attr('value'))); $('#hdnSearchKeyword').attr('value', $.trim($('#hdnKeyWord').attr('value'))); $(".search_box").val($.trim($("#hdn_span_hdnKeyWord").text())); $(".header_inner input[type='text']").focus(); $(".search_term input[type='text']").focus(); $('#locationurl').attr('value', qstring); } function returnQString(){ var qstring = $.trim($('#locationurl').attr('init')); //initial value of the url qstring += "?type=" + $('#hdnSTSearch').attr('value'); // type of handler hit qstring += "&keyword=" + encodeURIComponent($('#hdnKeyWord').attr('value')); // keyword addition qstring += "&pagestart=" + $('#current_page').attr('value'); // pagestart(current page) addition qstring += "&pagesize=" + $('#show_per_page').attr('value'); // per page size addition qstring += "&facets=" // facetsearch $.each(selectedFilter.items, function (index, value) { qstring += value.filter + ","; }); qstring += "&selectedSection=" + selectedSection // Section Select return qstring; }
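
    A hedged sketch of one adjustment that often helps in older IE (an assumption about the cause, not a verified fix): read and write the form values with .val() instead of .attr('value'), and read each value once into a local variable before using it, so nothing depends on IE having flushed a just-written attribute back to the DOM. returnQString is the function from the question.

        function seturl() {
            // read the keyword once, up front, with .val() rather than .attr('value')
            var keyword = $.trim($('#hdnKeyWord').val());
            var qstring = returnQString();

            $('span.keyword').text(keyword);
            $('input.search_box, #hdnSearchKeyword').val(keyword);
            $(".header_inner input[type='text'], .search_term input[type='text']").focus();
            $('#locationurl').val(qstring);
        }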

    Read the article

  • Javascript/Jquery pop-up window in Asp.Net MVC4

    - by Mark
    Below I have a "button" (just a span with an icon) that creates a pop-up view of a div in my application to allow users to compare information in separate windows. However, I get an ASP.NET error as follows: **Server Error in '/' Application. The resource cannot be found. Requested URL: /Home/[object Object]** Does anyone have any idea why this is happening? Below is my code: <div class="module_actions"> <div class="actions"> <span class="icon-expand2 pop-out"></span> </div> </div> <script> $(document).ajaxSuccess(function () { var Clone = $(".pop-out").click(function () { $(this).parents(".module").clone().appendTo("#NewWindow"); }); $(".pop-out").click(function popitup(url) { LeftPosition = (screen.width) ? (screen.width - 400) / 1 : 0; TopPosition = (screen.height) ? (screen.height - 700) / 1 : 0; var sheight = (screen.height) * 0.5; var swidth = (screen.width) * 0.5; settings = 'height=' + sheight + ',width=' + swidth + ',top=' + TopPosition + ',left=' + LeftPosition + ',scrollbars=yes,resizable=yes,toolbar=no,status=no,menu=no, directories=no,titlebar=no,location=no,addressbar=no' newwindow = window.open(url, '/Index', settings); if (window.focus) { newwindow.focus() } return false; }); });
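
    The /Home/[object Object] in the error points at the likely cause: when popitup runs as a click handler, jQuery passes the click event object as its first argument, so url is the event rather than a URL, and window.open turns it into the string [object Object]. A hedged sketch of the usual fix (the /Home/Compare route below is an illustrative assumption): pass the URL explicitly and give the window a plain name.

        $('.pop-out').click(function (event) {
            event.preventDefault();
            // build the URL yourself; the handler's first argument is the event, not a URL
            var url = '/Home/Compare';                     // hypothetical route, for illustration only
            var settings = 'height=' + screen.height * 0.5 + ',width=' + screen.width * 0.5 +
                           ',scrollbars=yes,resizable=yes';
            var newwindow = window.open(url, 'compareWindow', settings);
            if (newwindow) { newwindow.focus(); }
            return false;
        });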

    Read the article

  • dwoo template variables inside JavaScript?

    - by user344711
    Hi everyone! I have this code: {if $loginUrl} {literal} <script type="text/javascript"> var newwindow; var intId; function login() { var screenX = typeof window.screenX != 'undefined' ? window.screenX : window.screenLeft, screenY = typeof window.screenY != 'undefined' ? window.screenY : window.screenTop, outerWidth = typeof window.outerWidth != 'undefined' ? window.outerWidth : document.body.clientWidth, outerHeight = typeof window.outerHeight != 'undefined' ? window.outerHeight : (document.body.clientHeight - 22), width = 500, height = 270, left = parseInt(screenX + ((outerWidth - width) / 2), 10), top = parseInt(screenY + ((outerHeight - height) / 2.5), 10), features = ( 'width=' + width + ',height=' + height + ',left=' + left + ',top=' + top ); newwindow=window.open('{$loginUrl}','Login by facebook',features); if (window.focus) {newwindow.focus()} return false; } </script> {/literal} {/if} It is a dwoo template. I wonder how I can use my dwoo variables inside JavaScript? I'm trying to do it just as you can see in the code, but it doesn't work. I need to wrap my code between {literal} tags so it can work.
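
    A hedged sketch of the approach commonly used with Smarty-style engines, which dwoo's {literal} syntax follows: assign the template variable to a JavaScript variable outside the {literal} block (where dwoo still interpolates it) and use that variable inside the literal section. The window name loginWindow and the shortened features string are illustrative assumptions.

        {if $loginUrl}
        <script type="text/javascript">
            // this assignment sits outside {literal}, so dwoo still replaces {$loginUrl}
            var loginUrl = '{$loginUrl}';
        </script>
        {literal}
        <script type="text/javascript">
            function login() {
                var features = 'width=500,height=270';
                var newwindow = window.open(loginUrl, 'loginWindow', features);
                if (newwindow) { newwindow.focus(); }
                return false;
            }
        </script>
        {/literal}
        {/if}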

    Read the article

  • Javascript access object variables from functions

    - by Parhs
    function init_exam_chooser(id,mode) { this.id=id; this.table=$("#" + this.id); this.htmlRowOrder="<tr>" + $("#" + this.id + " tbody tr:eq(0)").html() + "</tr>"; this.htmlRowNew="<tr>" + $("#" + this.id + " tbody tr:eq(1)").html() + "</tr>"; $("#" + this.id + " tbody tr").remove(); //Initialization var rowNew=$(this.htmlRowNew); rowNew.find("input[type='text']").eq(0).autocomplete({ source: function (req,resp) { $.ajax({ url: "/medilab/prototypes/exams/searchQuick", cache: false, dataType: "json", data:{codeName:req.term}, success: function(data) { resp(data); } }); }, focus: function(event,ui) { return false; }, minLength :2 }); rowNew.find("input[type='text']").eq(1).autocomplete({ source: function (req,resp) { $.ajax({ url: "/medilab/prototypes/exams/searchQuick", cache: false, dataType: "json", data:{name:req.term}, success: function(data) { resp(data); } }); }, focus: function(event,ui) { return false; }, minLength :2 }); rowNew.find("input[type='text']").bind( "autocompleteselect", function(event, ui) { alert(htmlRowOrder); var row=$(htmlRowOrder); $(table).find("tbody tr:last").before(row); alert(ui.item.id); }); rowNew.appendTo($(this.table).find("tbody")); //this.htmlRowNew } The problem is: how can I access htmlRowOrder? I tried this.htmlRowOrder and it didn't work... Any ideas? rowNew.find("input[type='text']").bind( "autocompleteselect", function(event, ui) { alert(htmlRowOrder); var row=$(htmlRowOrder); $(table).find("tbody tr:last").before(row); alert(ui.item.id); });
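
    A hedged sketch of the usual fix, assuming init_exam_chooser is called as a constructor with new: inside the autocompleteselect callback, this no longer points at the object the constructor is building, so this.htmlRowOrder is undefined. Capture the object in a local variable first and let the callback reach it through the closure.

        function init_exam_chooser(id, mode) {
            this.id = id;
            this.table = $('#' + this.id);
            this.htmlRowOrder = '<tr>' + $('#' + this.id + ' tbody tr:eq(0)').html() + '</tr>';
            this.htmlRowNew = '<tr>' + $('#' + this.id + ' tbody tr:eq(1)').html() + '</tr>';

            var self = this;                      // keep a reference the callbacks can close over
            var rowNew = $(this.htmlRowNew);

            rowNew.find("input[type='text']").bind('autocompleteselect', function (event, ui) {
                var row = $(self.htmlRowOrder);   // reached via the closure, not via "this"
                $(self.table).find('tbody tr:last').before(row);
            });

            rowNew.appendTo($(this.table).find('tbody'));
        }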

    Read the article

  • Pass variable by Post method from JQuery UI Autocomplete to PHP page

    - by Shahriar N Khondokar
    I have two JQuery UI autocomplete input fields. When an option is selected in the first one, the value of the selection will be used as condition for a database query that will send the source data for the second autocomplete field. My problem is how do I send the value of the first selection to the PHP page via Post method? The code so far is shown below (this code is from a tutorial which used the GET method; but I want to use Post): <script> $("input#divisions").autocomplete ({ //this is the first input source : [ { value: "81", label: "City1" }, { value: "82", label: "City2" }, { value: "83", label: "City3" } ], minLength : 0, select: function(event, ui) { $('#divisions').val(ui.item.label); return false; }, focus: function(event, ui){ $('#divisions').val(ui.item.label); return false; }, change: function(event, ui){ //the tutorial has this value sent by variables in the URL; I want the selection value sent by POST. How can I change this? c_t_v_choices = "c_t_v_choices.php?filter=" + ui.item.value; $("#c_t_v").autocomplete("option", "source", c_t_v_choices); } }).focus (function (event) { $(this).autocomplete ("search", ""); }); $("#c_t_v").autocomplete({ source: "", minLength: 2, select: function(event,ui){ //$('#city').val(ui.item.city); } }); </script> Can anyone please help? Dont hesitate to let me know if you have any questions.
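
    A hedged sketch of one way to do it: give the second autocomplete a source function and issue the POST yourself with $.ajax, sending the value selected in the first field in the request body. The c_t_v_choices.php endpoint and the filter parameter come from the question; the other names are illustrative, and the PHP side is assumed to return a JSON array.

        var selectedDivision = null;

        $('#divisions').autocomplete({
            source: [
                { value: '81', label: 'City1' },
                { value: '82', label: 'City2' }
            ],
            select: function (event, ui) {
                selectedDivision = ui.item.value;     // remember the chosen id for the second field
                $('#divisions').val(ui.item.label);
                return false;
            }
        });

        $('#c_t_v').autocomplete({
            minLength: 2,
            source: function (request, response) {
                $.ajax({
                    url: 'c_t_v_choices.php',
                    type: 'POST',                     // send the filter in the body instead of the URL
                    dataType: 'json',
                    data: { filter: selectedDivision, term: request.term },
                    success: function (data) { response(data); },
                    error: function () { response([]); }
                });
            }
        });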

    Read the article

  • Emacs: Often switching between Emacs and my IDE's editor, how can I 'synch' the file?

    - by WizardOfOdds
    I very often need to do some Emacs magic on some files, and I need to go back and forth between my IDE (IntelliJ IDEA) and Emacs. When a change is made under Emacs (and after I've saved the file) and I go back to IntelliJ, the change appears immediately (if I recall correctly I configured IntelliJ to "always reload file when a modification is detected on disk" or something like that). I don't even need to reload: as soon as IntelliJ IDEA gains focus, it instantly reloads the file (and I hence have immediate access to the modifications I made from Emacs). So far, so very good. However, "the other way round" it doesn't work yet. Can I configure Emacs so that every time a file is changed on disk it reloads it? Or make Emacs, every time it "gains focus", verify whether any currently opened file has been modified on disk? I know that if I start modifying the buffer under Emacs it will instantly warn me that the file has been modified, but I'd rather have it do so immediately (for example, if I used my IDE to make some big change, when I come back to Emacs what I see may no longer be what the file actually contains, and that's a bit weird).

    Read the article

  • Is OpenID too complicated?

    - by John Leidegren
    I'm beginning to seriously doubt the OpenID community, despite the fact that it works. I'm currently in the process of evaluating OpenID as an authentication service for 'this' site, and while the promises are great, I just can't get it to work. And I'm really lost. I ask the SO community to help me out here. Give me answers and show me examples so I can leverage this the way it was meant to be. My scenario is very typical. I want to authenticate users through a specific Google Apps domain. If you have access to this Google Apps domain, then you have access to my web application. Where I get lost is in all the prerequisites and dependencies involved. What is XRD? What is Yadis? Why do I need XRD and Yadis? What do I need to do to deploy OpenID authentication on my website? Also, this is really important to me: when I log in to SO, I use my Google Account. When I click the login button I'm presented with a confirmation page where I grant SO the right to use my Google Account credentials. Somehow, Google knows that it's "Stackoverflow.com" that's asking me if it's okay to log in, and I wish to know what manner of control I have over this little text. I intend to deploy OpenID on several different domains, but I would prefer it if they could all work without having to be individually configured with special parameters, such as secret API keys and whatnot. However, I don't know for sure whether this is a prerequisite of OpenID, or of the Federated Login API that Google provides.

    Read the article

  • How to change the picture of CustomButtonField on click event?

    - by Ujjal boruah Vinod
    I have posted this question previously but the answer was not appropriate. The solution provided just changes the picture when the custom button gains and loses focus. Suppose in my application I need to change the picture when the user clicks on the customButton, and I am doing this by calling the same screen (i.e. UiApplication.getUiApplication().pushScreen(new Screen2(b));). Screen2 is the screen which holds the customButton. On the click event I am pushing the same screen, passing an int variable pic_status that determines which picture is to be drawn in the CustomButton on the new screen. Is there any way to update the picture in the CustomButtonField on the click event without pushing the same Screen again and again? //code in Screen2 public void fieldChanged(Field field, int context) { if(field == bf1) { if(pic_status == 0) { pic_status=1; } UiApplication.getUiApplication().pushScreen(new Screen2(pic_status)); } //code in CustomButtonField CustomButtonField(String label,int pic_status,long style) { super(style); this.label = label; this.labelHeight = getFont().getHeight(); this.labelWidth = getFont().getAdvance(label); this.notice = s; if(pic_status ==0) { currentPicture1 = onPicture; currentPicture2 = onPicture; } if(pic_status ==1) { currentPicture1 = clickPicture; currentPicture2 = onPicture; } if( pic_status==2 ) { currentPicture1 = onPicture; currentPicture2 = clickPicture; } } I need a way to update the customButtonField text and picture on the buttonClick event, not on the focus/unfocus event, without pushing the same Screen again and again. If my above description of the problem is not satisfactory, please add a comment and I can give a more detailed explanation of my problem.

    Read the article

  • C# form - checkboxes do not respond to plus/minus keys - easy workaround?

    - by Scott
    On forms created with pre-.NET VB and C++ (MFC), a checkbox control responded to the plus/minus keys without custom programming. When focus was on the checkbox control, pressing PLUS would check the box, no matter what the previous state (checked/unchecked), while pressing MINUS would uncheck it, no matter the previous state. C# WinForms checkboxes do not seem to exhibit this behavior. Said behavior was very, very handy for automation, whereby the automating program would set focus to a checkbox control and issue a PLUS or MINUS to check or uncheck it. Without this capability, that cannot be done, as the automation program (at least the one I am using) is unable to query the current state of the checkbox (so it can decide whether to issue a SPACE key to toggle the state to the desired one). I've gone over the properties of a checkbox in the Visual Studio 2008 IDE and could not find anything that would restore/enable the response to PLUS/MINUS. Since I am in control of the source code for the WinForms in question, I could replace all checkbox controls with a custom checkbox control, but blech, I'd like to avoid that - heck, I don't think I could even consider that given the amount of refactoring that would need to be done. So the bottom line is: does anyone know of a way to get this behavior back more easily than with a coding change?

    Read the article

  • Required attribute HTML5

    - by Joop
    First of all I will explain how I stumbled into this behavior. Within my web application I am using some custom validation for my form fields. Within the same form I have two buttons: one to actually submit the form and the other to cancel/reset the form. Mostly I use Safari as my default browser. Now Safari 5 is out, and suddenly my cancel/reset button didn't work anymore. Every time I hit the reset button, the first field in my form got focus. However, this is the same behavior as my custom form validation. When trying it in another browser everything just worked fine, so it had to be a Safari 5 problem. I changed a bit of my JavaScript code and found out that the following line was causing the problem: document.getElementById("somefield").required = true; To be sure that this was really the problem I created a test scenario: <!DOCTYPE html> <html> <head> <title>Test</title> </head> <body> <form id="someform"> <label>Name:</label>&nbsp;<input type="text" id="name" required="true" /><br/> <label>Car:</label>&nbsp;<input type="text" id="car" required="true" /><br/> <br/> <input type="submit" id="btnsubmit" value="Submit!" /> </form> </body> </html> What I expected would happen did happen: the first field "name" got focus automatically. Has anyone else stumbled into this?

    Read the article

  • How to determine which element(s) are visible in an overflowed <div>

    - by jjross
    Basically, I'm trying to implement a system that behaves similarly to the reading pane that's built into the Google Reader interface. If you haven't seen it, Google Reader presents each article in a separate box and, as you scroll, it highlights the current box (and marks the article as read). In addition to this, you can move forward or backward in the article list by clicking the previous and next buttons in the UI. I've basically figured out how to do most of the functionality. However, I'm not sure how I can determine which of my divs is currently visible in the scrollable pane. I have a div that is set to overflow:auto. Inside of this div there are other divs, each one containing a piece of content. I've used the following jQuery plugin to make everything scroll based on a click of the "next" or "previous" button and it works like a charm: http://demos.flesler.com/jquery/serialScroll/ But I can't tell which div has "focus" in the scrollable pane. I'd like to be able to do this for two reasons. 1. I'd like to highlight the item that the user is currently reading (similar to Google Reader). I need to do this regardless of whether they used the plugin to get there or used the browser's scroll bar. 2. I need to be able to tell the plugin which item has focus so that my call to scroll to the "next" pane actually uses the currently viewed pane (and not just the previous pane that the plugin scrolled from). I've tried doing some searching but I can't seem to figure out a way to do this. I found lots of ways to scroll to a particular item, but I can't find a way to determine which element is visible in an overflowed div. If I can determine which items are visible, I can (probably) figure out the rest. I'm using jQuery if that helps. Thanks!
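
    A hedged sketch of one way to find the "current" child (assuming the scrollable pane has id pane, is positioned relative, and each article is a direct child div; all of these names are illustrative): compare each child's offsetTop with the pane's scrollTop and take the first child whose bottom edge is still below the top of the visible area.

        function currentArticle() {
            var pane = document.getElementById('pane');
            var top = pane.scrollTop;
            var current = null;
            $(pane).children('div').each(function () {
                // offsetTop is relative to the pane when the pane is position: relative
                if (this.offsetTop + this.offsetHeight > top) {
                    current = this;       // first child still (partly) visible from the top
                    return false;         // stop the .each loop
                }
            });
            return current;
        }

        // highlight the current article whenever the user scrolls
        $('#pane').scroll(function () {
            $(this).children('div').removeClass('active');
            $(currentArticle()).addClass('active');
        });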

    Read the article

  • How to find "y" values of the already estimated monotone function of the non-monotone regression curve corresponding to the original "x" points?

    - by parenthesis
    The title sounds complicated but that is what I am looking for. Focus on the picture. ## data x <- c(1.009648,1.017896,1.021773,1.043659,1.060277,1.074578,1.075495,1.097086,1.106268,1.110550,1.117795,1.143573,1.166305,1.177850,1.188795,1.198032,1.200526,1.223329,1.235814,1.239068,1.243189,1.260003,1.262732,1.266907,1.269932,1.284472,1.307483,1.323714,1.326705,1.328625,1.372419,1.398703,1.404474,1.414360,1.415909,1.418254,1.430865,1.431476,1.437642,1.438682,1.447056,1.456152,1.457934,1.457993,1.465968,1.478041,1.478076,1.485995,1.486357,1.490379,1.490719) y <- c(0.5102649,0.0000000,0.6360097,0.0000000,0.8692671,0.0000000,1.0000000,0.0000000,0.4183691,0.8953987,0.3442624,0.0000000,0.7513169,0.0000000,0.0000000,0.0000000,0.0000000,0.1291901,0.4936121,0.7565551,1.0085108,0.0000000,0.0000000,0.1655482,0.0000000,0.1473168,0.0000000,0.0000000,0.0000000,0.1875293,0.4918018,0.0000000,0.0000000,0.8101771,0.6853480,0.0000000,0.0000000,0.0000000,0.0000000,0.4068802,1.1061434,0.0000000,0.0000000,0.0000000,0.0000000,0.0000000,0.0000000,0.0000000,0.0000000,0.0000000,0.6391678) fit1 <- c(0.5102649100,0.5153380934,0.5177234836,0.5255544980,0.5307668662,0.5068087080,0.5071001179,0.4825657520,0.4832969250,0.4836378194,0.4842147729,0.5004039310,0.4987301366,0.4978800742,0.4978042478,0.4969807064,0.5086987191,0.4989497612,0.4936121200,0.4922210302,0.4904593166,0.4775197108,0.4757040857,0.4729265271,0.4709141776,0.4612406896,0.4459316517,0.4351338346,0.4331439717,0.4318664278,0.3235179189,0.2907908968,0.1665721429,0.1474035158,0.1443999345,0.1398517097,0.1153991839,0.1142140393,0.1022584672,0.1002410843,0.0840033244,0.0663669309,0.0629119398,0.0627979240,0.0473336492,0.0239237481,0.0238556876,0.0084990298,0.0077970954,0.0000000000,-0.0006598571) fit2 <- c(-0.0006598571,0.0153328298,0.0228511733,0.0652889427,0.0975108758,0.1252414661,0.1270195143,0.1922510501,0.2965234797,0.3018551305,0.3108761043,0.3621749370,0.4184150225,0.4359301495,0.4432114081,0.4493565757,0.4510158144,0.4661865431,0.4744926045,0.4766574718,0.4796937554,0.4834718810,0.4836125426,0.4839450098,0.4841092849,0.4877317306,0.4930561638,0.4964939389,0.4970089201,0.4971376528,0.4990394601,0.5005881678,0.5023814257,0.5052125977,0.5056691690,0.5064254338,0.5115481820,0.5117259449,0.5146054557,0.5149729419,0.5184178197,0.5211542908,0.5216215426,0.5216426533,0.5239797875,0.5273573222,0.5273683002,0.5293994824,0.5295130266,0.5306236672,0.5307303109) ## picture plot(x, y) ## red regression curve points(x, fit1, col=2); lines(x, fit1, col=2) ## blue monotonic curve to the regression points(min(x) + cumsum(c(0, rev(diff(x)))), rev(fit2), col="blue"); lines(min(x) + cumsum(c(0, rev(diff(x)))), rev(fit2), col="blue") ## "x" original point matches with the regression estimated point ## but not with the estimated (fit2=estimate) monotonic curve abline(v=1.223329, lty=2, col="grey") Focus on the dashed grey line. The idea is to get y value of the monotonic blue curve corresponding to x original value. The grey line should cross three points (the original one "black", the regression estimate "red", the adjusted regression estimate "blue"). Can we do this? Methodology: The object "fit2" is the output of the function rearrangement(). It is always monotonically increasing. library(Rearrangement) fit2 <- rearrangement(x=as.data.frame(x), y=fit1)

    Read the article

  • Jquery .next() function not working

    - by Sundhar
    Guys, I am trying to do something like this: I have two hrefs and a text box in the middle of them (- TEXT +), so when I press - or + the value in the text box must decrease or increase by one. " value="<%=addProduct.getInteger("ATR_WebMinQuantity",1)/addProduct.getInteger(MCRConstants.DM_ATR_LEGACY_CASE_VENDOR_PACK_SIZE,1) %" name="ADD_CART_ITEM<quantity" class="text" maxlength="3" / --! I am using jQuery to + and - the value in the text box. Whenever I press + it works correctly, but for - it takes the TEXT field's name instead of its value. Any solution to make it take the value of the TEXT box? The jQuery used follows: $(".quantity .subtract").click(function () { var qtyInput = $(this).next('input'); var qty = parseInt(qtyInput.val()); if (qty > 1) qtyInput.val(qty - 1); qtyInput.focus(); return false; }); $(".quantity .add").click(function () { var qtyInput = $(this).prev('input'); var qty = parseInt(qtyInput.val()); if (qty >= 0 && (qty + 1 <= 999)) qtyInput.val(qty + 1); qtyInput.focus(); return false; });
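
    A hedged sketch of a more robust way to locate the input (assuming both links and the text box share a common .quantity container, as the selectors in the question suggest): look the input up through the container instead of relying on .next()/.prev(), which depend on the exact position of the links relative to the input.

        function changeQty(link, delta) {
            // find the input through the shared container instead of assuming sibling order
            var qtyInput = $(link).closest('.quantity').find('input.text');
            var qty = parseInt(qtyInput.val(), 10) || 0;
            var next = qty + delta;
            if (next >= 1 && next <= 999) {
                qtyInput.val(next);
            }
            qtyInput.focus();
            return false;
        }

        $('.quantity .subtract').click(function () { return changeQty(this, -1); });
        $('.quantity .add').click(function () { return changeQty(this, +1); });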

    Read the article

  • Is your team a high-performing team?

    As a child I can remember looking out of the car window as my father drove along the Interstate in Florida while seeing prisoners wearing bright orange jump suits and prison guards keeping a watchful eye on them. The prisoners were taking part in a prison road gang. These road gangs were formed to help the state maintain the state highway infrastructure. The prisoner’s primary responsibilities are to pick up trash and debris from the roadway. This is a prime example of a work group or working group used by most prison systems in the United States. Work groups or working groups can be defined as a collection of individuals or entities working together to achieve a specific goal or accomplish a specific set of tasks. Typically these groups are only established for a short period of time and are dissolved once the desired outcome has been achieved. More often than not group members usually feel as though they are expendable to the group and some even dread that they are even in the group. "A team is a small number of people with complementary skills who are committed to a common purpose, performance goals, and approach for which they are mutually accountable." (Katzenbach and Smith, 1993) So how do you determine that a team is a high-performing team?  This can be determined by three base line criteria that include: consistently high quality output, the promotion of personal growth and well being of all team members, and most importantly the ability to learn and grow as a unit. Initially, a team can successfully create high-performing output without meeting all three criteria, however this will erode over time because team members will feel detached from the group or that they are not growing then the quality of the output will decline. High performing teams are similar to work groups because they both utilize a collection of individuals or entities to accomplish tasks. What distinguish a high-performing team from a work group are its characteristics. High-performing teams contain five core characteristics. These characteristics are what separate a group from a team. The five characteristics of a high-performing team include: Purpose, Performance Measures, People with Tasks and Relationship Skills, Process, and Preparation and Practice. A high-performing team is much more than a work group, and typically has a life cycle that can vary from team to team. The standard team lifecycle consists of five states and is comparable to a human life cycle. The five states of a high-performing team lifecycle include: Formulating, Storming, Normalizing, Performing, and Adjourning. The Formulating State of a team is first realized when the team members are first defined and roles are assigned to all members. This initial stage is very important because it can set the tone for the team and can ultimately determine its success or failure. In addition, this stage requires the team to have a strong leader because team members are normally unclear about specific roles, specific obstacles and goals that my lay ahead of them.  Finally, this stage is where most team members initially meet one another prior to working as a team unless the team members already know each other. The Storming State normally arrives directly after the formulation of a new team because there are still a lot of unknowns amongst the newly formed assembly. 
As a general rule most of the parties involved in the team are still getting used to the workload, pace of work, deadlines and the validity of various tasks that need to be performed by the group.  In this state everything is questioned because there are so many unknowns. Items commonly questioned include the credentials of others on the team, the actual validity of a project, and the leadership abilities of the team leader.  This can be exemplified by looking at the interactions between animals when they first meet.  If we look at a scenario where two people are walking directly toward each other with their dogs. The dogs will automatically enter the Storming State because they do not know the other dog. Typically in this situation, they attempt to define which is more dominating via play or fighting depending on how the dogs interact with each other. Once dominance has been defined and accepted by both dogs then they will either want to play or leave depending on how the dogs interacted and other environmental variables. Once the Storming State has been realized then the Normalizing State takes over. This state is entered by a team once all the questions of the Storming State have been answered and the team has been tested by a few tasks or projects.  Typically, participants in the team are filled with energy, and comradery, and a strong alliance with team goals and objectives.  A high school football team is a perfect example of the Normalizing State when they start their season.  The player positions have been assigned, the depth chart has been filled and everyone is focused on winning each game. All of the players encourage and expect each other to perform at the best of their abilities and are united by competition from other teams. The Performing State is achieved by a team when its history, working habits, and culture solidify the team as one working unit. In this state team members can anticipate specific behaviors, attitudes, reactions, and challenges are seen as opportunities and not problems. Additionally, each team member knows their role in the team’s success, and the roles of others. This is the most productive state of a group and is where all the time invested working together really pays off. If you look at an Olympic figure skating team skate you can easily see how the time spent working together benefits their performance. They skate as one unit even though it is comprised of two skaters. Each skater has their routine completely memorized as well as their partners. This allows them to anticipate each other’s moves on the ice makes their skating look effortless. The final state of a team is the Adjourning State. This state is where accomplishments by the team and each individual team member are recognized. Additionally, this state also allows for reflection of the interactions between team members, work accomplished and challenges that were faced. Finally, the team celebrates the challenges they have faced and overcome as a unit. Currently in the workplace teams are divided into two different types: Co-located and Distributed Teams. Co-located teams defined as the traditional group of people working together in an office, according to Andy Singleton of Assembla. This traditional type of a team has dominated business in the past due to inadequate technology, which forced workers to primarily interact with one another via face to face meetings.  Team meetings are primarily lead by the person with the highest status in the company. 
Having personally, participated in meetings of this type, usually a select few of the team members dominate the flow of communication which reduces the input of others in group discussions. Since discussions are dominated by a select few individuals the discussions and group discussion are skewed in favor of the individuals who communicate the most in meetings. In addition, Team members might not give their full opinions on a topic of discussion in part not to offend or create controversy amongst the team and can alter decision made in meetings towards those of the opinions of the dominating team members. Distributed teams are by definition spread across an area or subdivided into separate sections. That is exactly what distributed teams when compared to a more traditional team. It is common place for distributed teams to have team members across town, in the next state, across the country and even with the advances in technology over the last 20 year across the world. These teams allow for more diversity compared to the other type of teams because they allow for more flexibility regarding location. A team could consist of a 30 year old male Italian project manager from New York, a 50 year old female Hispanic from California and a collection of programmers from India because technology allows them to communicate as if they were standing next to one another.  In addition, distributed team members consult with more team members prior to making decisions compared to traditional teams, and take longer to come to decisions due to the changes in time zones and cultural events. However, team members feel more empowered to speak out when they do not agree with the team and to notify others of potential issues regarding the work that the team is doing. Virtual teams which are a subset of the distributed team type is changing organizational strategies due to the fact that a team can now in essence be working 24 hrs a day because of utilizing employees in various time zones and locations.  A primary example of this is with customer services departments, a company can have multiple call centers spread across multiple time zones allowing them to appear to be open 24 hours a day while all a employees work from 9AM to 5 PM every day. Virtual teams also allow human resources departments to go after the best talent for the company regardless of where the potential employee works because they will be a part of a virtual team all that is need is the proper technology to be setup to allow everyone to communicate. In addition to allowing employees to work from home, the company can save space and resources by not having to provide a desk for every team member. In fact, those team members that randomly come into the office can actually share one desk amongst multiple people. This is definitely a cost cutting plus given the current state of the economy. One thing that can turn a team into a high-performing team is leadership. High-performing team leaders need to focus on investing in ongoing personal development, provide team members with direction, structure, and resources needed to accomplish their work, make the right interventions at the right time, and help the team manage boundaries between the team and various external parties involved in the teams work. A team leader needs to invest in ongoing personal development in order to effectively manage their team. People have said that attitude is everything; this is very true about leaders and leadership. 
A team takes on the attitudes and behaviors of its leaders. This can potentially harm the team and the team’s output. Leaders must concentrate on self-awareness, and understanding their team’s group dynamics to fully understand how to lead them. In addition, always learning new leadership techniques from other effective leaders is also very beneficial. Providing team members with direction, structure, and resources that they need to accomplish their work collectively sounds easy, but it is not.  Leaders need to be able to effectively communicate with their team on how their work helps the company reach for its organizational vision. Conversely, the leader needs to allow his team to work autonomously within specific guidelines to turn the company’s vision into a reality.  This being said the team must be appropriately staffed according to the size of the team’s tasks and their complexity. These tasks should be clear, and be meaningful to the company’s objectives and allow for feedback to be exchanged with the leader and the team member and the leader and upper management. Now if the team is properly staffed, and has a clear and full understanding of what is to be done; the company also must supply the workers with the proper tools to achieve the tasks that they are asked to do. No one should be asked to dig a hole without being given a shovel.  Finally, leaders must reward their team members for accomplishments that they achieve. Awards could range from just a simple congratulatory email, a party to close the completion of a large project, or other monetary rewards. Managing boundaries is very important for team leaders because it can alter attitudes of team members and can add undue stress to the team which will force them to loose focus on the tasks at hand for the group. Team leaders should promote communication between team members so that burdens are shared amongst the team and solutions can be derived from hearing the opinions of multiple sources. This also reinforces team camaraderie and working as a unit. Team leaders must manage the type and timing of interventions as to not create an even bigger mess within the team. Poorly timed interventions can really deflate team members and make them question themselves. This could really increase further and undue interventions by the team leader. Typically, the best time for interventions is when the team is just starting to form so that all unproductive behaviors are removed from the team and that it can retain focus on its agenda. If an intervention is effectively executed the team will feel energized about the work that they are doing, promote communication and interaction amongst the group and improve moral overall. High-performing teams are very import to organizations because they consistently produce high quality output and develop a collective purpose for their work. This drive to succeed allows team members to utilize specific talents allowing for growth in these areas.  In addition, these team members usually take on a sense of ownership with their projects and feel that the other team members are irreplaceable. References: http://blog.assembla.com/assemblablog/tabid/12618/bid/3127/Three-ways-to-organize-your-team-co-located-outsourced-or-global.aspx Katzenbach, J.R. & Smith, D.K. (1993). The Wisdom of Teams: Creating the High-performance Organization. Boston: Harvard Business School.

    Read the article

  • Windows Azure Evolution - Caching (Preview)

    - by Shaun
    Caching is a popular topic when we are building a high performance and high scalable system not only on top of the cloud platform but the on-premise environment as well. On March 2011 the Windows Azure AppFabric Caching had been production launched. It provides an in-memory, distributed caching service over the cloud. And now, in this June 2012 update, the cache team announce a grand new caching solution on Windows Azure, which is called Windows Azure Caching (Preview). And the original Windows Azure AppFabric Caching was renamed to Windows Azure Shared Caching.   What’s Caching (Preview) If you had been using the Shared Caching you should know that it is constructed by a bunch of cache servers. And when you want to use you should firstly create a cache account from the developer portal and specify the size you want to use, which means how much memory you can use to store your data that wanted to be cached. Then you can add, get and remove them through your code through the cache URL. The Shared Caching is a multi-tenancy system which host all cached items across all users. So you don’t know which server your data was located. This caching mode works well and can take most of the cases. But it has some problems. The first one is the performance. Since the Shared Caching is a multi-tenancy system, which means all cache operations should go through the Shared Caching gateway and then routed to the server which have the data your are looking for. Even though there are some caches in the Shared Caching system it also takes time from your cloud services to the cache service. Secondary, the Shared Caching service works as a block box to the developer. The only thing we know is my cache endpoint, and that’s all. Someone may satisfied since they don’t want to care about anything underlying. But if you need to know more and want more control that’s impossible in the Shared Caching. The last problem would be the price and cost-efficiency. You pay the bill based on how much cache you requested per month. But when we host a web role or worker role, it seldom consumes all of the memory and CPU in the virtual machine (service instance). If using Shared Caching we have to pay for the cache service while waste of some of our memory and CPU locally. Since the issues above Microsoft offered a new caching mode over to us, which is the Caching (Preview). Instead of having a separated cache service, the Caching (Preview) leverage the memory and CPU in our cloud services (web role and worker role) as the cache clusters. Hence the Caching (Preview) runs on the virtual machines which hosted or near our cloud applications. Without any gateway and routing, since it located in the same data center and same racks, it provides really high performance than the Shared Caching. The Caching (Preview) works side-by-side to our application, initialized and worked as a Windows Service running in the virtual machines invoked by the startup tasks from our roles, we could get more information and control to them. And since the Caching (Preview) utilizes the memory and CPU from our existing cloud services, so it’s free. What we need to pay is the original computing price. And the resource on each machines could be used more efficiently.   Enable Caching (Preview) It’s very simple to enable the Caching (Preview) in a cloud service. Let’s create a new windows azure cloud project from Visual Studio and added an ASP.NET Web Role. Then open the role setting and select the Caching page. 
This is where we enable and configure the Caching (Preview) on a role. To enable the Caching (Preview) just open the “Enable Caching (Preview Release)” check box. And then we need to specify which mode of the caching clusters we want to use. There are two kinds of caching mode, co-located and dedicate. The co-located mode means we use the memory in the instances we run our cloud services (web role or worker role). By using this mode we must specify how many percentage of the memory will be used as the cache. The default value is 30%. So make sure it will not affect the role business execution. The dedicate mode will use all memory in the virtual machine as the cache. In fact it will reserve some for operation system, azure hosting etc.. But it will try to use as much as the available memory to be the cache. As you can see, the Caching (Preview) was defined based on roles, which means all instances of this role will apply the same setting and play as a whole cache pool, and you can consume it by specifying the name of the role, which I will demonstrate later. And in a windows azure project we can have more than one role have the Caching (Preview) enabled. Then we will have more caches. For example, let’s say I have a web role and worker role. The web role I specified 30% co-located caching and the worker role I specified dedicated caching. If I have 3 instances of my web role and 2 instances of my worker role, then I will have two caches. As the figure above, cache 1 was contributed by three web role instances while cache 2 was contributed by 2 worker role instances. Then we can add items into cache 1 and retrieve it from web role code and worker role code. But the items stored in cache 1 cannot be retrieved from cache 2 since they are isolated. Back to our Visual Studio we specify 30% of co-located cache and use the local storage emulator to store the cache cluster runtime status. Then at the bottom we can specify the named caches. Now we just use the default one. Now we had enabled the Caching (Preview) in our web role settings. Next, let’s have a look on how to consume our cache.   Consume Caching (Preview) The Caching (Preview) can only be consumed by the roles in the same cloud services. As I mentioned earlier, a cache contributed by web role can be connected from a worker role if they are in the same cloud service. But you cannot consume a Caching (Preview) from other cloud services. This is different from the Shared Caching. The Shared Caching is opened to all services if it has the connection URL and authentication token. To consume the Caching (Preview) we need to add some references into our project as well as some configuration in the Web.config. NuGet makes our life easy. Right click on our web role project and select “Manage NuGet packages”, and then search the package named “WindowsAzure.Caching”. In the package list install the “Windows Azure Caching Preview”. It will download all necessary references from the NuGet repository and update our Web.config as well. Open the Web.config of our web role and find the “dataCacheClients” node. Under this node we can specify the cache clients we are going to use. For each cache client it will use the role name to identity and find the cache. Since we only have this web role with the Caching (Preview) enabled so I pasted the current role name in the configuration. Then, in the default page I will add some code to show how to use the cache. 
I will have a textbox on the page where user can input his or her name, then press a button to generate the email address for him/her. And in backend code I will check if this name had been added in cache. If yes I will return the email back immediately. Otherwise, I will sleep the tread for 2 seconds to simulate the latency, then add it into cache and return back to the page. 1: protected void btnGenerate_Click(object sender, EventArgs e) 2: { 3: // check if name is specified 4: var name = txtName.Text; 5: if (string.IsNullOrWhiteSpace(name)) 6: { 7: lblResult.Text = "Error. Please specify name."; 8: return; 9: } 10:  11: bool cached; 12: var sw = new Stopwatch(); 13: sw.Start(); 14:  15: // create the cache factory and cache 16: var factory = new DataCacheFactory(); 17: var cache = factory.GetDefaultCache(); 18:  19: // check if the name specified is in cache 20: var email = cache.Get(name) as string; 21: if (email != null) 22: { 23: cached = true; 24: sw.Stop(); 25: } 26: else 27: { 28: cached = false; 29: // simulate the letancy 30: Thread.Sleep(2000); 31: email = string.Format("{0}@igt.com", name); 32: // add to cache 33: cache.Add(name, email); 34: } 35:  36: sw.Stop(); 37: lblResult.Text = string.Format( 38: "Cached = {0}. Duration: {1}s. {2} => {3}", 39: cached, sw.Elapsed.TotalSeconds.ToString("0.00"), name, email); 40: } The Caching (Preview) can be used on the local emulator so we just F5. The first time I entered my name it will take about 2 seconds to get the email back to me since it was not in the cache. But if we re-enter my name it will be back at once from the cache. Since the Caching (Preview) is distributed across all instances of the role, so we can scaling-out it by scaling-out our web role. Just use 2 instances and tweak some code to show the current instance ID in the page, and have another try. Then we can see the cache can be retrieved even though it was added by another instance.   Consume Caching (Preview) Across Roles As I mentioned, the Caching (Preview) can be consumed by all other roles within the same cloud service. For example, let’s add another web role in our cloud solution and add the same code in its default page. In the Web.config we add the cache client to one enabled in the last role, by specifying its role name here. Then we start the solution locally and go to web role 1, specify the name and let it generate the email to us. Since there’s no cache for this name so it will take about 2 seconds but will save the email into cache. And then we go to web role 2 and specify the same name. Then you can see it retrieve the email saved by the web role 1 and returned back very quickly. Finally then we can upload our application to Windows Azure and test again. Make sure you had changed the cache cluster status storage account to the real azure account.   More Awesome Features As a in-memory distributed caching solution, the Caching (Preview) has some fancy features I would like to highlight here. The first one is the high availability support. This is the first time I have heard that a distributed cache support high availability. In the distributed cache world if a cache cluster was failed, the data it stored will be lost. This behavior was introduced by Memcached and is followed by almost all distributed cache productions. But Caching (Preview) provides high availability, which means you can specify if the named cache will be backup automatically. If yes then the data belongs to this named cache will be replicated on another role instance of this role. 
Then if one of the instance was failed the data can be retrieved from its backup instance. To enable the backup just open the Caching page in Visual Studio. In the named cache you want to enable backup, change the Backup Copies value from 0 to 1. The value of Backup Copies only for 0 and 1. “0” means no backup and no high availability while “1” means enabled high availability with backup the data into another instance. But by using the high availability feature there are something we need to make sure. Firstly the high availability does NOT means the data in cache will never be lost for any kind of failure. For example, if we have a role with cache enabled that has 10 instances, and 9 of them was failed, then most of the cached data will be lost since the primary and backup instance may failed together. But normally is will not be happened since MS guarantees that it will use the instance in the different fault domain for backup cache. Another one is that, enabling the backup means you store two copies of your data. For example if you think 100MB memory is OK for cache, but you need at least 200MB if you enabled backup. Besides the high availability, the Caching (Preview) support more features introduced in Windows Server AppFabric Caching than the Windows Azure Shared Caching. It supports local cache with notification. It also support absolute and slide window expiration types as well. And the Caching (Preview) also support the Memcached protocol as well. This means if you have an application based on Memcached, you can use Caching (Preview) without any code changes. What you need to do is to change the configuration of how you connect to the cache. Similar as the Windows Azure Shared Caching, MS also offers the out-of-box ASP.NET session provider and output cache provide on top of the Caching (Preview).   Summary Caching is very important component when we building a cloud-based application. In the June 2012 update MS provides a new cache solution named Caching (Preview). Different from the existing Windows Azure Shared Caching, Caching (Preview) runs the cache cluster within the role instances we have deployed to the cloud. It gives more control, more performance and more cost-effect. So now we have two caching solutions in Windows Azure, the Shared Caching and Caching (Preview). If you need a central cache service which can be used by many cloud services and web sites, then you have to use the Shared Caching. But if you only need a fast, near distributed cache, then you’d better use Caching (Preview).   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Log message Request and Response in ASP.NET WebAPI

    - by Fredrik N
Logging both incoming and outgoing messages for services can be useful in many scenarios, such as debugging, tracing, inspection and helping customers with request problems. I have a customer that needs both incoming and outgoing messages to be logged. They use the information to spot strange behaviors and also to help customers when they call in for support (by looking in the log they can see if a customer sends in data in a wrong or strange way).   Concerns Most logging in applications is a cross-cutting concern and should not be a core concern for developers. Logging messages like this:

// GET api/values/5
public string Get(int id)
{
    // Cross-cutting concern
    Log(string.Format("Request: GET api/values/{0}", id));

    // Core concern
    var response = DoSomething();

    // Cross-cutting concern
    Log(string.Format("Response: GET api/values/{0}\r\n{1}", id, response));

    return response;
}

will only result in duplicated code and unnecessary concerns for developers to be aware of; if they forget to add the logging code, no logging will take place. Developers should focus on the core concern, not the cross-cutting concerns. By focusing only on the core concern, the above code will look like this:

// GET api/values/5
public string Get(int id)
{
    return DoSomething();
}

The logging should then be placed somewhere else so the developers don't need to care about the cross-cutting concern. Using Message Handler for logging There are different ways we could place the cross-cutting concern of logging messages when using WebAPI.
We can, for example, create a custom ApiController and override the ApiController's ExecutingAsync method, add an ActionFilter, or use a Message Handler. The disadvantage of a custom ApiController is that we need to make sure we inherit from it; the disadvantage of an ActionFilter is that we need to add the filter to the controllers; both will modify our ApiControllers. By using a Message Handler we don't need to make any changes to our ApiControllers. So the most suitable place to add our logging would be a custom Message Handler. A Message Handler runs before the HttpControllerDispatcher (the part of the WebAPI pipeline that makes sure the right controller is selected and called). Note: You can read more about message handlers here; it will give you a good understanding of the WebAPI pipeline. To create a Message Handler we can inherit from the DelegatingHandler class and override the SendAsync method:

public class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        return await base.SendAsync(request, cancellationToken);
    }
}

If we skip the call to base.SendAsync, our ApiController's methods will never be invoked, nor will any other Message Handlers. Everything placed before base.SendAsync runs before the HttpControllerDispatcher (before WebAPI has looked at the request to decide which controller and method to invoke); everything after base.SendAsync runs after our ApiController method has returned a response. So a Message Handler is the perfect place to add cross-cutting concerns such as logging. To get the content of the incoming request within a Message Handler we can use the request argument of the SendAsync method. The request argument is of type HttpRequestMessage and has a Content property of type HttpContent. HttpContent has several methods that can be used to read the incoming message, such as ReadAsStreamAsync, ReadAsByteArrayAsync and ReadAsStringAsync. Something to be aware of is what happens when we read from the HttpContent. When we read from the HttpContent we read from a stream, and once it has been read it can't be read again. So if we read from the stream before base.SendAsync, the following Message Handlers and the HttpControllerDispatcher can't read from the stream because it has already been read, and our ApiControllers' methods will never be invoked. The only way to make sure we can do repeatable reads from the HttpContent is to copy the content into a buffer, and then read from that buffer. This can be done by using the HttpContent's LoadIntoBufferAsync method.
If we make a call to the LoadIntoBufferAsync method before base.SendAsync, the incoming stream will be read into a byte array, and other HttpContent read operations will then read from that buffer, if it exists, instead of directly from the stream. There is one method on HttpContent that internally makes the call to LoadIntoBufferAsync for us, and that is ReadAsByteArrayAsync. This is the method we will use to read the incoming and outgoing messages.

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var requestMessage = await request.Content.ReadAsByteArrayAsync();

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();

        return response;
    }
}

The above code will read the content of the incoming message, call SendAsync, and after that read the content of the response message. The following code adds more logic, such as creating a correlation id to tie the request to the response and creating the actual log entries:

public abstract class MessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var corrId = string.Format("{0}{1}", DateTime.Now.Ticks, Thread.CurrentThread.ManagedThreadId);
        var requestInfo = string.Format("{0} {1}", request.Method, request.RequestUri);

        var requestMessage = await request.Content.ReadAsByteArrayAsync();
        await IncommingMessageAsync(corrId, requestInfo, requestMessage);

        var response = await base.SendAsync(request, cancellationToken);

        var responseMessage = await response.Content.ReadAsByteArrayAsync();
        await OutgoingMessageAsync(corrId, requestInfo, responseMessage);

        return response;
    }

    protected abstract Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message);
    protected abstract Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message);
}

public class MessageLoggingHandler : MessageHandler
{
    protected override async Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Request: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }

    protected override async Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        await Task.Run(() =>
            Debug.WriteLine(string.Format("{0} - Response: {1}\r\n{2}", correlationId, requestInfo, Encoding.UTF8.GetString(message))));
    }
}
The code above will show the following in the Visual Studio output window when the "api/values" service (one of the standard controllers added by the default WebAPI template) is requested with the GET HTTP method:

6347483479959544375 - Request: GET http://localhost:3208/api/values
6347483479959544375 - Response: GET http://localhost:3208/api/values
["value1","value2"]

Register a Message Handler To register a Message Handler we can use the Add method of GlobalConfiguration.Configuration.MessageHandlers, for example in Global.asax:

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        GlobalConfiguration.Configuration.MessageHandlers.Add(new MessageLoggingHandler());
        ...
    }
}

Summary By using a Message Handler we can easily remove cross-cutting concerns like logging from our controllers. You can also find the source code used in this blog post on ForkCan.com; feel free to make a fork or add comments, for example to make the code better. Feel free to follow me on twitter @fredrikn if you want to know when I write other blog posts.
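If Debug.WriteLine output is too blunt for production, one variation on the MessageLoggingHandler above is to route the entries through System.Diagnostics.Trace listeners and truncate large payloads. The sketch below is only an illustration of that idea built on the MessageHandler base class from the post; the 4 KB limit and the class name are my own assumptions, not part of the original article.

using System;
using System.Diagnostics;
using System.Text;
using System.Threading.Tasks;

// Illustrative variation: writes through Trace (routable to files or other sinks via
// configuration) and truncates large bodies so the log stays readable.
public class TraceMessageLoggingHandler : MessageHandler
{
    private const int MaxLoggedBytes = 4096; // assumed limit, tune as needed

    protected override Task IncommingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        Trace.TraceInformation("{0} - Request: {1}\r\n{2}", correlationId, requestInfo, Truncate(message));
        return Task.FromResult(0);
    }

    protected override Task OutgoingMessageAsync(string correlationId, string requestInfo, byte[] message)
    {
        Trace.TraceInformation("{0} - Response: {1}\r\n{2}", correlationId, requestInfo, Truncate(message));
        return Task.FromResult(0);
    }

    private static string Truncate(byte[] message)
    {
        // Decode at most MaxLoggedBytes bytes and mark the entry if it was cut short.
        var text = Encoding.UTF8.GetString(message, 0, Math.Min(message.Length, MaxLoggedBytes));
        return message.Length > MaxLoggedBytes ? text + "... [truncated]" : text;
    }
}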

    Read the article

  • Generating EF Code First model classes from an existing database

    - by Jon Galloway
Entity Framework Code First is a lightweight way to "turn on" data access for a simple CLR class. As the name implies, the intended use is that you're writing the code first and thinking about the database later. However, I really like how Entity Framework Code First works, and I want to use it in existing projects and projects with pre-existing databases. For example, MVC Music Store comes with a SQL Express database that's pre-loaded with a catalog of music (including genres, artists, and songs), and while it may eventually make sense to load that seed data from a different source, for the MVC 3 release we wanted to keep using the existing database. While I'm not getting the full benefit of Code First - writing code which drives the database schema - I can still benefit from the simplicity of the lightweight code approach. Scott Guthrie blogged about how to use Entity Framework with an existing database, looking at how you can override the Entity Framework Code First conventions so that it can work with a database which was created following other conventions. That gives you the information you need to create the model classes manually. However, it turns out that with Entity Framework 4 CTP 5, there's a way to generate the model classes from the database schema. Once the grunt work is done, of course, you can go in and modify the model classes as you'd like, but you can save the time and frustration of figuring out things like mapping SQL database types to .NET types. Note that this template requires Entity Framework 4 CTP 5 or later. You can install EF 4 CTP 5 here. Step One: Generate an EF Model from your existing database The code generation system in Entity Framework works from a model. You can add a model to your existing project and delete it when you're done, but I think it's simpler to just spin up a separate project to generate the model classes. When you're done, you can delete the project without affecting your application, or you may choose to keep it around in case you have other database schema updates which require model changes. I chose to add the Model classes to the Models folder of a new MVC 3 application. Right-click the folder and select "Add / New Item..."   Next, select ADO.NET Entity Data Model from the Data Templates list, and name it whatever you want (the name is unimportant).   Next, select "Generate from database." This is important - it's what kicks off the next few steps, which read your database's schema.   Now it's time to point the Entity Data Model Wizard at your existing database. I'll assume you know how to find your database - if not, I covered that a bit in the MVC Music Store tutorial section on Models and Data. Select your database, uncheck the "Save entity connection settings in Web.config" option (since we won't be using them within the application), and click Next.   Now you can select the database objects you'd like modeled. I just selected all tables and clicked Finish.   And there's your model. If you want, you can make additional changes here before going on to generate the code.   Step Two: Add the DbContext Generator Like most code generation systems in Visual Studio lately, Entity Framework uses T4 templates which allow for some control over how the code is generated. K Scott Allen wrote a detailed article on T4 Templates and the Entity Framework on MSDN recently, if you'd like to know more. Fortunately for us, there's already a template that does just what we need without any customization.
Right-click a blank space in the Entity Framework model surface and select "Add Code Generation Item..." Select the Code group in the Installed Templates section and pick the ADO.NET DbContext Generator. If you don't see this listed, make sure you've got EF 4 CTP 5 installed and that you're looking at the Code templates group. Note that the DbContext Generator template is similar to the EF POCO template which came out last year, but with "fix up" code (unnecessary in EF Code First) removed.   As soon as you do this, you'll see two terrifying Security Warnings - unless you click the "Do not show this message again" checkbox the first time. It will also be displayed (twice) every time you rebuild the project, so I checked the box and no immediate harm befell my computer (fingers crossed!).   Here's the payoff: two templates (filenames ending with .tt) have been added to the project, and they've generated the code I needed.   The "MusicStoreEntities.Context.tt" template built a DbContext class which holds the entity collections, and the "MusicStoreEntities.tt" template built a separate class for each table I selected earlier. We'll customize them in the next step. I recommend copying all the generated .cs files into your application at this point, since accidentally rebuilding the generation project will overwrite your changes if you leave them there. Step Three: Modify and use your POCO entity classes Note: I made a bunch of tweaks to my POCO classes after they were generated. You don't have to do any of this, but I think it's important that you can - they're your classes, and EF Code First respects that. Modify them as you need for your application, or don't. The Context class derives from DbContext, which is what turns on the EF Code First features. It holds a DbSet<TEntity> for each entity. Think of DbSet<TEntity> as a simple List<TEntity>, but with Entity Framework features turned on.

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace EF_CodeFirst_From_Existing_Database.Models
{
    using System;
    using System.Data.Entity;

    public partial class Entities : DbContext
    {
        public Entities()
            : base("name=Entities")
        {
        }

        public DbSet<Album> Albums { get; set; }
        public DbSet<Artist> Artists { get; set; }
        public DbSet<Cart> Carts { get; set; }
        public DbSet<Genre> Genres { get; set; }
        public DbSet<OrderDetail> OrderDetails { get; set; }
        public DbSet<Order> Orders { get; set; }
    }
}

It's a pretty lightweight class as generated, so I just took out the comments, set the namespace, removed the constructor, and formatted it a bit. Done. If I wanted, though, I could have added or removed DbSets, overridden conventions, etc.

using System.Data.Entity;

namespace MvcMusicStore.Models
{
    public class MusicStoreEntities : DbContext
    {
        public DbSet<Album> Albums { get; set; }
        public DbSet<Genre> Genres { get; set; }
        public DbSet<Artist> Artists { get; set; }
        public DbSet<Cart> Carts { get; set; }
        public DbSet<Order> Orders { get; set; }
        public DbSet<OrderDetail> OrderDetails { get; set; }
    }
}

Next, it's time to look at the individual classes. Some of mine were pretty simple - for the Cart class, I just need to remove the header and clean up the namespace.

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace EF_CodeFirst_From_Existing_Database.Models
{
    using System;
    using System.Collections.Generic;

    public partial class Cart
    {
        // Primitive properties
        public int RecordId { get; set; }
        public string CartId { get; set; }
        public int AlbumId { get; set; }
        public int Count { get; set; }
        public System.DateTime DateCreated { get; set; }

        // Navigation properties
        public virtual Album Album { get; set; }
    }
}

I did a bit more customization on the Album class. Here's what was generated:

//------------------------------------------------------------------------------
// <auto-generated>
//     This code was generated from a template.
//
//     Changes to this file may cause incorrect behavior and will be lost if
//     the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

namespace EF_CodeFirst_From_Existing_Database.Models
{
    using System;
    using System.Collections.Generic;

    public partial class Album
    {
        public Album()
        {
            this.Carts = new HashSet<Cart>();
            this.OrderDetails = new HashSet<OrderDetail>();
        }

        // Primitive properties
        public int AlbumId { get; set; }
        public int GenreId { get; set; }
        public int ArtistId { get; set; }
        public string Title { get; set; }
        public decimal Price { get; set; }
        public string AlbumArtUrl { get; set; }

        // Navigation properties
        public virtual Artist Artist { get; set; }
        public virtual Genre Genre { get; set; }
        public virtual ICollection<Cart> Carts { get; set; }
        public virtual ICollection<OrderDetail> OrderDetails { get; set; }
    }
}

I removed the header, changed the namespace, and removed some of the navigation properties. One nice thing about EF Code First is that you don't have to have a property for each database column or foreign key. In the Music Store sample, for instance, we build the app up using code first and start with just a few columns, adding in fields and navigation properties as the application needs them. EF Code First handles the columns we've told it about and doesn't complain about the others. Here's the basic class:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;
using System.Collections.Generic;

namespace MvcMusicStore.Models
{
    public class Album
    {
        public int AlbumId { get; set; }
        public int GenreId { get; set; }
        public int ArtistId { get; set; }
        public string Title { get; set; }
        public decimal Price { get; set; }
        public string AlbumArtUrl { get; set; }
        public virtual Genre Genre { get; set; }
        public virtual Artist Artist { get; set; }
        public virtual List<OrderDetail> OrderDetails { get; set; }
    }
}

It's my class, not Entity Framework's, so I'm free to do what I want with it.
I added a bunch of MVC 3 annotations for scaffolding and validation support, as shown below:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using System.Web.Mvc;
using System.Collections.Generic;

namespace MvcMusicStore.Models
{
    [Bind(Exclude = "AlbumId")]
    public class Album
    {
        [ScaffoldColumn(false)]
        public int AlbumId { get; set; }

        [DisplayName("Genre")]
        public int GenreId { get; set; }

        [DisplayName("Artist")]
        public int ArtistId { get; set; }

        [Required(ErrorMessage = "An Album Title is required")]
        [StringLength(160)]
        public string Title { get; set; }

        [Required(ErrorMessage = "Price is required")]
        [Range(0.01, 100.00, ErrorMessage = "Price must be between 0.01 and 100.00")]
        public decimal Price { get; set; }

        [DisplayName("Album Art URL")]
        [StringLength(1024)]
        public string AlbumArtUrl { get; set; }

        public virtual Genre Genre { get; set; }
        public virtual Artist Artist { get; set; }
        public virtual List<OrderDetail> OrderDetails { get; set; }
    }
}

The end result was that I had working EF Code First model code for the finished application. You can follow along through the tutorial to see how I built up to the finished model classes, starting with simple 2-3 property classes and building up to the full working schema. Thanks to Diego Vega (on the Entity Framework team) for pointing me to the DbContext template.
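For completeness, here is a hedged sketch of how the finished MusicStoreEntities context and Album class might be consumed from an MVC 3 controller. The StoreController, the Genre.Name property, and the query are illustrative assumptions based on the Music Store sample, not code from the tutorial above.

using System.Linq;
using System.Web.Mvc;
using MvcMusicStore.Models;

// Illustrative only: a minimal controller using the Code First context shown above.
public class StoreController : Controller
{
    private readonly MusicStoreEntities db = new MusicStoreEntities();

    public ActionResult Browse(string genreName)
    {
        // LINQ-to-Entities query against DbSet<Album>; assumes Genre exposes a Name
        // property as in the Music Store sample. Genre is lazy-loaded because the
        // navigation property is virtual.
        var albums = db.Albums
                       .Where(a => a.Genre.Name == genreName)
                       .OrderBy(a => a.Title)
                       .ToList();

        return View(albums);
    }

    protected override void Dispose(bool disposing)
    {
        // DbContext holds a database connection, so dispose it with the controller.
        db.Dispose();
        base.Dispose(disposing);
    }
}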

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirements aspects is warranted. Aspects of Functionality There are two sides to Functionality requirements. It's about what a software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below). Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests. Because only automated tests scale when over time the number of operations increases. Without automated tests there is no guarantee formerly correct operations are still correct after more got added. To retest all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the side of the scale of Operations. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project on. Aspects of Quality As important as Functionality is, it's not the driver for software development. No software has ever been written to just implement some operation in code. We don't need computers just to do something. Everything computers can do with software we could also do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could do auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is… We don't want to wait for the results very long. Or we want fewer errors. Or we want easier accessibility to complicated solutions. So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors: Most important are Primary Qualities. That's the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak. It does not deliver value by itself. With a password manager software this might be different. There security might be a Primary Quality. Please don't get me wrong: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economical decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean. Aspects of Security of Investment Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality it's bound to suffer under pressure. To look closer at what SoI means will help to become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets: Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct tape programming and banging out features by the dozen, though. Because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency it will backfire. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If code quality leads to rework, the PE is on an unsatisfactory level. Also if code production leads to waste it's unsatisfactory. Because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not. Making software development more efficient is important - but still sooner or later even agile projects are going to hit a glass ceiling. At least as long as they neglect the second SoI facet: Evolvability. Delivering correct high quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed the sooner it's going to get stuck.
Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the costs of adding feature X to an evolvable code base are independent of when it is requested - or at least the costs should only increase linearly, not exponentially.[2] Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… When I look at the design ability reality in teams I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken to not neglect it. Postponing it to some large refactorings should not be an option. Rather Evolvability needs to be a core concern for every single developer day. This should not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested into it, to bring it on par with the other aspects, which usually are much more in focus. In closing As you see, requirements are of quite different kinds. To not take that into account will make it harder to understand the customer, and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. To improve performance might have an impact on Evolvability. To increase Production Efficiency might have an impact on security etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace back each decision to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions are not meant to imply anything should be different in a code base. But it's important to know the reason behind all of these decisions. Because not knowing the reason possibly means waste and having decided suboptimally. And how do we ensure we balance all requirement aspects? That needs practices and transparency. Practices mean doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here. Like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don't put them in people's legs just because you're feeling like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so requirements are balanced almost automatically.
In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code. Testing frameworks, build servers, DI containers, intellisense, refactoring support… That's all nice and well. I don't want to miss any of that. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver. So my conclusion is: we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like with flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: We need more checklists for the complex business of software development.[3] [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon. Because to all this there is a foreseeable end. Software development is like continuously and forever running… [2] And sometimes I dare to think that costs could even decrease over time. Think of it: With each feature a software becomes richer in functionality. So with each additional feature the chance of there being already functionality helping its implementation increases. That should lead to less costs of feature X if it's requested later than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.[1] [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

< Previous Page | 81 82 83 84 85 86 87 88 89 90 91 92  | Next Page >