Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.


  • show hidden div tag from another page

    - by neueweblernen
    I'm trying to link to an all-inclusive FAQ page from various pages. The answers are contained in hidden div tags, nested within the line items of an unordered list organized by categories. The FAQ page has the following categories: Practical Nurse Exam, Online Renewal, Practice Hours, etc. Under Practical Nurse Exam there are subcategories (subjects), with the questions below them in div tags that expand onClick (e.g. Examination Day, Exam Results, etc.). Let's say I'm on a different page called Registration and there's a link to the FAQs for Exam Results. I'm able to link to the page and include the hash for the Exam Results anchor, but it does not expand the subcategory. I've read this thread but it didn't work for me. Please help! The code is below:

        <script type="text/javascript">
        function toggle(Info, pic) {
            var CState = document.getElementById(Info);
            CState.style.display = (CState.style.display != 'block') ? 'block' : 'none';
        }

        window.onload = function() {
            var hash = window.location.hash; // would be "#div1" or something
            if (hash != "") {
                var id = hash.substr(1); // get rid of #
                document.getElementById(id).style.display = 'block';
            }
        }
        </script>

        <style type="text/css">
        .FAQ { cursor:hand; cursor:pointer; }
        .FAA { display:none; padding-left:20px; text-indent:-20px; }
        #FAQlist li { list-style-type: none; }
        #FAQlist ul { margin-left:0px; }
        headingOne { font-family:Arial, Helvetica, sans-serif; color:#66BBFF; font-size:20px; font-weight:bold; }
        </style>

    Here's the body (part of it, anyway):

        <headingOne class="FAQ" onClick="toggle('CPNRE', this)">PRACTICAL NURSE EXAM</headingOne>
        <div class="FAA" id="CPNRE">
            <h3><a name="applying">Applying to write the CPNRE</a></h3>
            <ul id="FAQlist" style="width:450px;">
                <li class="FAQ">
                    <p onclick="toggle('faq1',this)">
                        <strong>Q: How much does it cost to write the exam?</strong></p>
                    <div class="FAA" id="faq1">
                        <b>A.</b> In 2013, the cost for the first exam writing is $600.00 which
                        includes the interim license fee. See
                        <a href="https://www.clpnbc.org/What-is-an-LPN/Becoming-an-LPN/Canadian-Practical-Nurse-Registration-Examination/Fees-and-Deadlines.aspx">fee schedule</a>.
                    </div>
                    <hr />
                </li>

    And here's the body of the other page that contains the link, with the same script syntax as the all-inclusive FAQ page. This is just a test; that's not exactly what it will say:

        <a onclick="toggle('CPNRE', this)" href="file:///S|/Designs/Web stuff/FAQ all inclusive.html#applying">click here</a>
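
    The onload handler above unhides only the element whose id matches the hash, but "#applying" points at an <a name="applying"> anchor that sits inside the hidden .FAA div, so nothing visible changes. A minimal sketch of one way to handle it (assuming the markup above, where every collapsible section carries the FAA class): look the anchor up by id or name, then walk up its ancestors and unhide each collapsed one.

        <script type="text/javascript">
        window.onload = function() {
            var hash = window.location.hash;
            if (hash != "") {
                var name = hash.substr(1); // strip the #
                // the target may be identified by id, or by an <a name="..."> anchor
                var target = document.getElementById(name) ||
                             document.getElementsByName(name)[0];
                // un-hide every collapsed ancestor so the anchor becomes visible
                while (target && target != document.body) {
                    if (target.className && target.className.indexOf('FAA') != -1) {
                        target.style.display = 'block';
                    }
                    target = target.parentNode;
                }
            }
        };
        </script>

    With this in place the linking page needs no onclick at all; a plain <a href="FAQ all inclusive.html#applying"> link would do.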

    Read the article

  • Extending JavaScript's Date.parse to allow for DD/MM/YYYY (non-US formatted dates)?

    - by Campbeln
    I've come up with this solution to extending JavaScript's Date.parse function to allow for dates formatted in DD/MM/YYYY (rather than the American standard [and default] MM/DD/YYYY):

        (function() {
            var fDateParse = Date.parse;
            Date.parse = function(sDateString) {
                var a_sLanguage = ['en', 'en-us'],
                    a_sMatches = null,
                    sCurrentLanguage, dReturn = null, i;

                //#### Traverse the a_sLanguages (as reported by the browser)
                for (i = 0; i < a_sLanguage.length; i++) {
                    //#### Collect the .toLowerCase'd sCurrentLanguage for this loop
                    sCurrentLanguage = (a_sLanguage[i] + '').toLowerCase();

                    //#### If this is the first English definition
                    if (sCurrentLanguage.indexOf('en') == 0) {
                        //#### If this is a definition for a non-American based English (meaning dates are "DD MM YYYY")
                        if (sCurrentLanguage.indexOf('en-us') == -1 &&  // en-us = English (United States) + Palau, Micronesia, Philippines
                            sCurrentLanguage.indexOf('en-ca') == -1 &&  // en-ca = English (Canada)
                            sCurrentLanguage.indexOf('en-bz') == -1     // en-bz = English (Belize)
                        ) {
                            //#### Setup a oRegEx to locate "## ## ####" (allowing for any sort of delimiter except a '\n')
                            //#### then collect the a_sMatches from the passed sDateString
                            var oRegEx = new RegExp("(([0-9]{2}|[0-9]{1})[^0-9]*?([0-9]{2}|[0-9]{1})[^0-9]*?([0-9]{4}))", "i");
                            a_sMatches = oRegEx.exec(sDateString);
                        }
                        //#### Fall from the loop (as we've found the first English definition)
                        break;
                    }
                }

                //#### If we were able to find a_sMatches for a non-American English "DD MM YYYY" formatted date
                if (a_sMatches != null) {
                    var oRegEx = new RegExp(a_sMatches[0], "i");
                    //#### .parse the sDateString via the normal Date.parse function, but replacing the "DD?MM?YYYY" with "YYYY/MM/DD" beforehand
                    //#### NOTE: a_sMatches[0]=[Default]; a_sMatches[1]=DD?MM?YYYY; a_sMatches[2]=DD; a_sMatches[3]=MM; a_sMatches[4]=YYYY
                    dReturn = fDateParse(sDateString.replace(oRegEx, a_sMatches[4] + "/" + a_sMatches[3] + "/" + a_sMatches[2]));
                }
                //#### Else .parse the sDateString via the normal Date.parse function
                else {
                    dReturn = fDateParse(sDateString);
                }

                //####
                return dReturn;
            }
        })();

    In my actual (dotNet) code, I'm collecting the a_sLanguage array via:

        a_sLanguage = '<% Response.Write(Request.ServerVariables["HTTP_ACCEPT_LANGUAGE"]); %>'.split(',');

    Now, I'm not certain my approach to locating "en-us"/etc. is the most proper. Pretty much it's just the US and current/former US-influenced areas (Palau, Micronesia, the Philippines) plus Belize and Canada that use the funky MM/DD/YYYY format (I am American, so I can call it funky =). So one could rightly argue that if the Locale is not "en-us"/etc. first, then DD/MM/YYYY should be used. Thoughts?

    As a side note... I "grew up" in Perl but it's been a wee while since I've done much heavy lifting in RegEx. Does that expression look right to everyone? This seems like a lot of work, but based on my research this is indeed about the best way to go about enabling DD/MM/YYYY dates within JavaScript. Is there an easier/more betterer way?

    PS: Upon re-reading this post just before submission... I've realized that this is more of a "can you code review this" rather than a question (or, an answer is embedded within the question). When I started writing this it was not my intention to end up here =)
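
    One way to sidestep Date.parse's format ambiguity entirely is to pull the three fields out of the match and build the Date from numeric parts, which also lets you reject impossible dates. A minimal sketch (parseDMY is a hypothetical helper, assuming strict DD/MM/YYYY input, not a drop-in replacement for the override above):

        function parseDMY(s) {
            var m = /^(\d{1,2})[^0-9](\d{1,2})[^0-9](\d{4})$/.exec(s);
            if (!m) return NaN;
            // Month is zero-based in the Date constructor.
            var d = new Date(+m[3], +m[2] - 1, +m[1]);
            // Reject rollovers such as 31/02/2010 -> 03/03/2010.
            return (d.getDate() == +m[1] && d.getMonth() == +m[2] - 1) ? d.getTime() : NaN;
        }

        parseDMY("25/12/2010"); // timestamp for Dec 25 2010
        parseDMY("31/02/2010"); // NaN instead of a silently rolled-over date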

    Read the article

  • ANTLR line-by-line processing

    - by pawloch
    I'm writing a simple language in ANTLR, and I'd like to write a shell where I can enter a line of code, hit ENTER and have it executed, then enter another line and have that executed too. I have already written a grammar which executes all lines of the input at once. Example input:

        int a,b,c;
        string d;
        string e;
        d="dziala";
        a=4+7;
        b=a+33;
        c=(b/11)*2;

    The grammar:

        grammar Kalkulator;

        options {
            language = Java;
            output = AST;
            ASTLabelType = CommonTree;
        }

        tokens { NEG; }

        @header { package lab4; }
        @lexer::header { package lab4; }

        line        : (assignment | declaration)* EOF ;
        declaration : type^ IDENT (','! IDENT)* ';'! ;
        type        : 'int' | 'string' ;
        assignment  : IDENT '='^ expression ';'! ;
        term        : IDENT | INTEGER | STRING_LITERAL | '('! expression ')'! ;
        unary       : ((negation^ | '+'!))* term ;
        negation    : '-' -> NEG ;
        mult        : unary (('*'^ | '/'^) unary)* ;
        exp2        : mult (('-'^ | '+'^) mult)* ;
        expression  : exp2 ('&'^ exp2)* ;

        fragment LETTER : ('a'..'z'|'A'..'Z');
        fragment DIGIT  : '0'..'9';
        INTEGER         : DIGIT+;
        IDENT           : LETTER (LETTER | DIGIT)* ;
        WS              : (' ' | '\t' | '\n' | '\r' | '\f')+ {$channel=HIDDEN;};
        STRING_LITERAL  : '\"' .* '\"';

    and the tree grammar:

        tree grammar Evaluator;

        options {
            language = Java;
            tokenVocab = Kalkulator;
            ASTLabelType = CommonTree;
        }

        @header {
            package lab4;
            import java.util.Map;
            import java.util.HashMap;
        }

        @members {
            private Map<String, Object> zmienne = new HashMap<String, Object>();
        }

        line returns [Object result]
            : (declaration | assignment { result = $assignment.result; })* EOF
            ;

        declaration
            : ^(type ( IDENT {
                  if ("string".equals($type.result)) {
                      zmienne.put($IDENT.text, "");  // add definition
                  } else {
                      zmienne.put($IDENT.text, 0);   // add definition
                  }
                  System.out.println($type.result + " " + $IDENT.text); // write output
              } )* )
            ;

        assignment returns [Object result]
            : ^('=' IDENT e=expression) {
                  if (zmienne.containsKey($IDENT.text)) {
                      zmienne.put($IDENT.text, e);
                      result = e;
                      System.out.println(e);
                  } else {
                      // i.e. "Error: undeclared variable"
                      System.out.println("Blad: Niezadeklarowana zmienna");
                  }
              }
            ;

        type returns [Object result]
            : 'int'    { result = "int"; }
            | 'string' { result = "string"; }
            ;

        expression returns [Object result]
            : ^('+' op1=expression op2=expression) { result = (Integer)op1 + (Integer)op2; }
            | ^('-' op1=expression op2=expression) { result = (Integer)op1 - (Integer)op2; }
            | ^('*' op1=expression op2=expression) { result = (Integer)op1 * (Integer)op2; }
            | ^('/' op1=expression op2=expression) { result = (Integer)op1 / (Integer)op2; }
            | ^('%' op1=expression op2=expression) { result = (Integer)op1 \% (Integer)op2; }
            | ^('&' op1=expression op2=expression) { result = (String)op1 + (String)op2; }
            | ^(NEG e=expression)                  { result = -(Integer)e; }
            | IDENT          { result = zmienne.get($IDENT.text); }
            | INTEGER        { result = Integer.parseInt($INTEGER.text); }
            | STRING_LITERAL { String t = $STRING_LITERAL.text; result = t.substring(1, t.length()-1); }
            ;

    Can I make it process input line by line, or is it easier to rewrite it all?
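
    The grammar itself doesn't have to change for a shell: since `line` already accepts any number of statements, one approach is to run the whole lexer/parser/walker pipeline once per input line. A minimal REPL sketch (ANTLR 3 runtime assumed; class and rule names match the grammars above). One caveat: the Evaluator's `zmienne` map is created per instance, so for assignments to persist between lines it would have to be shared across iterations, e.g. moved out of `@members` or injected after construction.

        import java.util.Scanner;
        import org.antlr.runtime.*;
        import org.antlr.runtime.tree.*;

        public class Shell {
            public static void main(String[] args) throws Exception {
                Scanner in = new Scanner(System.in);
                while (in.hasNextLine()) {
                    String line = in.nextLine();
                    // build a fresh pipeline for this one line of input
                    KalkulatorLexer lexer = new KalkulatorLexer(new ANTLRStringStream(line));
                    KalkulatorParser parser = new KalkulatorParser(new CommonTokenStream(lexer));
                    CommonTree tree = (CommonTree) parser.line().getTree();
                    Evaluator walker = new Evaluator(new CommonTreeNodeStream(tree));
                    walker.line(); // evaluate just this one line
                }
            }
        }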

    Read the article

  • C# - Must declare the scalar variable "@ms_id" - Error

    - by user1075106
    I'm writing a web app that keeps track of deadlines. With this app you have to be able to update records that are saved in an SQL DB. However, I'm having a problem with the update in my aspx file.

        <asp:GridView ID="gv_editMilestones" runat="server" DataSourceID="sql_ds_milestones"
            CellPadding="4" ForeColor="#333333" GridLines="None" Font-Size="Small"
            AutoGenerateColumns="False" DataKeyNames="id" Visible="false"
            onrowupdated="gv_editMilestones_RowUpdated"
            onrowupdating="gv_editMilestones_RowUpdating"
            onrowediting="gv_editMilestones_RowEditing">
            <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
            <Columns>
                <asp:CommandField ShowEditButton="True" />
                <asp:BoundField DataField="id" HeaderText="id" SortExpression="id" ReadOnly="True" Visible="false"/>
                <asp:BoundField DataField="ms_id" HeaderText="ms_id" SortExpression="ms_id" ReadOnly="True"/>
                <asp:BoundField DataField="ms_description" HeaderText="ms_description" SortExpression="ms_description"/>
                <%-- <asp:BoundField DataField="ms_resp_team" HeaderText="ms_resp_team" SortExpression="ms_resp_team"/>--%>
                <asp:TemplateField HeaderText="ms_resp_team" SortExpression="ms_resp_team">
                    <ItemTemplate>
                        <%# Eval("ms_resp_team") %>
                    </ItemTemplate>
                    <EditItemTemplate>
                        <asp:DropDownList ID="DDL_ms_resp_team" runat="server"
                            DataSourceID="sql_ds_ms_resp_team"
                            DataTextField="team_name" DataValueField="id">
                            <%--SelectedValue='<%# Bind("ms_resp_team") %>'--%>
                        </asp:DropDownList>
                    </EditItemTemplate>
                </asp:TemplateField>
                <asp:BoundField DataField="ms_focal_point" HeaderText="ms_focal_point" SortExpression="ms_focal_point" />
                <asp:BoundField DataField="ms_exp_date" HeaderText="ms_exp_date" SortExpression="ms_exp_date" DataFormatString="{0:d}"/>
                <asp:BoundField DataField="ms_deal" HeaderText="ms_deal" SortExpression="ms_deal" ReadOnly="True"/>
                <asp:CheckBoxField DataField="ms_active" HeaderText="ms_active" SortExpression="ms_active"/>
            </Columns>
            <FooterStyle BackColor="#CCCC99" />
            <PagerStyle BackColor="#F7F7DE" ForeColor="Black" HorizontalAlign="Right" />
            <SelectedRowStyle BackColor="#CE5D5A" Font-Bold="True" ForeColor="White" />
            <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
            <AlternatingRowStyle BackColor="White" />
            <EditRowStyle BackColor="#999999" />
        </asp:GridView>

        <asp:SqlDataSource ID="sql_ds_milestones" runat="server"
            ConnectionString="<%$ ConnectionStrings:testServer %>"
            SelectCommand="SELECT [id], [ms_id], [ms_description],
                (SELECT [team_name] FROM [NSBP].[dbo].[tbl_teams] as teams
                 WHERE milestones.[ms_resp_team] = teams.[id]) as 'ms_resp_team',
                [ms_focal_point], [ms_exp_date],
                (SELECT [deal] FROM [NSBP].[dbo].[tbl_deals] as deals
                 WHERE milestones.[ms_deal] = deals.[id]) as 'ms_deal',
                [ms_active]
                FROM [NSBP].[dbo].[tbl_milestones] as milestones"
            UpdateCommand="UPDATE [NSBP].[dbo].[tbl_milestones]
                SET [ms_description] = @ms_description,
                    [ms_focal_point] = @ms_focal_point,
                    [ms_active] = @ms_active
                WHERE [ms_id] = @ms_id">
            <UpdateParameters>
                <asp:Parameter Name="ms_description" Type="String" />
                <%-- <asp:Parameter Name="ms_resp_team" Type="String" />--%>
                <asp:Parameter Name="ms_focal_point" Type="String" />
                <asp:Parameter Name="ms_exp_date" Type="DateTime" />
                <asp:Parameter Name="ms_active" Type="Boolean" />
                <%-- <asp:Parameter Name="ms_id" Type="String" />--%>
            </UpdateParameters>
        </asp:SqlDataSource>

    You can see my complete GridView structure plus the data source bound to this GridView. There is nothing written in my onrowupdating function in my code-behind file. Thx in advance
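
    A likely culprit: the UpdateCommand references @ms_id, but the grid never supplies a value for it. The ms_id column is a ReadOnly BoundField (read-only values are not posted back into update parameters), and its <asp:Parameter> is commented out. One possible fix, as a sketch rather than the only option, is to carry ms_id in the grid's data keys so the data source can bind it:

        <!-- Sketch: expose ms_id as a data key so @ms_id gets a value -->
        <asp:GridView ID="gv_editMilestones" runat="server"
            DataSourceID="sql_ds_milestones"
            DataKeyNames="id,ms_id" ... >

        <!-- and restore the matching parameter in the data source -->
        <UpdateParameters>
            <asp:Parameter Name="ms_description" Type="String" />
            <asp:Parameter Name="ms_focal_point" Type="String" />
            <asp:Parameter Name="ms_active" Type="Boolean" />
            <asp:Parameter Name="ms_id" Type="String" />
        </UpdateParameters>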

    Read the article

  • Can't integrate base64 decode into my class

    - by MaheshBabu
    Hi folks, I am getting an image that is in base64-encoded format and I need to decode it. The code I am writing for decoding is:

        + (NSData *)base64DataFromString:(NSString *)string {
            unsigned long ixtext, lentext;
            unsigned char ch, input[4], output[3];
            short i, ixinput;
            Boolean flignore, flendtext = false;
            const char *temporary;
            NSMutableData *result;

            if (!string)
                return [NSData data];

            ixtext = 0;
            temporary = [string UTF8String];
            lentext = [string length];
            result = [NSMutableData dataWithCapacity:lentext];
            ixinput = 0;

            while (true) {
                if (ixtext >= lentext)
                    break;
                ch = temporary[ixtext++];
                flignore = false;

                if ((ch >= 'A') && (ch <= 'Z')) ch = ch - 'A';
                else if ((ch >= 'a') && (ch <= 'z')) ch = ch - 'a' + 26;
                else if ((ch >= '0') && (ch <= '9')) ch = ch - '0' + 52;
                else if (ch == '+') ch = 62;
                else if (ch == '=') flendtext = true;
                else if (ch == '/') ch = 63;
                else flignore = true;

                if (!flignore) {
                    short ctcharsinput = 3;
                    Boolean flbreak = false;

                    if (flendtext) {
                        if (ixinput == 0)
                            break;
                        if ((ixinput == 1) || (ixinput == 2))
                            ctcharsinput = 1;
                        else
                            ctcharsinput = 2;
                        ixinput = 3;
                        flbreak = true;
                    }

                    input[ixinput++] = ch;

                    if (ixinput == 4) {
                        ixinput = 0;
                        output[0] = (input[0] << 2) | ((input[1] & 0x30) >> 4);
                        output[1] = ((input[1] & 0x0F) << 4) | ((input[2] & 0x3C) >> 2);
                        output[2] = ((input[2] & 0x03) << 6) | (input[3] & 0x3F);
                        for (i = 0; i < ctcharsinput; i++)
                            [result appendBytes:&output[i] length:1];
                    }

                    if (flbreak)
                        break;
                }
            }

            return result;
        }

    I am calling this in my method like this:

        NSData *data = [base64DataFromString:theXML];

    theXML is the encoded data, but it shows the error decodeBase64 undeclared. How can I use this method? Can anyone please help me? Thank you in advance.
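
    As written, base64DataFromString: is a class method (the leading +), so it has to be invoked on the class that declares it; a bare [base64DataFromString:theXML], or a call to a plain decodeBase64 function that was never declared, won't compile. A minimal sketch, assuming the method above lives in a class named Base64Util (a stand-in name for whatever class actually declares it):

        // Call the class method on its owning class.
        NSData *data = [Base64Util base64DataFromString:theXML];
        // e.g. turn the decoded bytes back into an image
        UIImage *image = [UIImage imageWithData:data];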

    Read the article

  • C Program not running as intended, hangs after input

    - by user41419
    The program I am writing to take a number and display that number as a calculator would display it (shown below) is compiling with no issues, but when I try to run it, I am able to input my number, but nothing happens. It seems like it is "hanging", since no further output is shown as I would have expected. Might anyone know what the problem is?

        #include <stdio.h>

        #define MAX_DIGITS 20

        char segments[10][7] =                  /* seven segment array */
            {{'1','1','1','1','1','1','0'},     /* zero  */
             {'0','1','1','0','0','0','0'},     /* one   */
             {'1','1','0','1','1','0','1'},     /* two   */
             {'1','1','1','1','0','0','1'},     /* three */
             {'0','1','1','0','0','1','1'},     /* four  */
             {'1','0','1','1','0','1','1'},     /* five  */
             {'1','0','1','1','1','1','1'},     /* six   */
             {'1','1','1','0','0','0','0'},     /* seven */
             {'1','1','1','1','1','1','1'},     /* eight */
             {'1','1','1','0','0','1','1'}};    /* nine  */

        char digits[3][MAX_DIGITS * 4];         /* digits array */
        int i, j;                               /* count variables */
        int adjust;                             /* output formatting */

        int main(void)
        {
            clear_digits_array();

            int digit[20];
            for (i = 0; i < 20; i++) {
                digit[i] = 0;
            }
            int count = 20;
            int position = 0;

            printf("Enter a number: ");
            int number = scanf("%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d%d",
                &digit[0], &digit[1], &digit[2], &digit[3], &digit[4],
                &digit[5], &digit[6], &digit[7], &digit[8], &digit[9],
                &digit[10], &digit[11], &digit[12], &digit[13], &digit[14],
                &digit[15], &digit[16], &digit[17], &digit[18], &digit[19]);

            //NOTHING HAPPENS AFTER HERE
            printf("Got input, number is %d", number);

            while (count > 0) {
                printf("Reading digits, count is %d", count);
                process_digit(digit[20 - count], position);
                position++;
                count--;
            }
            print_digits_array();
            printf("\n");
            return 0;
        }

        void clear_digits_array(void)
        {
            /* fill all positions in digits array with blank spaces */
            for (i = 0; i < 3; i++) {
                for (j = 0; j < (MAX_DIGITS * 4); j++) {
                    digits[i][j] = ' ';
                }
            }
        }

        void process_digit(int digit, int position)
        {
            /* check each segment to see if segment should be filled in for given digit */
            for (i = 0; i < 7; i++) {
                printf("Processing digit %d at position %d, i is %d", digit, position, i);
                if (segments[digit][i] == 1) {
                    switch (i) {
                        case 0: digits[0][(position * 4) + 1] = '_'; break;
                        case 1: digits[1][(position * 4) + 2] = '|'; break;
                        case 2: digits[2][(position * 4) + 2] = '|'; break;
                        case 3: digits[2][(position * 4) + 1] = '_'; break;
                        case 4: digits[2][(position * 4) + 0] = '|'; break;
                        case 5: digits[1][(position * 4) + 0] = '|'; break;
                        case 6: digits[1][(position * 4) + 1] = '_'; break;
                    }
                }
            }
        }

        void print_digits_array(void)
        {
            /* print each character in digits array */
            for (i = 0; i < 3; i++) {
                for (j = 0; j < (MAX_DIGITS * 4); j++) {
                    printf("%c", digits[i][j]);
                }
                printf("/n");
            }
        }
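
    The "hang" is scanf() itself: with twenty %d conversions it keeps waiting until twenty whitespace-separated integers have been entered, and its return value is the count of successful conversions, not the value typed. A sketch of an alternative input stage (standalone, assuming a non-negative number; the rest of the program above would consume digit[] as before): read one number and peel its digits off arithmetically.

        #include <stdio.h>

        int main(void) {
            long number;
            int digit[20], count = 0;

            printf("Enter a number: ");
            if (scanf("%ld", &number) != 1)   /* one conversion, so one number unblocks it */
                return 1;

            do {                              /* least significant digit first */
                digit[count++] = (int)(number % 10);
                number /= 10;
            } while (number > 0 && count < 20);

            for (int i = count - 1; i >= 0; i--)  /* echo most significant first */
                printf("%d", digit[i]);
            printf("\n");
            return 0;
        }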

    Read the article

  • Templates, interfaces (multiple inheritance) and static functions (named constructors)

    - by fledgling Cxx user
    Setup

    I have a graph library where I am trying to decompose things as much as possible, and the cleanest way to describe it that I found is the following: there is a vanilla type node implementing only a list of edges:

        class node {
        public:
            int* edges;
            int edge_count;
        };

    Then, I would like to be able to add interfaces to this whole mix, like so:

        template <class T>
        class node_weight {
        public:
            T weight;
        };

        template <class T>
        class node_position {
        public:
            T x;
            T y;
        };

    and so on. Then the actual graph class comes in, which is templated on the actual type of node:

        template <class node_T>
        class graph {
        protected:
            node_T* nodes;
        public:
            static graph cartesian(int n, int m) {
                graph r;
                r.nodes = new node_T[n * m];
                return r;
            }
        };

    The twist is that it has named constructors which construct some special graphs, like a Cartesian lattice. In this case, I would like to be able to add some extra information into the graph, depending on what interfaces are implemented by node_T. What would be the best way to accomplish this?

    Possible solution

    I thought of the following humble solution, through dynamic_cast<>:

        template <class node_T, class weight_T, class position_T>
        class graph {
        protected:
            node_T* nodes;
        public:
            static graph cartesian(int n, int m) {
                graph r;
                r.nodes = new node_T[n * m];
                if (dynamic_cast<node_weight<weight_T>>(r.nodes[0]) != nullptr) {
                    // do stuff knowing you can add weights
                }
                if (dynamic_cast<node_position<position_T>>(r.nodes[0]) != nullptr) {
                    // do stuff knowing you can set position
                }
                return r;
            }
        };

    which would operate on node_T being the following:

        template <class weight_T, class position_T>
        class node_weight_position : public node,
                                     public node_weight<weight_T>,
                                     public node_position<position_T> {
            // ...
        };

    Questions

    Is this - philosophically - the right way to go? I know people don't look nicely at multiple inheritance, though with "interfaces" like these it should all be fine.

    There are unfortunately problems with this. From what I know at least, dynamic_cast<> involves quite a bit of run-time overhead. Hence, I run into a problem I had solved earlier: writing graph algorithms that require weights independently of whether the actual node_T class has weights or not. The solution with this 'interface' approach would be to write a function:

        template <class node_T, class weight_T>
        inline weight_T get_weight(node_T const & n) {
            if (dynamic_cast<node_weight<weight_T>>(n) != nullptr) {
                return dynamic_cast<node_weight<weight_T>>(n).weight;
            }
            return T(1);
        }

    but the issue with it is that it works using run-time information (dynamic_cast), yet in principle I would like to decide it at compile time and thus make the code more efficient. If there is a different solution that would solve both problems, especially a cleaner and better one than what I have, I would love to hear about it!
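
    Since both mixins are known at compile time, the check can indeed be made at compile time: std::is_base_of answers the same question the dynamic_cast is being asked here, at zero run-time cost. A minimal sketch (C++17 for if constexpr; the class names mirror the ones in the question, while graph2 and the get_weight signature are illustrative, not a fixed API):

        #include <type_traits>

        template <class node_T, class weight_T>
        struct graph2 {
            node_T* nodes = nullptr;

            static graph2 cartesian(int n, int m) {
                graph2 r;
                r.nodes = new node_T[n * m];
                // Resolved at compile time; the untaken branch is discarded.
                if constexpr (std::is_base_of<node_weight<weight_T>, node_T>::value) {
                    // do stuff knowing node_T has weights
                }
                return r;
            }
        };

        // The same trait makes get_weight a compile-time decision:
        template <class weight_T, class node_T>
        weight_T get_weight(node_T const& n) {
            if constexpr (std::is_base_of<node_weight<weight_T>, node_T>::value)
                return n.weight;          // only instantiated for weighted nodes
            else
                return weight_T(1);       // unit weight for unweighted nodes
        }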

    Read the article

  • HTML5 Anti-Aliasing Browser Disable

    - by Tappa Tappa
    I am forced to consider writing a library to handle the fundamental basics of drawing lines, thick lines, circles, squares, etc. on an HTML5 canvas, because I can't disable a feature embedded in the browser's rendering of the core canvas algorithms. Am I forced to build the HTML5 canvas rendering process from the ground up? If I am, who's in with me to do this? Who wants to change the world?

    Imagine a simple drawing application written in HTML5... you draw a shape... a closed shape like a rudimentary circle, freehand, more like an onion than a circle (well, that's what mine would look like!)... then imagine selecting a paint bucket icon and clicking inside that shape you drew and expecting it to be filled with a color of your choice.

    Imagine your surprise as you selected "Paint Bucket" and clicked in the middle of your shape and it filled your shape with color... BUT, not quite... HANG ON... this isn't right!!! On the inside of the edge of the shape you drew is a blur between the background color, your fill color and the edge color... the fill seems to be flawed. You wanted a straightforward "Paint Bucket" / "Fill"... you wanted to draw a shape and then fill it with a color... no fuss... fill the whole damned inside of your shape with the color you choose.

    Your web browser has decided that when you draw the lines to define your shape they will be anti-aliased. If you draw a black line for your shape... well, the browser will draw grey pixels along the edges, in places... to make it look like a "better" line. Yeah, a "better" line that **s up the paint / flood fill process.

    How much does it cost to pay off the browser developers to expose a property to disable their anti-aliasing rendering? Disabling would save milliseconds for their rendering engine, surely! Bah, I really don't want to have to build my own canvas rendering engine using Bresenham's line algorithm... WHAT CAN BE DONE... HOW CAN THIS BE CHANGED!!!??? Do I need to start a petition aimed at the W3C???? Will you include your name if you are interested???

    UPDATED

        function DrawLine(objContext, FromX, FromY, ToX, ToY) {
            var dx = Math.abs(ToX - FromX);
            var dy = Math.abs(ToY - FromY);
            var sx = (FromX < ToX) ? 1 : -1;
            var sy = (FromY < ToY) ? 1 : -1;
            var err = dx - dy;
            var CurX, CurY;

            CurX = FromX;
            CurY = FromY;

            while (true) {
                objContext.fillRect(CurX, CurY, objContext.lineWidth, objContext.lineWidth);
                if ((CurX == ToX) && (CurY == ToY)) break;
                var e2 = 2 * err;
                if (e2 > -dy) { err -= dy; CurX += sx; }
                if (e2 < dx)  { err += dx; CurY += sy; }
            }
        }
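
    For what it's worth, a quick usage sketch of the helper above (the canvas id is an assumption): because it plots whole pixels with fillRect, there is nothing for the browser to anti-alias.

        // Draw a crisp 1px-wide line on an existing canvas element.
        var canvas = document.getElementById('myCanvas');
        var ctx = canvas.getContext('2d');
        ctx.fillStyle = '#000';
        ctx.lineWidth = 1;              // DrawLine uses this as the fillRect size
        DrawLine(ctx, 10, 10, 150, 90);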

    Read the article

  • allocating extra memory for a container class.

    - by sil3nt
    Hey there, I'm writing a template container class and for the past few hours have been trying to allocate new memory for extra data that comes into the container (...hit a brick wall..:| )

        template <typename T>
        void Container<T>::insert(T item, int index) {
            if (index < 0) {
                cout << "Invalid location to insert " << index << endl;
                return;
            }

            if (index < sizeC) {
                //copying original array so that when an item is
                //placed in the middle everything else is shifted forward
                T *arryCpy = 0;
                int tmpSize = 0;
                tmpSize = size();
                arryCpy = new T[tmpSize];

                int i = 0, j = 0;
                for (i = 0; i < tmpSize; i++) {
                    for (j = index; j < tmpSize; j++) {
                        arryCpy[i] = elements[j];
                    }
                }

                //overwriting and placing item at location index
                elements[index] = item;

                //copying back everything else after the location at index
                int k = 0, l = 0;
                for (k = (index+1), l = 0; k < sizeC || l < (sizeC-index); k++, l++) {
                    elements[k] = arryCpy[l];
                }

                delete[] arryCpy;
                arryCpy = 0;
            }

            //seeing if the location is more than the current capacity
            //and hence allocating more memory
            if (index+1 > capacityC) {
                int new_capacity = 0;
                int current_size = size();
                new_capacity = ((index+1) - capacityC) + capacityC; //variable for new capacity

                T *tmparry2 = 0;
                tmparry2 = new T[new_capacity];

                int n = 0;
                for (n = 0; n < current_size; n++) {
                    tmparry2[n] = elements[n];
                }

                delete[] elements;
                elements = 0;

                //copying back what we had before
                elements = new T[new_capacity];
                int m = 0;
                for (m = 0; m < current_size; m++) {
                    elements[m] = tmparry2[m];
                }

                //placing item
                elements[index] = item;
            }
            else {
                elements[index] = item;
            }

            //increasing the current count
            sizeC++;
        }

    My testing condition is Container cnt4(3); and as soon as I hit the fourth element (when I use, e.g., something.insert("random", 3);) it crashes and the above doesn't work. Where have I gone wrong?
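
    Two things stand out in the code above: the shift/copy loops run (and write into elements) before any growth happens, so inserting the fourth item writes past the 3-slot buffer, and the new-capacity formula ((index+1) - capacityC) + capacityC simplifies to just index+1, which never leaves spare room. A sketch of the usual grow-then-shift order (assuming members elements, sizeC and capacityC as in the question; not a drop-in for the exact class):

        template <typename T>
        void Container<T>::insert(T item, int index) {
            if (index < 0 || index > sizeC) return;   // only allow 0..sizeC

            if (sizeC + 1 > capacityC) {              // grow BEFORE touching the data
                int newCap = capacityC * 2 + 1;       // geometric growth avoids frequent copies
                T* bigger = new T[newCap];
                for (int i = 0; i < sizeC; i++) bigger[i] = elements[i];
                delete[] elements;
                elements = bigger;
                capacityC = newCap;
            }

            for (int i = sizeC; i > index; i--)       // shift the tail right by one slot
                elements[i] = elements[i - 1];

            elements[index] = item;
            sizeC++;
        }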

    Read the article

  • Only Execute Code on Certain Requests (Java)

    - by BillPull
    I am building a little API for class, and the teacher supplied us with a link to a tutorial that provided a simple web server that implements Runnable. I have already written some code that parses the arguments (or at least gets me the request string) and some code that returns some simple XML. However, I think that when certain requests, like the one for the favicon, are sent, they mess up my code. I wrapped that in an if/else, but it does not seem to be working.

        package server;

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.Socket;
        import java.util.*;
        import java.io.*;
        import java.net.*;
        import parkinglots.*;

        public class WorkerRunnable implements Runnable {

            protected Socket clientSocket = null;
            protected String serverText = null;

            public WorkerRunnable(Socket clientSocket, String serverText) {
                this.clientSocket = clientSocket;
                this.serverText = serverText;
            }

            public Boolean authenticateAPI(String key) {
                //Authenticate Key against Stored Keys
                //TODO: Create Stored Keys and Compare
                return true;
            }

            public void run() {
                try {
                    InputStream input = clientSocket.getInputStream();
                    OutputStream output = clientSocket.getOutputStream();
                    long time = System.currentTimeMillis();

                    //TODO: Parse args and output different formats and Authentication
                    //Parse URL Arguments
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(clientSocket.getInputStream(), "8859_1"));
                    String request = in.readLine();

                    //Server gets Favicon Request so skip that and goto args
                    System.out.println(request);
                    if (request != "GET /favicon.ico HTTP/1.1" && request != "GET / HTTP/1.1"
                            && request != null) {
                        String format = "", apikey = "";
                        System.out.println("I am Here");
                        String request_location = request.split(" ")[1];
                        String request_args = request_location.replace("/", "");
                        request_args = request_args.replace("?", "");
                        String[] queries = request_args.split("&");
                        System.out.println(queries[0]);
                        for (int i = 0; i < queries.length; i++) {
                            if (queries[i] == "format") {
                                format = queries[i].split("=")[1];
                            } else if (queries[i] == "apikey") {
                                apikey = queries[i].split("=")[1];
                            }
                        }
                        if (apikey == "") {
                            apikey = "None";
                        }
                        if (format == "") {
                            format = "xml";
                        }
                        Boolean auth = authenticateAPI(apikey);
                        if (auth) {
                            if (format == "xml") {
                                // Retrieve XML Document
                                String xml = LotFromDB.getParkingLotXML();
                                output.write((xml).getBytes());
                            } else {
                                //Retrieve JSON
                                String json = LotFromDB.getParkingLotJSON();
                                output.write((json).getBytes());
                            }
                        } else {
                            output.write(("Access Denied - User is Not Authenticated").getBytes());
                        }
                    } else {
                        output.write(("Access Denied Must Pass API Key").getBytes());
                    }
                    output.close();
                    input.close();
                    System.out.println("Request processed: " + time);
                } catch (IOException e) {
                    //report exceptions
                    e.printStackTrace();
                }
            }
        }

    Console output I get:

        I am Here
        format=json
        Request processed: 1333516648331
        GET /favicon.ico HTTP/1.1
        I am Here
        favicon.ico
        Request processed: 1333516648332

    It always returns the XML as well. This is my first exposure to writing a web server and dealing with networking in Java, which frustrates me a lot in general, so any suggestions here are very appreciated.
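
    One likely explanation for both symptoms: in Java, == and != on Strings compare references, not contents, so request != "GET /favicon.ico HTTP/1.1", queries[i] == "format" and format == "xml" each always take the same branch regardless of the actual text. A sketch of the comparison fix (a fragment reusing the request/queries parsing from the class above, not a full rewrite):

        // Use equals()/startsWith() for content comparison.
        if (request != null && !request.startsWith("GET /favicon.ico")) {
            String format = "xml", apikey = "None";       // defaults
            for (String q : queries) {
                String[] kv = q.split("=");
                if (kv.length == 2 && kv[0].equals("format")) format = kv[1];
                else if (kv.length == 2 && kv[0].equals("apikey")) apikey = kv[1];
            }
            // ... then branch on format.equals("xml") and authenticateAPI(apikey) ...
        }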

    Read the article

  • Alphabetically Ordering an array of words

    - by Genesis
    I'm studying C on my own in preparation for my upcoming semester of school and was wondering what I was doing wrong with my code so far. If things look weird, it is because this is part of a much bigger grab bag of sorting functions I'm creating to get a sense of how to sort numbers, letters, arrays, and the like! I'm basically having some trouble with the manipulation of strings in C currently. Also, I'm quite limited in my knowledge of C at the moment! My main consists of this:

        #include <stdio.h>
        #include <stdio.h>
        #include <stdlib.h>

        int numbers[10];
        int size;

        int main(void) {
            setvbuf(stdout, NULL, _IONBF, 0); //This is magical code that allows me to input.

            int wordNumber;
            int lengthOfWord = 50;

            printf("How many words do you want to enter: ");
            scanf("%i", &wordNumber);
            printf("%i\n", wordNumber);

            char words[wordNumber][lengthOfWord];

            printf("Enter %i words:", wordNumber);
            int i;
            for (i = 0; i < wordNumber + 1; i++) { //+1 is because my words[0] is blank.
                fgets(&words[i], 50, stdin);
            }
            for (i = 1; i < wordNumber + 1; i++) { // Same as the above comment!
                printf("%s", words[i]); //prints my words out!
            }
            alphabetize(words, wordNumber); //I want to sort these arrays with this function.
        }

    My sorting "method" I am trying to construct is below! This function is seriously flawed, but I thought I'd keep it all to show you where my mind was headed when writing this.

        void alphabetize(char a[][], int size) { // This wont fly.
            size = size + 1;
            int wordNumber;
            int lengthOfWord;
            char sortedWords[wordNumber][lengthOfWord]; //In effort for the for loop

            int i;
            int j;
            for (i = 1; i < size; i++) { //My effort to copy over this array for manipulation
                for (j = 1; j < size; j++) {
                    sortedWords[i][j] = a[i][j];
                }
            }

            //This should be kinda what I want when ordering words alphabetically, right?
            for (i = 1; i < size; i++) {
                for (j = 2; j < size; j++) {
                    if (strcmp(sortedWords[i], sortedWords[j]) > 0) {
                        char* temp = sortedWords[i];
                        sortedWords[i] = sortedWords[j];
                        sortedWords[j] = temp;
                    }
                }
            }

            for (i = 1; i < size; i++) {
                printf("%s, ", sortedWords[i]);
            }
        }

    I guess I also have another question as well... When I use fgets(), it's doing this thing where I get a null word for the first spot of the array. I have had other issues recently trying to scanf() char arrays in certain ways, specifically spacing my input word variables, which "magically" gets rid of the first null space before the character. An example of this is using scanf() to write "Hello" and getting " Hello" or " ""Hello"... Appreciate any thoughts on this; I've got all summer to study up so this doesn't need to be answered with haste! Also, thank you Stack Overflow as a whole for being so helpful in the past. This may be my first post, but I have been a frequent visitor for the past couple of years and it's been one of the best places for helpful advice/tips.
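
    Two observations that any fix will likely involve: scanf("%i", ...) leaves the typed newline in stdin, so the first fgets() immediately reads an "empty" word (hence the blank words[0]); and rows of a 2D char array cannot be swapped by assigning pointers, the bytes themselves have to be exchanged (e.g. with strcpy into a temporary buffer). A sketch of the common alternative: let qsort do the ordering with a strcmp-based comparator over fixed-width rows.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Compare two rows of the 2D array as C strings. */
        static int cmp_words(const void *a, const void *b) {
            return strcmp((const char *)a, (const char *)b);
        }

        int main(void) {
            char words[4][50] = { "pear", "apple", "cherry", "banana" };

            /* sizeof words[0] is the fixed row width (50), which qsort needs. */
            qsort(words, 4, sizeof words[0], cmp_words);

            for (int i = 0; i < 4; i++)
                printf("%s\n", words[i]);  /* apple banana cherry pear */
            return 0;
        }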

    Read the article

  • Override `drop` for a custom sequence

    - by Bruno Reis
    In short: in Clojure, is there a way to redefine a function from the standard sequence API (which is not defined on any interface like ISeq, IndexedSeq, etc.) for a custom sequence type I wrote?

    1. Huge data files

    I have big files in the following format:

    - a long (8 bytes) containing the number n of entries;
    - n entries, each one composed of 3 longs (i.e., 24 bytes).

    2. Custom sequence

    I want to have a sequence over these entries. Since I cannot usually hold all the data in memory at once, and I want fast sequential access to it, I wrote a class similar to the following:

        (deftype DataSeq [id ^long cnt ^long i cached-seq]
          clojure.lang.IndexedSeq
          (index [_] i)
          (count [_] (- cnt i))
          (seq [this] this)
          (first [_] (first cached-seq))
          (more [this] (if-let [s (next this)] s '()))
          (next [_] (if (not= (inc i) cnt)
                      (if (next cached-seq)
                        (DataSeq. id cnt (inc i) (next cached-seq))
                        (DataSeq. id cnt (inc i)
                                  (with-open [f (open-data-file id)]
                                    ; open a memory mapped byte array on the file
                                    ; seek to the exact position to begin reading
                                    ; decide on an optimal amount of data to read
                                    ; eagerly read and return that amount of data
                                    ))))))

    The main idea is to read ahead a bunch of entries into a list and then consume from that list. Whenever the cache is completely consumed, if there are remaining entries, they are read from the file into a new cache list. Simple as that. To create an instance of such a sequence, I use a very simple function like:

        (defn ^DataSeq load-data [id]
          (next (DataSeq. id (count-entries id) -1 [])))
        ; count-entries is a trivial "open file and read a long", memoized

    As you can see, the format of the data allowed me to implement count very simply and efficiently.

    3. drop could be O(1)

    In the same spirit, I'd like to reimplement drop. The format of these data files allows me to reimplement drop in O(1) (instead of the standard O(n)), as follows:

    - if dropping fewer than the remaining cached items, just drop the same amount from the cache and be done;
    - if dropping more than cnt, just return the empty list;
    - otherwise, just figure out the position in the data file, jump right to that position, and read data from there.

    My difficulty is that drop is not implemented in the same way as count, first, seq, etc. The latter functions call a similarly named static method in RT which, in turn, calls my implementation above, while the former, drop, does not check whether the sequence instance it is being called on provides a custom implementation. Obviously, I could provide a function named anything but drop that does exactly what I want, but that would force other people (including my future self) to remember to use it instead of drop every single time, which sucks.

    So, the question is: is it possible to override the default behaviour of drop?

    4. A workaround (I dislike)

    While writing this question, I've just figured out a possible workaround: make the reading even lazier. The custom sequence would just keep an index and postpone the reading operation, which would happen only when first was called. The problem is that I'd need some mutable state: the first call to first would cause some data to be read into a cache, and all subsequent calls would return data from this cache. There would be similar logic in next: if there's a cache, just next it; otherwise, don't bother populating it - it will be done when first is called again. This would avoid unnecessary disk reads. However, this is still less than optimal - it is still O(n), and it could easily be O(1).

    Anyway, I don't like this workaround, and my question is still open. Any thoughts? Thanks.

    Read the article

  • Keeping socket open to send files on timer calls?

    - by user3704768
    I'm writing a program that requires an image to be fetched from a remote server every 10 milliseconds or so, as that's how often the image is updated. My current method calls a timer to grab the image, but it encounters Socket Closed errors all the time, and sometimes does not work at all. How can I fix my methods to keep the socket open the whole time, so no reconnecting is needed? Here is the full class:

        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.FileNotFoundException;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.net.InetAddress;
        import java.net.InetSocketAddress;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.net.UnknownHostException;
        import javax.swing.Timer;

        public class Connection {

            public static void createServer() throws IOException {
                Capture.getScreen();
                ServerSocket socket = null;
                try {
                    socket = new ServerSocket(12345, 0, InetAddress.getByName("127.0.0.1"));
                } catch (UnknownHostException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                System.out.println("Server started on "
                        + socket.getInetAddress().getHostAddress() + ":" + socket.getLocalPort()
                        + ",\nWaiting for client to connect.");

                final Socket clientConnection = socket.accept();
                System.out.println("Client accepted from "
                        + clientConnection.getInetAddress().getHostAddress() + ", sending file");

                ActionListener taskPerformer = new ActionListener() {
                    public void actionPerformed(ActionEvent evt) {
                        System.out.println("Sending File");
                        try {
                            pipeStreams(new FileInputStream(new File("captures/sCap.png")),
                                    clientConnection.getOutputStream(), 1024);
                        } catch (FileNotFoundException e) {
                            e.printStackTrace();
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }
                };

                System.out.println("closing out connection");
                try {
                    clientConnection.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                try {
                    socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                Timer timer = new Timer(10, taskPerformer);
                timer.setRepeats(true);
                timer.start();
            }

            public static void createClient() throws IOException {
                System.out.println("Connecting to server.");
                final Socket socket = new Socket();
                try {
                    socket.connect(new InetSocketAddress(InetAddress.getByName("127.0.0.1"), 12345));
                } catch (UnknownHostException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                }

                ActionListener taskPerformer = new ActionListener() {
                    public void actionPerformed(ActionEvent evt) {
                        System.out.println("Success, retreiving file.");
                        try {
                            pipeStreams(socket.getInputStream(),
                                    new FileOutputStream(new File("captures/rCap.png")), 1024);
                        } catch (FileNotFoundException e) {
                            e.printStackTrace();
                        } catch (IOException e) {
                        }
                    }
                };

                System.out.println("Closing connection");
                try {
                    socket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                Timer timer = new Timer(10, taskPerformer);
                timer.setRepeats(true);
                timer.start();
            }

            public static void pipeStreams(java.io.InputStream source,
                    java.io.OutputStream destination, int bufferSize) throws IOException {
                byte[] buffer = new byte[bufferSize];
                int read = 0;
                while ((read = source.read(buffer)) != -1) {
                    destination.write(buffer, 0, read);
                }
                destination.flush();
                destination.close();
                source.close();
            }
        }
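
    Two things in the code above guarantee Socket Closed errors: both createServer() and createClient() close their sockets before the Timer ever fires, and pipeStreams() closes the socket's streams on every tick. A sketch of the server side restructured to keep the connection open (the framing caveat in the comment is a real protocol concern this sketch does not solve):

        // Sketch: open once, reuse the output stream inside the timer, close on shutdown.
        // Note a real protocol would also need to frame each image (e.g. send its length
        // first), since the receiver can no longer rely on stream-close as end-of-image.
        final Socket clientConnection = socket.accept();
        final OutputStream out = clientConnection.getOutputStream();

        Timer timer = new Timer(10, new ActionListener() {
            public void actionPerformed(ActionEvent evt) {
                try (FileInputStream in = new FileInputStream(new File("captures/sCap.png"))) {
                    byte[] buffer = new byte[1024];
                    int read;
                    while ((read = in.read(buffer)) != -1)
                        out.write(buffer, 0, read);   // do NOT close 'out' here
                    out.flush();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
        timer.setRepeats(true);
        timer.start();
        // close clientConnection/socket only when the application shuts down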

    Read the article

  • Strange behaviour of CUDA kernel

    - by username_4567
    I'm writing code for calculating a prefix sum. Here is my kernel:

        __global__ void prescan(int *indata, int *outdata, int n, long int *sums)
        {
            extern __shared__ int temp[];
            int tid = threadIdx.x;
            int offset = 1, start_id, end_id;
            int *global_sum = &temp[n+2];

            if (tid == 0) {
                temp[n]   = blockDim.x * blockIdx.x;
                temp[n+1] = blockDim.x * (blockIdx.x + 1) - 1;
                start_id = temp[n];
                end_id   = temp[n+1];
                //cuPrintf("Value of start %d and end %d\n", start_id, end_id);
            }
            __syncthreads();

            start_id = temp[n];
            end_id   = temp[n+1];
            temp[tid]   = indata[start_id + tid];
            temp[tid+1] = indata[start_id + tid + 1];

            for (int d = n >> 1; d > 0; d >>= 1) {
                __syncthreads();
                if (tid < d) {
                    int ai = offset * (2*tid+1) - 1;
                    int bi = offset * (2*tid+2) - 1;
                    temp[bi] += temp[ai];
                }
                offset *= 2;
            }

            if (tid == 0) {
                sums[blockIdx.x] = temp[n-1];
                temp[n-1] = 0;
                cuPrintf("sums %d\n", sums[blockIdx.x]);
            }

            for (int d = 1; d < n; d *= 2) {
                offset >>= 1;
                __syncthreads();
                if (tid < d) {
                    int ai = offset * (2*tid+1) - 1;
                    int bi = offset * (2*tid+2) - 1;
                    int t = temp[ai];
                    temp[ai] = temp[bi];
                    temp[bi] += t;
                }
            }
            __syncthreads();

            if (tid == 0) {
                outdata[start_id] = 0;
            }
            __threadfence_block();
            __syncthreads();

            outdata[start_id + tid]     = temp[tid];
            outdata[start_id + tid + 1] = temp[tid+1];
            __syncthreads();

            if (tid == 0) {
                temp[0] = 0;
                outdata[start_id] = 0;
            }
            __threadfence_block();
            __syncthreads();

            if (blockIdx.x == 0 && threadIdx.x == 0) {
                for (int i = 1; i < gridDim.x; i++) {
                    sums[i] = sums[i] + sums[i-1];
                }
            }
            __syncthreads();
            __threadfence();

            if (blockIdx.x == 0 && threadIdx.x == 0) {
                for (int i = 0; i < gridDim.x; i++) {
                    cuPrintf("****sums[%d]=%d ", i, sums[i]);
                }
            }
            __syncthreads();
            __threadfence();

            if (blockIdx.x != gridDim.x - 1) {
                int tid = (blockIdx.x + 1) * blockDim.x + threadIdx.x;
                if (threadIdx.x == 0)
                    cuPrintf("Adding %d \n", sums[blockIdx.x]);
                outdata[tid] += sums[blockIdx.x];
            }
            __syncthreads();
        }

    In the kernel above, the sums array accumulates the per-block prefix sums, and then the first thread computes the prefix sum of that sums array. Now, if I print this sums array from the device side it shows correct results, while the line cuPrintf("Adding %d \n", sums[blockIdx.x]); prints an old value. What could be the reason?
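
    The core issue this code runs into: __syncthreads() only synchronizes threads within one block, and CUDA has no grid-wide barrier inside a single kernel launch, so the other blocks can read sums[] before (or while) block 0 rewrites it, regardless of __threadfence(). The usual pattern splits the scan across launches, since the launch boundary is the grid-wide synchronization point. A structural sketch (the kernel names and launch parameters are placeholders, not the poster's code):

        // 1) per-block scan, each block writes its total into sums[blockIdx.x]
        prescan_blocks<<<numBlocks, threadsPerBlock, sharedBytes>>>(indata, outdata, n, sums);
        // 2) scan the per-block totals (tiny, so even a single-thread pass works)
        scan_block_sums<<<1, 1>>>(sums, numBlocks);
        // 3) every block adds its predecessor's running total to its own slice
        add_block_offsets<<<numBlocks, threadsPerBlock>>>(outdata, sums, n);
        cudaDeviceSynchronize();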

    Read the article

  • Using JavaScript, how do I write the same text to multiple HTML elements, or how do I write text to all HTML elements of the same class?

    - by myfavoritenoisemaker
    I am writing this program to take a root music note and populate tables with various scales built from that root note. So, many of the table cells will have the exact same value in them. I realize I can call my useScale function for every single cell that I need to write text to, but since there will be repeats, it seemed like there should be a way to run my function once and apply the results to multiple elements. However, it did not work to use document.getElementsByClassName("...").innerHTML; I had been using getElementById, which worked fine, but each id must be unique, so I can't write to multiple elements with it. Here's my code, I'd love some suggestions. Many thanks.

        Root Note <input type="text" name="defineRootNote" id="rootNoteCapture" size="2"/>
        <button onclick="findScale()">Submit</button>

        <table id="majorTriad">
            <th>Major Triad</th>
            <tr><td>1st</td><td class="root"> </td></tr>
            <tr><td>3rd</td><td class="3rd"> </td></tr>
            <tr><td>5th</td><td class="5th"> </td></tr>
        </table>

        <table id="minorTriad">
            <th>Minor Triad</th>
            <tr><td>1st</td><td class="root"> </td></tr>
            <tr><td>3 Flat</td><td class="3Flat"> </td></tr>
            <tr><td>5th</td><td class="5th"> </td></tr>
        </table>

        <script type="text/javascript">
        function findScale(rootNote){
            var rootNote = document.getElementById("rootNoteCapture").value;
            rootNote = rootNote.toUpperCase();
            var scaleCheck = ["A", "A#", "AB", "B", "BB", "C", "C#", "D", "D#", "DB",
                              "E", "EB", "F", "F#", "G", "G#", "GB"];
            if (scaleCheck.indexOf(rootNote) == -1) {
                document.getElementById("root").innerHTML = "Invalid Entry";
            }
            else {
                switch(rootNote){
                    case "AB": rootNote = "G#"; break;
                    case "BB": rootNote = "A#"; break;
                    case "DB": rootNote = "C#"; break;
                    case "EB": rootNote = "D#"; break;
                    case "GB": rootNote = "F#"; break;
                    rootNote = rootNote;
                }
                document.getElementsByClassName("root").innerHTML = rootNote;
                document.getElementsByClassName("3rd").innerHTML = useScale(rootNote, 4);
                document.getElementsByClassName("5th").innerHTML = useScale(rootNote, 7);
                document.getElementsByClassName("3Flat").innerHTML = useScale(rootNote, 3);
            }
        }

        function useScale(startPoint, offset){
            var scale = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"];
            var returnNote = null;
            var scalePoint = scale.indexOf(startPoint);
            for (var i = 0; i < offset; ){
                i = i + 1;
                //console.log(i);
                //console.log(scalePoint);
                scalePoint++;
                if (scalePoint > 11) { scalePoint = 0; }
            }
            returnNote = scale[scalePoint];
            return returnNote;
        }
        </script>
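
    The reason the class-based version silently does nothing: getElementsByClassName returns an HTMLCollection of elements, not a single element, so assigning .innerHTML on the collection just sets a property on the collection object. Writing to every member takes a loop; a small sketch against the markup above (setClassText is a hypothetical helper name):

        // Write the same text into every element carrying the given class.
        function setClassText(className, text) {
            var els = document.getElementsByClassName(className);
            for (var i = 0; i < els.length; i++) {
                els[i].innerHTML = text;
            }
        }

        // e.g. inside findScale():
        setClassText("root", rootNote);
        setClassText("3rd",  useScale(rootNote, 4));
        setClassText("5th",  useScale(rootNote, 7));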

    Read the article

  • Incorrect data when passing a list of pointers to a function (C++)

    - by Phil Elm
    I'm writing code for combining data received over multiple sources. When objects are received (I'll call them MyPacket for now), they are stored in a standard list. However, whenever I reference the payload size of a partial MyPacket, the value shows up as 1 instead of the intended size. Here's the function code:

        MyPacket* CombinePackets(std::list<MyPacket*>* packets, uint8* current_packet){
            uint32 total_payload_size = 0;

            if(packets->size() <= 0)
                return NULL; //For now.

            std::list<MyPacket*>::iterator it = packets->begin();

            //Some minor code here, not relevant to the problem.

            for(uint8 index = 0; index < packets->size(); index++){
                //(*it)->GetPayloadSize() returns 1 when it should show 1024. I've tried
                //directly accessing the variable and more, but I just can't get it to work.
                total_payload_size += (*it)->GetPayloadSize();
                cout << "Adding to total payload size value: " << (*it)->GetPayloadSize() << endl;
                std::advance(it, 1);
            }

            MyPacket* packet = new MyPacket();
            //Byte is just a typedef'd unsigned char.
            packet->payload = (byte) calloc(total_payload_size, sizeof(byte));
            packet->payload_size = total_payload_size;

            it = packets->begin(); //Go back to the beginning again.
            uint32 big_payload_index = 0;
            for(uint8 index = 0; index < packets->size(); index++){
                if(current_packet != NULL)
                    *current_packet = index;
                for(uint32 payload_index = 0; payload_index < (*it)->GetPayloadSize(); payload_index++){
                    packet->payload[big_payload_index] = (*it)->payload[payload_index];
                    big_payload_index++;
                }
                std::advance(it, 1);
            }
            return packet;
        }

        //Calling code
        std::list<MyPacket*> received = std::list<MyPacket*>();
        //The code that fills it is here.
        std::list<MyPacket*>::iterator it = received.begin();
        cout << (*it)->GetPayloadSize() << endl; // Outputs 1024 correctly!
        MyPacket* final = CombinePackets(&received, NULL);
        cout << final->GetPayloadSize() << endl; //Outputs 181, which happens to be the number of elements in the received list.

    So, as you can see above, when I reference (*it)->GetPayloadSize(), it returns 1 instead of the intended 1024. Can anyone see the problem, and if so, do you have an idea of how to fix it? I've spent 4 hours searching and trying new solutions, but they all keep returning 1...

    EDIT:
    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I've been doing a lot with it is to remove explicit Reflection code that's often necessary when you 'dynamically' need to walk an object hierarchy. In the past I've had a number of ReflectionUtils that used string-based expressions to walk an object hierarchy. With the introduction of dynamic, much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot.

    The old Way - Reflection

    Here's a really contrived example, but assume for a second you'd want to dynamically retrieve Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET Web Page request. The strongly typed version looks like this:

        string path = Page.Request.Url.AbsolutePath;

    Now assume for a second that Page wasn't available as a strongly typed instance and all you had was an object reference to start with and you couldn't cast it (right, I said this was contrived :-)). If you're using raw Reflection code to retrieve this you'd end up writing 3 sets of Reflection calls using GetValue(). Here's some internal code I use to retrieve property values as part of ReflectionUtils:

        /// <summary>
        /// Retrieve a property value from an object dynamically. This is a simple version
        /// that uses Reflection calls directly. It doesn't support indexers.
        /// </summary>
        /// <param name="instance">Object to make the call on</param>
        /// <param name="property">Property to retrieve</param>
        /// <returns>Object - cast to proper type</returns>
        public static object GetProperty(object instance, string property)
        {
            return instance.GetType()
                           .GetProperty(property, ReflectionUtils.MemberAccess)
                           .GetValue(instance, null);
        }

    If you want more control over properties and support both fields and properties as well as array indexers, a little more work is required:

        /// <summary>
        /// Parses Properties and Fields including Array and Collection references.
        /// Used internally for the 'Ex' Reflection methods.
        /// </summary>
        private static object GetPropertyInternal(object Parent, string Property)
        {
            if (Property == "this" || Property == "me")
                return Parent;

            object result = null;
            string pureProperty = Property;
            string indexes = null;
            bool isArrayOrCollection = false;

            // Deal with Array Property
            if (Property.IndexOf("[") > -1)
            {
                pureProperty = Property.Substring(0, Property.IndexOf("["));
                indexes = Property.Substring(Property.IndexOf("["));
                isArrayOrCollection = true;
            }

            // Get the member
            MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0];
            if (member.MemberType == MemberTypes.Property)
                result = ((PropertyInfo)member).GetValue(Parent, null);
            else
                result = ((FieldInfo)member).GetValue(Parent);

            if (isArrayOrCollection)
            {
                indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty);

                if (result is Array)
                {
                    int Index = -1;
                    int.TryParse(indexes, out Index);
                    result = CallMethod(result, "GetValue", Index);
                }
                else if (result is ICollection)
                {
                    if (indexes.StartsWith("\""))
                    {
                        // String Index
                        indexes = indexes.Trim('\"');
                        result = CallMethod(result, "get_Item", indexes);
                    }
                    else
                    {
                        // assume numeric index
                        int index = -1;
                        int.TryParse(indexes, out index);
                        result = CallMethod(result, "get_Item", index);
                    }
                }
            }

            return result;
        }

        /// <summary>
        /// Returns a property or field value using a base object and sub members including . syntax.
        /// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company")
        /// This method also supports indexers in the Property value such as:
        /// Customer.DataSet.Tables["Customers"].Rows[0]
        /// </summary>
        /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param>
        /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param>
        public static object GetPropertyEx(object Parent, string Property)
        {
            Type type = Parent.GetType();
            int at = Property.IndexOf(".");
            if (at < 0)
            {
                // Complex parse of the property
                return GetPropertyInternal(Parent, Property);
            }

            // Walk the . syntax - split into current object (Main) and further parsed objects (Subs)
            string main = Property.Substring(0, at);
            string subs = Property.Substring(at + 1);

            // Retrieve the next . section of the property
            object sub = GetPropertyInternal(Parent, main);

            // Now go parse the left over sections
            return GetPropertyEx(sub, subs);
        }

    As you can see, there's a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties, including one called GetPropertyEx() which can walk the dot syntax hierarchy easily. Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this:

        string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string;

    This works fine, but is bulky to write and of course requires that I use my custom routines. It's also quite slow, as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy.

    Enter dynamic - way easier!

    .NET 4.0's dynamic type makes the above really easy. The following code is all that it takes:

        object objPage = Page;   // force to object for contrivance :)
        dynamic page = objPage;  // convert to dynamic from untyped object
        string scriptUrl = page.Request.Url.AbsolutePath;

    The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example to force the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference, which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types, so when you look at IntelliSense you'll see something like this when typing Request.:

    In other words, any dynamic value access on an object returns another dynamic object, which is what allows the walking of the hierarchy chain. Note also that the result value doesn't have to be explicitly cast as string in the code above - the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which is nice, making for natural syntax that looks *exactly* like the fully typed syntax but is completely dynamic.

    Note that you can also use indexers in the same natural syntax, so the following also works on the dynamic page instance:

        string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"];

    The dynamic type is going to make a lot of Reflection code go away, as it's simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance:

        void Reflection()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();
            for (int i = 0; i < reps; i++)
            {
                // string url = ReflectionUtils.GetProperty(Page,"Title") as string; // "Request.Url.AbsolutePath") as string;
                string url = Page.GetType().GetProperty("Title", ReflectionUtils.MemberAccess)
                                 .GetValue(Page, null) as string;
            }
            stop.Stop();
            Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString());
        }

        void Dynamic()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();
            dynamic page = Page;
            for (int i = 0; i < reps; i++)
            {
                string url = page.Title; // Request.Url.AbsolutePath;
            }
            stop.Stop();
            Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString());
        }

    The dynamic code runs in 4-5 milliseconds while the Reflection code runs around 200+ milliseconds! There's a bit of overhead in the first dynamic object call, but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp

    Read the article

  • ASP.NET MVC Paging/Sorting/Filtering using the MVCContrib Grid and Pager

    - by rajbk
    This post walks you through creating a UI for paging, sorting and filtering a list of data items. It makes use of the excellent MVCContrib Grid and Pager Html UI helpers. A sample project is attached at the bottom. Our UI will eventually look like this. The application will make use of the Northwind database. The top portion of the page has a filter area region. The filter region is enclosed in a form tag. The select lists are wired up with jQuery to auto post back the form. The page has a pager region at the top and bottom of the product list. The product list has a link to display more details about a given product. The column headings are clickable for sorting and an icon shows the sort direction. Strongly Typed View Models The views are written to expect strongly typed objects. We suffix these strongly typed objects with ViewModel since they are designed specifically for passing data down to the view.  The following listing shows the ProductViewModel. This class will be used to hold information about a Product. We use attributes to specify if the property should be hidden and what its heading in the table should be. This metadata will be used by the MvcContrib Grid to render the table. Some of the properties are hidden from the UI ([ScaffoldColumn(false)) but are needed because we will be using those for filtering when writing our LINQ query. public ActionResult Index( string productName, int? supplierID, int? categoryID, GridSortOptions gridSortOptions, int? page) {   var productList = productRepository.GetProductsProjected();   // Set default sort column if (string.IsNullOrWhiteSpace(gridSortOptions.Column)) { gridSortOptions.Column = "ProductID"; }   // Filter on SupplierID if (supplierID.HasValue) { productList = productList.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { productList = productList.Where(a => a.CategoryID == categoryID); }   // Filter on ProductName if (!string.IsNullOrWhiteSpace(productName)) { productList = productList.Where(a => a.ProductName.Contains(productName)); }   // Create all filter data and set current values if any // These values will be used to set the state of the select list and textbox // by sending it back to the view. var productFilterViewModel = new ProductFilterViewModel(); productFilterViewModel.SelectedCategoryID = categoryID ?? -1; productFilterViewModel.SelectedSupplierID = supplierID ?? -1; productFilterViewModel.Fill();   // Order and page the product list var productPagedList = productList .OrderBy(gridSortOptions.Column, gridSortOptions.Direction) .AsPagination(page ?? 1, 10);     var productListContainer = new ProductListContainerViewModel { ProductPagedList = productPagedList, ProductFilterViewModel = productFilterViewModel, GridSortOptions = gridSortOptions };   return View(productListContainer); } The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. The MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library. It also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on. 
The following diagram shows the rest of the key ViewModels in our design. We have a container class called ProductListContainerViewModel which has nested classes. The ProductPagedList is of type IPagination<ProductViewModel>. MvcContrib expects the IPagination<T> interface to determine the page number and page size of the collection we are working with. You convert any IEnumerable<T> into an IPagination<T> by calling the AsPagination extension method in the MvcContrib library; it also creates a paged set of type ProductViewModel. The ProductFilterViewModel class will hold information about the different select lists and the ProductName being searched on. It will also hold the state of any previously selected item in the lists and the previous search criteria (you will recall that this type of state information was stored in ViewState when working with WebForms). With MVC there is no state storage, so all state has to be fetched and passed back to the view. GridSortOptions is a type defined in the MvcContrib library and is used by the Grid to determine the current column being sorted on and the current sort direction. The following shows the view and partial views used to render our UI. The Index view expects a type ProductListContainerViewModel which we described earlier.

<%Html.RenderPartial("SearchFilters", Model.ProductFilterViewModel); %>
<% Html.RenderPartial("Pager", Model.ProductPagedList); %>
<% Html.RenderPartial("SearchResults", Model); %>
<% Html.RenderPartial("Pager", Model.ProductPagedList); %>

The View contains a partial view "SearchFilters" and passes it the ProductFilterViewModel. The SearchFilters view uses this model to render all the search lists and the textbox. The partial view "Pager" uses the ProductPagedList, which implements the IPagination interface. The "Pager" view contains the MvcContrib Pager helper used to render the paging information. This view is repeated twice since we want the pager UI to be available at the top and bottom of the product list. The Pager partial view is located in the Shared directory so that it can be reused across Views. The partial view "SearchResults" uses the ProductListContainerViewModel. This partial view contains the MvcContrib Grid, which needs both the ProductPagedList and GridSortOptions to render itself. The Controller Action Consider an example request like this: /Products?productName=test&supplierId=29&categoryId=4. The application receives this GET request and maps it to the Index method of the ProductController. Within the action we create an IQueryable<ProductViewModel> by calling the GetProductsProjected() method.

/// <summary>
/// This method takes in a filter list, paging/sort options and applies
/// them to an IQueryable of type ProductViewModel
/// </summary>
/// <returns>
/// The return object is a container that holds the sorted/paged list,
/// state for the filters and state about the current sorted column
/// </returns>
public ActionResult Index(
    string productName,
    int? supplierID,
    int? categoryID,
    GridSortOptions gridSortOptions,
    int? page)
{
    var productList = productRepository.GetProductsProjected();

    // Set default sort column
    if (string.IsNullOrWhiteSpace(gridSortOptions.Column))
    {
        gridSortOptions.Column = "ProductID";
    }

    // Filter on SupplierID
    if (supplierID.HasValue)
    {
        productList = productList.Where(a => a.SupplierID == supplierID);
    }

    // Filter on CategoryID
    if (categoryID.HasValue)
    {
        productList = productList.Where(a => a.CategoryID == categoryID);
    }

    // Filter on ProductName
    if (!string.IsNullOrWhiteSpace(productName))
    {
        productList = productList.Where(a => a.ProductName.Contains(productName));
    }

    // Create all filter data and set current values if any.
    // These values will be used to set the state of the select lists and textbox
    // by sending them back to the view.
    var productFilterViewModel = new ProductFilterViewModel();
    productFilterViewModel.SelectedCategoryID = categoryID ?? -1;
    productFilterViewModel.SelectedSupplierID = supplierID ?? -1;
    productFilterViewModel.Fill();

    // Order and page the product list
    var productPagedList = productList
        .OrderBy(gridSortOptions.Column, gridSortOptions.Direction)
        .AsPagination(page ?? 1, 10);

    var productListContainer = new ProductListContainerViewModel
    {
        ProductPagedList = productPagedList,
        ProductFilterViewModel = productFilterViewModel,
        GridSortOptions = gridSortOptions
    };

    return View(productListContainer);
}

The supplier, category and product name filters are applied to this IQueryable if any are present in the request. The ProductPagedList is created by applying a sort order and calling the AsPagination method. Finally the ProductListContainerViewModel is created and returned to the view. You have seen how to use the MvcContrib Grid and Pager to render a clean, lightweight UI from strongly typed views. You also saw how to use partial views to get data from the strongly typed model passed down from the parent view. The code also shows you how to use jQuery to auto post back. The sample is attached below. Don’t forget to change your connection string to point to the server containing the Northwind database. NorthwindSales_MvcContrib.zip My name is Kobayashi. I work for Keyser Soze.

    Read the article

  • Building applications with WPF, MVVM and Prism(aka CAG)

    - by skjagini
In this article I am going to walk through an application using WPF and Prism (aka composite application guidance, CAG) which simulates engaging a taxi (cab). The rules are simple; the app has 3 screens:

A login screen to authenticate the user
An information screen
A screen to engage the cab, roam around, and calculate the total fare

Metered Rate of Fare
The meter is required to be engaged when a cab is occupied by anyone
$3.00 upon entry
$0.35 for each additional unit
The unit fare is: one-fifth of a mile, when the cab is traveling at 6 miles an hour or more; or 60 seconds when not in motion or traveling at less than 12 miles per hour.
Night surcharge of $.50 after 8:00 PM & before 6:00 AM
Peak hour Weekday Surcharge of $1.00 Monday - Friday after 4:00 PM & before 8:00 PM
New York State Tax Surcharge of $.50 per ride.

Example: Friday (2010-10-08) 5:30pm, start at Lexington Ave & E 57th St, end at Irving Pl & E 15th St
Start = $3.00
Travels 2 miles at less than 6 mph for 15 minutes = $3.50
Travels at more than 12 mph for 5 minutes = $1.75
Peak hour Weekday Surcharge = $1.00 (ride started at 5:30 pm)
New York State Tax Surcharge = $0.50

Before we dive into the app, I would like to give a brief description of the frameworks. If you want to jump straight to the source code, scroll all the way to the end of the post. MVVM The MVVM pattern is in no way related to the usage of Prism in your application and should be considered whenever you are using WPF, with or without Prism. If you are not familiar with MVVM: a typical UI would involve adding some UI controls like text boxes and a button, double-clicking the button to generate an event handler, calling a method from the business layer, and updating the user interface. That works most of the time for small-scale applications. The problem with this approach is that business logic ends up wrapped inside UI-specific code, which makes it hard to unit test and mock; MVVM solves exactly that problem. MVVM stands for Model(M) – View(V) – ViewModel(VM); based on the interactions within the three parties it should really be called VVMM, but MVVM sounds more like MVC (Model-View-Controller), hence the name. Why it should be called VVMM: View – View Model – Model. WPF allows you to create user interfaces using XAML, and MVVM takes it to the next level by allowing complete separation of user interface and business logic. In WPF each view has a DataContext property which, when set to an instance of a class (which happens to be your view model), provides the data the view is interested in; i.e., the view interacts with the view model through DataContext. Sujith, if the view and view model interact directly with each other, how does MVVM help me with separation of concerns? Well, the catch is that DataContext is of type Object; since it is of type object the view doesn't know the exact type of the view model, allowing views and view models to be loosely coupled. View models aggregate data from models (data access layer, services, etc.) and make it available to views through properties, methods, etc.; i.e., view models interact with models. PRISM Prism is provided by the Microsoft Patterns and Practices team; the source code can be downloaded from CodePlex, with samples and documentation on MSDN. The name composite implies composing the user interface from different modules (views) without direct dependencies on each other, again allowing loosely coupled development.
Well Sujith, I can already do that with user controls, so why should I learn another framework? That's correct, you can decouple using user controls, but you still have to manage some amount of coupling: how do you communicate between the controls, how do you subscribe/unsubscribe, and how do you load/unload views dynamically? Prism is not a replacement for user controls; it provides the following features which greatly help in designing composite applications:

Dependency Injection (DI) / Inversion of Control (IoC)
Modules
Regions
Event Aggregator
Commands

Simply put, MVVM helps in building a single view, and Prism helps in building an application out of those views. There are other open source alternatives to Prism, like MVVM Light and Cinch; take a look at them as well. Let's dig into the source code. 1. Solution The solution is made of the following projects:

Framework: Holds the common functionality for building applications using WPF and Prism
TaxiClient: Startup project, bootstrapping and app styling
TaxiCommon: Helps with the business logic
TaxiModules: Holds the meat of the application with views and view models
TaxiTests: To test the application

2. DI / IoC Dependency Injection (DI), as the name implies, refers to injecting dependencies, and Inversion of Control (IoC) means the calling code has no direct control over its dependencies; this is the opposite of the normal way of programming, where dependencies are created by the caller, hence the inversion. Aside from some differences in terminology, the concept is the same in both cases. The idea behind the DI/IoC pattern is to reduce direct coupling between the components of an application; the more direct the dependencies, the more tightly coupled the application, resulting in code that is hard to modify, unit test and mock. Initializing Dependency Injection through BootStrapper TaxiClient is the startup project of the solution and App (App.xaml) is the starting class that gets called when you run the application. From the App's OnStartup method we invoke the BootStrapper.

namespace TaxiClient
{
    /// <summary>
    /// Interaction logic for App.xaml
    /// </summary>
    public partial class App : Application
    {
        protected override void OnStartup(StartupEventArgs e)
        {
            base.OnStartup(e);

            (new BootStrapper()).Run();
        }
    }
}

BootStrapper is your contact point for initializing the application, including dependency injection, creating the Shell, and other frameworks. We are going to use Unity for DI; there are lots of open source DI frameworks like Spring.Net, StructureMap, etc. with different feature sets, and you can choose a framework based on your preferences. Note that Prism comes with built-in support for Unity; we derive from UnityBootstrapper in our case, and for any other DI framework you would have to extend Prism appropriately.

namespace TaxiClient
{
    public class BootStrapper: UnityBootstrapper
    {
        protected override IModuleCatalog CreateModuleCatalog()
        {
            return new ConfigurationModuleCatalog();
        }

        protected override DependencyObject CreateShell()
        {
            Framework.FrameworkBootStrapper.Run(Container, Application.Current.Dispatcher);

            Shell shell = new Shell();
            shell.ResizeMode = ResizeMode.NoResize;
            shell.Show();

            return shell;
        }
    }
}

Let's take a look into FrameworkBootStrapper to check out how to register with the Unity container.
namespace Framework
{
    public class FrameworkBootStrapper
    {
        public static void Run(IUnityContainer container, Dispatcher dispatcher)
        {
            UIDispatcher uiDispatcher = new UIDispatcher(dispatcher);
            container.RegisterInstance<IDispatcherService>(uiDispatcher);

            container.RegisterType<IInjectSingleViewService, InjectSingleViewService>(
                new ContainerControlledLifetimeManager());

            . . .
        }
    }
}

In the above code we are registering two components with the Unity container, and you will observe that we are following two different approaches: RegisterInstance and RegisterType. With RegisterInstance we register an existing instance, and that same instance will be returned for every request made for IDispatcherService. With RegisterType we ask the Unity container to create an instance for us when required; i.e., when I request an instance of IInjectSingleViewService, Unity will create and return an instance of the InjectSingleViewService class. RegisterType also lets us configure the lifetime of the instance being created: with ContainerControlledLifetimeManager, the Unity container caches the instance and reuses it for any subsequent requests, without creating a new one. Let's take a look at FareViewModel.cs and its constructor. The constructor takes one parameter, IEventAggregator, and if you try to find all references in your solution for IEventAggregator, you will not find a single location where an instance of EventAggregator is passed directly to the constructor. The code still resolves an instance at runtime because Prism, when used with Unity, registers EventAggregator with the container so that one is returned wherever IEventAggregator is requested; in this particular case it is called constructor injection.

public class FareViewModel:ObservableBase, IDataErrorInfo
{
    ...
    private IEventAggregator _eventAggregator;

    public FareViewModel(IEventAggregator eventAggregator)
    {
        _eventAggregator = eventAggregator;
        InitializePropertyNames();
        InitializeModel();
        PropertyChanged += OnPropertyChanged;
    }
    ...

3. Shell Shells are very similar in operation to Master Pages in ASP.NET or MDI in Windows Forms. Shells contain regions which display the views; you can have as many regions as you wish in a given view, and you can also nest regions, i.e., one region can load a view which in itself contains other regions. We have to create a shell at the start of the application, and we do so by overriding the CreateShell method of the BootStrapper. From the following Shell.xaml you will notice that we have two content controls with region names of ‘MenuRegion’ and ‘MainRegion’. The idea here is that you can inject user controls into the regions dynamically, e.g., a menu user control into MenuRegion, and based on the user action you can load the appropriate view into MainRegion.
<Window x:Class="TaxiClient.Shell"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Regions="clr-namespace:Microsoft.Practices.Prism.Regions;assembly=Microsoft.Practices.Prism"
    Title="Taxi" Height="370" Width="800">
  <Grid Margin="2">
    <ContentControl Regions:RegionManager.RegionName="MenuRegion"
        HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
        HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />

    <ContentControl Grid.Row="1" Regions:RegionManager.RegionName="MainRegion"
        HorizontalAlignment="Stretch" VerticalAlignment="Stretch"
        HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" />
    <!--<Border Grid.ColumnSpan="2" BorderThickness="2" CornerRadius="3" BorderBrush="LightBlue" />-->

  </Grid>
</Window>

4. Modules Prism provides the ability to build composite applications, and modules play an important role in that. For example, suppose you are building a Mortgage Loan Processor application with 3 components: customer's credit history, existing mortgages, and new home/loan information; and consider that the customer's credit history component involves gathering data about the customer's address, background information, job details, etc. The idea behind Prism modules is to separate the implementation of these 3 components into their own Visual Studio projects, allowing the components to be built independently, with no dependencies on each other. If we need to add another component to the application, it can be developed by an in-house team or some other team in the organization by starting with a new Visual Studio project and adding it to the application at run time, with very little knowledge about the rest of the application. Prism modules are defined by implementing the IModule interface; each Visual Studio project that is to be treated as a module should implement IModule. In BootStrapper.cs you will observe that we override CreateModuleCatalog to return a ConfigurationModuleCatalog, which loads the modules registered for the application in the app.config file; you can also add modules in code. Let's take a look at the configuration file.

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="modules" type="Microsoft.Practices.Prism.Modularity.ModulesConfigurationSection, Microsoft.Practices.Prism"/>
  </configSections>
  <modules>
    <module assemblyFile="TaxiModules.dll" moduleType="TaxiModules.ModuleInitializer, TaxiModules" moduleName="TaxiModules"/>
  </modules>
</configuration>

Here we are adding the TaxiModules project to our application; TaxiModules.ModuleInitializer implements the IModule interface. 5. Module Mapper With Prism modules you can dynamically add or remove modules from the regions; apart from that, Prism also provides an API to control adding/removing views from a region within the same module. Taxi Information Screen: Engage the Taxi Screen: The sample application has two screens, ‘Taxi Information’ and ‘Engage the Taxi’, and they both reside in the same module, TaxiModules. ‘Engage the Taxi’ is in turn made of two user controls, FareView on the left and TotalView on the right. We have created a Shell with two regions, MenuRegion and MainRegion, with the menu loaded into MenuRegion. We could create a wrapper user control called EngageTheTaxi made of FareView and TotalView and load either TaxiInfo or EngageTheTaxi into MainRegion based on the user action.
Though that would work, it tightly binds the user controls, and for every combination of user controls we would need to create a dummy wrapper control to contain them. Instead we can apply the principles we learned so far from the Shell/regions and introduce another template (LeftAndRightRegionView.xaml) made of two regions, Region1 (left) and Region2 (right), and load FareView and TotalView dynamically. To help with loading the views dynamically I have introduced a helper interface, IInjectSingleViewService, an idea suggested by Mike Taulty, whose blog is a must-read for .NET developers.

using System;
using System.Collections.Generic;
using System.ComponentModel;

namespace Framework.PresentationUtility.Navigation
{
    public interface IInjectSingleViewService : INotifyPropertyChanged
    {
        IEnumerable<CommandViewDefinition> Commands { get; }
        IEnumerable<ModuleViewDefinition> Modules { get; }

        void RegisterViewForRegion(string commandName, string viewName, string regionName, Type viewType);
        void ClearViewFromRegion(string viewName, string regionName);
        void RegisterModule(string moduleName, IList<ModuleMapper> moduleMappers);
    }
}

The interface declares three methods to work with views: RegisterViewForRegion: Registers a view with a particular region. You can register multiple views and their regions under one command; when that command is invoked, all the views registered under it will be loaded into their regions. ClearViewFromRegion: Unloads a specific view from a region. RegisterModule: The idea is that when a command is invoked you can load the UI with a set of controls in their default positions and, based on user interaction, load different controls into different regions on the fly. This is supported by ModuleViewDefinition and ModuleMapper, as shown below.

namespace Framework.PresentationUtility.Navigation
{
    public class ModuleViewDefinition
    {
        public string ModuleName { get; set; }
        public IList<ModuleMapper> ModuleMappers;
        public ICommand Command { get; set; }
    }

    public class ModuleMapper
    {
        public string ViewName { get; set; }
        public string RegionName { get; set; }
        public Type ViewType { get; set; }
    }
}

6. Event Aggregator The Prism event aggregator enables messaging between components, as in the Observer pattern: the notifier notifies the observer, which receives the notifications it is interested in. In the classic Observer pattern the observer has to unsubscribe when it is no longer interested in notifications, which allows the notifier to remove the observer's reference from its local cache. Though .NET has managed garbage collection, it cannot collect instances that are still referenced by an active instance, so an overlooked subscription keeps the observer in memory for as long as the notifier stays in memory, resulting in a memory leak. Developers have to be very careful to unsubscribe when necessary, and it often gets overlooked; to overcome these problems the Prism event aggregator caches the subscriber using a weak reference and releases the memory once the subscriber goes out of scope. Using the event aggregator is very simple: declare an event type by inheriting from the generic CompositePresentationEvent.

using Microsoft.Practices.Prism.Events;
using TaxiCommon.BAO;

namespace TaxiCommon.CompositeEvents
{
    public class TaxiOnMoveEvent:CompositePresentationEvent<TaxiOnMove> { }
}

TaxiOnMove.cs includes the properties which we want to exchange between the parties, FareView and TotalView.
using System;

namespace TaxiCommon.BAO
{
    public class TaxiOnMove
    {
        public TimeSpan MinutesAtTweleveMPH { get; set; }
        public double MilesAtSixMPH { get; set; }
    }
}

Let's take a look at FareViewModel (the notifier) and how it raises the event. Here we raise the event by getting it through GetEvent<..>() and publishing it with the payload:

private void OnAddMinutes(object obj)
{
    TaxiOnMove payload = new TaxiOnMove();
    if(MilesAtSixMPH != null)
        payload.MilesAtSixMPH = MilesAtSixMPH.Value;
    if(MinutesAtTweleveMPH != null)
        payload.MinutesAtTweleveMPH = new TimeSpan(0,0,MinutesAtTweleveMPH.Value,0);

    _eventAggregator.GetEvent<TaxiOnMoveEvent>().Publish(payload);
    ResetMinutesAndMiles();
}

And TotalViewModel (the observer) subscribes to notifications by getting the event through GetEvent<..>():

namespace TaxiModules.ViewModels
{
    public class TotalViewModel:ObservableBase
    {
        ....
        private IEventAggregator _eventAggregator;

        public TotalViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;
            ...
        }

        private void SubscribeToEvents()
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>()
                .Subscribe(OnTaxiStarted, ThreadOption.UIThread,false,(filter) => true);
            _eventAggregator.GetEvent<TaxiOnMoveEvent>()
                .Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiResetEvent>()
                .Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true);
        }

        ...
        private void OnTaxiMove(TaxiOnMove taxiOnMove)
        {
            OnMoveFare fare = new OnMoveFare(taxiOnMove);
            Fares.Add(fare);
            SetTotalFare(new []{fare});
        }

        ....

7. MVVM through example In this section we are going to look into the MVVM implementation through an example. I have all the modules declared in a single project, TaxiModules; again, it is not necessary to have them in one project. Once the user logs into the application, you will be greeted with the ‘Engage the Taxi’ screen, which is made of two user controls, FareView.xaml and TotalView.xaml. As you can see from the solution explorer, each of them has its own code-behind file and ViewModel class: FareViewModel.cs and TotalViewModel.cs. Let's take a look at FareView and how it interacts with FareViewModel using the MVVM implementation. FareView.xaml acts as the view and FareViewModel.cs is its view model. The FareView code-behind class:

namespace TaxiModules.Views
{
    /// <summary>
    /// Interaction logic for FareView.xaml
    /// </summary>
    public partial class FareView : UserControl
    {
        public FareView(FareViewModel viewModel)
        {
            InitializeComponent();
            this.Loaded += (s, e) => { this.DataContext = viewModel; };
        }
    }
}

The FareView is bound to FareViewModel through the data context, and you will observe that DataContext is of type Object; i.e., the FareView doesn't really know the type of the ViewModel (FareViewModel). This helps the separation of View and ViewModel: since they are independent of each other, you could bind FareView to a FareViewModel2 as well and the application would compile just fine. Let's take a look at the FareView xaml file:

<UserControl x:Class="TaxiModules.Views.FareView"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:Toolkit="clr-namespace:Microsoft.Windows.Controls;assembly=WPFToolkit"
    xmlns:Commands="clr-namespace:Microsoft.Practices.Prism.Commands;assembly=Microsoft.Practices.Prism">
  <Grid Margin="10" >
....
    <Border Style="{DynamicResource innerBorder}" Grid.Row="0" Grid.Column="0" Grid.RowSpan="11" Grid.ColumnSpan="2" Panel.ZIndex="1"/>

    <Label Grid.Row="0" Content="Engage the Taxi" Style="{DynamicResource innerHeader}"/>
    <Label Grid.Row="1" Content="Select the State"/>
    <ComboBox Grid.Row="1" Grid.Column="1" ItemsSource="{Binding States}" Height="auto">
      <ComboBox.ItemTemplate>
        <DataTemplate>
          <TextBlock Text="{Binding Name}"/>
        </DataTemplate>
      </ComboBox.ItemTemplate>
      <ComboBox.SelectedItem>
        <Binding Path="SelectedState" Mode="TwoWay"/>
      </ComboBox.SelectedItem>
    </ComboBox>
    <Label Grid.Row="2" Content="Select the Date of Entry"/>
    <Toolkit:DatePicker Grid.Row="2" Grid.Column="1" SelectedDate="{Binding DateOfEntry, ValidatesOnDataErrors=true}" />
    <Label Grid.Row="3" Content="Enter time 24hr format"/>
    <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding TimeOfEntry, TargetNullValue=''}"/>
    <Button Grid.Row="4" Grid.Column="1" Content="Start the Meter" Commands:Click.Command="{Binding StartMeterCommand}" />

    <Label Grid.Row="5" Content="Run the Taxi" Style="{DynamicResource innerHeader}"/>
    <Label Grid.Row="6" Content="Number of Miles &lt;@6mph"/>
    <TextBox Grid.Row="6" Grid.Column="1" Text="{Binding MilesAtSixMPH, TargetNullValue='', ValidatesOnDataErrors=true}"/>
    <Label Grid.Row="7" Content="Number of Minutes @12mph"/>
    <TextBox Grid.Row="7" Grid.Column="1" Text="{Binding MinutesAtTweleveMPH, TargetNullValue=''}"/>
    <Button Grid.Row="8" Grid.Column="1" Content="Add Minutes and Miles " Commands:Click.Command="{Binding AddMinutesCommand}"/>
    <Label Grid.Row="9" Content="Other Operations" Style="{DynamicResource innerHeader}"/>
    <Button Grid.Row="10" Grid.Column="1" Content="Reset the Meter" Commands:Click.Command="{Binding ResetCommand}"/>

  </Grid>
</UserControl>

The bindings in the markup above show data binding at work; for example, the ComboBox which displays the list of states has its ItemsSource bound to the States property, with its DataTemplate bound to Name and its SelectedItem to SelectedState. You might be wondering what all these properties are and how the view is able to bind to them. The answer lies in the data context: when you bind a control, WPF looks for a data context on the root object (the Grid in this case) and, if it can't find one, it looks at the root's root, i.e. the FareView UserControl, which is bound to FareViewModel. Each of those properties has to be declared on the ViewModel for the View to bind correctly. To put it simply, the View is bound to the ViewModel through a data context of type object, and every control that is bound on the View actually binds to a public property on the ViewModel. Let's look into the ViewModel code (the following is not an exact copy of FareViewModel.cs; only the code relevant to this section is pasted):

namespace TaxiModules.ViewModels
{
    public class FareViewModel:ObservableBase, IDataErrorInfo
    {
        public List<USState> States
        {
            get { return USStates.StateList; }
        }

        public USState SelectedState
        {
            get { return _selectedState; }
            set { _selectedState = value; RaisePropertyChanged(_selectedStatePropertyName); }
        }

        public DateTime? DateOfEntry
        {
            get { return _dateOfEntry; }
            set { _dateOfEntry = value; RaisePropertyChanged(_dateOfEntryPropertyName); }
        }

        public TimeSpan? TimeOfEntry
        {
            get { return _timeOfEntry; }
            set { _timeOfEntry = value; RaisePropertyChanged(_timeOfEntryPropertyName); }
        }

        public double? MilesAtSixMPH
        {
            get { return _milesAtSixMPH; }
            set { _milesAtSixMPH = value; RaisePropertyChanged(_distanceAtSixMPHPropertyName); }
        }
        public int? MinutesAtTweleveMPH
        {
            get { return _minutesAtTweleveMPH; }
            set { _minutesAtTweleveMPH = value; RaisePropertyChanged(_minutesAtTweleveMPHPropertyName); }
        }

        public ICommand StartMeterCommand
        {
            get
            {
                if(_startMeterCommand == null)
                {
                    _startMeterCommand = new DelegateCommand<object>(OnStartMeter, CanStartMeter);
                }
                return _startMeterCommand;
            }
        }

        public ICommand AddMinutesCommand
        {
            get
            {
                if(_addMinutesCommand == null)
                {
                    _addMinutesCommand = new DelegateCommand<object>(OnAddMinutes, CanAddMinutes);
                }
                return _addMinutesCommand;
            }
        }

        public ICommand ResetCommand
        {
            get
            {
                if(_resetCommand == null)
                {
                    _resetCommand = new DelegateCommand<object>(OnResetCommand);
                }
                return _resetCommand;
            }
        }

        private void OnStartMeter(object obj)
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>().Publish(
                new TaxiStarted()
                {
                    EngagedOn = DateOfEntry.Value.Date + TimeOfEntry.Value,
                    EngagedState = SelectedState.Value
                });

            _isMeterStarted = true;
            OnPropertyChanged(this,null);
        }
    }
}

Views communicate user actions like button clicks, tree view item selections, etc. using commands. When the user clicks the ‘Start the Meter’ button, it invokes StartMeterCommand, which calls the OnStartMeter method; that method publishes a TaxiStartedEvent, through the event aggregator, to TotalViewModel.

namespace TaxiModules.ViewModels
{
    public class TotalViewModel:ObservableBase
    {
        ...
        private IEventAggregator _eventAggregator;

        public TotalViewModel(IEventAggregator eventAggregator)
        {
            _eventAggregator = eventAggregator;

            InitializePropertyNames();
            InitializeModel();
            SubscribeToEvents();
        }

        public decimal? TotalFare
        {
            get { return _totalFare; }
            set { _totalFare = value; RaisePropertyChanged(_totalFarePropertyName); }
        }
        ....
        private void SubscribeToEvents()
        {
            _eventAggregator.GetEvent<TaxiStartedEvent>().Subscribe(OnTaxiStarted, ThreadOption.UIThread,false,(filter) => true);
            _eventAggregator.GetEvent<TaxiOnMoveEvent>().Subscribe(OnTaxiMove, ThreadOption.UIThread, false, (filter) => true);
            _eventAggregator.GetEvent<TaxiResetEvent>().Subscribe(OnTaxiReset, ThreadOption.UIThread, false, (filter) => true);
        }

        private void OnTaxiStarted(TaxiStarted taxiStarted)
        {
            Fares.Add(new EntryFare());
            Fares.Add(new StateTaxFare(taxiStarted));
            Fares.Add(new NightSurchargeFare(taxiStarted));
            Fares.Add(new PeakHourWeekdayFare(taxiStarted));

            SetTotalFare(Fares);
        }

        private void SetTotalFare(IEnumerable<IFare> fares)
        {
            TotalFare = (_totalFare ?? 0) + TaxiFareHelper.GetTotalFare(fares);
        }
        ....
    }
}

TotalViewModel subscribes to TaxiStartedEvent and the rest of the events. When TaxiStartedEvent is raised it calls the OnTaxiStarted method, which sets the total fare, including the entry fee, state tax, nightly surcharge and peak hour weekday fare. Note that TotalViewModel derives from ObservableBase, which implements the RaisePropertyChanged method that we invoke in the setter of the TotalFare property; i.e., once we update the TotalFare property it raises an event that allows the TotalFare text box to fetch the new value through the data context. The ViewModel communicates with the View through the data context and has no knowledge of the View, helping keep ViewModel and View loosely coupled. I have attached the source code (.NET 4.0, Prism 4.0, VS 2010); download and play with it, and don't forget to leave your comments.
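For reference, ObservableBase is the change-notification base class the view models derive from; a minimal sketch of what such a class typically looks like (the sample's actual implementation may differ, e.g. it may also expose property-name helpers):

using System.ComponentModel;

public abstract class ObservableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    // Raises PropertyChanged so that bound controls re-read the property through the data context.
    protected void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}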

    Read the article

  • Version Assemblies with TFS 2010 Continuous Integration

    - by Steve Michelotti
When I first heard that TFS 2010 had moved to Workflow Foundation for Team Build, I was *extremely* skeptical. I’ve loved MSBuild and didn’t quite understand the reasons for this change. In fact, given that I’ve been exclusively using Cruise Control for Continuous Integration (CI) for the last 5+ years of my career, I was skeptical of TFS for CI in general. However, after going through the learning process for TFS 2010 recently, I’m starting to become a believer. I’m also starting to see some of the benefits with Workflow Foundation for the overall processing because it gives you constructs not available in MSBuild such as parallel tasks, better control flow constructs, and a slightly better customization story. The first customization I had to make to the build process was to version the assemblies of my solution. This is not new. In fact, I’d recommend reading Mike Fourie’s well known post on Versioning Code in TFS before you get started. This post describes several foundational aspects of versioning assemblies regardless of your version of TFS. The main points are: 1) don’t use source control operations for your version file, 2) use a schema like <Major>.<Minor>.<IncrementalNumber>.0, and 3) do not keep AssemblyVersion and AssemblyFileVersion in sync. To do this in TFS 2010, the best post I’ve found has been Jim Lamb’s post of building a custom TFS 2010 workflow activity. Overall, this post is excellent but the primary issue I have with it is that the assembly version numbers produced are based on a date and look like this: “2010.5.15.1”. This is definitely not what I want. I want to be able to communicate to the developers and stakeholders that we are producing the “1.1 release” or “1.2 release” – which would have an assembly version number of “1.1.317.0” for example. In this post, I’ll walk through the process of customizing the assembly version number based on this method – customizing the concepts in Lamb’s post to suit my needs. I’ll also be combining this with the concepts of Fourie’s post – particularly with regards to the standards around how to version the assemblies. The first thing I’ll do is add a file called SolutionAssemblyVersionInfo.cs to the root of my solution that looks like this:

using System;
using System.Reflection;

[assembly: AssemblyVersion("1.1.0.0")]
[assembly: AssemblyFileVersion("1.1.0.0")]

I’ll then add that file as a Visual Studio link file to each project in my solution by right-clicking the project, “Add – Existing Item…” then when I click the SolutionAssemblyVersionInfo.cs file, making sure I “Add As Link”: Now the Solution Explorer will show our file. We can see that it’s a “link” file because of the black arrow in the icon within all our projects. Of course you’ll need to remove the AssemblyVersion and AssemblyFileVersion attributes from the AssemblyInfo.cs files to avoid duplicate attributes since they now live in the SolutionAssemblyVersionInfo.cs file. This is an extremely common technique so that all the projects in our solution can be versioned as a unit. At this point, we’re ready to write our custom activity. The primary consideration is that I want the developer and/or tech lead to be able to easily be in control of the Major.Minor and then I want the CI process to add the third number with a unique incremental number. We’ll leave the fourth position always “0” for now – it’s held in reserve in case the day ever comes where we need to do an emergency patch to Production based on a branched version.
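To make the goal concrete before writing any workflow code: given the version file above and a CI build named "CI_MyApplication.4", the activity should leave AssemblyVersion alone and stamp the incremental number into AssemblyFileVersion:

// Checked in by the developer (AssemblyVersion stays fixed at Major.Minor):
[assembly: AssemblyVersion("1.1.0.0")]
[assembly: AssemblyFileVersion("1.1.0.0")]

// After the CI build "CI_MyApplication.4" runs:
[assembly: AssemblyVersion("1.1.0.0")]
[assembly: AssemblyFileVersion("1.1.4.0")]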
Writing the Custom Workflow Activity

Similar to Lamb’s post, I’m going to write two custom workflow activities. The “outer” activity (a xaml activity) will be pretty straightforward. It will check if the solution version file exists in the solution root and, if so, delegate the replacement of the version to the AssemblyVersionInfo activity, which is a CodeActivity highlighted in red below:   Notice that the arguments of this activity are the “solutionVersionFile” and “tfsBuildNumber” which will be passed in. The tfsBuildNumber passed in will look something like this: “CI_MyApplication.4” and we’ll need to grab the “4” (i.e., the incremental revision number) and put that in the third position. Then we’ll need to honor whatever was specified for Major.Minor in the SolutionAssemblyVersionInfo.cs file. For example, if the SolutionAssemblyVersionInfo.cs file had “1.1.0.0” for the AssemblyVersion (as shown in the first code block near the beginning of this post), then we want the resulting file to have “1.1.4.0”. Before we do anything, let’s put together a unit test for all this so we can know if we get it right:

[TestMethod]
public void Assembly_version_should_be_parsed_correctly_from_build_name()
{
    // arrange
    const string versionFile = "SolutionAssemblyVersionInfo.cs";
    WriteTestVersionFile(versionFile);
    var activity = new VersionAssemblies();
    var arguments = new Dictionary<string, object> {
        { "tfsBuildNumber", "CI_MyApplication.4"},
        { "solutionVersionFile", versionFile}
    };

    // act
    var result = WorkflowInvoker.Invoke(activity, arguments);

    // assert
    Assert.AreEqual("1.2.4.0", (string)result["newAssemblyFileVersion"]);
    var lines = File.ReadAllLines(versionFile);
    Assert.IsTrue(lines.Contains("[assembly: AssemblyVersion(\"1.2.0.0\")]"));
    Assert.IsTrue(lines.Contains("[assembly: AssemblyFileVersion(\"1.2.4.0\")]"));
}

private void WriteTestVersionFile(string versionFile)
{
    var fileContents = "using System.Reflection;\n" +
        "[assembly: AssemblyVersion(\"1.2.0.0\")]\n" +
        "[assembly: AssemblyFileVersion(\"1.2.0.0\")]";
    File.WriteAllText(versionFile, fileContents);
}

At this point, the code for our AssemblyVersionInfo activity is pretty straightforward:

[BuildActivity(HostEnvironmentOption.Agent)]
public class AssemblyVersionInfo : CodeActivity
{
    [RequiredArgument]
    public InArgument<string> FileName { get; set; }

    [RequiredArgument]
    public InArgument<string> TfsBuildNumber { get; set; }

    public OutArgument<string> NewAssemblyFileVersion { get; set; }

    protected override void Execute(CodeActivityContext context)
    {
        var solutionVersionFile = this.FileName.Get(context);

        // Ensure that the file is writeable
        var fileAttributes = File.GetAttributes(solutionVersionFile);
        File.SetAttributes(solutionVersionFile, fileAttributes & ~FileAttributes.ReadOnly);

        // Prepare assembly versions
        var majorMinor = GetAssemblyMajorMinorVersionBasedOnExisting(solutionVersionFile);
        var newBuildNumber = GetNewBuildNumber(this.TfsBuildNumber.Get(context));
        var newAssemblyVersion = string.Format("{0}.{1}.0.0", majorMinor.Item1, majorMinor.Item2);
        var newAssemblyFileVersion = string.Format("{0}.{1}.{2}.0", majorMinor.Item1, majorMinor.Item2, newBuildNumber);
        this.NewAssemblyFileVersion.Set(context, newAssemblyFileVersion);

        // Perform the actual replacement
        var contents = this.GetFileContents(newAssemblyVersion, newAssemblyFileVersion);
        File.WriteAllText(solutionVersionFile, contents);

        // Restore the file's original attributes
        File.SetAttributes(solutionVersionFile, fileAttributes);
    }

    #region Private Methods

    private string GetFileContents(string newAssemblyVersion, string newAssemblyFileVersion)
    {
        var cs = new StringBuilder();
        cs.AppendLine("using System.Reflection;");
        cs.AppendFormat("[assembly: AssemblyVersion(\"{0}\")]", newAssemblyVersion);
        cs.AppendLine();
        cs.AppendFormat("[assembly: AssemblyFileVersion(\"{0}\")]", newAssemblyFileVersion);
        return cs.ToString();
    }

    private Tuple<string, string> GetAssemblyMajorMinorVersionBasedOnExisting(string filePath)
    {
        var lines = File.ReadAllLines(filePath);
        var versionLine = lines.Where(x => x.Contains("AssemblyVersion")).FirstOrDefault();

        if (versionLine == null)
        {
            throw new InvalidOperationException("File does not contain [assembly: AssemblyVersion] attribute");
        }

        return ExtractMajorMinor(versionLine);
    }

    private static Tuple<string, string> ExtractMajorMinor(string versionLine)
    {
        var firstQuote = versionLine.IndexOf('"') + 1;
        var secondQuote = versionLine.IndexOf('"', firstQuote);
        var version = versionLine.Substring(firstQuote, secondQuote - firstQuote);
        var versionParts = version.Split('.');
        return new Tuple<string, string>(versionParts[0], versionParts[1]);
    }

    private string GetNewBuildNumber(string buildName)
    {
        return buildName.Substring(buildName.LastIndexOf(".") + 1);
    }

    #endregion
}

At this point the final step is to incorporate this activity into the overall build template. Make a copy of the DefaultTemplate.xaml; we’ll call it DefaultTemplateWithVersioning.xaml. Before the build and labeling happens, drag the VersionAssemblies activity in. Then set the LabelName variable to BuildDetail.BuildDefinition.Name + "-" + newAssemblyFileVersion, since newAssemblyFileVersion was produced by our activity. Configuring CI Once you add your solution to source control, you can configure CI with the build definition window as shown here. The main difference is that we’ll change the Process tab to reflect a different build number format and choose our custom build process file: When the build completes, we’ll see the name of our project with the unique revision number: If we look at the detailed build log for the latest build, we’ll see the label being created with our custom task: We can now look at the history of labels in TFS and see the project name with the labels (the Assignment activity I added to the workflow): Finally, if we look at the physical assemblies that are produced, we can right-click on any assembly in Windows Explorer and see the assembly version in its properties: Full Traceability We now have full traceability for our code. There will never be a question of what code was deployed to Production. You can always see the assembly version in the properties of the physical assembly. That can be traced back to a label in TFS where the unique revision number matches. The label in TFS gives you the complete snapshot of the code in your source control repository at the time the code was built. This type of process for full traceability has been used for many years for CI – in fact, I’ve done similar things with CCNet and SVN for quite some time. This is simply the TFS implementation of that pattern.
The new features that TFS 2010 gives you to make these types of customizations in your build process are quite easy to use once you get over the initial learning curve.
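As a closing aside, the same WorkflowInvoker technique used in the unit test can exercise the inner activity on its own; a quick sketch, assuming the AssemblyVersionInfo class above and a 1.1 version file on disk:

using System;
using System.Activities;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var activity = new AssemblyVersionInfo();
        var arguments = new Dictionary<string, object>
        {
            { "FileName", "SolutionAssemblyVersionInfo.cs" },
            { "TfsBuildNumber", "CI_MyApplication.7" }
        };

        // WorkflowInvoker runs the activity synchronously and returns its out-arguments.
        var outputs = WorkflowInvoker.Invoke(activity, arguments);

        // Prints "1.1.7.0" when the version file on disk starts at 1.1.
        Console.WriteLine(outputs["NewAssemblyFileVersion"]);
    }
}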

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
One of the things that might be surprising in the LINQ Distinct standard query operator is that it doesn’t automatically work properly on custom classes. There are reasons for this, which I’ll explain shortly. The example I’ll use in this post focuses on pulling a unique list of names to load into a drop-down list. I’ll explain the sample application, show you a typical first shot at Distinct, explain why it won’t work as you expect, and then demonstrate a solution to make Distinct work with any custom class. The technologies I’m using are LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010. The function of the example program is to show a list of people that I follow. In Twitter API vernacular, these people are called “Friends”, though I’ve never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you’ll see my objects named accordingly. Where Distinct comes into play is that I want a drop-down list with the names of the friends appearing in it. Some friends are quite verbose, which means I can’t just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn’t do anything with the drop-down list, and I leave its practical purpose up to your imagination; perhaps a filter for the list if I only want to see a certain person’s tweets, or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you’ll need to authenticate with Twitter, because I’m using OAuth (DotNetOpenAuth) for authentication, and then you’ll see the drop-down list of names above the grid with the most recent tweets from friends. Here’s what the application looks like when it runs: As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we’ll get there soon. This is an ASP.NET MVC 2 application, written with VS 2010.
Here’s the View that produces this screen:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %>
<%@ Import Namespace="DistinctSelectList.Models" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <fieldset>
        <legend>Twitter Friends</legend>
        <div>
            <%= Html.DropDownListFor(
                    twendVM => twendVM.FriendNames,
                    Model.FriendNames,
                    "<All Friends>") %>
        </div>
        <div>
            <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)
                   .Name("TwitterFriendsGrid")
                   .Columns(cols =>
                    {
                        cols.Template(col =>
                            { %>
                                <img src="<%= col.ImageUrl %>"
                                     alt="<%= col.ScreenName %>" />
                        <% });
                        cols.Bound(col => col.ScreenName);
                        cols.Bound(col => col.Tweet);
                    })
                   .Render(); %>
        </div>
    </fieldset>
</asp:Content>

As shown above, the Grid is from Telerik’s Extensions for ASP.NET MVC. The first column is a template that renders the user’s Avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections from a TwitterFriendsViewModel class, shown below:

using System.Collections.Generic;
using System.Web.Mvc;

namespace DistinctSelectList.Models
{
    /// <summary>
    /// For finding friend info on screen
    /// </summary>
    public class TwitterFriendsViewModel
    {
        /// <summary>
        /// Display names of friends in drop-down list
        /// </summary>
        public List<SelectListItem> FriendNames { get; set; }

        /// <summary>
        /// Display tweets in grid
        /// </summary>
        public List<TweetViewModel> Tweets { get; set; }
    }
}

I created the TwitterFriendsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below:

namespace DistinctSelectList.Models
{
    /// <summary>
    /// Info on friend tweets
    /// </summary>
    public class TweetViewModel
    {
        /// <summary>
        /// User's avatar
        /// </summary>
        public string ImageUrl { get; set; }

        /// <summary>
        /// User's Twitter name
        /// </summary>
        public string ScreenName { get; set; }

        /// <summary>
        /// Text containing user's tweet
        /// </summary>
        public string Tweet { get; set; }
    }
}

The initial Twitter query returns much more information than we need for our purposes, and this is a special class for displaying info in the View. Now you know about the View and how it’s constructed. Let’s look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I’ll skip the description of the authentication because it’s a normal part of using OAuth with LINQ to Twitter. Instead, we’ll drill down and focus on the Distinct operator.
However, I’ll show you the entire controller, below, so that you can see how it all fits together:

using System.Linq;
using System.Web.Mvc;
using DistinctSelectList.Models;
using LinqToTwitter;

namespace DistinctSelectList.Controllers
{
    [HandleError]
    public class HomeController : Controller
    {
        private MvcOAuthAuthorization auth;
        private TwitterContext twitterCtx;

        /// <summary>
        /// Display a list of friends' current tweets
        /// </summary>
        public ActionResult Index()
        {
            auth = new MvcOAuthAuthorization(InMemoryTokenManager.Instance, InMemoryTokenManager.AccessToken);
            string accessToken = auth.CompleteAuthorize();
            if (accessToken != null)
            {
                InMemoryTokenManager.AccessToken = accessToken;
            }
            if (auth.CachedCredentialsAvailable)
            {
                auth.SignOn();
            }
            else
            {
                return auth.BeginAuthorize();
            }
            twitterCtx = new TwitterContext(auth);

            var friendTweets =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.Friends
                 select new TweetViewModel
                 {
                     ImageUrl = tweet.User.ProfileImageUrl,
                     ScreenName = tweet.User.Identifier.ScreenName,
                     Tweet = tweet.Text
                 })
                .ToList();

            var friendNames =
                (from tweet in friendTweets
                 select new SelectListItem
                 {
                     Text = tweet.ScreenName,
                     Value = tweet.ScreenName
                 })
                .Distinct()
                .ToList();

            var twendsVM = new TwitterFriendsViewModel
            {
                Tweets = friendTweets,
                FriendNames = friendNames
            };

            return View(twendsVM);
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

The important parts of the listing above are the queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let’s dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience:

var friendTweets =
    (from tweet in twitterCtx.Status
     where tweet.Type == StatusType.Friends
     select new TweetViewModel
     {
         ImageUrl = tweet.User.ProfileImageUrl,
         ScreenName = tweet.User.Identifier.ScreenName,
         Tweet = tweet.Text
     })
    .ToList();

The LINQ to Twitter query above simplifies what we need to work with in the View and reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList. This brings us to the focus of this blog post, writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames:

var friendNames =
    (from tweet in friendTweets
     select new SelectListItem
     {
         Text = tweet.ScreenName,
         Value = tweet.ScreenName
     })
    .Distinct()
    .ToList();

The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you’ll notice that the drop-down list contains many duplicates. This will send you back to the code scratching your head, but there’s a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>. In the parameterless case above, Distinct calls EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list.
You don’t have problems with the built-in types, such as string, int, DateTime, etc., because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don’t implement IEquatable<T>. So, what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference type objects. You don’t have this problem with value types because their default Equals implementation performs value equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So, the reason why Distinct didn’t produce the results we wanted was because we used a type that doesn’t define its own equality, and Distinct used the default reference equality. This resulted in all objects being included in the results because they are all separate instances in memory with unique references. As you might have guessed, the solution to the problem is to use the second overload of Distinct that accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEquatable<T>, but SelectListItem belongs to the .NET Framework Class Library. Therefore, the solution is to create a custom type that implements IEqualityComparer<T>, as in the SelectListItemComparer class, shown below:

using System.Collections.Generic;
using System.Web.Mvc;

namespace DistinctSelectList.Models
{
    public class SelectListItemComparer : EqualityComparer<SelectListItem>
    {
        public override bool Equals(SelectListItem x, SelectListItem y)
        {
            return x.Value.Equals(y.Value);
        }

        public override int GetHashCode(SelectListItem obj)
        {
            return obj.Value.GetHashCode();
        }
    }
}

The SelectListItemComparer class above doesn’t implement IEqualityComparer<SelectListItem> directly, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer is an abstract class that implements IEqualityComparer<T> with abstract Equals and GetHashCode methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine if two items are equal. Since SelectListItem.Value is type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class. You might have other criteria in your own object and would need to define what it means for your object to be equal. Now that we have an IEqualityComparer<SelectListItem>, let’s fix the problem. The code below modifies the query where we want distinct values:

var friendNames =
    (from tweet in friendTweets
     select new SelectListItem
     {
         Text = tweet.ScreenName,
         Value = tweet.ScreenName
     })
    .Distinct(new SelectListItemComparer())
    .ToList();

Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list will behave as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept an IEqualityComparer<T>. You can use the same techniques as shown here, with SelectListItemComparer, with those other operators as well.
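As an aside, when you control the projection you can sidestep the custom comparer entirely by running Distinct over an anonymous type first, since the compiler generates value-based Equals and GetHashCode for anonymous types; a sketch of that variation:

var friendNames = friendTweets
    .Select(tweet => new { tweet.ScreenName }) // anonymous type: compared by value
    .Distinct()
    .Select(name => new SelectListItem
    {
        Text = name.ScreenName,
        Value = name.ScreenName
    })
    .ToList();

Either way works; the comparer approach above keeps the intent explicit and is reusable across queries, while the anonymous-type trick avoids an extra class.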
Now you know how to resolve problems with getting Distinct to work properly and also have a way to fix problems with other operators that require equality comparisons. @JoeMayo

    Read the article

  • Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

    - by ScottGu
Search engine optimization (SEO) is important for any publicly facing web-site.  A large % of traffic to sites now comes directly from search engines, and improving your site’s search relevancy will lead to more users visiting your site from search engine queries.  This can directly or indirectly increase the money you make through your site. This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have.  It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site.  The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites.  They also work with all versions of ASP.NET (and even with non-ASP.NET content). [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] Measuring the SEO of your website with the Microsoft SEO Toolkit A few months ago I blogged about the free SEO Toolkit that we’ve shipped.  This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds.  I highly recommend downloading and using the tool against any public site you work on.  It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further. Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:   Search Relevancy and URL Splitting Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are: How many other sites link to your content.  Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy. The uniqueness of the content it finds on your site.  If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content. One of the things you want to be very careful to avoid when building public facing sites is to not allow different URLs to retrieve the same content within your site.  Doing so will hurt with both of the situations above.  In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than what it would otherwise be if it was just one URL).  Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it. 4 Really Common SEO Problems Your Sites Might Have Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content.  When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve. SEO Problem #1: Default Document IIS (and other web servers) supports the concept of a “default document”.  This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory.
    This is convenient – but it means that by default this content is available via two different publicly exposed URLs (which is bad).  For example:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    SEO Problem #2: Different URL Casings

    Web developers often don't realize that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    SEO Problem #3: Trailing Slashes

    Consider the below two URLs – they might look the same at first, but they are subtly different.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

    http://scottgu.com
    http://scottgu.com/

    SEO Problem #4: Canonical Host Names

    Sometimes sites are reachable both with a leading "www" hostname prefix and with just the hostname itself.  This causes search engines to treat the URLs as different and split the search ranking:

    http://scottgu.com/albums.aspx/
    http://www.scottgu.com/albums.aspx/

    How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

    If you haven't been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems.  Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site.

    The "good news" is that fixing the above 4 issues is really easy using the URL Rewrite Extension.  This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista).  The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.

    You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines).  Just click the green "Install Now" button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine:

    Once installed you'll find that a new "URL Rewrite" icon is available within the IIS 7 Admin Tool:

    Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site:

    Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension).  We can click the "Add Rule…" link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.

    Scenario 1: Handling Default Document Scenarios

    One of the SEO problems I discussed earlier in this post was the scenario where the "default document" feature of IIS causes you to inadvertently expose two URLs for the same content on your site.  For example:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one.  We will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.  Let's look at how we can create such a rule.  We'll begin by clicking the "Add Rule" link in the screenshot above.  
    This will cause the below dialog to display:

    We'll select the "Blank Rule" template within the "Inbound rules" section to create a new custom URL Rewriting rule.  This will display an empty pane like below:

    Don't worry – setting up the above rule is easy.  The following 4 steps explain how to do so:

    Step 1: Name the Rule

    Our first step will be to name the rule we are creating.  Naming it with a descriptive name will make it easier to find and understand later.  Let's name this rule our "Default Document URL Rewrite" rule:

    Step 2: Setup the Regular Expression that Matches this Rule

    Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern.  Don't worry if you aren't good with regular expressions – I suck at them too.  The trick is to know someone who is good at them or copy/paste them from a web-site.  Below we are going to specify the following regular expression as our pattern rule:

    (.*?)/?Default\.aspx$

    This pattern will match any URL string that ends with Default.aspx.  The "(.*?)" matches any preceding character zero or more times.  The "/?" part says to match the slash symbol zero or one times.  The "$" symbol at the end ensures that the pattern will only match strings that end with Default.aspx.  Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx).  Because the "ignore case" checkbox is selected it will match both "Default.aspx" as well as "default.aspx" within the URL.

    One nice feature built into the rule editor is a "Test pattern" button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring:

    Above I've added a "products/default.aspx" URL and clicked the "Test" button.  This will give me immediate feedback on whether the rule will execute for it.

    Step 3: Setup a Permanent Redirect Action

    We'll then setup an action to occur when our regular expression pattern matches the incoming URL:

    In the dialog above I've changed the "Action Type" drop down to be a "Redirect" action.  The "Redirect Type" will be a HTTP 301 Permanent redirect – which means search engines will follow it.  I've also set the "Redirect URL" property to be:

    {R:1}/

    This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path – minus the "Default.aspx" in it.  For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/.

    The "{R:N}" regex construct, where N >= 0, is called a back-reference and N is the back-reference index.  In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products".  We are going to use this {R:1}/ value to be the URL we redirect users to.  
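    If you'd like to sanity-check the pattern and its back-references outside of IIS, the rule's regular expression behaves the same way in .NET.  Below is a minimal sketch (not part of the original walkthrough – the sample paths are made up) that runs the pattern against a few URLs and prints what {R:1}/ would become:

        using System;
        using System.Text.RegularExpressions;

        class RewritePatternTest
        {
            static void Main()
            {
                // Same pattern as the rule; IgnoreCase mirrors the "ignore case" checkbox.
                Regex pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

                foreach (string path in new[] { "Default.aspx", "products/default.aspx", "albums.aspx" })
                {
                    Match m = pattern.Match(path);
                    Console.WriteLine(m.Success
                        ? path + " -> redirect to \"" + m.Groups[1].Value + "/\""   // Groups[1] is {R:1}
                        : path + " -> no match, rule does not fire");
                }
            }
        }

    Running it shows "products/default.aspx" redirecting to "products/" and "albums.aspx" left untouched – the same behavior the "Test pattern" dialog reports.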
    Step 4: Apply and Save the Rule

    Our final step is to click the "Apply" button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application's root web.config file (under a <system.webServer/rewrite> configuration section):

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely.  This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

    Step 5: Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://scottgu.com/
    http://scottgu.com/default.aspx

    Notice that the second URL automatically redirects to the first one.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.

    Scenario 2: Different URL Casing

    Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web.  This means that search engines will treat the following links as two completely different URLs:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one.  Like before, we will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again:

    Unlike the previous scenario (where we created a "Blank Rule"), with this scenario we can take advantage of a built-in "Enforce lowercase URLs" rule template.  When we click the "ok" button we'll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs:

    When we click the "Yes" button we'll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically sends users to a lower-case version of the URL:

    We can click the "Apply" button to use this rule "as-is" and have it apply to all incoming URLs to our site.  Because my www.scottgu.com site uses ASP.NET Web Forms, I'm going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET's built-in "WebResource.axd" handler are excluded from our case-sensitivity URL Rewrite logic.  URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites.  While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn't necessary and will add an extra HTTP redirect to many of my pages.  
    The good news is that adding a condition that prevents my URL Rewriting rule from running against certain URLs is easy.  We simply need to expand the "Conditions" section of the form above.  We can then click the "Add" button to add a condition clause.  This will bring up the "Add Condition" dialog:

    Above I've entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string "WebResource.axd".  This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

    Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you'll probably want to add additional condition filter clauses so that URLs to them also don't get redirected to be lower-case (just add conditions for patterns like .jpg, .gif, .js, etc).  Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won't break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don't need to be redirected for SEO reasons.  So setting up a condition clause makes sense to add.

    When I click the "ok" button above and apply our lower-case rewriting rule, the admin tool will save the following additional rule to our web.config file:

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                    <rule name="Lower Case URLs" stopProcessing="true">
                        <match url="[A-Z]" ignoreCase="false" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{ToLower:{URL}}" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://scottgu.com/Albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has a capital "A") automatically does a redirect to a lower-case version of the URL.

    Scenario 3: Trailing Slashes

    Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs.  The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

    http://scottgu.com
    http://scottgu.com/

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does.  Like before, we will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again:

    The URL Rewrite admin tool has a built-in "Append or remove the trailing slash symbol" rule template.  
    When we select it and click the "ok" button we'll see the following dialog, which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn't present:

    Like within our previous lower-casing rewrite rule, we'll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule.  This will avoid an unnecessary redirect from happening for those URLs.

    When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL doesn't have a trailing slash – and if the URL is not processed by either a directory or a file.  This will save the following additional rule to our web.config file:

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                    <rule name="Lower Case URLs" stopProcessing="true">
                        <match url="[A-Z]" ignoreCase="false" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{ToLower:{URL}}" />
                    </rule>
                    <rule name="Trailing Slash" stopProcessing="true">
                        <match url="(.*[^/])$" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://scottgu.com
    http://scottgu.com/

    Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    Scenario 4: Canonical Host Names

    The final SEO problem I discussed earlier is the scenario where a site works with both a leading "www" hostname prefix as well as just the hostname itself.  This causes search engines to treat the URLs as different and split the search ranking:

    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL.  Like before, we will set up the HTTP redirect to be a "permanent redirect" – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

    To create such a rule we'll click the "Add Rule" link in the URL Rewrite admin tool again.  This will cause the "Add Rule" dialog to appear again:

    The URL Rewrite admin tool has a built-in "Canonical domain name" rule template.  
    When we select it and click the "ok" button we'll see the following dialog, which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL:

    Above I'm entering the primary URL address I want to expose to the web: scottgu.com.  When we click the "OK" button we'll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.  This will save the following additional rule to our web.config file:

    <configuration>
        <system.webServer>
            <rewrite>
                <rules>
                    <rule name="Canonical Hostname">
                        <match url="(.*)" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
                        </conditions>
                        <action type="Redirect" url="http://scottgu.com/{R:1}" />
                    </rule>
                    <rule name="Default Document" stopProcessing="true">
                        <match url="(.*?)/?Default\.aspx$" />
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                    <rule name="Lower Case URLs" stopProcessing="true">
                        <match url="[A-Z]" ignoreCase="false" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{ToLower:{URL}}" />
                    </rule>
                    <rule name="Trailing Slash" stopProcessing="true">
                        <match url="(.*[^/])$" />
                        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
                            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                            <add input="{URL}" pattern="WebResource.axd" negate="true" />
                        </conditions>
                        <action type="Redirect" url="{R:1}/" />
                    </rule>
                </rules>
            </rewrite>
        </system.webServer>
    </configuration>

    Try the Rule Out

    Now that we've saved the rule, let's try it out on our site.  Try the following two URLs on my site:

    http://www.scottgu.com/albums.aspx
    http://scottgu.com/albums.aspx

    Notice that the first URL (which has the "www" prefix) now automatically does a redirect to the second URL which does not have the www prefix.  Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

    4 Simple Rules for Improved SEO

    The above 4 rules are pretty easy to set up and should take less than 15 minutes to configure on existing sites you already have.  The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site.  Users who follow existing links will be automatically redirected to the new URLs you wish to publish.  And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it.  
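    Once all four rules are in place, a quick way to confirm they behave as described is to request each problem URL without following redirects and check for the 301 status plus the Location header.  Below is a small sketch (not from the original post; the /photos path is a hypothetical subdirectory used only for illustration) using the .NET Framework's HttpWebRequest:

        using System;
        using System.Net;

        class RedirectCheck
        {
            static void Main()
            {
                string[] urls =
                {
                    "http://scottgu.com/default.aspx",    // default document rule
                    "http://scottgu.com/Albums.aspx",     // lower-casing rule
                    "http://scottgu.com/photos",          // trailing-slash rule (hypothetical path)
                    "http://www.scottgu.com/albums.aspx"  // canonical host name rule
                };

                foreach (string url in urls)
                {
                    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                    request.AllowAutoRedirect = false;    // surface the 301 instead of silently following it

                    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                    {
                        Console.WriteLine("{0} -> {1} {2}",
                            url, (int)response.StatusCode, response.Headers["Location"]);
                    }
                }
            }
        }

    If the rules are working, each line should print a 301 status along with the canonical URL that the corresponding rule redirects to.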
    Customizing your URL Rewriting rules further is easy to do, either by editing the web.config file directly or, alternatively, by double-clicking the URL Rewrite icon within the IIS 7.x admin tool – it will list all the active rules for your web-site or application:

    Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further.

    Summary

    Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on.  If you haven't already, download and use the SEO Toolkit to analyze the SEO of your sites today.

    New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published.  Tools like the URL Rewrite Extension that I've talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code.

    The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO – as well.  I'll be covering these additional capabilities more in future blog posts.

    Hope this helps,

    Scott

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.

    The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters, and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four – yes, four – virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration, as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly, much to the displeasure of the audience. They couldn't possibly have been less pleased than me.

    At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.

    Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.

    My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time, and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.

    Somewhere along the way, I learned that it was possible to load SharePoint 2010 on Windows 7, which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to a Hyper-V VHD yet, but that's on my list of things to try, so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and SharePoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop.  
    The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

    DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and is not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

    I won't provide the step-by-step instructions in this post as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

    Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. Then the instructions in Part 2 won't work exactly as written for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

    - You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client.
    - You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, along with a command-line instruction to execute which enables required Windows features.

    I will digress for a moment to save you some grief in the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

    Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

    I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.)  
    You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs as part of the prerequisites automatically.

    When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

    - On the Choose the installation you want page of the installation wizard, I chose Server Farm.
    - On the Server Type page, I chose Complete.
    - At the end of the installation, I did not run the configuration wizard.

    Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end.

    It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have it installed. Continue reading to see how I accomplished my objective.

    I uninstalled SQL Server 2008 R2 and started again. I had different problems which I don't recollect now. However, I uninstalled again and approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

    1. Install SQL Server 2008 R2 to get a database engine instance installed.
    2. Run the SharePoint configuration wizard to set up the SharePoint databases.
    3. In Central Administration, create a Web application using classic mode authentication as per a TechNet article on PowerPivot Authentication and Authorization.
    4. Follow the steps at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note – you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

    I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application – after you have successfully created the target application – you select the target application and then click Set credentials in the ribbon.

    Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server.  
    However, I have yet to test this in my current environment.

    I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community that will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

    Read the article

  • Mauritius Software Craftsmanship Community

    There we go! I finally managed to push myself forward and pick up an old – actually too old – idea that I have had ever since I arrived here in Mauritius more than six years ago. I'm talking about a community for all kinds of ICT-connected people. In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany due to an FAQ on Visual FoxPro, actually Active FoxPro Pages (AFP) to be more precise. Then in 2003/2004 I approached the person responsible for the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly.

    Why did it take you so long?

    Well, I don't want to bother you with the details, but the short version is that I was too busy with either my job (building up new companies) or private life (I got married and we have two lovely children, eh, 'monsters') or even both. But now is the time when I am starting to look for new fields, given the fact that I have gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself again to community activities. And I love to do that!

    Why a new user group?

    Good question... And 'easy' to answer. Since back in 2007 I did my usual research, eh, Google searches, to see whether there were existing user groups in Mauritius and in which fields of interest. And yes, there are! If I recall this correctly, there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (of which there even used to be two). But... either they do not exist anymore, they are dormant, or there is only a low heart-beat, frankly speaking. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how to approach future members. Don't get me wrong, I'm not criticising others doing a very good job; I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it.

    Ok, so what's up with the 'Mauritius Software Craftsmanship Community', or short: MSCC?

    As I've already written in my article on 'Communities - The importance of exchange and discussion', I think it is essential in a world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamism and so much daily news that it is almost impossible to keep on track with all of it. The MSCC is going to provide a common platform to exchange experience and share knowledge between each other. You might be a newbie who wants to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator; or you're an experienced geek who loves to share your ideas or the solutions you implemented to solve a specific problem; or you're the business (or HR) guy that is looking for 'fresh' blood to reinforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people.

    Meetup of 26.06.2013 @ L'arabica: Of course there are laptops around. Free WiFi, power outlet, coffee, code and Linux in one go.

    The MSCC is technology-agnostic and spans an umbrella over any kind of technology, simply because you can't ignore other technologies anymore in a connected IT world like ours.  
    A front-end developer for iOS applications should have the chance to connect with a Python back-end coder and eventually with a DBA for MySQL or PostgreSQL and exchange their experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure Web developers – with all that HTML5, CSS3, JavaScript and JS libraries stuff – and passionate C# or Java coders at the same table. This diversity of knowledge can assist and boost your personal situation. And last but not least, there are projects and open positions 'flying' around... People might like to hear others' opinions about an employer or get new impulses on how to tackle an issue at their workplace, etc. This is about community. And that's how I see the MSCC in general – free of any limitations, be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy – compared to dusty books and remote online resources. It's human!

    Organising meetups (meetings, get-togethers, gatherings – you name it!)

    As of writing this article, the MSCC is currently meeting every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change in the future eventually, but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-)

    The MSCC's main online presence is located at Meetup.com because it allows me to handle the organisation of events and meeting appointments very easily, and any member can have a look at who else is involved so that an exchange of contacts is possible at any time. In combination with the other entities (G+ Communities, FB Pages or Groups) I advertise and manage all future activities here:

    Mauritius Software Craftsmanship Community – This is a community for those who care and are proud of what they do. For those developers, regardless of how experienced they are, who want to improve and master their craft. This is a community for those who believe that being average is just not good enough.

    I know, there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that allow you to keep track of meetings and to stay informed on the latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as a free smartphone app.

    Sharing is caring!

    As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius. Following is an overview of the current networks:

    - Twitter – latest updates and quickies
    - Google+ – community channel
    - Facebook – community page
    - LinkedIn – community group
    - Trello – collaboration workspace to share and develop ideas

    Hopefully, this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends and other interested 'geeks'.

    Your future looks bright

    Running and participating in a user group or any kind of community usually provides quite a number of advantages for anyone.  
    On the one side it is very joyful for me to organise appointments and get in touch with people that might be interested in presenting a little demo of their projects or the recent problems they had to tackle, and on the other side there are lots of companies that have various support programs or sponsorships especially tailored for user groups. At the moment, I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and as said, companies provide all kinds of goodies, books free of charge, or sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your point of view on the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and of course employees.

    Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the above-mentioned social networks. Let's get this community up and running, my fellow Mauritians!

    Recent updates

    The MSCC is now officially participating in the O'Reilly UK User Group program and we are allowed to request review or recension copies of recent titles. Additionally, we have a discount code for any books or ebooks that you might like to order on shop.oreilly.com. More applications for user group sponsorship programs are pending, and I'm looking forward to a couple of announcements very soon.

    And... we need some kind of 'corporate identity' – over at the MSCC website there is a call for action (or better said, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc. and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate in detail the following points:

    - What is static code analysis?
    - When to use?
    - Supported platforms
    - Supported Visual Studio versions
    - How to use
      - Run Code Analysis Manually
      - Run Code Analysis Automatically
      - Run Code Analysis while checking in source code to TFS version control (TFSVC)
      - Run Code Analysis as part of Team Build
    - Understand the Code Analysis results & learn how to fix them
    - Create your custom rule set
    - Q & A
    - References

    What is static code analysis?

    The Static Code Analysis feature of Visual Studio performs static code analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. A large set of those rules is included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it, and this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage, for example.

    When to use?

    Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms

    .NET Framework, native (C and C++), and database applications.

    Supported Visual Studio versions

    All editions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional) – check the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use?

    Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis Manually

    To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis Automatically

    To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's property page.

    Run Code Analysis while checking in source code to TFS version control (TFSVC)

    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.  
    (This is available only if you're using Team Foundation Server.)

    Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of the Project Administrators or Project Collection Administrators group. For more information about Team Foundation permissions check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

    To configure the policy:

    1. In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control.
    2. In the Source Control dialog box, select the Check-in Policy tab.
    3. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy.
    4. Check or uncheck the policy options based on the configuration you need to enforce, as illustrated below:
       - Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.
       - Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.
       - Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.

    Check the Code Analysis rule set reference on MSDN.

    What is a Rule Set?

    A rule set is a group of code analysis rules, like the example below, where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule.

    Once you have configured the analysis rules, the policy is enabled for all the team members in this project: whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy as below.

    TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy – check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx – but you also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build

    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications and perform other important functions. Code Analysis can be enabled in the build definition file by selecting the correct value for the build process parameter "Perform Code Analysis".

    Once configured, kick off your build definition to queue a new build. Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results & learn how to fix them

    Now that we have gone through the Code Analysis configuration and the different ways of running it, let's go through the Code Analysis results, how to understand them, and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Let's dig deep into what each result item contains:

    1. Check ID – the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.  
    2. Title – the title of the warning message.
    3. Description – a description of the problem or suggested fix.
    4. File Name – the file name and the line of code number which violate the code analysis rule set.
    5. Category – the code analysis category for this error.
    6. Warning/Error – depends on how you configure it in the rule set; the default is the Warning level.
    7. Action – one of the following:
       - Copy: copies the warning information to the clipboard.
       - Create Work Item: if you're connected to Team Foundation Server you can create a work item, most probably a Task or a Bug, and assign it to a developer to fix a certain code analysis warning.
       - Suppress Message: There are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available (a short code sketch follows the Q & A below):
         - In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning. This makes the suppression more discoverable.
         - In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project. This can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally.

    Visual Studio makes it very easy to fix a Code Analysis warning. All you have to do is click the Check ID hyperlink if you are not aware how to fix the warning, and you'll be directed to MSDN online or a local copy, based on the configuration you chose while installing Visual Studio, where you will find all the information about the warning, including how to fix it.

    Create a Custom Code Analysis Rule Set

    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A

    - Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
    - Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
    - Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
    - ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.  
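    To make the rule and suppression discussion concrete, here is a short sketch (the Samples namespace and Cache<T> type are hypothetical, not from the original post) showing code that trips the Microsoft.Design rule CA1000 mentioned earlier, together with the two suppression forms the Action column offers:

        using System.Diagnostics.CodeAnalysis;

        // The "In Suppression File" option: a line like this normally lives in the
        // project's GlobalSuppressions.cs file, and still targets a single member.
        [assembly: SuppressMessage("Microsoft.Design",
            "CA1000:DoNotDeclareStaticMembersOnGenericTypes",
            Scope = "member", Target = "Samples.Cache`1.#Clear()",
            Justification = "Callers always know the cached type.")]

        namespace Samples
        {
            public class Cache<T>
            {
                // CA1000 ("Do not declare static members on generic types") flags this
                // member because callers are forced to write Cache<string>.Clear().
                // The "In Source" option would place the attribute directly here instead:
                // [SuppressMessage("Microsoft.Design",
                //     "CA1000:DoNotDeclareStaticMembersOnGenericTypes")]
                public static void Clear()
                {
                }
            }
        }

    Note that the SuppressMessage attribute only takes effect in builds where the CODE_ANALYSIS symbol is defined, so suppressions add no overhead to normal release builds.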
    References

    - A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World – http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext
    - What is New in Code Analysis for Visual Studio 2013 – http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx
    - Analyze the code quality of Windows Store apps using Visual Studio static code analysis – http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx
    - [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality – http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

< Previous Page | 656 657 658 659 660 661 662 663 664 665 666 667  | Next Page >