Search Results

Search found 7415 results on 297 pages for 'man behind'.


  • onCommand on button not firing inside gridView??

    - by sah302
    This is really driving me crazy. I've got a button inside a gridview to remove that item from the gridview (its datasource is a list). I've got the list being saved to session anytime a change is being made to it, and on page_load check if that session variable is empty, if not, then set that list to bind to the gridview. Code Behind: Public accomplishmentTypeDao As New AccomplishmentTypeDao() Public accomplishmentDao As New AccomplishmentDao() Public userDao As New UserDao() Public facultyDictionary As New Dictionary(Of Guid, String) Public facultyList As New List(Of User) Public associatedFaculty As New List(Of User) Public facultyId As New Guid Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load 'If Not Session("associatedFaculty") Is Nothing Then' ' Dim associatedFacultyArray As User() = DirectCast(Session("associatedFaculty"), User())' ' associatedFaculty = associatedFacultyArray.ToList()' 'End If' Page.Title = "Add a New Faculty Accomplishment" ddlAccomplishmentType.DataSource = accomplishmentTypeDao.getEntireTable() ddlAccomplishmentType.DataTextField = "Name" ddlAccomplishmentType.DataValueField = "Id" ddlAccomplishmentType.DataBind() facultyList = userDao.getListOfUsersByUserGroupName("Faculty") For Each faculty As User In facultyList facultyDictionary.Add(faculty.Id, faculty.LastName & ", " & faculty.FirstName) Next If Not Page.IsPostBack Then ddlFacultyList.DataSource = facultyDictionary ddlFacultyList.DataTextField = "Value" ddlFacultyList.DataValueField = "Key" ddlFacultyList.DataBind() End If gvAssociatedUsers.DataSource = associatedFaculty gvAssociatedUsers.DataBind() End Sub Protected Sub deleteUser(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.CommandEventArgs) facultyId = New Guid(e.CommandArgument.ToString()) associatedFaculty.Remove(associatedFaculty.Find(Function(user) user.Id = facultyId)) Session("associatedFaculty") = associatedFaculty.ToArray() gvAssociatedUsers.DataBind() upAssociatedFaculty.Update() End Sub Protected Sub btnAddUser_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnAddUser.Click facultyId = New Guid(ddlFacultyList.SelectedValue) associatedFaculty.Add(facultyList.Find(Function(user) user.Id = facultyId)) Session.Add("associatedFaculty", associatedFaculty.ToArray()) gvAssociatedUsers.DataBind() upAssociatedFaculty.Update() End Sub Protected Sub Delete(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.CommandEventArgs) End Sub End Class Markup: <asp:UpdatePanel ID="upAssociatedFaculty" runat="server" UpdateMode="Conditional"> <ContentTemplate> <p><b>Created By:</b> <asp:Label ID="lblCreatedBy" runat="server"></asp:Label></p> <p><b>Accomplishment Type: </b><asp:DropDownList ID="ddlAccomplishmentType" runat="server"></asp:DropDownList></p> <p><b>Accomplishment Applies To: </b><asp:DropDownList ID="ddlFacultyList" runat="server"></asp:DropDownList> &nbsp;<asp:Button ID="btnAddUser" runat="server" Text="Add Faculty" OnClientClick="incrementCounter();" /></p> <p> <asp:GridView ID="gvAssociatedUsers" runat="server" AutoGenerateColumns="false" GridLines="None" ShowHeader="false"> <Columns> <asp:BoundField DataField="Id" HeaderText="Id" Visible="False" /> <asp:TemplateField ShowHeader="False"> <ItemTemplate> <span style="margin-left: 15px;"> <p><%#Eval("LastName")%>, <%#Eval("FirstName")%> <asp:Button ID="btnUnassignUser" runat="server" CausesValidation="false" CommandArgument='<%# Eval("Id") %>' CommandName="Delete" OnCommand="deleteUser" Text='Remove' /></p> </span> 
</ItemTemplate> </asp:TemplateField> </Columns> <EmptyDataTemplate> <em>There are currently no faculty associated with this accomplishment.</em> </EmptyDataTemplate> </asp:GridView> </p> </ContentTemplate> </asp:UpdatePanel> Now here is the crazy part I am simply boggled by, if I uncomment the If Not Session... block of page_load, then deleteUser will never fire when btnUnassignUser is clicked. If I keep it commented out...it fires no problem, but then of course my list can never have more than one item since I am not loading the saved list from session into the gridview but just a fresh one. But the button click is being registered, because page_load is being stepped through again when I am viewing in debug mode, just deleteUser never fires. Why is this happening?? And how can I fix it??
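    A hedged note on the likely cause: GridView command events are matched against the control tree that existed on the previous request, and this code also never re-assigns DataSource before the DataBind() calls inside the event handlers (DataSource does not survive postback, so those rebinds bind nothing). The sketch below (an assumption, not a confirmed fix) restores the list from Session on every request so the rows are recreated consistently, binds fresh data only when needed, and re-assigns the source before each rebind:

    ```vb
    ' Minimal sketch (VB.NET, names taken from the question above).
    Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
        ' Restore state on EVERY request, before any events fire, so the grid's
        ' control tree matches what was rendered last time.
        If Session("associatedFaculty") IsNot Nothing Then
            associatedFaculty = DirectCast(Session("associatedFaculty"), User()).ToList()
        End If
        ' (dropdown setup as in the question)
        If Not Page.IsPostBack Then
            gvAssociatedUsers.DataSource = associatedFaculty
            gvAssociatedUsers.DataBind()
        End If
    End Sub

    Protected Sub deleteUser(ByVal sender As Object, ByVal e As System.Web.UI.WebControls.CommandEventArgs)
        facultyId = New Guid(e.CommandArgument.ToString())
        associatedFaculty.Remove(associatedFaculty.Find(Function(u) u.Id = facultyId))
        Session("associatedFaculty") = associatedFaculty.ToArray()
        gvAssociatedUsers.DataSource = associatedFaculty ' reassign: DataSource is not kept in ViewState
        gvAssociatedUsers.DataBind()
        upAssociatedFaculty.Update()
    End Sub
    ```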


  • Custom Online Backup Solution Advice

    - by Martín Marconcini
    I have to implement a way for our customers to back up their SQL 2000/2005/2008 databases online. The application they use is a C#/.NET35 Winforms application that connects to a SQL Server (can be 2000/2005/2008, sometimes Express editions). The SQL Server is on the same LAN. Our application has a very specific UI and we must code each form following those guidelines. There’s lots of GDI+ to give it the look and feel we want. For that reason, using a 3rd party application is not a very good idea. We need to charge the customer on a monthly/annual basis for the service. Preferably, the customer doesn’t need to care about bandwidth and storage space. It must be transparent. Given the above reqs., my first thoughts are: Solution 1: Code some basic FTP functionality with a behind-the-scenes SQL backup mechanism, then hire a hosting service and compress-and-transfer the .BAK to the host. Maintain a series of folders (one for each customer). They won’t see what’s happening. They will just see a list of their files and a big “Backup now” button that will perform the SQL backup, compress it and upload it (and update the file list) ;) Pros: Not very complicated to implement, simple to use, fairly simple to configure (could have a dedicated ftp user/pass). Cons: Finding an FTP-only hosting plan is probably not going to be easy; they usually come with a bunch of stuff. FTP is not always the best protocol. More? Solution 2: Similar to 1, but instead of FTP, find a cloud storage service like Amazon S3, Mosso or similar. Pros: Cloud storage is fast, reliable, etc. It’s fairly easy to implement (especially if there are APIs like AWS or Mosso). Cons: I have been unable to come up with a service optimized for resellers where I can create multiple sub-accounts (one for each customer). Billing is going to be a nightmare because these services bill per GB, and with one account it’s impossible to differentiate each customer. Solution 3: Similar to 2, but letting the user create their own account on Amazon S3 (for example). Pros: You forget about billing and such. Cons: A mess for the customer, who has to open the Amazon (or whatever) account and will be charged by them, not by you. You can’t really charge the customer (since you’re just not doing anything). Solution 4: Use one of the many online backup solutions built on cloud storage. Pros: Many of these include SQL Server backup, plus a lot of features that we’d otherwise have to implement, and web access and the like come included. Cons: Still the billing problem described in number 2. Few of these companies (if any) offer “reseller” accounts. You eventually have to use their software (some offer certain branding). Any better approach? Summary: You have a piece of software (a .NET Winforms app). You want your users to be able to back up their SQL Server databases online (and be able to retrieve the backups if needed). You ideally would like to charge the customer for this service (i.e. XX € a year).
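    For Solution 1, the moving parts are small enough to sketch. The following outline (placeholder URI and credentials, no error handling, written against the .NET 3.5 target mentioned above) backs the database up server-side, compresses the .BAK, and pushes it to a per-customer FTP folder. Note that BACKUP ... TO DISK resolves the path on the SQL Server machine, not the client, so for a remote server the file must land somewhere the application can read, e.g. a share:

    ```csharp
    // Sketch only: placeholder names, no retries/cleanup.
    using System.Data.SqlClient;
    using System.IO;
    using System.IO.Compression;
    using System.Net;

    static class OnlineBackup
    {
        static void Copy(Stream src, Stream dst)
        {
            var buf = new byte[81920];
            int n;
            while ((n = src.Read(buf, 0, buf.Length)) > 0) dst.Write(buf, 0, n);
        }

        public static void BackupCompressUpload(string connStr, string dbName,
            string ftpFolderUri, string ftpUser, string ftpPass)
        {
            string bak = Path.Combine(Path.GetTempPath(), dbName + ".bak");

            // 1. Server-side backup. The database name cannot be parameterized,
            //    so validate it before splicing it into the command text.
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "BACKUP DATABASE [" + dbName + "] TO DISK = @path WITH INIT", conn))
            {
                cmd.Parameters.AddWithValue("@path", bak);
                cmd.CommandTimeout = 0; // large backups take a while
                conn.Open();
                cmd.ExecuteNonQuery();
            }

            // 2. Compress before transfer.
            string gz = bak + ".gz";
            using (var src = File.OpenRead(bak))
            using (var dst = new GZipStream(File.Create(gz), CompressionMode.Compress))
                Copy(src, dst);

            // 3. Upload into the customer's folder.
            var req = (FtpWebRequest)WebRequest.Create(ftpFolderUri + "/" + dbName + ".bak.gz");
            req.Method = WebRequestMethods.Ftp.UploadFile;
            req.Credentials = new NetworkCredential(ftpUser, ftpPass);
            using (var src = File.OpenRead(gz))
            using (var dst = req.GetRequestStream())
                Copy(src, dst);
            req.GetResponse().Close(); // complete the transfer
        }
    }
    ```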


  • App is crashing as soon as UITableView gets reloaded

    - by OhhMee
    Hello, I'm developing an app where TableView needs to reload as soon as the login process gets completed. The app crashes with error EXC_BAD_ACCESS when the table data gets reloaded. It doesn't crash when I remove all case instances except case 0: What could be the reason behind it? Here's the code: - (void)viewDidLoad { [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(loginDone:) name:@"loginDone" object:nil]; statTableView.backgroundColor = [UIColor clearColor]; } - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger) section { return 6; } - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier]; } // Configure the cell. switch (indexPath.row) { case 0 : cell.textLabel.text = @"Foo:"; NSLog(@"%@", data7); UILabel *myLabel2 = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 20, 30)]; myLabel2.text = (@"%@", data7); myLabel2.textColor = [UIColor blackColor]; myLabel2.backgroundColor = [UIColor whiteColor]; myLabel2.font = [UIFont fontWithName:@"Trebuchet MS" size:14]; [cell.contentView addSubview:myLabel2]; break; case 1: cell.textLabel.text = @"Foo: "; UILabel *myLabel4 = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 20, 30)]; myLabel4.text = (@"%@", data11); myLabel4.textColor = [UIColor blackColor]; myLabel4.backgroundColor = [UIColor whiteColor]; myLabel4.font = [UIFont fontWithName:@"Trebuchet MS" size:14]; [cell.contentView addSubview:myLabel4]; break; case 2: cell.textLabel.text = @"Foo: "; UILabel *myLabel8 = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 20, 30)]; myLabel8.text = (@"%@", data3); myLabel8.textColor = [UIColor blackColor]; myLabel8.backgroundColor = [UIColor whiteColor]; myLabel8.font = [UIFont fontWithName:@"Trebuchet MS" size:14]; [cell.contentView addSubview:myLabel8]; break; case 3: cell.textLabel.text = @"Foo: "; UILabel *myLabel10 = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 20, 30)]; myLabel10.text = [NSString stringWithFormat:@"%@", data4]; if ([data4 isEqualToString:@"0"]) { myLabel10.text = @"None"; } myLabel10.textColor = [UIColor blackColor]; myLabel10.backgroundColor = [UIColor whiteColor]; myLabel10.font = [UIFont fontWithName:@"Trebuchet MS" size:14]; [cell.contentView addSubview:myLabel10]; break; case 4: cell.textLabel.text = @"Foo: "; UILabel *myLabel12 = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 20, 30)]; myLabel12.text = [NSString stringWithFormat:@"%@", data5]; myLabel12.textColor = [UIColor blackColor]; if ([data5 isEqualToString:@"Foo"]) { myLabel12.textColor = [UIColor redColor]; myLabel12.text = @"Nil"; } myLabel12.backgroundColor = [UIColor whiteColor]; myLabel12.font = [UIFont fontWithName:@"Trebuchet MS" size:14]; [cell.contentView addSubview:myLabel12]; break; case 5: cell.textLabel.text = @"Foo: "; UILabel *myLabel14 = [[UILabel alloc] initWithFrame:CGRectMake(200, 10, 50, 30)]; if ([data6 isEqualToString:@"Foo"]) { myLabel14.textColor = [UIColor colorWithRed:(0/255.f) green:(100/255.f) blue:(0/255.f) alpha:1.0]; myLabel14.text = @"No Dues"; } else { myLabel14.text = [NSString stringWithFormat:@"%@", data6]; myLabel14.textColor = [UIColor redColor]; } myLabel14.backgroundColor = [UIColor whiteColor]; myLabel14.font = [UIFont fontWithName:@"Trebuchet MS" 
size:14]; [cell.contentView addSubview:myLabel14]; break; /* [myLabel2 release]; [myLabel4 release]; [myLabel8 release]; [myLabel10 release]; [myLabel12 release]; [myLabel14 release]; */ } return cell; }
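    Two plausible culprits stand out, though without the login code this is an educated guess. First, `myLabel2.text = (@"%@", data7);` uses the C comma operator, not string formatting: it simply assigns data7 (fine if data7 is an NSString, a crash if it is not). Second, and the classic source of EXC_BAD_ACCESS on reload: under manual reference counting, if data3/data4/data5/... are assigned autoreleased objects in the login handler without being retained, they are dangling pointers by the time the table reloads, and only case 0 may appear to "work" by luck. A sketch of both fixes:

    ```objectivec
    // Sketch (MRC assumed; the userInfo key is hypothetical).
    - (void)loginDone:(NSNotification *)note
    {
        [data7 release];
        data7 = [[[note userInfo] objectForKey:@"data7"] copy]; // retain/copy what you keep
        [statTableView reloadData];
    }

    // ...and in cellForRowAtIndexPath, format explicitly instead of the comma operator:
    myLabel2.text = [NSString stringWithFormat:@"%@", data7];
    [cell.contentView addSubview:myLabel2];
    [myLabel2 release]; // safe here: the content view retains it
    ```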


  • Silverlight: Binding a custom control to an arbitrary object

    - by Ryan Bates
    I am planning on writing a hierarchical organizational control, similar to an org chart. Several org chart implementations are out there, but not quite fit what I have in mind. Binding fields in a DataTemplate to a custom object does not seem to work. I started with a generic, custom control, i.e. public class NodeBodyBlock : ContentControl { public NodeBodyBlock() { this.DefaultStyleKey = typeof(NodeBodyBlock); } } It has a simple style in generic.xaml: <Style TargetType="org:NodeBodyBlock"> <Setter Property="Width" Value="200" /> <Setter Property="Height" Value="100" /> <Setter Property="Background" Value="Lavender" /> <Setter Property="FontSize" Value="11" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="org:NodeBodyBlock"> <Border Width="{TemplateBinding Width}" Height="{TemplateBinding Height}" Background="{TemplateBinding Background}" CornerRadius="4" BorderBrush="Black" BorderThickness="1" > <Grid> <VisualStateManager/> ... clipped for brevity </VisualStateManager.VisualStateGroups> <ContentPresenter Content="{TemplateBinding Content}" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" /> </Grid> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> My plan now is to be able to use this common definition as a base definition of sorts, with customized version of it used to display different types of content. A simple example would be to use this on a user control with the following style: <Style TargetType="org:NodeBodyBlock" x:Key="TOCNode2"> <Setter Property="ContentTemplate"> <Setter.Value> <DataTemplate> <StackPanel> <TextBlock Text="{Binding Path=NodeTitle}"/> </StackPanel> </DataTemplate> </Setter.Value> </Setter> </Style> and an instance defined as <org:NodeBodyBlock Style="{StaticResource TOCNode2}" x:Name="stTest" DataContext="{StaticResource DummyData}" /> The DummyData is defined as <toc:Node NodeNumber="mynum" NodeStatus="A" NodeTitle="INLine Node Title!" x:Key="DummyData"/> With a simple C# class behind it, where each of the fields is a public property. When running the app, the Dummy Data values simply do not show up in the GUI. A trivial test such as <TextBlock Text="{Binding NodeTitle}" DataContext="{StaticResource DummyData}"/> works just fine. Any ideas around where I am missing the plot?
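    One hedged explanation that fits the symptoms: the template's ContentPresenter shows the control's Content property, but the instance only ever sets DataContext, so Content stays null and the TOCNode2 DataTemplate is never instantiated (the plain TextBlock test works because it binds straight to the DataContext). Pointing Content at the data should make the bindings inside the ContentTemplate resolve:

    ```xml
    <!-- Pick one of the two. Either hand the data to Content directly... -->
    <org:NodeBodyBlock Style="{StaticResource TOCNode2}" x:Name="stTest"
                       Content="{StaticResource DummyData}" />

    <!-- ...or keep the DataContext and forward it to Content: -->
    <org:NodeBodyBlock Style="{StaticResource TOCNode2}"
                       DataContext="{StaticResource DummyData}"
                       Content="{Binding}" />
    ```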


  • Text misaligns in IE

    - by kingrichard2005
    I have a ASP.net web page I'm working with, I didn't create it myself, with the following HTML code: <DIV style="POSITION: absolute; TEXT-ALIGN: center; WIDTH: 1400px; TOP: 60px; LEFT: 125px"> <SPAN style="TEXT-ALIGN: center; FONT-SIZE: xx-large" id=labelInstructions>Some Text: <BR><BR></SPAN> <TABLE style="WIDTH: 1200px" border=1 align=center> <TBODY> <TR> <TD><LABEL style="FONT-SIZE: x-large" for=FileUpload1>ENTER Path: </LABEL><INPUT id=FileUpload1 size=70 type=file name=FileUpload1></TD> </TR> <TR> <TD><SPAN style="COLOR: red; FONT-SIZE: medium" id=fileUploadError><BR><BR></SPAN></TD> </TR> <TR> <TD> <TABLE style="WIDTH: 1200px" border=1> <TBODY> <TR> <TD style="WIDTH: 400px; FONT-SIZE: x-large" vAlign=top align=right>FILE CONTENT INSTRUCTIONS:</TD> <TD style="WIDTH: 850px; FONT-SIZE: x-large" vAlign=top align=left>INSTRUCTION 1<BR>INSTRUCTION 2<BR></TD></TR> <TR><TD></TD></TR> <TR> <TD style="WIDTH: 400px; FONT-SIZE: x-large" vAlign=top align=right>FILE CONTENT EXAMPLE:</TD> <TD style="WIDTH: 850px; FONT-SIZE: x-large" vAlign=top align=left>EXAMPLE 1<BR>EXAMPLE 2<BR><BR></TD> </TR> </TBODY> </TABLE> </TD> </TR> </TBODY> </TABLE> </DIV> When this html is displayed in IE, I notice that the alignment of the text in the cells in the inner table, i.e. the table that is in the third cell of the outer table, is distorted when zooming in and out on it. I have a fixed table setting in pixels instead of percentages, so I don't understand why this is an issue. I want the text in the cells to stay in the same position when zooming. The code must be manipulated from the code behind, so I cannot create a separate CSS file. Any help is appreciated. Here are two examples to illustrate what I'm talking about: Normal zoom at 100%: Zoom at 75%: Notice in the second image the two table cells at the bottom are slightly offset to the left. UPDATE: Yes, I understand, we will be implementing a new system in the near future. Obviously this is old and very non-standard, this was dropped in my lap when I started working with it. And we're coming up with plans for a new system to replace it, in the meantime, this is what I have to deal with.
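    A hedged observation rather than a guaranteed fix: the inner table declares WIDTH: 1200px, but its two columns declare 400px + 850px = 1250px, so IE has to reconcile the difference, and its per-element rounding under fractional zoom makes that reconciliation visible. Making the declared widths consistent and forcing fixed table layout (emitted inline from the code-behind, since a separate stylesheet is not an option) usually steadies it:

    ```html
    <!-- Sketch: consistent column widths plus fixed layout -->
    <TABLE style="WIDTH: 1200px; TABLE-LAYOUT: fixed; BORDER-COLLAPSE: collapse" border=1>
      <COL style="WIDTH: 400px">
      <COL style="WIDTH: 800px">
      <TBODY>
        <TR>
          <TD style="FONT-SIZE: x-large" vAlign=top align=right>FILE CONTENT INSTRUCTIONS:</TD>
          <TD style="FONT-SIZE: x-large" vAlign=top align=left>INSTRUCTION 1<BR>INSTRUCTION 2<BR></TD>
        </TR>
      </TBODY>
    </TABLE>
    ```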


  • MediaElement not showing in custom 3D class

    - by user3271180
    I'm trying to display a videostream in a Viewport3d. When I add the MediaElement via xaml, the video plays without a problem; even when I add the video as ModelVisual3D in the code-behind, the video works. When I abstract the video into a class, however, the video stops appearing. This happens with both web and local video files. I tried compiling with both x86 and 64 bit. Any way to fix this behaviour? Why is this happening? I have the following viewport: <Viewport3D> <!-- Camera --> <Viewport3D.Camera> <PerspectiveCamera Position="0,0,100" LookDirection="0,0,-1" UpDirection="0,1,0" /> </Viewport3D.Camera> <!-- Light --> <ModelVisual3D> <ModelVisual3D.Content> <AmbientLight Color="White" /> </ModelVisual3D.Content> </ModelVisual3D> <!-- this doesn't work --> <mediaElementTest:VideoControl /> <!-- but this does? --> <!--<ModelVisual3D> <ModelVisual3D.Content> <GeometryModel3D> <GeometryModel3D.Geometry> <MeshGeometry3D Positions="-100,-100,0 100,-100,0 100,100,0 -100,100,0" TextureCoordinates="0,1 1,1 1,0 0,0" TriangleIndices="0 1 2 0 2 3" /> </GeometryModel3D.Geometry> <GeometryModel3D.Material> <DiffuseMaterial> <DiffuseMaterial.Brush> <VisualBrush> <VisualBrush.Visual> <MediaElement Source="http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4" /> </VisualBrush.Visual> </VisualBrush> </DiffuseMaterial.Brush> </DiffuseMaterial> </GeometryModel3D.Material> </GeometryModel3D> </ModelVisual3D.Content> </ModelVisual3D>--> </Viewport3D> VideoControl.xaml <UIElement3D x:Class="MediaElementTest.VideoControl" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"/> VideoControl.xaml.cs public partial class VideoControl { public VideoControl() { InitializeComponent(); Visual3DModel = CreateModel(); } private GeometryModel3D CreateModel() { return new GeometryModel3D { Geometry = new MeshGeometry3D { Positions = new Point3DCollection { new Point3D(-100, -100, 0), new Point3D(100, -100, 0), new Point3D(100, 100, 0), new Point3D(-100, 100, 0) }, TextureCoordinates = new PointCollection { new Point(0, 1), new Point(1, 1), new Point(1, 0), new Point(0, 0) }, TriangleIndices = new Int32Collection { 0, 1, 2, 0, 2, 3 } }, Material = new DiffuseMaterial(new VisualBrush(new MediaElement { Source = new Uri("http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4", UriKind.RelativeOrAbsolute) })) }; } }
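    A commonly reported cause, offered as an assumption: a MediaElement that exists only inside a VisualBrush is never loaded into the live visual tree, so its Loaded event (and with it automatic playback) never happens, which would explain why the same markup works when the MediaElement is declared directly in the window. Driving playback manually, and keeping a strong reference so the element is not collected, is the usual workaround:

    ```csharp
    // Sketch: keep one MediaElement alive in a field and drive it manually
    // instead of newing it up inside the brush.
    public partial class VideoControl
    {
        private readonly MediaElement _video = new MediaElement
        {
            LoadedBehavior = MediaState.Manual,  // required to call Play() ourselves
            Source = new Uri("http://www.quirksmode.org/html5/videos/big_buck_bunny.mp4")
        };

        public VideoControl()
        {
            InitializeComponent();
            Visual3DModel = CreateModel();  // same mesh as in the question...
            _video.Play();                  // ...but start playback explicitly
        }

        // ...and inside CreateModel(), build the material from the field:
        // Material = new DiffuseMaterial(new VisualBrush(_video))
    }
    ```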


  • Referenced vector does not pass through functions

    - by kylepayne
    The referenced vector to functions does not hold the information in memory. Do I have to use pointers? Thanks. #include <iostream> #include <cstdlib> #include <vector> #include <string> using namespace std; void menu(); void addvector(vector<string>& vec); void subvector(vector<string>& vec); void vectorsize(const vector<string>& vec); void printvec(const vector<string>& vec); void printvec_bw(const vector<string>& vec); int main() { vector<string> svector; menu(); return 0; } //functions definitions void menu() { vector<string> svector; int choice = 0; cout << "Thanks for using this program! \n" << "Enter 1 to add a string to the vector \n" << "Enter 2 to remove the last string from the vector \n" << "Enter 3 to print the vector size \n" << "Enter 4 to print the contents of the vector \n" << "Enter 5 ----------------------------------- backwards \n" << "Enter 6 to end the program \n"; cin >> choice; switch(choice) { case 1: addvector(svector); menu(); break; case 2: subvector(svector); menu(); break; case 3: vectorsize(svector); menu(); break; case 4: printvec(svector); menu(); break; case 5: printvec_bw(svector); menu(); break; case 6: exit(1); default: cout << "not a valid choice \n"; // menu is structured so that all other functions are called from it. } } void addvector(vector<string>& vec) { //string line; //int i = 0; //cin.ignore(1, '\n'); //cout << "Enter the string please \n"; //getline(cin, line); vec.push_back("the police man's beard is half-constructed"); } void subvector(vector<string>& vec) { vec.pop_back(); return; } void vectorsize(const vector<string>& vec) { if (vec.empty()) { cout << "vector is empty"; } else { cout << vec.size() << endl; } return; } void printvec(const vector<string>& vec) { for(int i = 0; i < vec.size(); i++) { cout << vec[i] << endl; } return; } void printvec_bw(const vector<string>& vec) { for(int i = vec.size(); i > 0; i--) { cout << vec[i] << endl; } return; }
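    No pointers needed; the references are fine, the flow is the problem. menu() declares its own local svector and then calls itself recursively after every choice, so each level of the menu operates on a brand-new empty vector, and the one in main() is never used at all. (Separately, printvec_bw starts at vec.size(), which indexes one past the end; it should count down from vec.size() - 1.) A sketch of the restructured flow:

    ```cpp
    // Sketch: pass one vector by reference and loop instead of recursing.
    void menu(vector<string>& svector)
    {
        int choice = 0;
        while (true) {
            cout << "Enter 1-5 for the operations, 6 to end the program \n";
            cin >> choice;
            switch (choice) {
                case 1: addvector(svector);   break;
                case 2: subvector(svector);   break;
                case 3: vectorsize(svector);  break;
                case 4: printvec(svector);    break;
                case 5: printvec_bw(svector); break;
                case 6: return;
                default: cout << "not a valid choice \n";
            }
        }
    }

    int main()
    {
        vector<string> svector;
        menu(svector);   // the single shared vector, passed by reference
        return 0;
    }

    // And the corrected backwards print (no out-of-bounds read):
    void printvec_bw(const vector<string>& vec)
    {
        for (size_t i = vec.size(); i-- > 0; )
            cout << vec[i] << endl;
    }
    ```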


  • Separation of presentation and business logic in PHP

    - by Markus Ossi
    I am programming my first real PHP website and am wondering how to make my code more readable to myself. The reference book I am using is PHP and MySQL Web Development, 4th ed. The aforementioned book gives three approaches to separating logic and content: include files, a function or class API, or a template system. I haven't chosen any of these yet, as wrapping my brain around these concepts is taking some time. However, my code has become some hybrid of the first two, as I am just copy-pasting away here and modifying as I go. On the presentation side, all of my pages have these common elements: header, top navigation, sidebar navigation, content, right sidebar and footer. The function-based examples in the book suggest that I could have display functions that handle all the presentation. So, my page code would look like this: display_header(); display_navigation(); display_content(); display_footer(); However, I don't like this, because the examples in the book have print statements with HTML and PHP mixed up like this: echo "<tr bgcolor=\"".$color."\"><td><a href=\"".$url."\">" ... I would rather have HTML with some PHP in the middle, not the other way round. I am thinking of building my pages so that at the beginning of each page, I fetch all the data from the database and put it in arrays. I will also get the data for variables. If there are any errors in any of these processes, I will put them into error strings. Then, in the HTML, I will loop through these arrays using foreach and display the content. In some cases, there will be some variables to show. If an error variable is set, I will display it at the proper position. (As a side note: The thing I do not understand is that in most example code, if some database query or whatnot gives an error, there is always: else echo 'Error'; This baffles me, because when the example code gives an error, it is sometimes echoed out even before the HTML has started...) Having used ASP.NET, I have gotten somewhat used to code-behind files and lblError, and I am trying to do something similar here. The thing I haven't figured out is how I could do this "do logic first, then presentation" approach without replicating, for example, the navigation logic and navigation presentation in every page. Should I use include files, or could I use functions here, but a little bit differently? Are there any good articles where these "styles" of separating presentation and logic are explained a little more thoroughly? The book I have has only one paragraph about this. I suspect the concepts and ways of doing PHP programming I am describing already have names; I just don't know the terms for them yet. I know this isn't a straightforward question, I just need some help in organizing my thoughts.
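    To make the "fetch first, render later" idea concrete, here is a minimal sketch (file names and the $db handle are illustrative, not from the book). The page does all queries and error collection up front; the shared chrome and the content template are plain includes that only echo prepared variables, so the navigation logic lives in exactly one place:

    ```php
    <?php
    // ---- article_list.php: logic first ----
    $errors = array();
    $items  = array();

    $result = $db->query('SELECT title, url FROM articles'); // $db assumed configured
    if ($result === false) {
        $errors[] = 'Could not load articles.';
    } else {
        foreach ($result as $row) {
            $items[] = $row;
        }
    }

    // Only now is any HTML emitted; shared chrome is just includes.
    include 'header.php';
    include 'navigation.php';
    ?>
    <!-- ---- presentation: HTML with small PHP islands ---- -->
    <?php if ($errors): ?>
      <p class="error"><?php echo htmlspecialchars($errors[0]); ?></p>
    <?php endif; ?>
    <ul>
    <?php foreach ($items as $item): ?>
      <li><a href="<?php echo htmlspecialchars($item['url']); ?>">
        <?php echo htmlspecialchars($item['title']); ?></a></li>
    <?php endforeach; ?>
    </ul>
    <?php include 'footer.php'; ?>
    ```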


  • Rogue PropertyChanged notifications from ViewModel

    - by user1886323
    The following simple program is causing me a Databinding headache. I'm new to this which is why I suspect it has a simple answer. Basically, I have two text boxes bound to the same property myString. I have not set up the ViewModel (simply a class with one property, myString) to provide any notifications to the View for when myString is changed, so even although both text boxes operate a two way binding there should be no way that the text boxes update when myString is changed, am I right? Except... In most circumstances this is true - I use the 'change value' button at the bottom of the window to change the value of myString to whatever the user types into the adjacent text box, and the two text boxes at the top, even although they are bound to myString, do not change. Fine. However, if I edit the text in TextBox1, thus changing the value of myString (although only when the text box loses focus due to the default UpdateSourceTrigger property, see reference), TextBox2 should NOT update as it shouldn't receive any updates that myString has changed. However, as soon as TextBox1 loses focus (say click inside TextBox2) TextBox2 is updated with the new value of myString. My best guess so far is that because the TextBoxes are bound to the same property, something to do with TextBox1 updating myString gives TextBox2 a notification that it has changed. Very confusing as I haven't used INotifyPropertyChanged or anything like that. To clarify, I am not asking how to fix this. I know I could just change the binding mode to a oneway option. I am wondering if anyone can come up with an explanation for this strange behaviour? ViewModel: namespace WpfApplication1 { class ViewModel { public ViewModel() { _myString = "initial message"; } private string _myString; public string myString { get { return _myString; } set { if (_myString != value) { _myString = value; } } } } } View: <Window x:Class="WpfApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:WpfApplication1" Title="MainWindow" Height="350" Width="525"> <Window.DataContext> <local:ViewModel /> </Window.DataContext> <Grid> <!-- The culprit text boxes --> <TextBox Height="23" HorizontalAlignment="Left" Margin="166,70,0,0" Name="textBox1" VerticalAlignment="Top" Width="120" Text="{Binding Path=myString, Mode=TwoWay}" /> <TextBox Height="23" HorizontalAlignment="Left" Margin="166,120,0,0" Name="textBox2" VerticalAlignment="Top" Width="120" Text="{Binding Path=myString, Mode=TwoWay}"/> <!--The buttons allowing manual change of myString--> <Button Name="changevaluebutton" Content="change value" Click="ButtonUpdateArtist_Click" Margin="12,245,416,43" Width="75" /> <Button Content="Show value" Height="23" HorizontalAlignment="Left" Margin="12,216,0,0" Name="showvaluebutton" VerticalAlignment="Top" Width="75" Click="showvaluebutton_Click" /> <Label Content="" Height="23" HorizontalAlignment="Left" Margin="116,216,0,0" Name="showvaluebox" VerticalAlignment="Top" Width="128" /> <TextBox Height="23" HorizontalAlignment="Left" Margin="116,245,0,0" Name="changevaluebox" VerticalAlignment="Top" Width="128" /> <!--simply some text--> <Label Content="TexBox1" Height="23" HorizontalAlignment="Left" Margin="99,70,0,0" Name="label1" VerticalAlignment="Top" Width="61" /> <Label Content="TexBox2" Height="23" HorizontalAlignment="Left" Margin="99,118,0,0" Name="label2" VerticalAlignment="Top" Width="61" /> </Grid> </Window> Code behind for view: 
namespace WpfApplication1 { /// <summary> /// Interaction logic for MainWindow.xaml /// </summary> public partial class MainWindow : Window { ViewModel viewModel; public MainWindow() { InitializeComponent(); viewModel = (ViewModel)this.DataContext; } private void showvaluebutton_Click(object sender, RoutedEventArgs e) { showvaluebox.Content = viewModel.myString; } private void ButtonUpdateArtist_Click(object sender, RoutedEventArgs e) { viewModel.myString = changevaluebox.Text; } } }
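    There is a known explanation, and it does not require INotifyPropertyChanged: for a plain CLR property, WPF's binding engine falls back to subscribing through PropertyDescriptor.AddValueChanged. When TextBox1's TwoWay binding writes myString, it writes through that descriptor, the descriptor raises ValueChanged, and every other binding on the same object and property (TextBox2's included) refreshes. The button handler sets the CLR property directly, bypassing the descriptor, which is why it stays "silent". A sketch of the same mechanism outside WPF:

    ```csharp
    using System;
    using System.ComponentModel;

    class Demo
    {
        static void Main()
        {
            var vm = new ViewModel(); // the ViewModel class from the question
            PropertyDescriptor pd = TypeDescriptor.GetProperties(vm)["myString"];
            pd.AddValueChanged(vm, (s, e) =>
                Console.WriteLine("ValueChanged fired - the 'rogue' notification"));

            pd.SetValue(vm, "hello");        // like a binding writing the source: notifies
            vm.myString = "silent change";   // like the button handler: no notification
        }
    }
    ```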


  • ASA 5540 v8.4(3) VPN to ASA 5505 v8.2(5), tunnel up but I can't ping from 5505 to an IP on the other side

    - by user223833
    I am having problems pinging from a 5505(remote) to IP 10.160.70.10 in the network behind the 5540(HQ side). 5505 inside IP: 10.56.0.1 Out: 71.43.109.226 5540 Inside: 10.1.0.8 out: 64.129.214.27 I Can ping from 5540 to 5505 inside 10.56.0.1. I also ran ASDM packet tracer in both directions, it is ok from 5540 to 5505, but drops the packet from 5505 to 5540. It gets through the ACL and dies at the NAT. Here is the 5505 config, I am sure it is something simple I am missing. ASA Version 8.2(5) ! hostname ASA-CITYSOUTHDEPOT domain-name rngint.net names ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! interface Vlan1 nameif inside security-level 100 ip address 10.56.0.1 255.255.0.0 ! interface Vlan2 nameif outside security-level 0 ip address 71.43.109.226 255.255.255.252 ! banner motd ***ASA-CITYSOUTHDEPOT*** banner asdm CITY SOUTH DEPOT ASA5505 ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns server-group DefaultDNS domain-name rngint.net access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.1.0.125 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0 access-list outside_1_cryptomap extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0 access-list outside_1_cryptomap extended permit ip host 71.43.109.226 host 10.160.70.10 access-list inside_nat0_outbound extended permit ip host 71.43.109.226 host 10.1.0.125 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.0.0.0 255.0.0.0 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.130.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip 10.56.0.0 255.255.0.0 10.106.70.0 255.255.255.0 access-list inside_nat0_outbound extended permit ip host 71.43.109.226 10.106.70.0 255.255.255.0 pager lines 24 logging enable logging buffer-size 25000 logging buffered informational logging asdm warnings mtu inside 1500 mtu outside 1500 icmp unreachable rate-limit 1 burst-size 1 icmp permit any inside no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list inside_nat0_outbound nat (inside) 1 0.0.0.0 0.0.0.0 route outside 0.0.0.0 0.0.0.0 71.43.109.225 1 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 timeout floating-conn 0:00:00 dynamic-access-policy-record DfltAccessPolicy aaa-server TACACS+ protocol tacacs+ aaa-server TACACS+ (inside) host 10.106.70.36 key ***** aaa authentication http console LOCAL aaa authentication ssh console LOCAL aaa authorization exec authentication-server http server enable http 192.168.1.0 255.255.255.0 inside http 10.0.0.0 255.0.0.0 inside http 0.0.0.0 0.0.0.0 outside snmp-server host inside 10.106.70.7 community ***** no snmp-server location no snmp-server contact snmp-server community ***** snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec transform-set ESP-AES-128-SHA 
esp-aes esp-sha-hmac crypto ipsec transform-set ESP-AES-128-MD5 esp-aes esp-md5-hmac crypto ipsec transform-set ESP-AES-192-SHA esp-aes-192 esp-sha-hmac crypto ipsec transform-set ESP-AES-192-MD5 esp-aes-192 esp-md5-hmac crypto ipsec transform-set ESP-AES-256-SHA esp-aes-256 esp-sha-hmac crypto ipsec transform-set ESP-AES-256-MD5 esp-aes-256 esp-md5-hmac crypto ipsec transform-set ESP-3DES-MD5 esp-3des esp-md5-hmac crypto ipsec transform-set ESP-DES-SHA esp-des esp-sha-hmac crypto ipsec transform-set ESP-DES-MD5 esp-des esp-md5-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto map outside_map 1 match address outside_1_cryptomap crypto map outside_map 1 set pfs group1 crypto map outside_map 1 set peer 64.129.214.27 crypto map outside_map 1 set transform-set ESP-3DES-SHA crypto map outside_map interface outside crypto isakmp enable outside crypto isakmp policy 1 authentication pre-share encryption des hash md5 group 2 lifetime 86400 telnet timeout 5 ssh 10.0.0.0 255.0.0.0 inside ssh 0.0.0.0 0.0.0.0 outside ssh timeout 5 console timeout 0 management-access inside dhcpd auto_config outside ! dhcpd address 10.56.0.100-10.56.0.121 inside dhcpd dns 10.1.0.125 interface inside dhcpd auto_config outside interface inside ! dhcprelay server 10.1.0.125 outside dhcprelay enable inside dhcprelay setroute inside dhcprelay timeout 60 threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept tftp-server inside 10.1.1.25 CITYSOUTHDEPOT-ASA-Confg webvpn tunnel-group 64.129.214.27 type ipsec-l2l tunnel-group 64.129.214.27 ipsec-attributes pre-shared-key ***** ! ! prompt hostname context
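    Without the 5540's configuration this can only be a guess, but "dies at NAT" with an 8.4 peer is very often a missing identity (twice) NAT rule: 8.4 dropped the old "nat 0 access-list" exemption, so the HQ side needs something like the sketch below (object names invented). It is also worth double-checking the crypto ACL entries that mix the outside interface address (71.43.109.226) with inside hosts; the proxy identities must mirror the 5540's exactly on both peers.

    ```text
    ! HQ-side (v8.4) sketch only - adjust object names and subnets to the real config:
    object network HQ-NET
     subnet 10.0.0.0 255.0.0.0
    object network DEPOT-NET
     subnet 10.56.0.0 255.255.0.0
    nat (inside,outside) source static HQ-NET HQ-NET destination static DEPOT-NET DEPOT-NET no-proxy-arp route-lookup
    ```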


  • How can I make the storage of C++ lambda objects more efficient?

    - by Peter Ruderman
    I've been thinking about storing C++ lambda's lately. The standard advice you see on the Internet is to store the lambda in a std::function object. However, none of this advice ever considers the storage implications. It occurred to me that there must be some seriously black voodoo going on behind the scenes to make this work. Consider the following class that stores an integer value: class Simple { public: Simple( int value ) { puts( "Constructing simple!" ); this->value = value; } Simple( const Simple& rhs ) { puts( "Copying simple!" ); this->value = rhs.value; } Simple( Simple&& rhs ) { puts( "Moving simple!" ); this->value = rhs.value; } ~Simple() { puts( "Destroying simple!" ); } int Get() const { return this->value; } private: int value; }; Now, consider this simple program: int main() { Simple test( 5 ); std::function<int ()> f = [test] () { return test.Get(); }; printf( "%d\n", f() ); } This is the output I would hope to see from this program: Constructing simple! Copying simple! Moving simple! Destroying simple! 5 Destroying simple! Destroying simple! First, we create the value test. We create a local copy on the stack for the temporary lambda object. We then move the temporary lambda object into memory allocated by std::function. We destroy the temporary lambda. We print our output. We destroy the std::function. And finally, we destroy the test object. Needless to say, this is not what I see. When I compile this on Visual C++ 2010 (release or debug mode), I get this output: Constructing simple! Copying simple! Copying simple! Copying simple! Copying simple! Destroying simple! Destroying simple! Destroying simple! 5 Destroying simple! Destroying simple! Holy crap that's inefficient! Not only did the compiler fail to use my move constructor, but it generated and destroyed two apparently superfluous copies of the lambda during the assignment. So, here finally are the questions: (1) Is all this copying really necessary? (2) Is there some way to coerce the compiler into generating better code? Thanks for reading!
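    One mitigation worth trying, hedged because VC++ 2010's std::function shipped before some of the move-semantics fixes and may still copy where newer libraries move: name the lambda, then move it into the std::function, so at most one copy (into the closure itself) remains. If the lifetime of test allows it, capturing by reference avoids even that:

    ```cpp
    int main()
    {
        Simple test( 5 );

        auto lambda = [test] () { return test.Get(); };   // one copy: into the closure
        std::function<int ()> f( std::move( lambda ) );   // move-construct, don't copy

        // Zero-copy alternative when 'test' outlives 'g':
        std::function<int ()> g( [&test] () { return test.Get(); } );

        printf( "%d %d\n", f(), g() );
    }
    ```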


  • Update transaction in SQL Server 2008 R2 from ASP.Net not working

    - by Amarus
    Hello! Even though I've been a stalker here for ages, this is the first post I'm making. Hopefully, it won't end here and more optimistically future posts might actually be me trying to give a hand to someone else, I do owe this community that much and more. Now, what I'm trying to do is simple and most probably the reason behind it not working is my own stupidity. However, I'm stumped here. I'm working on an ASP.Net website that interacts with an SQL Server 2008 R2 database. So far everything has been going okay but updating a row (or more) just won't work. I even tried copying and pasting code from this site and others but it's always the same thing. In short: No exception or errors are shown when the update command executes (it even gives the correct count of affected rows) but no changes are actually made on the database. Here's a simplified version of my code (the original had more commands and tons of parameters each, but even when it's like this it doesn't work): protected void btSubmit_Click(object sender, EventArgs e) { using (SqlConnection connection = new SqlConnection(System.Configuration.ConfigurationManager.ConnectionStrings["ApplicationServices"].ConnectionString)) { string commandString = "UPDATE [impoundLotAlpha].[dbo].[Vehicle]" + "SET [VehicleMake] = @VehicleMake" + " WHERE [ComplaintID] = @ComplaintID"; using (SqlCommand command = new SqlCommand(commandString, connection)) { SqlTransaction transaction = null; try { command.Connection.Open(); transaction = connection.BeginTransaction(IsolationLevel.Serializable); command.Transaction = transaction; SqlParameter complaintID = new SqlParameter("@complaintID", SqlDbType.Int); complaintID.Value = HttpContext.Current.Request.QueryString["complaintID"]; command.Parameters.Add(complaintID); SqlParameter VehicleMake = new SqlParameter("@VehicleMake", SqlDbType.VarChar, 20); VehicleMake.Value = tbVehicleMake.Text; command.Parameters.Add(VehicleMake); command.ExecuteNonQuery(); transaction.Commit(); } catch { transaction.Rollback(); throw; } finally { connection.Close(); } } } } I've tried this with the "SqlTransaction" stuff and without it and nothing changes. Also, since I'm doing multiple updates at once, I want to have them act as a single transaction. I've found that it can be either done like this or by use of the classes included in the System.Transactions namespace (CommittableTransaction, TransactionScope...). I tried all I could find but didn't get any different results. The connection string in web.config is as follows: <connectionStrings> <add name="ApplicationServices" connectionString="Data Source=localhost;Initial Catalog=ImpoundLotAlpha;Integrated Security=True" providerName="System.Data.SqlClient"/> </connectionStrings> So, tldr; version: What is the mistake that I did with that record update attempt? (Figured it out, check below if you're having a similar issue.) What is the best method to gather multiple update commands as a single transaction? Thanks in advance for any kind of help and/or suggestions! Edit: It seems that I was lacking some sleep yesterday cause this time it only took me 5 minutes to figure out my mistake. Apparently the update was working properly but I failed to notice that the textbox values were being overwritten in Page_Load. For some reason I had this part commented: if (IsPostBack) return; The second part of the question still stands. But should I post this as an answer to my own question or keep it like this?
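    On the second part of the question (gathering several updates into a single transaction), here is a TransactionScope sketch; add a reference to System.Transactions, and note that variable names follow the question's. With a single open connection this stays a lightweight local transaction; enlisting a second connection would escalate it to MSDTC:

    ```csharp
    using System.Data.SqlClient;
    using System.Transactions;

    using (var scope = new TransactionScope())
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open(); // auto-enlists in the ambient transaction

        using (var command = new SqlCommand(
            "UPDATE [dbo].[Vehicle] SET [VehicleMake] = @VehicleMake" +
            " WHERE [ComplaintID] = @ComplaintID", connection))
        {
            command.Parameters.AddWithValue("@VehicleMake", tbVehicleMake.Text);
            command.Parameters.AddWithValue("@ComplaintID", complaintID);
            command.ExecuteNonQuery();
        }

        // ...further UPDATE commands on the same connection...

        scope.Complete(); // nothing is committed unless execution reaches this line
    }
    ```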


  • What needs updating when moving a bootable Windows 7 (or Vista) partition?

    - by SuperTempel
    When I move a bootable NTFS partition with Windows on it to a different block offset, what needs updating to make it bootable again? In particular, here's what I tried: I have a disk with several partitions, one of which is the NTFS partition with Windows on it, and the disk uses the plain old MBR block 0 for the partition layout (no more than 4 partitions). Now I format and partition a new, larger disk. There I make room for the NTFS partition and copy the contents of the old disk's NTFS Windows partition into it. And I make the partition "active". However, when I try to boot from this disk, I get a "read error" message immediately and booting stops; the exact text is: A disk read error occurred Press Ctrl+Alt+Del to restart I verified that both disks have the same boot sector code in block 0. It seems to me that something else might need updating. I guess that somewhere there's an absolute block reference that I need to update, probably pointing to the next-level loader or to the NT kernel. Update: I found this article going quite into the depths of what I want to know. However, it says to modify boot.ini, but I have Windows 7 installed here, where such things appear to have changed: no boot.ini, but a folder called System Volume Information with GUIDs and other data in it that sounds related to my problem. Going to keep digging... Update 2: Thanks to the terrible-looking but very informative website by starman, I was able to figure out the first step: The NTFS boot sector has a field for "hidden" sectors. This field has to contain the sector number of the boot sector. This solves the "read error" message. Now, however, I get a "BOOTMGR is missing" error instead. Looks like there's another place where a block number has to be adjusted, but I can't find anything in the code listing about this. I do find a lot of help sites suggesting Windows tools for fixing this "BOOTMGR is missing" problem, but none seem to know what goes on behind the scenes. Kind of like suggesting to re-install Windows when there's a little problem with it. At least, those fixes seem to work, mostly involving the Bcdedit and Bootrec tools. Now, who knows what they do, especially the latter, in regard to a moved partition? Update 3: After lots of trial-and-error attempts, I believe now that the solution lies in the BCD-Template registry file, usually residing inside \Windows\System32\config. If I get this updated using the "bcdboot" command, Windows starts up from it. I am now in the middle of figuring out what information this registry contains relevant to the above question. Any pointers to the contents of this registry are welcome. Update 4: Turns out that while the BCD-Template file gets rewritten and has different binary contents than its predecessor, the values inside do not change. So it must be something else that bcdboot.exe writes. I had already checked whether it changes the first 32 boot blocks of the partition, but they appear to remain unchanged. The partition map doesn't get changed, either. So what is it that bcdboot modifies besides the BCD registry? Any tips on how I can trace that? Are there low-level tools that show me what files a program writes to? Update 5: The answer seems to be: c:\Boot\BCD is also changed, and that appears to be the key file for the boot manager's process. I'll investigate this later...
Update 6: It seems to be an important detail that I had originally two partitions created when I installed Windows 7: A small partition of 204800 sectors which appears to be a bootstrap partition, followed by the actual, large, partition containing the Windows system (drive C:). When I tried to transfer this installation to a new, larger, disk, I had kept the same two partitions intact on the new drive, although they ended up at a different offset. This alone led to the "BOOTMGR is missing" message. Since then, I've used bcdboot.exe only on the Windows partition, which added the \Boot\BCD file on that partition. That file (and folder) did originally only exist on the smaller partition. Hence, this problem may be more complicated in my case as one partition (the boot strapper) referred to another partition (the one containing the OS), whereas other people may only have to deal with one partition containing both, and maybe there the solution is simpler. Update 7: Found one more detail: The \Boot\BCD file records the MBR's serial number. If that number doesn't match, the system won't boot. Next I'll test if there's also an absolute block reference stored in there.
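    For anyone landing here with the same symptoms, the standard repair sequence from a Windows 7 recovery prompt maps neatly onto the findings above; it is offered as the conventional route, not as an explanation of the internals being investigated:

    ```bat
    rem Rewrite the MBR boot code and the partition boot sector (the latter is
    rem where the "hidden sectors" field from Update 2 lives):
    bootrec /fixmbr
    bootrec /fixboot

    rem Regenerate \Boot\BCD on the active partition; this rewrites the
    rem partition/disk-signature references described in Update 7:
    bcdboot C:\Windows /s C:

    rem If the store exists but points at the old location, retarget it:
    bcdedit /set {default} device partition=C:
    bcdedit /set {default} osdevice partition=C:
    ```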


  • C# - Must declare the scalar variable "@ms_id" - Error

    - by user1075106
    I'm writing an web-app that keeps track of deadlines. With this app you have to be able to update records that are being saved in an SQL DB. However I'm having some problem with my update in my aspx-file. <asp:GridView ID="gv_editMilestones" runat="server" DataSourceID="sql_ds_milestones" CellPadding="4" ForeColor="#333333" GridLines="None" Font-Size="Small" AutoGenerateColumns="False" DataKeyNames="id" Visible="false" onrowupdated="gv_editMilestones_RowUpdated" onrowupdating="gv_editMilestones_RowUpdating" onrowediting="gv_editMilestones_RowEditing"> <RowStyle BackColor="#F7F6F3" ForeColor="#333333" /> <Columns> <asp:CommandField ShowEditButton="True" /> <asp:BoundField DataField="id" HeaderText="id" SortExpression="id" ReadOnly="True" Visible="false"/> <asp:BoundField DataField="ms_id" HeaderText="ms_id" SortExpression="ms_id" ReadOnly="True"/> <asp:BoundField DataField="ms_description" HeaderText="ms_description" SortExpression="ms_description"/> <%-- <asp:BoundField DataField="ms_resp_team" HeaderText="ms_resp_team" SortExpression="ms_resp_team"/>--%> <asp:TemplateField HeaderText="ms_resp_team" SortExpression="ms_resp_team"> <ItemTemplate> <%# Eval("ms_resp_team") %> </ItemTemplate> <EditItemTemplate> <asp:DropDownList ID="DDL_ms_resp_team" runat="server" DataSourceID="sql_ds_ms_resp_team" DataTextField="team_name" DataValueField="id"> <%--SelectedValue='<%# Bind("ms_resp_team") %>'--%> </asp:DropDownList> </EditItemTemplate> </asp:TemplateField> <asp:BoundField DataField="ms_focal_point" HeaderText="ms_focal_point" SortExpression="ms_focal_point" /> <asp:BoundField DataField="ms_exp_date" HeaderText="ms_exp_date" SortExpression="ms_exp_date" DataFormatString="{0:d}"/> <asp:BoundField DataField="ms_deal" HeaderText="ms_deal" SortExpression="ms_deal" ReadOnly="True"/> <asp:CheckBoxField DataField="ms_active" HeaderText="ms_active" SortExpression="ms_active"/> </Columns> <FooterStyle BackColor="#CCCC99" /> <PagerStyle BackColor="#F7F7DE" ForeColor="Black" HorizontalAlign="Right" /> <SelectedRowStyle BackColor="#CE5D5A" Font-Bold="True" ForeColor="White" /> <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" /> <AlternatingRowStyle BackColor="White" /> <EditRowStyle BackColor="#999999" /> </asp:GridView> <asp:SqlDataSource ID="sql_ds_milestones" runat="server" ConnectionString="<%$ ConnectionStrings:testServer %>" SelectCommand="SELECT [id] ,[ms_id] ,[ms_description] ,(SELECT [team_name] FROM [NSBP].[dbo].[tbl_teams] as teams WHERE milestones.[ms_resp_team] = teams.[id]) as 'ms_resp_team' ,[ms_focal_point] ,[ms_exp_date] ,(SELECT [deal] FROM [NSBP].[dbo].[tbl_deals] as deals WHERE milestones.[ms_deal] = deals.[id]) as 'ms_deal' ,[ms_active] FROM [NSBP].[dbo].[tbl_milestones] as milestones" UpdateCommand="UPDATE [NSBP].[dbo].[tbl_milestones] SET [ms_description] = @ms_description ,[ms_focal_point] = @ms_focal_point ,[ms_active] = @ms_active WHERE [ms_id] = @ms_id"> <UpdateParameters> <asp:Parameter Name="ms_description" Type="String" /> <%-- <asp:Parameter Name="ms_resp_team" Type="String" />--%> <asp:Parameter Name="ms_focal_point" Type="String" /> <asp:Parameter Name="ms_exp_date" Type="DateTime" /> <asp:Parameter Name="ms_active" Type="Boolean" /> <%-- <asp:Parameter Name="ms_id" Type="String" />--%> </UpdateParameters> </asp:SqlDataSource> You can see my complete GridView-structure + my datasource bound to this GridView. There is nothing written in my onrowupdating-function in my code-behind file. Thx in advance
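    The error itself has a mechanical cause: every @name referenced in UpdateCommand must be supplied a value, and ms_id never is, because its BoundField is ReadOnly and its UpdateParameter is commented out, so the grid sends nothing for @ms_id. Two hedged ways out, keeping the markup above otherwise intact:

    ```xml
    <!-- Option 1: key the UPDATE on the grid's DataKey (DataKeyNames="id"),
         whose value the GridView always supplies automatically: -->
    UpdateCommand="UPDATE [NSBP].[dbo].[tbl_milestones]
                      SET [ms_description] = @ms_description,
                          [ms_focal_point] = @ms_focal_point,
                          [ms_active]      = @ms_active
                    WHERE [id] = @id"

    <!-- Option 2: keep WHERE [ms_id] = @ms_id, but make ms_id part of the keys
         so its value is round-tripped even though the column is read-only
         (other attributes unchanged): -->
    <asp:GridView ID="gv_editMilestones" runat="server" DataKeyNames="id,ms_id">
    ```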


  • DataGrid : Binding with two different classes with lists ? WPF C#

    - by MyRestlessDream
    It is my first question on StackOverflow, so I hope I am not doing anything wrong! Sorry if I am! I need some help because I cannot find the solution to my problem. Of course I have searched everywhere on the web, but I cannot find it (I cannot post the links I am using because of my low reputation :( ). Moreover, I am new to C# and WPF (and self-learning). I used to work in C++/Qt, so I do not know how everything works in WPF. And sorry for my English, I am French. My problem: My basic classes are an Employee who can use a Computer; the ID of the computer and the date of use are stored in the Connection class. I would like to display the list information in a DataGrid and in the RowDetailsTemplate, like here: http://i.stack.imgur.com/Bvn1z.png So it will bind to the Employee class, but also to the Connection class with only the last value of the property (here the last value of the "Computer ID" list and the last value of the "Connection Date" list on this last computer). So it has to loop through the different lists. How can I do it? Is it too much to ask? :( I managed to get the Employee information, but I do not know how to bind the list of computers. When I try, it shows me "(Collection)", so it does not go inside the list :( Summary of Questions: How do I display/bind a value from a list AND from a different class in a DataGrid? How do I display all the values of a list in the RowDetailsTemplate? Under Windows 7 and Visual Studio 2010 Pro. EDIT: Solution: I used the solution from Stefan Denchev. Here is the modification of my class: http://i.stack.imgur.com/Ijx5i.png And the code used: <DataGrid ItemsSource="{Binding}" Name="table"> <DataGrid.Columns> <DataGridTextColumn Header="First Name" Binding="{Binding FirstName}"/> <DataGridTextColumn Header="Last Name" Binding="{Binding LastName}"/> <DataGridTextColumn Header="Gender" Binding="{Binding Gender}"/> <DataGridTextColumn Header="Last computer used" Binding="{Binding LastComputerID}"/> <DataGridTextColumn Header="Last connection date" Binding="{Binding LastDate}"/> </DataGrid.Columns> <DataGrid.RowDetailsTemplate> <DataTemplate> <DataGrid ItemsSource="{Binding ListOfConnection}"> <DataGrid.Columns> <DataGridTextColumn Header="Computer ID" Binding="{Binding ComputerID}"/> <DataGridTemplateColumn> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <ListView ItemsSource="{Binding ListOfDate}"/> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> </DataGrid.Columns> </DataGrid> </DataTemplate> </DataGrid.RowDetailsTemplate> </DataGrid> With the following in the code-behind: List<Employee> allEmployees = WorkflowMgr.Instance.AllEmployees; table.DataContext = allEmployees; And it works! I have tried to improve my fake example :) I hope it will help another developer!


  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with this super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that, means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email, means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes). 
Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010  quick step of "Conversation To Folder" feature to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders. 
Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations.

D. DO: Use Outlook Search folders in combination with categories

As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read, and you have to decide whether:

It can take 2 minutes to deal with for good, right now, or

It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of:

ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this.

Action – no email is required, but I need to do something.

ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later.

WaitingFor – the email is informing me of an intermediate status and 'promising' a future email update. Need to track.

SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it.

For all these 'next steps' use Outlook categories (right-click on the email and assign a category, or use a shortcut key). Note that I also use the category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with), process the search folders, starting with ToReply and Action.

E. DO: Get into a Routine

Now that you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking:

Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes).

Spend around 30 minutes at the end of each day processing the most urgent items in the search folders.

Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week.

F. Other resources

Official Outlook help on: creating custom-action rules, managing e-mail messages with rules, and creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to Folder one. If you've read "Getting Things Done", it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up two Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • SQL SERVER – Shrinking NDF and MDF Files – Readers’ Opinion

    - by pinaldave
    Previously, I had written a blog post about SQL SERVER – Shrinking NDF and MDF Files – A Safe Operation. After that, I wrote the following blog post about the advantages and disadvantages of shrinking, and why one should not shrink a file: SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008. On this subject, SQL Server expert Imran Mohammed left an excellent comment. I feel that his comment is worth a big article in itself. So that everybody can read his wonderful explanation, I am posting it here. Thanks Imran! Shrinking a database always causes performance degradation and increases fragmentation in the database; I suggest you keep that in mind before you start reading the following comment. If you are going to say that shrinking a database is bad and evil, here I am saying it first and loud. Now, Imran's comment was written with only the mechanics of the Shrink Database operation in mind. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran's comments, as there are a few corrections needed there.

Comments from Imran –

Before I explain the concept of Shrink Database, let us understand the concept of database files. When we create a new database inside SQL Server, SQL Server typically creates two physical files in the operating system: one with the .MDF extension, and another with the .LDF extension. .MDF is called the Primary Data File. .LDF is called the Transaction Log File. If you add one or more data files to a database, the physical file created in the operating system will have the extension .NDF, which is called a Secondary Data File; whereas when you add one or more log files to a database, the physical file created in the operating system will have the same .LDF extension.

The questions now are: "Why does a new data file have a different extension (.NDF)?", "Why is it called a secondary data file?" and "Why is the .MDF file called a primary data file?"

Answers: Note: The following explanation is based on my limited knowledge of SQL Server, so experts, please do comment.

A data file with an .MDF extension is called a Primary Data File because it contains the database catalogs. Catalogs mean metadata, and metadata is "data about data". Examples of metadata include the system objects that store information about other objects, apart from the data stored by the users: sysobjects stores information about all objects in that database; sysindexes stores information about all indexes and rows of every table in that database; syscolumns stores information about all columns that each table has in that database; sysusers stores information about the users of that database. Although metadata stores information about other objects, it is not the transactional data that a user enters; rather, it is system data about the data. Because the Primary Data File (.MDF) contains important information about the database, it is treated as a special file, and it is given the name Primary Data File because it contains the database catalogs. This file is present in the primary filegroup. You can always create additional objects (tables, indexes etc.) in the primary data file (which is present in the primary filegroup), by specifying that you want to create the object under the primary filegroup.
Any additional data file that you add to the database will have only transactional data but no metadata, which is why it is called a Secondary Data File. It is given the extension .NDF so that the user can easily identify whether a specific data file is a primary or a secondary data file. There are many advantages to storing data in different files under different filegroups. You can put your read-only tables in one file (filegroup) and your read-write tables in another file (filegroup), then back up only the filegroup that has the read-write data, avoiding backups of read-only data that cannot be altered. Creating additional files on different physical hard disks also improves I/O performance.

A real-time scenario where we would use multiple files could be this one: let's say you have created a database called MYDB on the D: drive, which has 50 GB of space. You have one data file (.MDF) and one log file on the D: drive, and suppose all of that 50 GB has been used up and you have no free space left, but you still want to add space to the database. One easy option would be to add another physical hard disk to the server, add a new data file to the MYDB database on that new disk, move some of the objects from one file to the other, and make the filegroup under which you added the new file the default filegroup, so that any new object that is created goes into the new file unless specified otherwise.

Now that we have a basic idea of what data files are, what type of data they store and why they are named the way they are, let's move on to the next topic: shrinking.

First of all, I disagree with the Microsoft terminology of naming this feature "Shrinking". Shrinking, in regular terms, means reducing the size of a file by compressing it. BUT in SQL Server, shrinking DOES NOT mean compressing. Shrinking in SQL Server means removing empty space from the database files and releasing that empty space either to the operating system or to SQL Server.

Let's examine this through an example. Say you have a database "MYDB" with a size of 50 GB and about 20 GB of free space, which means 30 GB of the database is filled with data and 20 GB is free because it is not currently utilized by SQL Server (the database); it is reserved and not yet in use. If you choose to shrink the database and release the empty space to the operating system, then – MIND YOU – you can only shrink the database down to 30 GB (in our example). You cannot shrink the database to a size less than what is filled with data.

So, if you have a database that is full and has no empty space in the data file and log file (and you don't have extra disk space to turn the autogrow option ON), YOU CANNOT issue the SHRINK Database/File command, for two reasons: (1) there is no empty space to be released, because the Shrink command does not compress the database; it only removes empty space from the database files, and there is none; (2) remember, the Shrink command is a logged operation – when we perform the Shrink operation, this information is logged in the log file, so if there is no empty space in the log file, SQL Server cannot write to the log file and you cannot shrink the database.

Now answering your questions:

(1) Q: What are the USEDPAGES & ESTIMATEDPAGES that appear in the Results pane after using DBCC SHRINKDATABASE (NorthWind, 10)?
A: According to Books Online (for SQL Server 2000): UsedPages is the number of 8-KB pages currently used by the file; EstimatedPages is the number of 8-KB pages that SQL Server estimates the file could be shrunk down to.

Important note: before asking any question, make sure you go through Books Online or do a quick search first. Doing so has many advantages: 1. If someone else has already had this question, the chances that it is already answered are better than 50%. 2. It reduces your waiting time for the answer.

(2) Q: What is the difference between shrinking the database using a DBCC command like the one above and shrinking it from the Enterprise Manager console by right-clicking the database, going to TASKS and then selecting the SHRINK option, in a SQL Server 2000 environment?

A: As far as my knowledge goes, there is no difference; both work the same way. One advantage of running the command from Query Analyzer is that your console won't freeze, so you can carry on with your regular activities in Enterprise Manager.

(3) Q: What is this .NDF file that is discussed above? I have never heard of it. What is it used for? Is it used by end users, DBAs or the server/system itself?

A: An .NDF file is a secondary data file. You have never heard of it because when a database is created, SQL Server by default creates it with only one data file (.MDF) and one log file (.LDF) – or however your model database has been set up, because the model database is the template used every time you create a new database with the CREATE DATABASE command. Unless you have added an extra data file, you will not see one. This file is used by SQL Server to store the data saved by users.

Hope this information helps. I would like to ask the experts to please comment if what I understand is not what the Microsoft guys meant.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Readers Contribution, Readers Question, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
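As a small addendum to Imran's first answer: you can watch UsedPages and EstimatedPages come back from code as well. Here is a hedged C# sketch (the connection string, logical file name and shrink target are placeholders; the column names are the ones documented for the DBCC SHRINKFILE result set):

using System;
using System.Data.SqlClient;

class ShrinkFileSketch
{
    static void Main()
    {
        // Placeholder connection string - point it at your own server/database.
        using (var conn = new SqlConnection(
            "Server=.;Database=NorthWind;Integrated Security=true"))
        {
            conn.Open();

            // Shrink the logical file 'NorthWind' toward a 10 MB target,
            // mirroring the DBCC example discussed above.
            using (var cmd = new SqlCommand("DBCC SHRINKFILE (NorthWind, 10)", conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Both counts are in 8-KB pages, as noted in the answer.
                    Console.WriteLine("UsedPages={0}, EstimatedPages={1}",
                        reader["UsedPages"], reader["EstimatedPages"]);
                }
            }
        }
    }
}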

    Read the article

  • Using LINQ Distinct: With an Example on ASP.NET MVC SelectListItem

    - by Joe Mayo
    One of the things that might be surprising about the LINQ Distinct standard query operator is that it doesn’t automatically work properly on custom classes. There are reasons for this, which I’ll explain shortly. The example in this post focuses on pulling a unique list of names to load into a drop-down list. I’ll explain the sample application, show you a typical first shot at Distinct, explain why it won’t work as you expect, and then demonstrate a solution that makes Distinct work with any custom class. The technologies I’m using are LINQ to Twitter, LINQ to Objects, Telerik Extensions for ASP.NET MVC, ASP.NET MVC 2, and Visual Studio 2010.

The function of the example program is to show a list of people that I follow. In Twitter API vernacular, these people are called “Friends”, though I’ve never met most of them in real life. This is part of the ubiquitous language of social networking, and Twitter in particular, so you’ll see my objects named accordingly. Where Distinct comes into play is that I want a drop-down list with the names of the friends appearing in it. Some friends are quite verbose, which means I can’t just extract names from each tweet and populate the drop-down; otherwise, I would end up with many duplicate names. Therefore, Distinct is the appropriate operator to eliminate the extra entries from my friends who tend to be enthusiastic tweeters. The sample doesn’t do anything with the drop-down list, and I leave its practical purpose up to your imagination; perhaps a filter for the list if I only want to see a certain person’s tweets, or maybe a quick list that I plan to combine with a TextBox and Button to reply to a friend. When the program runs, you’ll need to authenticate with Twitter, because I’m using OAuth (DotNetOpenAuth) for authentication, and then you’ll see the drop-down list of names above the grid with the most recent tweets from friends. Here’s what the application looks like when it runs:

As you can see, there is a drop-down list above the grid. The drop-down list is where most of the focus of this article will be. There is some description of the code before we talk about the Distinct operator, but we’ll get there soon. This is an ASP.NET MVC 2 application, written with VS 2010.
Here’s the View that produces this screen:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    Inherits="System.Web.Mvc.ViewPage<TwitterFriendsViewModel>" %>
<%@ Import Namespace="DistinctSelectList.Models" %>
<asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    Home Page
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <fieldset>
        <legend>Twitter Friends</legend>
        <div>
            <%= Html.DropDownListFor(
                    twendVM => twendVM.FriendNames,
                    Model.FriendNames,
                    "<All Friends>") %>
        </div>
        <div>
            <% Html.Telerik().Grid<TweetViewModel>(Model.Tweets)
                   .Name("TwitterFriendsGrid")
                   .Columns(cols =>
                   {
                       cols.Template(col =>
                       { %>
                           <img src="<%= col.ImageUrl %>"
                                alt="<%= col.ScreenName %>" />
                       <% });
                       cols.Bound(col => col.ScreenName);
                       cols.Bound(col => col.Tweet);
                   })
                   .Render(); %>
        </div>
    </fieldset>
</asp:Content>

As shown above, the Grid is from Telerik’s Extensions for ASP.NET MVC. The first column is a template that renders the user’s avatar from a URL provided by the Twitter query. Both the Grid and DropDownListFor display properties that are collections on a TwitterFriendsViewModel class, shown below:

using System.Collections.Generic;
using System.Web.Mvc;

namespace DistinctSelectList.Models
{
    /// <summary>
    /// For finding friend info on screen
    /// </summary>
    public class TwitterFriendsViewModel
    {
        /// <summary>
        /// Display names of friends in drop-down list
        /// </summary>
        public List<SelectListItem> FriendNames { get; set; }

        /// <summary>
        /// Display tweets in grid
        /// </summary>
        public List<TweetViewModel> Tweets { get; set; }
    }
}

I created the TwitterFriendsViewModel. The two Lists are what the View consumes to populate the DropDownListFor and Grid. Notice that FriendNames is a List of SelectListItem, which is an MVC class. Another custom class I created is the TweetViewModel (the type of the Tweets List), shown below:

namespace DistinctSelectList.Models
{
    /// <summary>
    /// Info on friend tweets
    /// </summary>
    public class TweetViewModel
    {
        /// <summary>
        /// User's avatar
        /// </summary>
        public string ImageUrl { get; set; }

        /// <summary>
        /// User's Twitter name
        /// </summary>
        public string ScreenName { get; set; }

        /// <summary>
        /// Text containing user's tweet
        /// </summary>
        public string Tweet { get; set; }
    }
}

The initial Twitter query returns much more information than we need for our purposes, so this is a special class for displaying info in the View. Now you know about the View and how it’s constructed. Let’s look at the controller next. The controller for this demo performs authentication, data retrieval, data manipulation, and view selection. I’ll skip the description of the authentication because it’s a normal part of using OAuth with LINQ to Twitter. Instead, we’ll drill down and focus on the Distinct operator.
However, I’ll show you the entire controller, below, so that you can see how it all fits together:

using System.Linq;
using System.Web.Mvc;
using DistinctSelectList.Models;
using LinqToTwitter;

namespace DistinctSelectList.Controllers
{
    [HandleError]
    public class HomeController : Controller
    {
        private MvcOAuthAuthorization auth;
        private TwitterContext twitterCtx;

        /// <summary>
        /// Display a list of friends' current tweets
        /// </summary>
        public ActionResult Index()
        {
            auth = new MvcOAuthAuthorization(
                InMemoryTokenManager.Instance,
                InMemoryTokenManager.AccessToken);

            string accessToken = auth.CompleteAuthorize();
            if (accessToken != null)
            {
                InMemoryTokenManager.AccessToken = accessToken;
            }

            if (auth.CachedCredentialsAvailable)
            {
                auth.SignOn();
            }
            else
            {
                return auth.BeginAuthorize();
            }

            twitterCtx = new TwitterContext(auth);

            var friendTweets =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.Friends
                 select new TweetViewModel
                 {
                     ImageUrl = tweet.User.ProfileImageUrl,
                     ScreenName = tweet.User.Identifier.ScreenName,
                     Tweet = tweet.Text
                 })
                .ToList();

            var friendNames =
                (from tweet in friendTweets
                 select new SelectListItem
                 {
                     Text = tweet.ScreenName,
                     Value = tweet.ScreenName
                 })
                .Distinct()
                .ToList();

            var twendsVM = new TwitterFriendsViewModel
            {
                Tweets = friendTweets,
                FriendNames = friendNames
            };

            return View(twendsVM);
        }

        public ActionResult About()
        {
            return View();
        }
    }
}

The important parts of the listing above are the LINQ to Twitter queries for friendTweets and friendNames. Both of these results are used in the subsequent population of the twendsVM instance that is passed to the view. Let’s dissect these two statements for clarification and focus on what is happening with Distinct. The query for friendTweets gets a list of the 20 most recent tweets (as specified by the Twitter API for friend queries) and performs a projection into the custom TweetViewModel class, repeated below for your convenience:

var friendTweets =
    (from tweet in twitterCtx.Status
     where tweet.Type == StatusType.Friends
     select new TweetViewModel
     {
         ImageUrl = tweet.User.ProfileImageUrl,
         ScreenName = tweet.User.Identifier.ScreenName,
         Tweet = tweet.Text
     })
    .ToList();

The LINQ to Twitter query above simplifies what we need to work with in the View and reduces the amount of information we have to look at in subsequent queries. Given the friendTweets above, the next query performs another projection into an MVC SelectListItem, which is required for binding to the DropDownList. This brings us to the focus of this blog post: writing a correct query that uses the Distinct operator. The query below uses LINQ to Objects, querying the friendTweets collection to get friendNames:

var friendNames =
    (from tweet in friendTweets
     select new SelectListItem
     {
         Text = tweet.ScreenName,
         Value = tweet.ScreenName
     })
    .Distinct()
    .ToList();

The above implementation of Distinct seems normal, but it is deceptively incorrect. After running the query above, by executing the application, you’ll notice that the drop-down list contains many duplicates. This will send you back to the code scratching your head, but there’s a reason why this happens. To understand the problem, we must examine how Distinct works in LINQ to Objects. Distinct has two overloads: one without parameters, as shown above, and another that takes a parameter of type IEqualityComparer<T>. In the parameterless case above, Distinct will call EqualityComparer<T>.Default behind the scenes to make comparisons as it iterates through the list.
You don’t have problems with the built-in types, such as string, int, DateTime, etc., because they all implement IEquatable<T>. However, many .NET Framework classes, such as SelectListItem, don’t implement IEquatable<T>. So what happens is that EqualityComparer<T>.Default results in a call to Object.Equals, which performs reference equality on reference-type objects. You don’t have this problem with value types because the default implementation of Object.Equals for them is bitwise equality. However, most of your projections that use Distinct are on classes, just like the SelectListItem used in this demo application. So the reason Distinct didn’t produce the results we wanted is that we used a type that doesn’t define its own equality, and Distinct fell back on the default reference equality. This resulted in all objects being included in the results, because they are all separate instances in memory with unique references.

As you might have guessed, the solution to the problem is to use the second overload of Distinct, which accepts an IEqualityComparer<T> instance. If you were projecting into your own custom type, you could make that type implement IEquatable<T>, but SelectListItem belongs to the .NET Framework Class Library. Therefore, the solution is to create a custom type that provides the equality comparison, as in the SelectListItemComparer class shown below:

using System.Collections.Generic;
using System.Web.Mvc;

namespace DistinctSelectList.Models
{
    public class SelectListItemComparer : EqualityComparer<SelectListItem>
    {
        public override bool Equals(SelectListItem x, SelectListItem y)
        {
            return x.Value.Equals(y.Value);
        }

        public override int GetHashCode(SelectListItem obj)
        {
            return obj.Value.GetHashCode();
        }
    }
}

The SelectListItemComparer class above doesn’t implement IEqualityComparer<SelectListItem> directly, but rather derives from EqualityComparer<SelectListItem>. Microsoft recommends this approach for consistency with the behavior of the generic collection classes. However, if your custom type already derives from a base class, go ahead and implement IEqualityComparer<T>, which will still work. EqualityComparer<T> is an abstract class that implements IEqualityComparer<T>, leaving Equals and GetHashCode as abstract methods. For the purposes of this application, the SelectListItem.Value property is sufficient to determine whether two items are equal. Since SelectListItem.Value is of type string, the code delegates equality to the string class. The code also delegates the GetHashCode operation to the string class. You might have other criteria in your own object and would need to define what it means for your object to be equal.

Now that we have an IEqualityComparer<SelectListItem>, let’s fix the problem. The code below modifies the query where we want distinct values:

var friendNames =
    (from tweet in friendTweets
     select new SelectListItem
     {
         Text = tweet.ScreenName,
         Value = tweet.ScreenName
     })
    .Distinct(new SelectListItemComparer())
    .ToList();

Notice how the code above passes a new instance of SelectListItemComparer as the parameter to the Distinct operator. Now, when you run the application, the drop-down list behaves as you expect, showing only a unique set of names. In addition to Distinct, other LINQ Standard Query Operators have overloads that accept an IEqualityComparer<T>. You can use the same technique shown here, with SelectListItemComparer, with those other operators as well.
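To see the two behaviors side by side outside of MVC, here is a minimal console sketch – a hypothetical example of mine, assuming a reference to System.Web.Mvc and the SelectListItemComparer class above:

using System;
using System.Linq;
using System.Web.Mvc;              // SelectListItem
using DistinctSelectList.Models;   // SelectListItemComparer from this post

class DistinctSketch
{
    static void Main()
    {
        string[] names = { "JoeMayo", "JoeMayo", "scottgu" };

        var items = names
            .Select(n => new SelectListItem { Text = n, Value = n })
            .ToList();

        // Reference equality: every instance is unique, so nothing is removed.
        Console.WriteLine(items.Distinct().Count());                             // 3

        // Value equality via the custom comparer: the duplicate collapses.
        Console.WriteLine(items.Distinct(new SelectListItemComparer()).Count()); // 2
    }
}

The same comparer instance plugs straight into the Union, Intersect and Except overloads that accept an IEqualityComparer<T>.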
Now you know how to resolve problems with getting Distinct to work properly and also have a way to fix problems with other operators that require equality comparisons. @JoeMayo

    Read the article

  • Bug Triage

    In this blog post brain dump, I'll attempt to describe the process my team tries to follow when dealing with new bug reports (specifically, code defect reports). This is not official Microsoft policy, just the way we do things… if you do things differently and want to share, you can do so at the bottom in the comments (or on your blog).

Feature Triage Team

A subset of the feature crew, the triage team (which has representation from the PM, Dev and QA disciplines), looks at all unassigned bugs at regular intervals. This can be weekly or daily (or some other frequency) depending on which part of the product cycle we are in and what the untriaged bug load looks like. They discuss each bug, considering the evidence, and decide whether the bug goes from Not Yet Assigned to Assigned (plus the name of the DEV to fix it) or whether it goes from Active to Resolved (which means it gets assigned back to the requestor for closure, or for further debate if they were not present at the triage meeting). Close to critical milestones, the feature triage team needs to further justify bugs they take to additional higher-level triage teams.

Bug Opened = Not Yet Assigned

Someone (typically an SDET from the QA team) creates the bug item (e.g. in TFS), ensuring they populate all the relevant fields including: Title, Description, Repro Steps (including the Actual Result at the end of the steps), attachments of code and/or screenshots, the Build number they observed the issue in, regression details if applicable, how it was found, whether a test case exists or needs to be created, etc. They also indicate their opinion on the Priority and Severity. The bug status is left as Not Yet Assigned.

"Issue" versus "Fix for issue"

The solution to some bugs is easy to determine, e.g. "bug: the column name is misspelled". Obviously the fix is to correct the spelling – still, the triage team should be explicit and enter the correct spelling in the bug's Description. Note that a bad bug name here would be "bug: fix the spelling of the column" (it describes the solution, rather than the problem). Other solutions are trickier to establish, e.g. "bug: the column header is not accessible (can only be clicked on with the mouse, not reached via keyboard)". What is the correct solution here? The last thing to do is leave this undetermined and just assign it to a developer. The solution has to be entered in the description. Behind this type of bug usually hides a spec defect or a new feature request. The person opening the bug should focus on describing the issue, rather than the solution. They indicate what the fix is, in their opinion, by stating the Expected Result (immediately after stating the Actual Result). If they have a complex suggested solution, that should be split out in a separate part, but the triage team has the final say before assigning it. If the solution is lengthy/complicated to describe, the bug can be assigned to the PM. Note: the strict interpretation suggests that any bug with no clear, obvious solution is always a hole in the spec and should always go to the PM. This also ensures the spec gets updated.

Not Yet Assigned - Not Yet Assigned (on someone else's plate)

If the bug is observed in our feature, but the cause is actually another team, we change the Area Path (which is the way we identify teams in TFS) and leave it as Not Yet Assigned. The triage team may add more comments as appropriate, including potentially changing the repro steps.
In some cases, we may even resolve the bug in our area path and open a new bug in the area path of the other team. Even though there is no action on a dev on our team, the bug still needs to be tracked. One way of doing this is to implement some notification system that informs the team when the tracked bug changes status; another way is to occasionally run a global query (against all area paths) for bugs that have been opened by a member of the team and follow up with the current owners for stale bugs.

Not Yet Assigned - Resolved

This state transition can only be made by the Feature Triage Team.

0. Sometimes the bug description is not clear, and in that case it gets Resolved as More Information Needed, so the original requestor can provide it. After understanding what the bug item is about, the first decision is to determine whether it needs to go to a dev.
1. If it is a known bug, it gets resolved as "Duplicate" and linked to the existing bug.
2. If it is "By Design", it gets resolved as such, indicating that the triage team does not think this is a bug.
3. If the bug does not repro on the latest bits, it is resolved as "No Repro".
4. The most painful: if it is decided that we cannot fix it for this release, it gets resolved as "Postponed" or "Won't Fix". The former is typically due to resource and time constraints, while the latter is due to deciding that it is not important enough to consume our resources in any release (yes, not all bugs must be fixed!). For both cases, there are other factors that contribute to the decision, such as: the existence of a reasonable workaround, how frequently we expect users to encounter the issue, dependencies on another team to offer a solution, whether it breaks a core scenario, whether it prohibits customer feedback on a major feature, whether it is a regression from a previous release, the impact of the fix on other partner teams (e.g. User Education, User Experience, Localization/Globalization), whether this is the right fix, whether the fix impacts performance goals, and last but not least, the severity of the bug (e.g. loss of customer data, security threat, crash, hang). The bar for fixing a bug goes up as the release date approaches. The triage team becomes hardnosed about which bugs to take, while the developers are busy resolving assigned bugs, and thus everyone drives for Zero Bug Bounce (ZBB). ZBB is when you have 0 active bugs older than 48 hours.

Not Yet Assigned - Assigned

If the bug is something we decide to fix in this release and the solution is known, then it is assigned to a DEV. This is either the developer that will do the work, or a Lead that can further assign it to one of the developers on their team based on a load-balancing algorithm of their choosing. Sometimes the triage team needs the dev to do some investigation work before deciding whether to take the fix; similarly, the checkin for the fix may be gated on a code review by the triage team. In these cases, these instructions are provided in the comments section of the bug, and when the developer is done they notify the triage team for the final decision. Additionally, a Priority and Severity (from 0 to 4) have to be entered; e.g. a P0 means "drop anything you are doing and fix this now", whereas a P4 is something you get to after all P0, P1, P2 and P3 bugs are fixed. From a testing perspective, if the bug was found through ad-hoc testing or by an external team, a decision is made on whether test cases should be added to avoid future regressions.
This is communicated to the QA team.

Assigned - Resolved

When the developer receives the bug (they should be checking daily for new bugs on their plate, looking at bugs in order of priority and from older to newer), they can send it back to triage if the information is not clear. Otherwise, they investigate the bug, setting the Sub Status to "Investigating"; if they cannot make progress, they set the Sub Status to "Blocked" and discuss this with triage or whoever else can help them get unblocked. Once they are unblocked, they set the Sub Status to "Working on Solution"; once they are code complete, they send a code review request, setting the Sub Status to "Fix Available". After the iterative code review process is over and everyone is happy with the fix, the developer checks it in and changes the state of the bug from Active (and Assigned to them) to Resolved (and Assigned to someone else). The developer needs to ensure that when the status is changed to Resolved, the bug is assigned to a QA person. For example, maybe the PM opened the bug, but it should be a QA person that verifies the fix – the developer needs to manually change the assignee in that case. Typically the QA person will send an email to the original requestor notifying them that the fix is verified.

Resolved - ??

In all the cases above, note that the final state was Resolved. What happens after that? The final step should be Closed. The bug is closed once the QA person verifying the fix is happy with it. If that person is not happy, they change the state from Resolved back to Active, thus sending it back to the developer. If the developer and the QA person cannot reach agreement, then triage can be brought into it. An easy way to do that is to change the status back to Not Yet Assigned with appropriate comments, so the triage team can re-review. It is important to note that only QA can close a bug. That means that if the opener of the bug was a PM, when the bug gets resolved by the dev it may land on the PM's plate, and after a quick review the PM would re-assign it to an SDET, which is the only role that can close bugs. One exception to this is if the person that filed the bug is external: in that case, we leave it Resolved and assigned to them, and also send them a notification that they need to verify the fix. Another exception is if specialized developer knowledge is needed to verify the bug fix (e.g. it was a refactoring-suggestion bug typically not observable by the user), in which case it is fine to have a developer verify the fix, ideally a different developer from the one that opened the bug.

Other links on bug triage

A quick search reveals that others have talked about this subject, e.g. here, here, here, here and here.

Your take?

If you have other best practices your team uses to deal with incoming bug reports, feel free to share in the comments below or on your blog. Comments about this post welcome at the original blog.

    Read the article

  • How to Export Crystal Reports to Excel, Word, Rich Text, PDF or HTML

    - by jaullo
    When we work with reports, we always need export functionality. In Crystal Reports for ASP.NET this task is extremely simple, yet the question that always comes up is how to do it from code-behind. To access the Crystal libraries and their components, we first have to import the namespaces:

Imports CrystalDecisions.CrystalReports.Engine
Imports CrystalDecisions.Shared

CrystalDecisions.CrystalReports.Engine lets us work with our ReportDocument, and CrystalDecisions.Shared is what we will use for the export itself. So let's see how we can export our report without sending it to the printer; remember that by default Crystal Reports already offers PDF export, but it works as if we were printing, which is exactly what we want to avoid here. We place a button on our ASP page:

<asp:Button ID="btntopdf" runat="server" Text="Exportar a PDF" />

And in our button we run the following routine:

Protected Sub btntodpf_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles btntopdf.Click

    'Load the report, binding it to the data source.
    LoadReporte()

    'Later we will see that these lines can be omitted.
    Response.Buffer = False
    Response.Clear() 'ClearContent, ClearHeaders

    reporteDoc.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "NombreArchivo")

End Sub

LoadReporte is responsible for filling our Crystal report from the data source. That was the first way to export our Crystal report, but it is not the only one, so let's look more closely at the ExportToHttpResponse method. With this approach our button code changes somewhat, but first let's review the parameters involved. The first parameter, FormatType, is a value of type ExportFormatType, which can be any of the formats listed below:

CrystalReport: the export format is Crystal Report.
Excel: the export format is Excel.
ExcelRecord: the export format is Excel Record.
NoFormat: no export format has been specified.
PortableDocFormat: the export format is PDF.

I will not enumerate them all, since you can already guess the idea behind each format; the ones listed above are the most important. The second parameter, the Response object, is what the exported file is written to. Finally, the third parameter defines whether the file is sent as an attachment. If we set it to True, we send the file as an attachment, which means we no longer need the following lines of code:

Response.Buffer = False
Response.Clear()

With that done, we can send the file directly to the client.

Now let's see how much our code has shrunk: only two lines remain in our button handler.

'Load the report, binding it to the data source.
LoadReporte()

reporteDoc.ExportToHttpResponse(ExportFormatType.PortableDocFormat, Response, True, "NombreArchivo")

To wrap up, I hope this helps and, of course, makes your life easier when working with Crystal Reports.
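The same call supports every format in the ExportFormatType list above just by changing the first argument. As a rough illustration, here is a hedged C# sketch of the Excel variant; the ReportPage class, the reporteDoc field and the LoadReporte helper are assumptions that mirror the VB code in this post:

using System;
using CrystalDecisions.CrystalReports.Engine;
using CrystalDecisions.Shared;

public partial class ReportPage : System.Web.UI.Page
{
    // Assumed field mirroring the post's reporteDoc.
    private ReportDocument reporteDoc = new ReportDocument();

    // Hypothetical helper mirroring the post's LoadReporte():
    // load the .rpt file and bind its data source here.
    private void LoadReporte()
    {
        // reporteDoc.Load(Server.MapPath("~/Reports/MyReport.rpt"));
        // reporteDoc.SetDataSource(...);
    }

    protected void btntoxls_Click(object sender, EventArgs e)
    {
        LoadReporte();

        // Identical to the PDF export above, but targeting Excel;
        // WordForWindows, RichText or HTML40 work the same way.
        reporteDoc.ExportToHttpResponse(
            ExportFormatType.Excel, Response, true, "NombreArchivo");
    }
}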

    Read the article

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation.  Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care needs to be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization.

The main issue with aggregation when parallelizing a routine is that you need to handle the synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looks for the minimum value within a dataset:

double min = double.MaxValue;
foreach(var item in collection)
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
}

This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this into a call to Parallel.ForEach, we’ll introduce a critical race condition, and get the wrong answer.  Let’s look at what happens here:

// Buggy code! Do not use!
double min = double.MaxValue;
Parallel.ForEach(collection, item =>
{
    double value = item.PerformComputation();
    min = System.Math.Min(min, value);
});

This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, and element 2 then sets it, we’ll record a min value of 2 instead of 1.  This can lead to wrong answers.

Unfortunately, fixing this, with the Parallel.ForEach call we’re using, would require adding locking.  We would need to rewrite this like:

// Safe, but slow
double min = double.MaxValue;
// Make a "lock" object
object syncObject = new object();
Parallel.ForEach(collection, item =>
{
    double value = item.PerformComputation();
    lock(syncObject)
        min = System.Math.Min(min, value);
});

This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you’re almost assuring reduced performance in a parallel situation.
    This leads to two observations:

    - When parallelizing a routine, try to avoid locks.
    - That being said: always add any and all required synchronization to avoid race conditions.

    These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible. Looking at our routine, there is no way to directly avoid this lock: each element is potentially being run on a separate thread, and the lock is necessary for our routine to function correctly every time.

    However, this isn’t the only way to design this routine to implement this algorithm. Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE). Processing Element is the standard term for a hardware element which can process and execute instructions. This is typically a core in your processor, but many modern systems have multiple hardware execution threads per core. The Task Parallel Library will not execute the work for each item in the collection as a separate work item. Instead, when Parallel.ForEach executes, it partitions the collection into larger “chunks” which get processed on different threads via the ThreadPool. This helps reduce the threading overhead and helps the overall speed. In general, the Parallel class will use only one thread per PE in the system.

    Given that there are typically far fewer threads than work items, we can rethink our algorithm design and parallelize more effectively by approaching the problem differently. Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order. We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place. With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, and then, after all the threads are complete, “merge” the results together.

    This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource, TLocal>. The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal). The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes each individual item. Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.

    To rewrite our routine using Parallel.ForEach<TSource, TLocal>, we need to provide three delegates instead of one. The most basic version of this function is declared as:

    public static ParallelLoopResult ForEach<TSource, TLocal>(
        IEnumerable<TSource> source,
        Func<TLocal> localInit,
        Func<TSource, ParallelLoopState, TLocal, TLocal> body,
        Action<TLocal> localFinally
    )

    The first delegate (the localInit argument) is defined as Func<TLocal>. This delegate initializes our local state. It should return some object we can use to track the results of a single thread’s operations.

    The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.
    This delegate receives three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal). It should do whatever processing you wish to occur per element, then return the value of the local state after that processing is complete.

    The third delegate (the localFinally argument) is defined as Action<TLocal>. This delegate is passed our local state after it has been processed by all of the elements this thread will handle. This is where you can merge your final results together. This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you only have to synchronize once per thread, which is an ideal situation.

    Now that I’ve explained how this works, let’s look at the code:

    // Safe, and fast!
    double min = double.MaxValue;
    // Make a "lock" object
    object syncObject = new object();
    Parallel.ForEach(
        collection,
        // First, we provide a local state initialization delegate.
        () => double.MaxValue,
        // Next, we supply the body, which takes the original item, loop state,
        // and local state, and returns a new local state
        (item, loopState, localState) =>
        {
            double value = item.PerformComputation();
            return System.Math.Min(localState, value);
        },
        // Finally, we provide an Action<TLocal>, to "merge" results together
        localState =>
        {
            // This requires locking, but it's only once per used thread
            lock (syncObject)
                min = System.Math.Min(min, localState);
        }
    );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and minimally locked. The same approach can be used with Parallel.For, although there the overload is Parallel.For<TLocal>. When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.

    Also, many times you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation. The MSDN example demonstrating this same technique with Parallel.For uses the Interlocked class instead of a lock, since it performs a sum operation on a long variable, which is possible via Interlocked.Add.

    By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation which, at first, may seem like poor candidates for parallelization. Doing so requires careful consideration, and often a slight redesign of the algorithm, but the performance gains can be significant if the work is structured to avoid excessive synchronization.
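    To make the Interlocked approach mentioned above concrete, here is a minimal sketch of my own (not the MSDN sample; PerformComputation and the range bounds are invented placeholders): a parallel sum over an index range using Parallel.For<TLocal>, where each thread's local sum is merged with Interlocked.Add instead of a lock.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    static class ParallelSumSketch
    {
        // Placeholder for whatever per-element work the real code performs.
        static long PerformComputation(int i)
        {
            return (long)i * i;
        }

        static void Main()
        {
            long total = 0;

            Parallel.For(0, 1000000,
                // localInit: each thread starts with its own local sum of zero
                () => 0L,
                // body: accumulate into the thread-local sum; no locking needed
                (i, loopState, localSum) => localSum + PerformComputation(i),
                // localFinally: merge this thread's sum into the total atomically
                localSum => Interlocked.Add(ref total, localSum));

            Console.WriteLine(total);
        }
    }

    Interlocked.Add fits here because the merge step is simple addition on a long. A minimum over a double, as in the article's example, has no single Interlocked method, which is why the once-per-thread lock in the listing above remains a reasonable choice there.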

    Read the article

  • Entity Association Mapping with Code First Part 1: Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First was released by the data team at Microsoft. Entity Framework Code First provides a pretty powerful code-centric way to work with databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with Code First in a series of blog posts, and this one is dedicated to Complex Types. If you are new to the Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around relationship mapping.

    What is Mapping?
    Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases.

    What is Relationship Mapping?
    A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects.

    Types of Relationships
    There are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types:

    - One-to-one relationships: This is a relationship where the maximum of each of its multiplicities is one.
    - One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one.
    - Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one.

    The second category is based on directionality and it contains two types:

    - Uni-directional relationships: When an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on both.
    - Bi-directional relationships: When the objects on both ends of the relationship know of each other (i.e. a navigation property is defined on both ends).

    How Object Relationships Are Implemented in POCO Domain Models
    When the multiplicity is one (e.g. 0..1 or 1), the relationship is implemented by defining a navigation property that references the other object (e.g. an Address property on the User class). When the multiplicity is many (e.g. 0..*, 1..*), the relationship is implemented via an ICollection of the other object's type (see the sketch below).

    How Relational Database Relationships Are Implemented
    Relationships in relational databases are maintained through the use of foreign keys. A foreign key is a data attribute (or attributes) that appears in one table and must be the primary key or another candidate key in another table. With a one-to-one relationship, the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship, we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka join table), effectively making it a many-to-many relationship.
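    To make the POCO conventions above concrete, here is a minimal hedged sketch; the Customer and Order classes are hypothetical examples of mine, not part of the article's model.

    using System.Collections.Generic;

    public class Customer
    {
        public int CustomerId { get; set; }

        // Multiplicity of many (0..* or 1..*): an ICollection navigation
        // property. Customer knows its Orders...
        public ICollection<Order> Orders { get; set; }
    }

    public class Order
    {
        public int OrderId { get; set; }

        // Multiplicity of one: a reference navigation property. Because a
        // navigation property exists on both ends, this association is
        // bi-directional; remove one of them and it becomes uni-directional.
        public Customer Customer { get; set; }
    }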
    Introducing the Model
    Now, let's review the model that we are going to use in order to implement a Complex Type with Code First. It's a simple object model consisting of two classes: User and Address. Each user can have one billing address. The Address information of a User is modeled as a separate class, shown in the original post's UML model (the diagram is not reproduced here). In object-modeling terms, this association is a kind of aggregation—a part-of relationship. Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole.

    Fine-grained Domain Models
    The motivation behind this design was to achieve a fine-grained domain model. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode, along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we could use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse, and is more understandable.

    Complex Types: Splitting a Table Across Multiple Types
    Back to our model: there is no difference between this composition and other, weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: a composed class is often a candidate Complex Type. C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on the Address class), which makes sense because when it comes to the database, everything is going to be saved into one single table.

    How to Implement a Complex Type with Code First
    Code First has a concept of Complex Type Discovery that works based on a set of conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation:

    public class User
    {
        public int UserId { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Username { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string Street { get; set; }
        public string City { get; set; }
        public string PostalCode { get; set; }
    }

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }
    }

    With Code First, this is all of the code we need to write to create a complex type; we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API.

    Database Schema
    The mapping result for this object model is a single Users table whose columns include the Address properties (the schema diagram from the original post is not reproduced here).

    Limitations of This Mapping
    There are two important limitations to classes mapped as Complex Types:

    Shared references are not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address).
    No elegant way to represent a null reference: There is no elegant way to represent a null reference to an Address. When reading from the database, EF Code First always initializes the Address object, even if the values in all of the complex type's mapped columns are null. This means that if you store a complex type object with all null property values, EF Code First returns an initialized complex type when the owning entity object is retrieved from the database.

    Summary
    In this post we learned about fine-grained domain models, of which complex types are just one example. Fine-grained modeling is fully supported by EF Code First and is known as the most important requirement for a rich domain model. A complex type is usually the simplest way to represent a one-to-one relationship, and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and learn about other ways to map a one-to-one association that do not have the limitations of complex types.

    References
    - ADO.NET team blog
    - Mapping Objects to Relational Databases
    - Java Persistence with Hibernate
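    As a closing aside, the same complex type could also be registered explicitly through the fluent API rather than discovered by convention. This is a sketch of my own, not code from the post, and it assumes the post-CTP5 builder name (the model builder class was named ModelBuilder in CTP5 and DbModelBuilder in later builds):

    using System.Data.Entity;

    public class EntityMappingContext : DbContext
    {
        public DbSet<User> Users { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Redundant for this model (convention already discovers Address
            // as a complex type: no key, scalar properties only), but explicit
            // registration documents the intent and guards against a future
            // change, such as someone adding an AddressId property.
            modelBuilder.ComplexType<Address>();
        }
    }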

    Read the article

  • MVP Summit 2011 summary and thoughts: The “I hope I don’t cross a line and lose my MVP status” post

    - by George Clingerman
    I've been wanting to write this post summarizing my thoughts about the MVP summit, but I have been dragging my feet since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx), Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully, though, because I'm going to be dancing around a ton of NDA minefields as well as having to walk the tightrope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; these are not thoughts that Microsoft has asked me to pass along, and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit.

    Let's start off with a short imaginary question and answer session.

    Have the App Hub forums and XBLIG management by Microsoft been rather poor? Yes.
    Do I think we're going to see changes to that overnight? No.
    Will it continue to look bad from the outside? Somewhat.

    Confusing, right? Well, that's kind of how things are right now. Lots of confusion. XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited about and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay.

    So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown, and everyone has, and wants, some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms, it plays catch-up trying to get those platforms up to speed to where Xbox LIVE Indie Games has grown. Forums, documentation, educational content: they all need to be there, because Xbox LIVE Indie Games has all of that and more.

    Along with the catch-up in features/documentation/awesomeness, there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new guys don't have the history of being around from the beginning. This leads, at times, to a lack of understanding of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles would be one thing that jumps to mind). If you're not aware that the forums have become more than just a basic Q&A, if you're not aware that they're a central hub for a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and central forum site work for everyone it's now serving.

    So yeah, a lot of our pain this last year has been simply that XNA is doing well and XBLIG is doing well, so the focus was shifted to catch other things up. It hurts when a parent seems to not have any time for you and they're spending so much time with your new baby brother. Growing pains. All families, and in our case our product family, experience it to some degree.
    I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention.

    While we're talking about some of our growing pains, it is also important to note (although not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things that we do. If you paid attention to the talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed: http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform; it's just Microsoft dropping the communication ball on two playing fields. We're not alone, and we're not even being treated worse. Not great, but also, in a weird way, a very good sign.

    Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers that I'm not stepping over some NDA line I shouldn't be). First, I discovered that the XBLIG user base is bigger than I personally had originally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of these numbers, so maybe someday?) but it was much larger than my original guesstimates and I was pleasantly surprised. Maybe some of you guys had the right number when you were guessing, but I know that mine was much too low. And even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something that I had to share.

    On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there than that the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position, I have a resume waiting in the wings. I can't deny that I think that would be my dream job...

    ...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point with the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs, and I'm even seeing more communication from them on the forums, so I'm hoping that's a long-term change. I really think they understood the issue; the problem remains how to open that communication channel in a way that is sustainable. I think they'll get it figured out, and hopefully that's sooner rather than later.

    During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You also may have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there...went a little out of character for myself. I've been agitated for a while with the way things have been, and I've been listening to you guys and hearing you guys be agitated. I'm also watching some really awesome indie game developers looking elsewhere and leaving the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and a lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list and I took that list and verbalized it.
    I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person... I felt bad about that, but I couldn't stay silent.] Hopefully it did something, guys; I really did try hard to get the message across.

    Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of: from people in the community who are doing an awesome job, to the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/), who has purchased one XBLIG every day for over 100 days now. The community is freaking rocking it, and I made sure to highlight that.

    So in conclusion, I'd just like to say hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground: it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling around trying to figure things out and make improvements all around. There are quite a few new gals and guys, it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it.

    It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that. We took things into our own hands, it got noticed, and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be.

    There's definitely a list of things that need to be fixed and things that should be improved, and I think we should definitely stay vocal about that with Microsoft. Keep it short, focused and prioritized. There are also a lot of things we can do ourselves while we're waiting on them to fix and change things: lots of ways we can compensate for particular weaknesses in the channel, the kind of stuff that we can step up and do ourselves. Do it on our own, you know, the way Indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • CodePlex Daily Summary for Tuesday, March 16, 2010

    New Projects

    AAPL-MySQL (a MySql implementation of the Agile ADO.Net Persistence Layer): Using the code conventions and code of the Agile ADO.Net Persistence Layer to build a MySql implementation
    Address Book: Address Book is a simple and easy to use application for storing and editing contacts. It has many features like search and grouping. It's developed ...
    Airplanes: Airplanes Game
    AJAX Fax document viewer: The AJAX Fax document viewer is an ASP.net 4.0 project which uses a Seadragon control to display Tiff/Tif files inside the browser. Since it's base...
    AxeFrog Core: This project contains the foundational code used in several other projects.
    Bolão: The goal of this project is to encourage the exchange of information and experiences about software architecture and development in practice, through the d...
    Caramel Engine: This is designed to be a logic engine for developing a wide variety of games, including card logic, dice logic, territories, players, and even man...
    DotNetNuke® Skin BrandWorks: A DotNetNuke Design Challenge skin package submitted to the "Modern Business" category by M Perakakis & M Siganou (e-bilab). A very minimal look sk...
    DotNetNuke® Skin Reasonable: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Ralph Williams of Architech Solutions. A clean, classy, professi...
    DotNetNuke® Skin Seasons: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Atul Handa and Kevin McCusker of Mercer. This skin is a generic ...
    File Eraser: File Eraser makes it easier for IT administrators or advanced users to administer files and eliminate long addresses, even UNC. It's developed in VB...
    GreenEyes: My project for IT personal
    Hulu Launcher: Hulu Launcher is a simple Windows Media Center add-in that attempts to launch Hulu Desktop and manage the windows as seamlessly as possible.
    Imenik: Imenik makes it easier for users to organise their contacts!
    KharaPOS: KharaPOS is an exemplar application for Silverlight 4, WCF RIA Services and .NET Framework 4.
    Mantas Cryptography: A small cryptography library supporting the DES, RC2, Rexor and TripleDES algorithms. Generates HMAC-MD5, HMAC-RIPEMD160, HMAC-SHA (SHA1, S...
    MapWindow3D: A C# DirectX library that extends the MapWindow 6.0 geospatial software library by adding an external map. The map supports rotation and tilting i...
    Microsoft Silverlight Analytics Framework: Extensible Web Analytics Framework for Microsoft Silverlight Applications.
    Moq Examples: Unit tests demonstrating Moq features.
    Ncqrs: Framework that helps you to create Command-Query Responsibility Segregation based applications easily.
    NoLedge - Notetaking and knowledge database: NoLedge is an easy knowledge gathering and notetaking freeware with a simple interface. You can link a note with different titles and can retrieve ...
    Numina Application Framework: The framework is a set of tools that help facilitate authentication, authorization, and access control. It is much more than an SSO. It is a central...
    OData SDK for PHP: OData SDK for PHP is a library to facilitate the connection with OData Services.
    patterns & practices - Windows Azure Guidance: p&p site for Windows Azure related guidance.
    projecteuler.net: Exploring projecteuler.net using F#
    ResumeTracker: Small and easy to use tool that helps to track your job applications. To report bugs or for suggestions please email kirunchik@gmail.com.
    Selection Maker: Have you ever created a collection of music files? Imagine you just want to pick the best of them by your choice. You should go to several folders, play...
    ShureValidation: Multilingual model validation library. Supports fluent validations, attribute validations and own custom validations.
    Simple Phonebook: Simple phonebook allows you to store contacts. It also allows you to export the contacts to .txt or .csv. This application is written in C#, but it...
    SoftUpd: A useful library which provides an Update feature to any .Net software.
    sTASKedit: This program can modify and export Perfect World tasks.data files...
    Tigie: Tigie is a simple CMS system for basic websites. It's simple and easy to customize. You'll have a very basic cms to start with and expand on. All c...
    toapp: ap hw
    UltiLogger: UltiLogger is a fast, lightweight logging library programmed in C#. It is meant as a fast, easy, and efficient way for developers to access a relia...
    Unnme: Unnme
    Visual Studio 2008 NUnit snippets: A simple set of useful NUnit snippets, for Visual Studio 2008.
    webdama: Italian checkers game in C#
    XBrowser - Headless Browser for .Net: XBrowser is a "headless" web browser written for .Net applications using C#. It is designed to allow automated, remote controlled and test-based br...
    XML Integrator: XML integration, collaborative tool

    New Releases

    Address Book: Address Book: Address Book
    Address Book: Address Book - Source: Address Book source code.
    AJAX Fax document viewer: AJAXTiff_Source 1.0: Source project for the AJAX Tiff viewer v1.0. Written in Visual Studio 2010 RC using ASP.net 4.0
    ASP.Net Routing Configuration: mal.Web.Routing v1.0.0.0: mal.Web.Routing v1.0.0.0
    ASP.NET Wiki Control: Release 1.0: Includes VS2010 Solution and Project files but targets the 3.5 framework so can still be used with VS2008 if new project files are created.
    BingPaper: Beta: BingPaper Beta Release. This Beta release contains quite a few improvements, including a complete overhaul of the replacement tok...
    Bolão: teste: teste
    DotNetNuke® Skin Reasonable: Reasonable Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Ralph Williams of Architech Solutions. A clean, classy, professi...
    DotNetNuke® Skin Seasons: Seasons Package 1.0.0: A DotNetNuke Design Challenge skin package submitted to the "Personal" category by Atul Handa and Kevin McCusker of Mercer. This skin is a generic ...
    dylan.NET: dylan.NET v. 9.2: In TFS Server
    dylan.NET Apps: dylan.NET Apps v. 1.1: First version of dnu.dll together with v.9.2 of dylan.NET
    Family Tree Analyzer: Version 1.1.0.0: Version 1.1.0.0. Census report now shows in bold those individuals you KNOW to be alive at the date of the census. Direct Ancestors on census repor...
    GLB Virtual Player Builder: 0.4.1: Minor change to reset non-major/minor attr to 8.
    Hulu Launcher: HuluLauncher Release 1.0.1.1: HuluLauncher Release 1.0.1.1 is the initial, barely-tested release of this Windows Media Center add-in. It should work in Vista Media Center and 7 ...
    Imenik: Imenik: Imenik is now available!
    jQuery Library for SharePoint Web Services: SPServices 0.5.3: NOTE: While I work on new releases, I post alpha versions. Usually the alpha versions are here to address a particular need. I DO NOT recommend usi...
    KeelKit: KeelKit 1.0.3800: The updates are as follows: optimized some DBHelper mechanisms; fixed some bugs; added MySQL PHP proxy support, enabling remote access to the database server through a web proxy; added Model instantiation methods supporting parameter instantiation for all non-auto-computed fields and for all non-null fields; added constants to Model that can be used to obtain table names; added...
    Managed Extensibility Framework (MEF) Contrib: MefContrib 0.9.0.0: Updated to MEF Preview 9. Moved MefContrib.Extensions.Generics to MefContrib.Hosting.Generics. Moved MefContrib.Extensions.Generics.Tests to MefC...
    MooiNooi MVC2LINQ2SQL Web Databinder: MooiNooi MVC2LINQ2SQL Web Databinder v0.1.1: Repaired a problem with collections... only index numbers under 10 were allowed... Please send me your comments and rate the project. Sorry.
    Mouse Gestures for .NET: Mouse Gestures 1.0: Improved version of the component + sample application. Version 1.0 is not backward compatible.
    MoviesForMyBlog: MoviesForMyBlog V1.0: This is version 1.0
    Nito.KitchenSink: Version 1: The first release of Nito.KitchenSink, which uses Nito.Linq 0.1. Please report any issues via the Issue Tracker.
    Nito.LINQ: Beta (v0.1): This is the first official public release of Nito.Linq. This release only supports .NET 3.5 SP1 with the Microsoft Rx libraries. The documentation...
    Nito.LINQ: Beta (v0.2): Added ListSource.Generate overloads that take a delegate for counting the list elements.
    OData SDK for PHP: OData SDK for PHP: This is an updated version of the ADO.NET Data Services toolkit for PHP. It includes full support for the OData Protocol V2.0 specification, better pro...
    Orchard Project: Orchard Latest Build (0.1.2010.0312): This is a technical preview release of Orchard. Our intent with this release is to demonstrate representative experiences for our audiences (end-us...
    patterns & practices – Enterprise Library: Enterprise Library 5.0 - Beta2: This is a preliminary release of the code and documentation that may potentially be incomplete, and may change prior to the final release of Enterp...
    patterns & practices - Unity: Unity 2.0 - Beta2: This is a preliminary release of the code and documentation that may potentially be incomplete, and may change prior to the final release of Unity ...
    patterns & practices - Windows Azure Guidance: Code drop - 1: This initial version is from before the cloud baseline application, so you won’t find anything related to Windows Azure here. Next iteration, we'l...
    Rawr: Rawr 2.3.12: First, a note about Rawr3. Rawr3 has been in development for quite a while now, and we know that everyone's eager to get it. It's been held back ...
    ResumeTracker: Resume Tracker v1.0: First release.
    Selection Maker: Selection Maker 1.0: This is just the first release of this program
    SevenZipSharp: SevenZipSharp 0.61: Added: Windows Mobile support; a bool Check() method for Extractor to test archive integrity; FileExtractionFinished now returns FileInfoEventArgs...
    Silverlight 3.0 Advanced ToolTipService: Advanced ToolTipService v2.0.1: This release is compiled against the Silverlight 3.0 runtime. A demonstration on how to set the ToolTip content to a property of the DataContext o...
    Silverlight Flow Layouts library: SL and WPF Flow Layouts library March 2010: This release introduces some bug fixes, performance improvements, and Silverlight 4 RC and WPF 4.0 RC support. Flow Layouts Library is a control libra...
    Simple Phonebook: SimplePhonebook Visual Studio 2010 Solution: This is the complete project containing all the source files I used while building this application. Visual Studio 2010 is required to run it....
    Simple Phonebook: SimplePhonebook.rar: This .rar file contains the executables. In this .rar file you can find the .exe file needed for executing the application.
    SLARToolkit - Silverlight Augmented Reality Toolkit: SLARToolkit 1.0.1.0: Updated to Silverlight 4 Release Candidate. Introduces the new GenericMarkerDetector which uses the IXrgbReader interface. See the Marker Detecto...
    sPWadmin: pwAdmin v1.0: Fixed: Templates can now be saved persistent across server restarts (wait at least 60 seconds between saving and restarting)
    SQL Director for Dependencies & Indexes: SDD CTP 1.0: SQL Director for Dependencies allows you to view dependencies between tables, views, functions, stored procedures and jobs. Newest testing build, f...
    SqlCeViewer: SeasonStar Database Management 0.7.0.2: Updated the user interface to help users understand clearly how to use .
    UltiLogger: Initial alpha release: Important! This is not a feature-complete release! It contains the logging priorities, and an interface for building logging systems from. THERE IS...
    Visual Studio 2008 NUnit snippets: Version 1.0: First stable release.
    Zeta Resource Editor: Source Code Release 2010-03-16: New source code. Binary setup is also available.

    Most Popular Projects

    MetaSharp
    WBFS Manager
    Rawr
    AJAX Control Toolkit
    Microsoft SQL Server Product Samples: Database
    Silverlight Toolkit
    ASP.NET Ajax Library
    Windows Presentation Foundation (WPF)
    ASP.NET
    LiveUpload to Facebook

    Most Active Projects

    LINQ to Twitter
    OData SDK for PHP
    Rawr
    N2 CMS
    patterns & practices – Enterprise Library
    DirectQ
    BlogEngine.NET
    MapWindow6
    SharePoint Team-Mailer
    NB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

< Previous Page | 282 283 284 285 286 287 288 289 290 291 292 293  | Next Page >