Search Results

Search found 7465 results on 299 pages for 'team leadership'.


  • Release Notes 12/12/2012

    This past week the CodePlex team worked on several fixes to improve the stability of our TFS infrastructure, including applying TFS 2012 Update 1. We apologize for the recent downtime. We are not completely out of the woods, but we appreciate your patience as we work through the issues. Additional bug fixes:

    - Fixed several issues with character encoding within file paths.
    - Fixed issue where the numbers of pull requests and forks were disappearing after selecting either link.
    - Fixed issue blocking license changes when special characters exist in the copyright holder field.

    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • Let’s keep informed with “Data Explorer”

    - by Luca Zavarella
    At PASS Summit 2011 a new project was announced. It’s a Microsoft SQL Azure Lab and its codename is Microsoft “Data Explorer”. According to the official blog (http://blogs.msdn.com/b/dataexplorer/), this new tool provides an innovative way to acquire new knowledge from the data that interests you. In a nutshell, Data Explorer allows you to combine data from multiple sources, and to publish and share the result. In addition, you can generate data streams in the RESTful open format (Open Data Protocol), which can then be used by other applications. We can still use Excel or PowerPivot to analyze the results. Sources can be varied: Excel spreadsheets, text files, databases, Windows Azure Marketplace, etc. For those who are not familiar with this resource, I strongly suggest you keep an eye on the data services available in the Marketplace: https://datamarket.azure.com/browse/Data To tell the truth, as I read the above blog post, I was tempted to think of Data Explorer as an “SSIS on Azure” aimed at the Power User. In fact, reading the response from Tim Mallalieu (Group Program Manager of Data Explorer) to a comment on his post, my first impression was confirmed: “…we originally thinking of ourselves as Self-Service ETL. As we talked to more folks and started partnering with other teams we realized that would be an area that we can add value but that there were more opportunities emerging.” The typical operations of the ETL phase (processing and organization of data in different formats) can be carried out thanks to Data Explorer Mashup, pictured in the original post. The flexibility in the manipulation of information comes from the Data Explorer Formula Language, an Excel-style formula language (also shown in the original post). Anyone wishing to know more can check the project page in addition to the aforementioned blog: http://www.microsoft.com/en-us/sqlazurelabs/labs/dataexplorer.aspx In light of this new project, there is no doubt about Microsoft’s intention to get closer and closer to the Power User, providing flexible and very easy-to-use tools for data analysis. The prime example of this is PowerPivot. The question that remains is always the same: having more Power Users in a company will implicitly mean having different data models representing the same reality, and this would inevitably lead to anarchic data management... What do you think about that?
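
    Since the published output is plain OData, nothing special is needed to consume it. Here is a minimal sketch (my addition, not from the original post) that reads an Atom-formatted OData feed with LINQ to XML; the URL points at the public OData sample service, standing in for a Data Explorer feed:

    using System;
    using System.Net;
    using System.Xml.Linq;

    class ODataPeek
    {
        static void Main()
        {
            // Public demo endpoint used as a stand-in for a Data Explorer feed.
            string url = "http://services.odata.org/OData/OData.svc/Products";
            using (var client = new WebClient())
            {
                XNamespace atom = "http://www.w3.org/2005/Atom";
                XDocument feed = XDocument.Parse(client.DownloadString(url));
                // Print the <title> of the feed and of each entry.
                foreach (var title in feed.Descendants(atom + "title"))
                    Console.WriteLine(title.Value);
            }
        }
    }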

    Read the article

  • To Make Diversity Work, Managers Must Stop Ignoring Difference

    - by HCM-Oracle
    By Kate Pavao - Originally posted on Profit. Executive coaches Jane Hyun and Audrey S. Lee noticed something during their leadership development coaching and consulting: frustrated employees and overwhelmed managers. “We heard from voices saying, ‘I wish my manager understood me better’ or ‘I hope my manager would take the time to learn more about me and my background,’” remembers Hyun. “By the same token, the managers we were coaching had a hard time even knowing how to start these conversations.” Hyun and Lee wrote Flex to address some of the fears managers have when it comes to leading diverse teams—such as being afraid of offending their employees by stumbling into sensitive territory—and also to provide a sure-footed strategy for becoming a more effective leader. Here, Hyun talks about what it takes to create innovative and productive teams in an increasingly diverse world, including the key characteristics successful managers share. Q: What does it mean to “flex”? Hyun: Flexing is the art of switching between leadership styles to work more effectively with people who are different from you. It’s not fundamentally changing who you are, but it’s understanding when you need to adapt your style in a situation so that you can accommodate people and make them feel more comfortable. It’s understanding the gap that might exist between you and others who are different, and then flexing across that gap to get the result that you’re looking for. It’s up to all of us, not just managers, but also employees, to learn how to flex. When you hire new people into the organization, they’re expected to adapt. The new people in the organization may need some guidance around how to best flex. They can certainly take the initiative, but if you can give them some direction around the important rules, and connect them with insiders who can help them figure out the most critical elements of the job, that will accelerate how quickly they can contribute to your organization. Q: Why is it important right now for managers to understand flexing? Hyun: The workplace is becoming increasingly younger, multicultural and female. The numbers bear it out. Millennials are entering the workforce and becoming a larger percentage of it, which is a global phenomenon. Thirty-six percent of the workforce is multicultural, and close to half is female. It makes sense to better understand the people who are increasingly a part of your workforce, and how to best lead and manage them as well. Q: What do companies miss out on when managers don’t flex? Hyun: There are high costs for losing people or failing to engage them. The estimated cost of replacing an employee is about 150 percent of that person’s salary. There are studies showing that employee disengagement costs the U.S. something like $450 billion a year. But voice is the biggest thing you miss out on if you don’t flex. Whenever you want innovation or increased productivity from your people, you need to figure out how to unleash these things. The way you get there is to make sure that everybody’s voice is at the table. Q: What are some of the common misassumptions that managers make about the people on their teams? Hyun: One is what I call the Golden Rule mentality: We assume when we go to the workplace that people are going to think like us and operate like us.
But sometimes when you work with people from a different culture or a different generation, they may have a different mindset about doing something, or a different approach to solving a problem, or a different way to manage some situation. When we see something that’s different, we don’t understand it, so we don’t trust it. We have this hidden bias for people who are like us. That gets in the way of really looking at how we can tap our team members’ best potential by understanding how their difference may help them be effective in our workplace. We’re trained, especially in the workplace, to make assumptions quickly, so that we can make the best business decision. But with people, it’s better to remain curious. If you want to build stronger cross-cultural, cross-generational, cross-gender relationships, before you make a judgment, share what you observe with that team member, and connect with him or her in ways that are mutually adaptive, so that you can work together more effectively. Q: What are the common characteristics you see in leaders who are successful at flexing? Hyun: One is what I call “adaptive ability”—leaders who are able to understand that someone on their team is different from them, and who are willing to adapt their style accordingly. Another one is “unconditional positive regard,” which is basically acceptance of others, even in their vulnerable moments. This attitude of grace is critical and essential to a healthy environment in developing people. If you think about when people enter the workforce, they’re only 21 years old. It’s quite a formative time for them. They may not have a lot of management experience, or experience managing complex or even global projects. Creating the best possible condition for their development requires turning their mistakes into teachable moments, and giving them an opportunity to really learn. Finally, these leaders are not rigid or constrained in a single mode or style. They have this insatiable curiosity about other people. They don’t judge when they see behavior that doesn’t make sense, or is different from their own. For example, maybe someone on their team is less aggressive than they are. The leader needs to remain curious and think, “Wow, I wonder how I can engage in a dialogue with this person to get their potential out in the open.”

    Read the article

  • What is an achievable way of setting content budgets (e.g. polygon count) for level content in a 3D title?

    - by MrCranky
    In answering this question for swquinn, the answer raised a more pertinent question that I'd like to hear answers to. I'll post our own strategy (promise I won't accept it as the answer), but I'd like to hear others'. Specifically: how do you go about setting a sensible budget for your content team? Usually one of the very first questions asked in development is: what's our polygon budget? Of course, these days it's rare that vertex/poly count alone is the limiting factor; instead shader complexity, fill-rate and lighting complexity all come into play. What the content team want are some hard numbers/limits to work to, such that they have a reasonable expectation that their content, once it actually gets into the engine, will not be too heavy. Given that 'it depends' isn't a particularly useful answer, I'd like to hear a strategy that allows me to give them workable limits without being a) misleading, or b) wrong.

    Read the article

  • DataTable to Generic List Conversion

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Data;

    namespace ConsoleApplication1
    {
        class Program
        {
            static void Main(string[] args)
            {
                DataTable table = new DataTable
                {
                    Columns =
                    {
                        { "Number", typeof(int) },
                        { "Name", typeof(string) }
                    }
                };

                // Just adding a few test rows to the DataTable.
                for (int i = 1; i <= 5; i++)
                {
                    table.Rows.Add(i, "Name" + i);
                }

                var returnList = from row in table.AsEnumerable()
                                 select new MyObject
                                 {
                                     Number = row.Field<int>("Number"),
                                     Name = row.Field<string>("Name")
                                 };

                // Displaying the converted collection.
                foreach (MyObject item in returnList)
                {
                    Console.WriteLine("{0}\t{1}", item.Number, item.Name);
                }
            }
        }

        class MyObject
        {
            public int Number { get; set; }
            public string Name { get; set; }
        }
    }
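
    The same mapping can be generalized with reflection. Below is a small sketch of my own (not part of the original snippet): an extension method that converts any DataTable to a List<T> by matching column names to public writable properties; column and property types are assumed to already agree.

    using System;
    using System.Collections.Generic;
    using System.Data;
    using System.Linq;
    using System.Reflection;

    static class DataTableExtensions
    {
        // Maps each DataRow to a T by copying values from columns whose
        // names match T's public writable properties.
        public static List<T> ToList<T>(this DataTable table) where T : new()
        {
            PropertyInfo[] props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
            return table.AsEnumerable().Select(row =>
            {
                T item = new T();
                foreach (PropertyInfo prop in props)
                {
                    if (prop.CanWrite && table.Columns.Contains(prop.Name) && !row.IsNull(prop.Name))
                        prop.SetValue(item, row[prop.Name], null);
                }
                return item;
            }).ToList();
        }
    }

    // Usage: List<MyObject> list = table.ToList<MyObject>();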

    Read the article

  • TraceTune shows Reads graphically

    - by Bill Graziano
    TraceTune now shows a graphical view of logical reads for each SQL statement in a trace file. The width of the colored bar in the screen capture (shown in the original post) is the percentage of logical reads for that statement. The absolute number of reads is shown to the right. Any statement that has a user-entered comment is shown in bold. If you hover over the statement, it will show the most recent comment for that statement.

    Read the article

  • Is it the job of a developer to suggest IT requirements?

    - by anything
    I am the only developer working on a web application which is nearing its end. Now we are looking into making it live in maybe a couple of months' time. This is a web application for a non-IT company. Though they have their own internal IT team, they have asked me what the hardware requirements for the live servers will be, e.g. RAM, 32-bit or 64-bit. Shouldn't the internal IT team be doing this, or, since I am the only person working on the project, is it my responsibility to let them know of any specific hardware requirements which may impact the performance of the project? The reason I am asking this question is that I have not done this before. Until now I was always given a server and asked to deploy apps on it; I never had to worry about the server configuration.

    Read the article

  • SQL Server Configuration Scripting Utility Release 9

    - by Bill Graziano
    There’s another update to my little utility to script a SQL Server’s configuration. I use this for two purposes. First, I use it to keep my database mirroring servers up to date. Second, I capture the output in a version control system and keep that for historical reference. In release 3.0.9 I made the following changes:

    - Rewrote the encrypted trigger scripting. It will now list the encrypted triggers in a comment in the table script but can’t actually script them.
    - It now scripts any server event notifications.
    - You can script a single database using the /scriptdb flag. Please note that it will also script the instance and system databases when it does this.
    - It will script any user-defined endpoints. This will capture your mirroring endpoints and, more importantly, any Service Broker endpoints.
    - It will gracefully skip database mail on the Express Edition.

    It still doesn’t support SQL Server 2012. I think that’s the next feature to add though.

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part III)

    The last post ended with us just getting started on stumbling into text template file customization, a task that required a Visual Studio extension (Tangible T4 Editor) to even have a chance at completing. Despite the benefits of the Tangible T4 Editor, I still had a hard time putting together a solid text template that would be easy to explain. This is mostly due to the way the files allow you to mix code (encapsulated in <# #>) with straight-up text to generate. It is effective to be sure, but not very readable. Nevertheless, I will try and explain what was accomplished in my custom tt file, though the details of it are not really the point of this article (my way of saying don't criticize my crappy code, and certainly don't use it in any somewhat real application. You may become dumber just by looking at this code. You have been warned... really, the footnote I should put at the end of all of my blog posts). To begin with, there were two basic requirements that I needed the code generator to satisfy: reading one to many entity framework files, and using the entities that were found to write one to many class files. Thankfully, using the Entity Object Generator as a starting point gave us an example of how to do exactly that by using the MetadataLoader and EntityFrameworkTemplateFileManager; you include references to these items and use them like so:

    // Instantiate an entity framework file reader and file writer
    MetadataLoader loader = new MetadataLoader(this);
    EntityFrameworkTemplateFileManager fileManager = EntityFrameworkTemplateFileManager.Create(this);

    // Load the entity model metadata workspace
    MetadataWorkspace metadataWorkspace = null;
    bool allMetadataLoaded = loader.TryLoadAllMetadata("MFL.tt", out metadataWorkspace);
    EdmItemCollection ItemCollection = (EdmItemCollection)metadataWorkspace.GetItemCollection(DataSpace.CSpace);

    // Create an IO class to contain the 'get' methods for all entities in the model
    fileManager.StartNewFile("MFL.IO.gen.cs");

    Next, we want to be able to loop through all of the entities found in the model, and then each property for each entity, so we can generate classes and methods for each. The code for that is blissfully simple:

    // Iterate through each entity in the model
    foreach (EntityType entity in ItemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
    {
        // Iterate through each primitive property of the entity
        foreach (EdmProperty edmProperty in entity.Properties.Where(p => p.TypeUsage.EdmType is PrimitiveType && p.DeclaringType == entity))
        {
            // TODO: Create properties
        }

        // Iterate through each relationship of the entity
        foreach (NavigationProperty navProperty in entity.NavigationProperties.Where(np => np.DeclaringType == entity))
        {
            // TODO: Create associations
        }
    }

    There really isn't anything more advanced than that going on in the text template. The only thing I had to blunder through was realizing that if you want the generator to interpret a line of code (such as our iterations above), you need to enclose the code in <# and #>, while if you want the generator to interpret the VALUE of code, such as putting the entity name into the class name, you need to enclose the code in <#= and #> like so:

    public partial class <#=entity.Name#>

    To make a long story short, I did a lot of repetition of the above to come up with a text template that generates a class for each entity based on its properties, and a set of IO methods for each entity based on its relationships.
The two work together to provide lazy-loading for hierarchical data (such as getting Team.Players), so it should be pretty intuitive to use on a front-end. This text template is available here; you can tweak the inputFiles array to load one or many different edmx models and generate the basic XML IO and class files, though it will probably only work correctly in the simplest of cases, like our MFL model described in the previous post. Additionally, there is no validation, logging or error handling, which is something I want to handle later by stumbling through the Enterprise Library 5.0. The code that gets generated isn't anything special, though using the LINQ to XML feature was something very new and exciting for me; I had only worked with XML in the past using the DOM or XML Reader objects along with XPath, and the LINQ to XML model is just so much more elegant and supposedly efficient (something to test later). For example, the following code was generated to create a Player object for each Player node in the XML:

    return from element in GetXmlData(_PlayerDataFile).Descendants("Player")
        select new Player
        {
            Id = int.Parse(element.Attribute("Id").Value)
            ,ParentName = element.Parent.Name.LocalName
            ,ParentId = long.Parse(element.Parent.Attribute("Id").Value)
            ,Name = element.Attribute("Name").Value
            ,PositionId = int.Parse(element.Attribute("PositionId").Value)
        };

    It is all done in one statement, no looping needed. Even though GetXmlData loads the entire XML file just like the old XML DOM approach would have, it is supposed to be much less resource intensive. I will definitely put that to the test after we develop a user interface for getting at this data. Speaking of the data: where IS the data? We've put together a pretty model and a bunch of code around it, but we don't have any data to speak of. We can certainly drop to our favorite XML editor and crank out some data, but if it doesn't totally match our model, it will not load correctly. To help with this, I've built in a method to generate XML at any given layer in the hierarchy. So for us to get the closest possible thing to real data, we'd need to invoke MFL.IO.GenerateTeamXML and save the results to file. Doing so should get us something that looks like this:

    <Team Id="0" Name="0">
      <Player Id="0" Name="0" PositionId="0">
        <Statistic Id="0" PassYards="0" RushYards="0" Year="0" />
      </Player>
    </Team>

    Sadly, it is missing the Positions node (haven't thought of a way to generate lookup XML yet) and the data itself isn't quite realistic (well, as realistic as MFL data can be anyway). Let's manually remedy that for now to give us a decent starter set of data.
Note that this is TWO XML files, Lookups.xml and Teams.xml:

    <Lookups Id="0">
      <Position Id="0" Name="Quarterback"/>
      <Position Id="1" Name="Runningback"/>
    </Lookups>

    <Teams Id="0">
      <Team Id="0" Name="Chicago">
        <Player Id="0" Name="QB Bears" PositionId="0">
          <Statistic Id="0" PassYards="4000" RushYards="120" Year="2008" />
          <Statistic Id="1" PassYards="4200" RushYards="180" Year="2009" />
        </Player>
        <Player Id="1" Name="RB Bears" PositionId="1">
          <Statistic Id="2" PassYards="0" RushYards="800" Year="2007" />
          <Statistic Id="3" PassYards="0" RushYards="1200" Year="2008" />
          <Statistic Id="4" PassYards="3" RushYards="1450" Year="2009" />
        </Player>
      </Team>
    </Teams>

    Ok, so we have some data, we have a way to read/write that data, and we have a friendly way of representing that data. Now, what remains is the part that I have been looking forward to the most: presenting the data to the user and giving them the ability to add/update/delete, and doing so in a way that is very intuitive (easy) from a development standpoint.

    Read the article

  • Does having a higher paid technical job mean you do not get to code any more?

    - by c_maker
    I work at a large company where technical people fall roughly into one of these categories: A developer on a scrum team who develops for a single product and maybe works with other teams that are closely related to the product. An architect who is more of a consultant on multiple teams (5-6) and tries to recognize commonalities between team efforts that could be abstracted into libraries (architects do not write the library code, however); this architect also attends many meetings with management and attempts to set technical direction. In my company, the architect role is what most technical people move into as the next step in their career. My questions are: Do most companies work in such a way that their highest paid technical people are far removed from writing code? Is this a natural tendency for a developer's career? Can a developer have it all (code AND set direction)?

    Read the article

  • Release Notes for 11/20/2012

    The CodePlex team deployed a few times over the last week. Below is a roll-up of changes:

    - Fixed issue with being able to add additional commits to pull requests - thanks to Oren Novotny
    - Fixed problem with issue summaries breaking within words - thanks to Jeff Handley and SoonDead
    - Corrected inconsistencies between the time displayed on the history page and the previous versions page for Git/Hg commits
    - Fixed perma-link issue when linking to forks - thanks to Scott Blomquist
    - Fixed problem with connecting via Windows Live Writer - thanks to yufeih
    - Fixed source browsing problem when folders have special characters
    - Fixed AppHarbor service hooks for Mercurial projects

    Have ideas on how to improve CodePlex? Please visit our suggestions page! Vote for existing ideas or submit a new one. As always you can reach out to the CodePlex team on Twitter @codeplex or reach me directly @mgroves84

    Read the article

  • Reading XML Content

    using System;
    using System.Collections.Generic;
    using System.Reflection;
    using System.Xml;

    namespace XMLReading
    {
        class Program
        {
            static void Main(string[] args)
            {
                string fileName = @"C:\temp\t.xml";
                List<EmergencyContactXMLDTO> emergencyContacts =
                    new XmlReader<EmergencyContactXMLDTO, EmergencyContactXMLDTOMapper>().Read(fileName);
                foreach (var item in emergencyContacts)
                {
                    Console.WriteLine(item.FileNb);
                }
            }
        }

        public class XmlReader<TDTO, TMAPPER>
            where TDTO : BaseDTO, new()
            where TMAPPER : PCPWXMLDTOMapper, new()
        {
            public List<TDTO> Read(string fileName)
            {
                XmlTextReader reader = new XmlTextReader(fileName);
                List<TDTO> emergencyContacts = new List<TDTO>();
                while (true)
                {
                    TMAPPER mapper = new TMAPPER();
                    bool isFound = SeekElement(reader, mapper.GetMainXMLTagName());
                    if (!isFound) break;
                    TDTO dto = new TDTO();
                    foreach (var propertyKey in mapper.GetPropertyXMLMap())
                    {
                        string dtoPropertyName = propertyKey.Key;
                        string xmlPropertyName = propertyKey.Value;
                        SeekElement(reader, xmlPropertyName);
                        SetValue(dto, dtoPropertyName, reader.ReadElementString());
                    }
                    emergencyContacts.Add(dto);
                }
                return emergencyContacts;
            }

            private void SetValue(object dto, string propertyName, string value)
            {
                PropertyInfo prop = dto.GetType().GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance);
                prop.SetValue(dto, value, null);
            }

            private bool SeekElement(XmlTextReader reader, string elementName)
            {
                while (reader.Read())
                {
                    XmlNodeType nodeType = reader.MoveToContent();
                    if (nodeType != XmlNodeType.Element)
                    {
                        continue;
                    }
                    if (reader.Name == elementName)
                    {
                        return true;
                    }
                }
                return false;
            }
        }

        public class BaseDTO
        {
        }

        public class EmergencyContactXMLDTO : BaseDTO
        {
            public string FileNb { get; set; }
            public string ContactName { get; set; }
            public string ContactPhoneNumber { get; set; }
            public string Relationship { get; set; }
            public string DoctorName { get; set; }
            public string DoctorPhoneNumber { get; set; }
            public string HospitalName { get; set; }
        }

        public interface PCPWXMLDTOMapper
        {
            Dictionary<string, string> GetPropertyXMLMap();
            string GetMainXMLTagName();
        }

        public class EmergencyContactXMLDTOMapper : PCPWXMLDTOMapper
        {
            public Dictionary<string, string> GetPropertyXMLMap()
            {
                return new Dictionary<string, string>
                {
                    { "FileNb", "XFileNb" },
                    { "ContactName", "XContactName" },
                    { "ContactPhoneNumber", "XContactPhoneNumber" },
                    { "Relationship", "XRelationship" },
                    { "DoctorName", "XDoctorName" },
                    { "DoctorPhoneNumber", "XDoctorPhoneNumber" },
                    { "HospitalName", "XHospitalName" },
                };
            }

            public string GetMainXMLTagName()
            {
                return "EmergencyContact";
            }
        }
    }

    Read the article

  • Configure SQL Express 2005 for remote access

    Please follow the steps below (illustrated with screenshots in the original post) to configure SQL Server Express 2005 for remote access:

    1. Open SQL Server Configuration Manager.
    2. Navigate to SQL Server 2005 Network Configuration and click on the Protocols node.
    3. Enable the TCP/IP protocol.
    4. Enable the Named Pipes protocol.
    5. With TCP/IP and Named Pipes enabled, click on TCP/IP to configure the port number on which SQL Express 2005 listens for network requests.
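
    If you would rather script step 3 than click through the UI, here is a minimal sketch of my own (not from the original post) that enables TCP/IP through the SQL Server 2005 WMI provider. It needs a reference to System.Management.dll and admin rights; the instance name SQLEXPRESS is an assumption, and the service must still be restarted afterwards.

    using System;
    using System.Management;

    class EnableSqlTcp
    {
        static void Main()
        {
            // SQL Server 2005 exposes its protocol settings through WMI
            // in the ComputerManagement namespace.
            var scope = new ManagementScope(@"\\.\root\Microsoft\SqlServer\ComputerManagement");
            var query = new ObjectQuery(
                "SELECT * FROM ServerNetworkProtocol " +
                "WHERE InstanceName = 'SQLEXPRESS' AND ProtocolName = 'Tcp'");
            using (var searcher = new ManagementObjectSearcher(scope, query))
            {
                foreach (ManagementObject protocol in searcher.Get())
                {
                    protocol.InvokeMethod("SetEnable", null); // requires admin rights
                    Console.WriteLine("TCP/IP enabled; restart the SQL Server (SQLEXPRESS) service.");
                }
            }
        }
    }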

    Read the article

  • Using Dynamic SQL in Stored Procedures

    Dynamic SQL allows stored procedures to “write” or dynamically generate their SQL statements. The most common use case for dynamic SQL is stored procedures with optional parameters in the WHERE clause. These are typically called from reports or screens that have multiple, optional search criteria. This article describes how to write these types of stored procedures so they execute well and resist SQL injection attacks.
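
    The article's examples are in T-SQL; as a companion, here is a minimal C# sketch of the same optional-criteria pattern (the table and column names are made up for illustration). The WHERE clause grows dynamically, but every value travels as a parameter, which is what keeps the statement resistant to injection:

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class OptionalSearch
    {
        // Builds the WHERE clause from whichever criteria were supplied,
        // binding each value as a parameter rather than concatenating it.
        static DataTable FindCustomers(string connectionString, string name, string city)
        {
            string sql = "SELECT CustomerId, Name, City FROM dbo.Customers WHERE 1 = 1";
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand())
            {
                if (!string.IsNullOrEmpty(name))
                {
                    sql += " AND Name LIKE @name";
                    cmd.Parameters.AddWithValue("@name", name + "%");
                }
                if (!string.IsNullOrEmpty(city))
                {
                    sql += " AND City = @city";
                    cmd.Parameters.AddWithValue("@city", city);
                }
                cmd.Connection = conn;
                cmd.CommandText = sql;
                var table = new DataTable();
                conn.Open();
                table.Load(cmd.ExecuteReader());
                return table;
            }
        }
    }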

    Read the article

  • GlassFish Community Event @ JavaOne - Save the date!

    - by alexismp
    The interest in having a GlassFish community event at JavaOne is still very strong, both inside Oracle and in the community, so this year again we'll be hosting a get-together on the Sunday prior to the main event. If you're in town and attending JavaOne, mark your calendars: Sunday, October 2nd, 2011, 12:30pm-4:30pm in the Moscone. This will be an opportunity to discuss the community status (adoption of Java EE 6, GlassFish 3.1.x) and hear about future plans, mainly around Java EE 7 and the related GlassFish release(s). We'd also like to have several participants share their deployment stories, as well as some time for a free-form unconference format and some team-building activity. Of course, beyond all the content shared in slides, this should really also be a good excuse to meet folks from the community and from the core GlassFish team at Oracle. Here's a post on last year's event. And before anybody asks, we are still exploring the party situation :-)

    Read the article

  • Facing quality issues

    - by juststartedmycareer
    A workforce management software product has a complex GUI (for example, values on a page depend on the status (closed or open) of other pages). Only the latest and recent development has test coverage. During our last release, we received lots of bugs from the customer in spite of a 2-week testing sprint. We don't have a dedicated test team; the developers do the unit tests and user acceptance tests. An automated regression test is triggered every day. I am afraid the developers are not testing the entire workflow because it is time consuming, and we are not able to automate it because of its complexity. Any suggestions? The legacy code (15 years of development) has less code coverage. How can I improve quality? Note: It is not currently possible to hire testers to form an independent test team!

    Read the article

  • NetAdvantage - jQuery, ASP .NET MVC and HTML5 UI Components released for Web Developers

    Built for speed and portability across operating systems, iPad/tablets, desktops and multi-browser support. Includes controls for ASP .NET MVC and uses the latest technologies like HTML 5 & CSS 3. This preview includes a sampling of powerful UI controls: grid, date picker, rating, editors, even a video player! All work with the popular WebKit engine that underpins many modern desktop browsers without requiring plug-ins or extensions. The grid embraces the latest Web techniques and frameworks like jQuery Client Templates and DOM virtualization. Download these essentials for jQuery and ASP .NET MVC from us today.

    Read the article

  • SQL Injection: How it Works and How to Thwart it

    This is an extract from the book Tribal SQL. In this article, Kevin Feasel explains SQL injection attacks, how to defend against them, and how to keep your Chief Information Security Officer from appearing on the nightly news.
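
    As a quick illustration of the attack the article covers (a generic sketch, not code from the book; the Users table is made up), compare a concatenated query with its parameterized equivalent:

    using System.Data.SqlClient;

    class InjectionDemo
    {
        static SqlCommand Unsafe(SqlConnection conn, string userInput)
        {
            // Vulnerable: input like  ' OR 1=1 --  rewrites the statement itself.
            return new SqlCommand(
                "SELECT * FROM Users WHERE LoginName = '" + userInput + "'", conn);
        }

        static SqlCommand Safe(SqlConnection conn, string userInput)
        {
            // Safe: the value is bound as a parameter and never parsed as SQL.
            var cmd = new SqlCommand(
                "SELECT * FROM Users WHERE LoginName = @login", conn);
            cmd.Parameters.AddWithValue("@login", userInput);
            return cmd;
        }
    }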

    Read the article

  • DevWeek 2010 is Coming Up

    The time has come again for the UK’s biggest conference for .NET developers and SQL Server professionals. The 13th annual DevWeek conference takes place on 15-19 March 2010 in London. Expert speakers will cover a large range of topics, including .NET 4.0, Silverlight 3, WCF 4, Visual Studio 2010, Thread Synchronization, ASP.NET 4.0, SQL Server 2008 R2, Unit Testing, CLR & C# 4.0, Windows Azure, and T-SQL Tips & Tricks. Find out more.

    Read the article

  • C#.NET: How to update multiple .NET pages when a particular event occurs in one .NET page? In other words, how to use the Observer pattern (publish and subscribe to events)

    Problem: Suppose you have a scenario in which you have to update multiple pages when an event occurs on a main page. For example, imagine you have a main page where you are displaying a tab control. This tab control has 3 tab pages where you load 3 different user controls. On click of an update button on the main page, imagine you have to do something in all 3 tab panels. In other words, an event in the main page has to be handled in many other pages: an event in the main page which contains the tab control has to be handled in all the tab panels (user controls).

    Answer: Use the Observer pattern. Define a base page for the page that contains the tab control.

    Main page which contains the tab: Baseline_Baseline
    Base page for the above main page: BaselineBasePage
    User control that has to be updated for an event in the main page: Baseline_PriorNonDeloitte

    Source code:

    public class BaselineBasePage : System.Web.UI.Page
    {
        // The generic type arguments were stripped by the original post's HTML;
        // IObserver is the natural fit and is assumed here.
        IList<IObserver> lstControls = new List<IObserver>();

        public void Add(IObserver userControl)
        {
            lstControls.Add(userControl);
        }

        public void Remove(IObserver userControl)
        {
            lstControls.Remove(userControl);
        }

        public void RemoveAllUserControls()
        {
            lstControls.Clear();
        }

        public void Update(SaveEventArgs e)
        {
            // Notify every registered observer (the original called Save(e),
            // but the IObserver interface below declares Update).
            foreach (IObserver LobjControl in lstControls)
            {
                LobjControl.Update(e);
            }
        }
    }

    public interface IObserver
    {
        void Update(SaveEventArgs e);
    }

    public partial class Baseline_Baseline : BaselineBasePage
    {
        . . .
        this.Add(_ucPI);
        this.Add(_ucPI1);

        protected void abActionBar_saveClicked(object sender, EventArgs e)
        {
            SaveEventArgs se = new SaveEventArgs();
            se.TabType = (BaselineTabType)tcBaseline.ActiveTabIndex;
            this.Update(se);
        }
    }

    public class Baseline_PriorNonDeloitte : System.Web.UI.UserControl, IObserver
    {
        public void Update(SaveEventArgs e)
        {
        }
    }

    More info at: http://www.dofactory.com/Patterns/PatternObserver.aspx

    Read the article

  • What would be a good way to request comments?

    - by WarpEnterprises
    In the project/team I'm working on, the frequency of comments is a little low. One reason might be that it is not clear to the long-time devs which lines in the code really need a comment (each part of the project has quite fixed devs). To increase this, we plan to let team members review the code and check in "requests for comments", which the main dev of that part should replace with useful comments. Do you think this could work? If "yes": what tags should we use to mark them? (e.g. //TODO please comment) Can you think of alternatives to this process? Edit: I appreciate your answers about best practices in commenting and writing code, and I completely agree. But my question targets the cases where refactoring is not an option (not wanting to change working code, not wanting to "accuse" the main dev of producing code that needs refactoring, ...), so only more or better comments are an option (at least for this question).

    Read the article

  • Reading Excel using OpenXML

    (Note: MobjImportMapper, LdtExcelData and ImportHelper are class members/helpers not shown in this excerpt.)

    public DataTable ReadDataFromExcel()
    {
        string filePath = @"c:/temp/temp.xlsx";
        using (SpreadsheetDocument LobjDocument = SpreadsheetDocument.Open(filePath, false))
        {
            WorkbookPart LobjWorkbookPart = LobjDocument.WorkbookPart;
            Sheet LobjSheetToImport = LobjWorkbookPart.Workbook.Descendants<Sheet>().First<Sheet>();
            WorksheetPart LobjWorksheetPart = (WorksheetPart)(LobjWorkbookPart.GetPartById(LobjSheetToImport.Id));
            SheetData LobjSheetData = LobjWorksheetPart.Worksheet.Elements<SheetData>().First();

            // Read only the data rows and skip all the header rows.
            int LiRowIterator = 1;

            // for progress bar
            int LiTotal = LobjSheetData.Elements<Row>().Count() - MobjImportMapper.HeaderRowIndex;
            // =================

            foreach (Row LobjRowItem in LobjSheetData.Elements<Row>().Skip(6))
            {
                DataRow LdrDataRow = LdtExcelData.NewRow();
                int LiColumnIndex = 0;
                int LiHasData = 0;
                LdrDataRow[LiColumnIndex] = LobjRowItem.RowIndex; // LiRowIterator;
                LiColumnIndex++;

                // TODO: handle restriction of column range.
                foreach (Cell LobjCellItem in LobjRowItem.Elements<Cell>().Where(PobjCell
                    => ImportHelper.GetColumnIndexFromExcelColumnName(ImportHelper.GetColumnName(PobjCell.CellReference))
                    <= MobjImportMapper.LastColumnIndex))
                {
                    // Gets the column index of the cell with data
                    int LiCellColumnIndex = 10;
                    if (LiColumnIndex < LiCellColumnIndex)
                    {
                        // Fill the gap left by empty cells, which are omitted from the XML.
                        do
                        {
                            LdrDataRow[LiColumnIndex] = string.Empty;
                            LiColumnIndex++;
                        }
                        while (LiColumnIndex < LiCellColumnIndex);
                    }

                    string LstrCellValue = LobjCellItem.InnerText;
                    if (LobjCellItem.DataType != null)
                    {
                        switch (LobjCellItem.DataType.Value)
                        {
                            case CellValues.SharedString:
                                // Shared strings hold an index into the workbook's string table.
                                var LobjStringTable = LobjWorkbookPart.GetPartsOfType<SharedStringTablePart>().FirstOrDefault();
                                DocumentFormat.OpenXml.OpenXmlElement LXMLElment = null;
                                string LstrXMLString = String.Empty;
                                if (LobjStringTable != null)
                                {
                                    LstrXMLString = LobjStringTable.SharedStringTable.ElementAt(int.Parse(LstrCellValue, CultureInfo.InvariantCulture)).InnerXml;
                                    if (LstrXMLString.IndexOf("<x:rPh", StringComparison.CurrentCulture) != -1)
                                    {
                                        // Phonetic runs present: take only the first child (the base text).
                                        LXMLElment = LobjStringTable.SharedStringTable.ElementAt(int.Parse(LstrCellValue, CultureInfo.InvariantCulture)).FirstChild;
                                        LstrCellValue = LXMLElment.InnerText;
                                    }
                                    else
                                    {
                                        LstrCellValue = LobjStringTable.SharedStringTable.ElementAt(int.Parse(LstrCellValue, CultureInfo.InvariantCulture)).InnerText;
                                    }
                                }
                                break;
                            default:
                                break;
                        }
                    }

                    LdrDataRow[LiColumnIndex] = LstrCellValue.Trim();
                    if (!string.IsNullOrEmpty(LstrCellValue))
                        LiHasData++;
                    LiColumnIndex++;
                }

                if (LiHasData > 0)
                {
                    LiRowIterator++;
                    LdtExcelData.Rows.Add(LdrDataRow);
                }
            }
        }
        return LdtExcelData;
    }

    Read the article

  • Accessing controls of an .aspx file in .aspx.cs without any declaration?!

    I am able to access the controls of the ".aspx" file in ".aspx.cs" directly, without any declaration in ".aspx.cs" or in the designer.cs. How is this possible? This happens only if I open the website using the File System option. Create a new ASP.NET web site application with Visual Studio 2008, so the following three files are created automatically: "Default.aspx", "Default.aspx.cs" and "Default.designer.cs". Now delete "Default.designer.cs" permanently. Just create a button in the Default.aspx file:

    <asp:Button runat="server" Text="Save Plan" ID="btnSave" />

    Close the solution and open the website from the file system: File -> Open Web Site -> File System -> select the web site folder and open the project. Now btnSave is automatically recognized in Default.aspx.cs without any declaration in Default.aspx.cs, as below:

    System.Web.UI.WebControls.Button btnSave;

    How is btnSave recognized by the .cs file without defining it anywhere as an object of System.Web.UI.WebControls.Button? Note: This happens only if you open the web site from the file system, and with no declaration at all for btnSave. Please refer to this article for more on this.

    Read the article

  • Migrating IBM ClearCase to TFS

    - by Bob Hardister
    Using the Team Foundation Server Integration Tools Platform.

    Versions:
    ClearCase: 7.1.1.2
    Team Foundation Server: 2012 RTM
    Integration Tools: 2.2.20314.1
    OS: Windows 2008 R2 ENT SP1

    I was able to do a simple example migration of a few files by using the following approach:

    - Using a dynamic view
    - Creating a view shortcut drive (i.e. Z:\)
    - Running the tools as a UI client (not as a Windows service)
    - Running the tools UI in user mode (do not "Run as Administrator")
    - Using the CC detailed history adapter
    - Selecting the view shortcut drive (i.e. Z) on the Tools UI Connect to CC dialog
    - Selecting the "Detect Changes in CC" option on the Tools UI Connect to CC dialog
    - Changing the DisableTargetAnalysis value to True on the Tools UI configuration view

    I have yet to perform actual migrations for real projects, but will update this blog as I do.

    Read the article
