Search Results

Search found 22884 results on 916 pages for 'team build'.


  • Are programmers tied to their code?

    - by Jason
    Assuming you are working for a company or in a team of developers, is the code bound to the person who wrote it? When someone develops a particular functionality or area of the application, is that person the only one who can, is allowed to, or is even able to make changes to it? I personally think that a well-written piece of code or program should be easy for any programmer to modify, but what do you see in your environment? At my work, I'd say that the majority of the code can be modified by anyone (that's what coding standards are for, right?). There are some things, though, that are the 'property' of certain coworkers, like the module that handles payroll or some important functionality of the production module (we are developing an ERP system). What about your workplace? Am I the only one experiencing this?

    Read the article

  • TFS: Work Items values from External Databases

    - by javarg
    A common question in TFS forums is how to populate list items in Work Items from external sources. There is no specific feature for integrating Work Items with external databases or systems when designing them. Instead, you will need to associate your Work Item fields with Global Lists and then have some automated process update those global lists regularly. Download this ImportGlobalList.zip file. I’ve put together a simple class (TfsGlobalList) that you can use to update global list items from a .NET application. You could, for example, create a simple console app and schedule it using Windows Scheduler. This app would query a database and then update a TFS Global List using the provided code. Note: the provided code must be run under an account with permission to modify Global Lists in TFS. Note: remember to refresh Team Explorer in order to see the updated Work Item field values. Enjoy!
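    For reference, here is a minimal sketch of the work item tracking API such a console app could call (it is not the TfsGlobalList class from the download). It assumes references to Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.WorkItemTracking.Client; the collection URL, list name and values are placeholders that would normally come from your own database query.

        // Minimal sketch: rebuild one TFS global list from values pulled from an
        // external source. The URL, list name and values are placeholders.
        using System;
        using System.Xml;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        class GlobalListUpdater
        {
            static void Main()
            {
                // These values would normally come from a database query.
                string[] values = { "Customer A", "Customer B", "Customer C" };

                var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
                var store = collection.GetService<WorkItemStore>();

                // Build an import document in the global lists schema.
                var doc = new XmlDocument();
                XmlElement root = doc.CreateElement("gl", "GLOBALLISTS",
                    "http://schemas.microsoft.com/VisualStudio/2005/workitemtracking/globallists");
                doc.AppendChild(root);

                XmlElement list = doc.CreateElement("GLOBALLIST");
                list.SetAttribute("name", "Customers");
                root.AppendChild(list);

                foreach (string value in values)
                {
                    XmlElement item = doc.CreateElement("LISTITEM");
                    item.SetAttribute("value", value);
                    list.AppendChild(item);
                }

                // Uploads the list; requires permission to modify global lists.
                store.ImportGlobalLists(doc.DocumentElement);
            }
        }

    Scheduling something like this as a console app under Windows Scheduler, as suggested above, keeps the list in sync without manual edits.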

    Read the article

  • Is listing developers' full names in the splash screen or about box still a widespread and desirable practice?

    - by Pierre 303
    Just like in the closing credits of movies, some software vendors list the full names of the team that worked on the piece of software you are using. They are usually displayed in the splash screen (Photoshop) ... or in the about box (Traktor). In the demoscene, it is a mandatory practice, as it is in the movie industry. How do you handle this in your own software? Is there any reason not to do it? Is there any reason that encourages companies to do it?

    Read the article

  • Are developers expected to have skills of business analysts?

    - by T. Webster
    Our business analyst has left the team. We are now expected to do the work that was previously done by the business analyst, and management thinks that a task a business analyst does in three months can be done by a developer in one month. My experience is in programming only, and I'm not familiar with business intelligence tools. To me this seems like an unfair comparison or expectation, and it might even trivialize the role of a business analyst. Has anyone else encountered this situation? How do you deal with it?

    Read the article

  • Is hierarchical product backlog a good idea in TFS 2012-2013?

    - by Matías Fidemraizer
    I'd like to validate that I'm not going down the wrong path. My team project is using Visual Studio Scrum 2.x. Since each area/product has many kinds of requirements (security, user interface, HTTP/REST services...), I tried to manage this by creating "parent backlogs" which are "open forever" and contain generic requirements. Those parent backlogs contain other "open forever" backlogs and/or sprint backlogs. For example:
        HTTP/REST Services (forever)
            Profiles API (forever)
                POST profile (forever)
                    We need a basic HTTP/REST profiles API to register new user profiles (sprint backlog)
    Is this the right way of organizing the product backlog? Note: I know there are different points of view and that this would be right for some and wrong for others. I'm looking for validation on whether this is a reasonable practice in TFS with Visual Studio Scrum.

    Read the article

  • Help me come up with my new job title

    - by Seva Alekseyev
    Hi all, I used to be a technical lead in a group of 3-5 programmers. Tech lead's responsibilities here would include thinking of/designing overall solution architecture, coding, refactoring, being the first to dive into the next big thing, reviewing others' code, sitting on customer meetings and answering endless questions from the rest of the team. Now I'm moving on to a branch-level position (in a branch of ~60 people), which entails pretty much the same, sans maybe the coding/refactoring part. Still kinda a tech lead, but the title "tech lead" is already being used and means something else - a group-level tech lead. Please help me come up with a good job title. I need something for my e-mail signature and, eventually, resume.

    Read the article

  • What can I do to encourage teams to lighten up? [closed]

    - by Rahul
    I work with a geographically distributed team (different timezones) with people from various cultures and background. Some of us have never met each other in person but we communicate with each other over phone, chat and email almost on an hourly basis. Most of our meetings and discussions are dead serious and boring. What's worse, any attempt at humor is not very well received because of cultural differences. I feel that we are all taking our work a bit too seriously. We don't shy away from painful arguments, nasty emails and heated discussions when things go wrong but never attempt to develop camaraderie or friendships in better times. I would like to know your experiences with such situations and what, if anything, did you do to lighten things up at workplace.

    Read the article

  • What did he or she do to enchant you? What was the feeling like? [closed]

    - by Pierre 303
    Guy Kawasaki, the famous author of The Art of the Start, is writing a new book about enchantment (it will be called Art of Enchantment). He is asking for stories about your employees, your things, or even websites you visit. What about your colleagues? I need examples of a programmer (you) getting enchanted by a colleague: another programmer, an analyst, a tester (it must be within your team). UPDATE: The book is out! http://www.amazon.com/Enchantment-Changing-Hearts-Minds-Actions/dp/1591843790/ref=sr_1_2?ie=UTF8&qid=1298106612&sr=8-2

    Read the article

  • Is there a batch processing framework that could be used for C++ applications?

    - by Benjamin
    In my team we develop several command-line C++ applications that work together; they're currently run by hand in separate windows, and after a while managing windows gets really confusing. I'm looking for a better way to manage the processing of these applications; ideally it would have a GUI with the ability to do the following: start applications in a specified order display application status close particular applications Is there anything available that does this type of thing or would we need to develop our own? Is there a better way than this?
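    None of the following is a framework recommendation; it is only a minimal sketch of what a thin home-grown launcher could look like, using System.Diagnostics.Process from .NET (assuming the C++ tools are ordinary console executables). The paths and arguments are hypothetical placeholders.

        // Sketch of a launcher: start the tools in order, report status, stop one.
        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class Launcher
        {
            static void Main()
            {
                // Start the applications in a specified order.
                var commands = new List<ProcessStartInfo>
                {
                    new ProcessStartInfo(@"C:\tools\feed_reader.exe", "--config feed.cfg"),
                    new ProcessStartInfo(@"C:\tools\processor.exe"),
                    new ProcessStartInfo(@"C:\tools\publisher.exe")
                };

                var running = new List<Process>();
                foreach (ProcessStartInfo info in commands)
                {
                    info.UseShellExecute = false;          // keep output in this console
                    running.Add(Process.Start(info));
                }

                // Display application status.
                foreach (Process p in running)
                {
                    p.Refresh();
                    Console.WriteLine("{0} (pid {1}): {2}",
                        p.StartInfo.FileName, p.Id, p.HasExited ? "exited" : "running");
                }

                // Close a particular application.
                Process first = running[0];
                if (!first.HasExited)
                {
                    first.Kill();              // or CloseMainWindow() for a graceful exit
                    first.WaitForExit();
                }
            }
        }

    A GUI (for example a small WPF window) could sit on top of the same Process objects to show status and offer start/stop buttons.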

    Read the article

  • Has anyone used Rational Team Concert (RTC)?

    - by FryGuy
    The company I work for is currently evaluating replacements for SourceSafe, and for various reasons I think RTC will be chosen. I'm a little scared that we're going to end up with a solution that isn't the best for our situation. I've tried researching what it is, but all I have been able to find is marketing material and nothing about how it actually works (the paradigms it uses, etc.). Our team is around 8 developers and 2 QA people on a single project (plus 4-5 more people who would be using it for their independent project). It seems like RTC is targeted at larger teams, but our team is relatively small. Does anyone have experience using RTC on a smaller team? The project that would be using it is a .NET/WPF application, so we would be working primarily in Visual Studio. Is the Visual Studio integration any good, or are we stuck having to keep Eclipse open on top of Visual Studio? Personally, I have been using Bazaar as my personal source control (and checking out of/into SourceSafe from a branch), as well as on personal projects. Does RTC incorporate features of "third generation" version control systems, such as first-class branching/merging, changesets rather than file changes, and good visualization of where changes come from? Also, what are the general pros and cons?

    Read the article

  • Should static analysis warnings fail the CI build?

    - by Cara
    Our team is investigating various options for static analysis in our project, and we have mixed opinions about whether we want our Continuous Integration build to fail because of warnings from static analysis. The argument against failing the build is that there are often exceptions to the rules, and attempting to work around them just to make the build succeed reduces productivity. A better approach would be to generate reports with the build and regularly dedicate developer time to addressing the reported issues. The counter-argument is that it is easy for technical debt to build up if the bugs are not addressed immediately; also, if the build fails as soon as a potential bug is introduced, the amount of time required to fix it is reduced. What are your thoughts?

    Read the article

  • Why does headless PDE Build omit directories I've specified in build.properties's bin.includes?

    - by Woody Zenfell III
    One of my Eclipse plug-ins (OSGi bundles) is supposed to contain a directory (Database Elements) of .sql files. My build.properties shows: bin.includes = META-INF/,\ .,\ Database Elements/ (...which looks right to me.) When I build and run from within my interactive Eclipse IDE, everything works fine: calls to Bundle.getEntry(String) and Bundle.findEntries(String, String, bool) return valid URL objects; my tests are happy; my code is happy. When I build via headless ant script (using PDE Build), those same calls end up returning null. My tests break; my code breaks. I find that Database Elements is quietly but simply missing from my plug-in's JAR package. (META-INF and the built classes still make it in there fine.) I scoured the build log (even eventually invoking ant -verbose on the relevant portion of the build script) but saw no mention of anything helpful. What gives?

    Read the article

  • Is it possible to set a parameterized build or pass an environment variable via a Hudson build trigger?

    - by Tim
    I'd like to use Hudson to trigger a "promotion" of a build. The build would be a simple script that just copies a release candidate installer file from one location to another (development dir to release/stable dir). Our build server is not on the public internet, but I want to be able to send an email, a text message, or an IM with the build number to Hudson, which would parse the build number and then do the copy/move. Looking at the Jabber/IM plugin, it does not look like this is possible (the parameter part). Has anyone solved this in some way? Should I use some other mechanism? I would prefer not to have to do the manual steps each time (SCP/FTP, etc.) - I just want any QA team member to be able to trigger the build server to do the promotion.
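    One possible direction, sketched here under stated assumptions rather than as a confirmed answer: if the promotion job is marked "This build is parameterized" with a string parameter and a trigger token, any QA member on the internal network could kick it off over HTTP through the job's buildWithParameters URL instead of email or IM. The server name, job name, token, parameter name and credentials below are placeholders.

        // Hypothetical remote trigger for a parameterized Hudson job.
        using System;
        using System.Net;

        class PromoteBuild
        {
            static void Main(string[] args)
            {
                string buildNumber = args.Length > 0 ? args[0] : "1.0.0.42";

                // Remote trigger URL for a parameterized job.
                string url = string.Format(
                    "http://hudson-server:8080/job/promote-installer/buildWithParameters" +
                    "?token=PROMOTE_TOKEN&BUILD_NUMBER={0}",
                    Uri.EscapeDataString(buildNumber));

                using (var client = new WebClient())
                {
                    // Only needed if anonymous users cannot trigger builds.
                    client.Credentials = new NetworkCredential("qa-user", "password");
                    client.DownloadString(url);
                }

                Console.WriteLine("Promotion of build {0} requested.", buildNumber);
            }
        }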

    Read the article

  • Company wants to write a custom project management tool rather than use a third party product.

    - by Jason Evans
    At the company I work, we are really wanting to get into the agile methodology for developing software. One thing that I'm not excited about is the fact that management wants us to build a custom project management feature inside the company's Intranet. I think this is a total waste of time. There are many great third party tools available (e.g. Axosoft OnTime) that can do everything we need, and more. For how much development time it would cost us to build our own project management module, we could buy numerous licences for a third party product. One concern is that, whilst we are writing code for a client, and using our custom Intranet project management module, we find bugs in the module that need fixing ASAP. That means having to stop work on the client code to fix the Intranet. That just puts shivers down my spine. Another worry I have is lack of functionality. This custom module is going to be so basic, that it will just feel really crap to use. That might sound a bit snooty, but for goodness sake, many third party tools are so feature rich, that the idea of having to write our own tool makes feel very uneasy. In fact, I can't be bothered. What do you guys think? I'm going to raise this issue with my boss, since I feel it's such an important topic to talk about. EDIT: Thanks for the great responses, much appreciated. To summarize some of them: Money Naturally my boss does want to save money, by not forking out a few hundred £'s for licences. However, for us to write a custom tool, it will take x number of days, multiplied by approx £500, which is our costs. I don't see the business value in this. Management have mentioned that they want to sell the Intranet as a product in the future, but it's so custom to our needs (and downright basic), that in order to give it to another client, I can see us having to fork a version of the code and rebuild the majority of it anyway. So it's not like we're gaining anything there in reuse. Features Having our own custom module means not feature bloat - only the functionality we require will be in the product. My issue is that there are plenty of free, open-source project management tools out there with minimal features already. So even if cost is an issue, we could look into open-source. Again it all boils down to the fact that I don't see the point in writing a project management tool in this day and age. It's a bit like writing your own web browser - why?, what's the point? Although management are asking for this tool, just because they are, it does not mean I'm going to please them and do it just because they asked for it. If something does not make sense, then I will raise it as a concern. At the end of the day, it's the developers who write the code, it's the developers who make money for a business. Thus, as far I'm concerned, the devs have a very big role in deciding how a company should manage projects and what tools are used. "I am Spartan, argh!" :) Hmm, I've not been able to make this question a wiki for some reason, thus I'm going to have to pick an answer to accept. Cheers. Jas.

    Read the article

  • How to get MSBuild Exec to run a Java program?

    - by Vaccano
    I am trying to run a command line action in my Team Build (MSBuild). When I run it on the command line of the build machine it works fine. But when run in the build script I get a "exited with code 3". This is command that I am running: C:\Program Files\Wavelink\Avalanche\PackageBuilder.\jresdk\bin\java -classpath "WLUtil.jar;WLPackageBuilder.jar" com.wavelink.buildpkg.AvalanchePackageBuilder /build PackageName This command only works when run from the above directory (I have tried running it from c:\ with the full path at it fails). When I try to run it using ms build this is my statement: <PropertyGroup> <!--Working directory of the Package Builder Call--> <PkgBldWorkingDir>&quot;C:\Program Files\Wavelink\Avalanche\PackageBuilder&quot;</PkgBldWorkingDir> <!--Command line to run to make Package builder "go"--> <PkgBldRun>.\jresdk\bin\java&quot; -classpath &quot;WLUtil.jar;WLPackageBuilder.jar&quot; com.wavelink.buildpkg.AvalanchePackageBuilder</PkgBldRun> </PropertyGroup> <!--Run package builder command line to update the Ava File.--> <Exec ContinueOnError="true" WorkingDirectory="$(PackageBuilderWorkingDir)" Command="$(PkgBldRun) /build PackageName"/> As I said above this "exits with code 3". This is the full output: Task "Exec" Command: .\jresdk\bin\java -classpath "WLUtil.jar;WLPackageBuilder.jar" com.wavelink.buildpkg.AvalanchePackageBuilder /build PackageName The system cannot find the path specified. MSBUILD : warning MSB3073: The command ".\jresdk\bin\java -classpath "WLUtil.jar;WLPackageBuilder.jar" com.wavelink.buildpkg.AvalanchePackageBuilder /build PackageName" exited with code 3. The previous error was converted to a warning because the task was called with ContinueOnError=true. Build continuing because "ContinueOnError" on the task "Exec" is set to "true". Done executing task "Exec" -- FAILED. It says it can't find the file (who knows what file). I have tried it with and without the quotes (") in the working directory and with a full path as the command (gives the same error as when run on the command line). Any ideas on how to make this run a command line action in MS Build?

    Read the article

  • How to compile jsoup through Ant?

    - by JackWM
    I tried to use Ant to compile the jsoup source. I can compile successfully, but cannot pass the test. Here is the process: jsoup version: 1.6.3 ; Ant version: 1.8.2 the source of jsoup is in the directory src/ I made a build file src/build.xml This file contains <project name="jsoup"> <target name="compile"> <mkdir dir="build/classes"/> <javac srcdir="src" destdir="build/classes" includeantruntime="false"/> </target> <target name="jar"> <mkdir dir="build/jar"/> <jar destfile="build/jar/jsoup.jar" basedir="build/classes"> <manifest> <attribute name="Main-Class" value="StateTrace"/> </manifest> </jar> </target> <target name="run"> <!--<java jar="build/jar/jsoup.jar" input="htmls/index.html" fork="true"/>--> <exec executable="java"> <arg value="-jar"/> <arg value="build/jar/jsoup.jar"/> <arg value="htmls/index.html"/> </exec> </target> </project> Note: 1. StateTrace.java is my own test program; 2. htmls/index.html is the input to StateTrace.java. Then I compile and run it with Ant: > ant compile > ant jar > ant run After this, I got err like: run: [exec] Exception in thread "main" java.lang.ExceptionInInitializerError [exec] at org.jsoup.nodes.Entities$EscapeMode.<clinit>(Unknown Source) [exec] at org.jsoup.nodes.Document$OutputSettings.<init>(Unknown Source) [exec] at org.jsoup.nodes.Document.<init>(Unknown Source) [exec] at org.jsoup.parser.TreeBuilder.initialiseParse(Unknown Source) [exec] at org.jsoup.parser.TreeBuilder.parse(Unknown Source) [exec] at org.jsoup.parser.HtmlTreeBuilder.parse(Unknown Source) [exec] at org.jsoup.parser.Parser.parse(Unknown Source) [exec] at org.jsoup.Jsoup.parse(Unknown Source) [exec] at StateTrace.main(Unknown Source) [exec] Caused by: java.lang.NullPointerException [exec] at java.util.Properties$LineReader.readLine(Properties.java:418) [exec] at java.util.Properties.load0(Properties.java:337) [exec] at java.util.Properties.load(Properties.java:325) [exec] at org.jsoup.nodes.Entities.loadEntities(Unknown Source) [exec] at org.jsoup.nodes.Entities.<clinit>(Unknown Source) [exec] ... 9 more [exec] Result: 1 BUILD SUCCESSFUL Total time: 0 seconds However, if I manually compiled all the java source, like javac src/org/jsoup/*.java src/org/jsoup/parser/*.java src/org/jsoup/examples/*.java src/org/jsoup/nodes/*.java src/org/jsoup/safety/*.java src/org/jsoup/select/*.java src/org/jsoup/helper/*.java I could compile successfully and pass my test. Any clue? Thanks!

    Read the article

  • KnownType Not sufficient for Inclusion

    - by Kate at LittleCollie
    Why isn't the use of the KnownType attribute in C# sufficient for inclusion of a DLL? Working in Visual Studio 2012 with TFS responsible for builds, I am on a project in which a service required use of this attribute, as in the following: using Project.That.Contains.RequiredClassName; [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, Namespace="SomeNamespace")] [KnownType(typeof(RequiredClassName))] public class Service : IService { } But to get the required DLL included in the bin output, and therefore in the installer from our production build, I had to add the following to the constructor for Service: public Service() { // Exists only to force inclusion var ignore = new RequiredClassName(); } So, given that the project that contains RequiredClassName is itself referenced by the project that contains Service, why isn't the use of the KnownType attribute sufficient for inclusion of the DLL in the output?

    Read the article

  • C Problem with Compiler?

    - by Solomon081
    I just started learning C, and wrote my hello world program: #include <stdio.h> main() { printf("Hello World"); return 0; } When I run the code, I get a really long error: Apple Mach-O Linker (id) Error Ld /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug/CProj normal x86_64 cd /Users/Solomon/Desktop/C/CProj setenv MACOSX_DEPLOYMENT_TARGET 10.7 /Developer/usr/bin/clang -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.7.sdk -L/Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug -F/Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug -filelist /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Intermediates/CProj.build/Debug/CProj.build/Objects-normal/x86_64/CProj.LinkFileList -mmacosx-version-min=10.7 -o /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Products/Debug/CProj ld: duplicate symbol _main in /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Intermediates/CProj.build/Debug/CProj.build/Objects-normal/x86_64/helloworld.o and /Users/Solomon/Library/Developer/Xcode/DerivedData/CProj-cwosspupvengheeaapmkrhxbxjvk/Build/Intermediates/CProj.build/Debug/CProj.build/Objects-normal/x86_64/main.o for architecture x86_64 Command /Developer/usr/bin/clang failed with exit code 1 I am running xCode Should I reinstall DevTools?

    Read the article

  • Synchronizing MySQL Table Schema [on hold]

    - by user1122069
    I have some difficulty keeping track of my SQL changes in a text file in SVN. One solution that I am aware of is to put the SQL queries in files (1.sql, 2.sql...) and to manually load each file at the proper time. Besides missing commas, the process is too cumbersome when builds become more frequent. I have actually taken to asking a co-worker to send me his SQL changes on Skype and we just apply the changes immediately on our local, development, and production servers (using three PhpMyAdmin tabs). I have seen several GUI tools mentioned in similar questions on SO, but these are actually more work and less automated than the aforementioned methods. Is there any standardized process by which this is done in an automated way? I can only guess that large companies build their own mechanisms of keeping track of database schema changes (yet I can't find a word about it - maybe they use files?). This question was closed as off-topic on Stack Overflow, so I am re-posting it here.
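    Purely to illustrate the numbered-scripts idea described above (not a recommendation of any particular tool), here is a minimal sketch of a runner that applies pending .sql files in order and records what has already run. It assumes MySQL Connector/NET (MySql.Data); the connection string, migrations directory, version table and zero-padded file names (001.sql, 002.sql, ...) are all assumptions.

        // Apply numbered .sql scripts once each, in name order, per server.
        using System;
        using System.IO;
        using MySql.Data.MySqlClient;

        class SchemaMigrator
        {
            static void Main()
            {
                const string connectionString =
                    "server=localhost;database=app;uid=deploy;pwd=secret";

                using (var conn = new MySqlConnection(connectionString))
                {
                    conn.Open();

                    // Track applied scripts so each one runs exactly once.
                    new MySqlCommand(
                        "CREATE TABLE IF NOT EXISTS schema_version " +
                        "(script VARCHAR(255) PRIMARY KEY, applied_at DATETIME)",
                        conn).ExecuteNonQuery();

                    string[] files = Directory.GetFiles("migrations", "*.sql");
                    Array.Sort(files, StringComparer.OrdinalIgnoreCase);   // assumes zero-padded names

                    foreach (string file in files)
                    {
                        string name = Path.GetFileName(file);

                        var check = new MySqlCommand(
                            "SELECT COUNT(*) FROM schema_version WHERE script = @s", conn);
                        check.Parameters.AddWithValue("@s", name);
                        if (Convert.ToInt64(check.ExecuteScalar()) > 0)
                            continue;   // already applied on this server

                        // Run the script (assumes the connector allows
                        // multi-statement scripts), then record it.
                        new MySqlCommand(File.ReadAllText(file), conn).ExecuteNonQuery();

                        var record = new MySqlCommand(
                            "INSERT INTO schema_version (script, applied_at) VALUES (@s, NOW())",
                            conn);
                        record.Parameters.AddWithValue("@s", name);
                        record.ExecuteNonQuery();

                        Console.WriteLine("applied " + name);
                    }
                }
            }
        }

    Running the same tool against the local, development and production servers would replace the manual phpMyAdmin step.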

    Read the article

  • How does VS 2005 provide history across all TFS Team Projects when tf.exe cannot?

    - by AakashM
    In Visual Studio 2005, in the TFS Source Control Explorer, there is a top-level node for the TFS Server itself, with a child node for each Team Project. Right-clicking either the server node or the node for a Team Project gives a context menu on which there is a View History item. Selecting this gives you a History window showing the last 200 or so changesets, either for the specific Team Project chosen, or across all Team Projects. It is this history across all Team Projects that I am wondering about. The command-line tf.exe history command provides (as I understand it) basically the same functionality as the VS TFS Source Control plug-in. But I cannot work out how to get tf.exe history to provide this across-all-Team-Projects history. At a command line, supposing I have C:\ mapped as the root of my workspace, and Foo, Bar, and Baz as Team Projects, I can do
        C:\> tf history Foo /recursive /stopafter:200
    to get the last 200 changesets that affected Team Project Foo; or, from within a Team Project folder,
        C:\Bar> tf history *.* /recursive /stopafter:200
    which does the same thing for Team Project Bar - note that the wildcard *.* is allowed here. However, none of the following work (each gives the error message shown):
        C:\> tf history /recursive /stopafter:200
        The history command takes exactly one item
        C:\> tf history *.* /recursive /stopafter:200
        Unable to determine the source control server
        C:\> tf history *.* /server:servername /recursive /stopafter:200
        Unable to determine the workspace
    I don't see an option in the tf docs for specifying a workspace; it seems to only want to determine it from the current folder. So what is VS 2005 doing? Is it internally running a history on each Team Project in turn and then stitching the results together? Note also that I have tried with the Power Tools; tfpt history from the command line gives exactly the same error messages seen here.
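    As a guess at what the IDE is doing (an assumption, not something confirmed by documentation): the TFS client object model can query history against the server root path $/ with full recursion, which spans every Team Project and needs no workspace at all. A sketch using that API follows; the server URL is a placeholder.

        // Server-wide "last 200 changesets", roughly what View History on the
        // server node appears to show.
        using System;
        using System.Collections;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ServerWideHistory
        {
            static void Main()
            {
                TeamFoundationServer tfs =
                    TeamFoundationServerFactory.GetServer("http://servername:8080");
                var vcs = (VersionControlServer)tfs.GetService(typeof(VersionControlServer));

                // "$/" is the server root, so no workspace mapping is required.
                IEnumerable history = vcs.QueryHistory(
                    "$/", VersionSpec.Latest, 0, RecursionType.Full,
                    null,          // any user
                    null, null,    // no version range
                    200,           // last 200 changesets
                    false,         // don't fetch per-file changes
                    false);

                foreach (Changeset cs in history)
                {
                    Console.WriteLine("{0}\t{1}\t{2}",
                        cs.ChangesetId, cs.Committer, cs.Comment);
                }
            }
        }

    If that is indeed what the IDE does, the command-line equivalent may simply be to pass the server path explicitly, e.g. tf history $/ /recursive /stopafter:200 /server:servername, rather than relying on the current folder.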

    Read the article

  • Keyboard locking up in Visual Studio 2010

    - by Jim Wang
    One of the initiatives I’m involved with on the ASP.NET and Visual Studio teams is the Tactical Test Team (TTT), which is a group of testers who dedicate a portion of their time to roaming around and testing different parts of the product.  What this generally translates to is a day and a bit a week helping out with areas of the product that have been flagged as risky, or tackling problems that span both ASP.NET and Visual Studio.  There is also a separate component of this effort outside of TTT which is to help with customer scenarios and design. I enjoy being on TTT because it allows me the opportunity to look at the entire product and gain expertise in a wide range of areas.  This week, I’m looking at Visual Studio 2010 performance problems, and this gem with the keyboard in Visual Studio locking up ended up catching my attention. First of all, here’s a link to one of the many Connect bugs describing the problem: Microsoft Connect I like this problem because it really highlights the challenges of reproducing customer bugs.  There aren’t any clear steps provided here, and I don’t know a lot about your environment: not just the basics like our OS version, but also what third party plug-ins or antivirus software you might be running that might contribute to the problem.  In this case, my gut tells me that there is more than one bug here, just by the sheer volume of reports.  Here’s another thread where users talk about it: Microsoft Connect The volume and different configurations are staggering.  From a customer perspective, this is a very clear cut case of basic functionality not working in the product, but from our perspective, it’s hard to find something reproducible: even customers don’t quite agree on what causes the problem (installing ReSharper seems to cause a problem…or does it?). So this then, is the start of a QA investigation. If anybody has isolated repro steps (just comment on this post) that they can provide this will immensely help us nail down the issue(s), but I’ll be doing a multi-part series on my progress and methodologies as I look into the problem.

    Read the article

  • Technical development decision for my newly established software company

    - by test test
    I have a new software company where I am planning to develop a CRM system. I have settled on the following technological approach:
        - I will use an open source Java-based CRM engine.
        - I will use a third party reporting tool, JasperReports, to provide reporting capabilities for the CRM.
        - I will develop the interface, and any customization the customer might ask for, using the ASP.NET MVC framework, since my knowledge and experience are based on ASP.NET.
        - I will use the CRM API to integrate my ASP.NET web application with the Java-based CRM.
    I have developed a simple demo which integrates these three main components (CRM engine, ASP.NET application and the reporting tool), and they worked well. But I am afraid of the following risk if I go with the above approach: I would need to hire developers with different skills and experience - developers with Java skills to modify the Java-based CRM and write plug-ins, when needed, to extend the CRM's capabilities, and other developers with ASP.NET skills to build the application itself: application forms, the portal from which users will start the CRM processes, searching capabilities, etc. Does this raise real risks once I start hiring a new team and building the CRM application, or am I on the right track at this early stage?

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in Asp.Net James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chalk full of sessions, presentations, workshops, conversations and, of course, questions.  One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, if it was different from SQL server, if it was the same as EF proper and if there was a way to override whatever the default was.  So we built a set of examples and figured out that the answer is simple: it depends.  (Download Samples) Consider the example of a hockey league. You have several different entities in the league including games, teams that play the games and players that make up the teams. Each team also has a mascot.  If you delete a team, we need a couple of things to happen: The team, games and mascot will be deleted, and The players for that team will remain in the league (and therefore the database) but they should no longer be assigned to a team. So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project. The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach First up let’s take a look at the DB first approach.  In the database, we defined 4 tables: Teams, Players, Mascots, and Games.  We also defined 4 foreign keys as follows: Players.Team_Id (NULL) –> Teams.Id Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE) Games.HomeTeam_Id (NOT NULL) –> Teams.Id Games.AwayTeam_Id (NOT NULL) –> Teams.Id Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team’s mascot when the team is deleted.  While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server.  Specifying a ON DELETE CASCADE on these foreign keys would cause a circular reference error: The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE – MSDN When we create an entity data model from the above database, we get the following:   In order to get the Games to be deleted when the Team is deleted, we need to specify End1 OnDelete action of Cascade for the HomeGames and AwayGames associations.   Now, we have an Entity Data Model that accomplishes what we set out to do.  One caveat here is that Entity Framework will only properly handle the cascading delete when the the players and games for the team have been loaded into memory.  For a more detailed look at Cascade Delete in EF Database First, take a look at this blog post by Alex James.   Building The Same Sample with EF Code First Next, we're going to build up the model with the code first approach.  
EF Code First is defined on the Ado.Net team blog as such: Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database. Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship.  More details can be found on MSDN: If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship. If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null. The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First). Our DbContext consists of 4 DbSets: public DbSet<Team> Teams { get; set; } public DbSet<Player> Players { get; set; } public DbSet<Mascot> Mascots { get; set; } public DbSet<Game> Games { get; set; } When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted.  This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API. Data Annotations: public class Mascot { public int Id { get; set; } public string Name { get; set; } [Required] public virtual Team Team { get; set; } } Fluent API: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Mascot>().HasRequired(m => m.Team); } The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null.  No additional configuration is required, however all the Player entities must be loaded into memory for the cascading to work properly. The Game –> Team relationship causes some grief in our Code First example.  If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set On Cascade Delete for the HomeTeam and AwayTeam foreign keys when creating the database tables.  As we saw in the database first example, this causes a circular reference error and throws the following SqlException: Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint. To solve this problem, we need to disable the default cascade delete behaviour using the fluent API: protected override void OnModelCreating(DbModelBuilder modelBuilder) { modelBuilder.Entity<Mascot>().HasRequired(m => m.Team); modelBuilder.Entity<Team>() .HasMany(t => t.HomeGames) .WithRequired(g => g.HomeTeam) .WillCascadeOnDelete(false); modelBuilder.Entity<Team>() .HasMany(t => t.AwayGames) .WithRequired(g => g.AwayTeam) .WillCascadeOnDelete(false); base.OnModelCreating(modelBuilder); } Unfortunately, this means we need to manually manage the cascade delete behaviour.  When a Team is deleted, we need to manually delete all the home and away Games for that Team. 
foreach (Game awayGame in jets.AwayGames.ToArray()) { entities.Games.Remove(awayGame); } foreach (Game homeGame in homeGames) { entities.Games.Remove(homeGame); } entities.Teams.Remove(jets); entities.SaveChanges();   Overriding the Defaults – When and How To As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API.  This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity.  More information is available on MSDN.   Going Further These were simple examples but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that: Entity Framework Code First also works seamlessly with SQL Azure (MSDN) Database creation defaults can be overridden using a variety of IDatabaseInitializers  (Understanding Database Initializers) You can use Code Based migrations to manage database upgrades as your model continues to evolve (MSDN) Next Steps There's no time like the present to start the learning, so here's what you need to do: Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012) Build yourself a project to try these concepts out (or download the sample project) Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!). Good luck! About the Authors David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta where he builds commercial software products for the energy industry.  Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community giving presentations both locally and at conferences. David also serves as the President of Calgary .Net User Group. James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba where he resists the urge to cheer for the Jets and maintains he allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.
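    As a small, hypothetical addendum to the "separate model override files for each entity" option mentioned under Overriding the Defaults: the same cascade configuration can live in a per-entity EntityTypeConfiguration class. The class and context names below are illustrative, not from the original sample.

        using System.Collections.Generic;
        using System.Data.Entity;
        using System.Data.Entity.ModelConfiguration;

        // Minimal stand-ins for the post's entities, just enough for the sketch.
        public class Team
        {
            public int Id { get; set; }
            public virtual ICollection<Game> HomeGames { get; set; }
            public virtual ICollection<Game> AwayGames { get; set; }
        }

        public class Game
        {
            public int Id { get; set; }
            public virtual Team HomeTeam { get; set; }
            public virtual Team AwayTeam { get; set; }
        }

        // Same effect as the inline fluent calls shown above, but kept in a
        // per-entity configuration class.
        public class GameConfiguration : EntityTypeConfiguration<Game>
        {
            public GameConfiguration()
            {
                HasRequired(g => g.HomeTeam)
                    .WithMany(t => t.HomeGames)
                    .WillCascadeOnDelete(false);

                HasRequired(g => g.AwayTeam)
                    .WithMany(t => t.AwayGames)
                    .WillCascadeOnDelete(false);
            }
        }

        public class LeagueContext : DbContext
        {
            public DbSet<Team> Teams { get; set; }
            public DbSet<Game> Games { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                modelBuilder.Configurations.Add(new GameConfiguration());
            }
        }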

    Read the article

  • Rule of thumb for cost vs. savings for code re-use

    - by Styler
    Is it a good rule of thumb to always write code with the intent of re-using it somewhere down the road? Or, depending on the size of the component you are writing, is it better practice to design it for re-use only when it makes sense with regard to the time spent on it? What is a good rule of thumb for spending extra time on analysis and design for project components that have "some probability" of being needed later for other things that may or may not need them? For example, say project X needs to do things A and B. A definitely needs to be written for re-use because it just makes sense to do so. B is very project specific at the moment, and I can hack it all together in a couple of days to finish the project on time and give everyone kudos for being a great team, etc. Or do we say, let's spend a whole friggin' two weeks figuring out what projects Y/Z might need this thing for and spend a load of extra time on part B because someday we might need to use it on project Y/Z (where the savings will be realized)? I'd imagine a perfect-world situation would be a nicely crafted combination of project-specific vs. re-use-architected components for a given project. However, some code shops might feel it's a great idea to write everything with the intention of using it at some point down the road.

    Read the article

  • Gallio MbUnit and TeamCity problem

    - by Bernard Larouche
    I asked a question this morning about an integration problem between Gallio and Team City. I changed the msbuild file to use the proper syntax with the latest Gallio build script API. Thank you for that Jeff Brown but now when I tried to build the application on Team City I get the following error : An unexpected error occurred during execution of the Gallio task.[16:19:49]: [Project "CoderForTraders.msbuild.teamcity.patch.tcprojx" (RebuildSolution;RunTests target(s)):] C:\TeamCity\buildAgent\work\fa1d38b0af329d65\CoderForTraders.msbuild(9, 9): FilterParseException: Colon expected Here's line 9 : <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)"> and here is the whole file : <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <!-- This is needed by MSBuild to locate the Gallio task --> <UsingTask AssemblyFile="C:\Gallio\bin\Gallio.MSBuildTasks.dll" TaskName="Gallio" /> <!-- Specify the test files and assemblies --> <ItemGroup> <TestFile Include="C:\_CBL\CBL\CoderForTraders\Source\trunk\UnitTest\DomainModel.Tests\bin\Debug\CBL.CoderForTraders.DomainModel.Tests.dll" /> </ItemGroup> <Target Name="RunTests"> <Gallio IgnoreFailures="true" Filter="Type=SomeFixture" Files="@(TestFile)"> <!-- This tells MSBuild to store the output value of the task's ExitCode property into the project's ExitCode property --> <Output TaskParameter="ExitCode" PropertyName="ExitCode"/> </Gallio> <Error Text="Tests execution failed" Condition="'$(ExitCode)' != 0" /> </Target> <Target Name="RebuildSolution"> <Message Text="Starting to Build"/> <MSBuild Projects="CoderForTraders.sln" Properties="Configuration=Debug" Targets="Rebuild" /> </Target> </Project> Do you have an idea about the possible problem ?

    Read the article
