Search Results


  • How do you handle the tension between refactoring and the need for merging?

    - by Xavier Nodet
    Hi, our policy when delivering a new version is to create a branch in our VCS and hand it over to our QA team. When the latter gives the green light, we tag and release our product. The branch is kept to receive (only) bug fixes so that we can create technical releases. Those bug fixes are subsequently merged onto the trunk. During this time, the trunk sees the main development work, and is potentially subject to refactoring changes. The issue is that there is a tension between the need to have a stable trunk (so that the merge of bug fixes succeeds -- it usually can't if the code has been e.g. extracted to another method, or moved to another class) and the need to refactor it when introducing new features. The policy in our place is to not do any refactoring before enough time has passed and the branch is stable enough. When this is the case, one can start doing refactoring changes on the trunk, and bug fixes have to be manually committed on both the trunk and the branch. But this means that developers must wait quite some time before committing any refactoring change to the trunk, because this could break the subsequent merge from the branch to the trunk. And having to manually port bug fixes from the branch to the trunk is painful. It seems to me that this hampers development... How do you handle this tension? Thanks.
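    One way to ease the pain is to cherry-pick each bug fix onto the trunk as soon as it lands on the branch, so fixes do not pile up behind refactoring changes. A minimal sketch, assuming a plain Subversion layout (the URLs and the revision number below are hypothetical):

        svn checkout https://example.com/repo/trunk trunk
        cd trunk
        # merge only revision 1234 (the bug fix) from the release branch
        svn merge -c 1234 https://example.com/repo/branches/release-1.0 .
        # resolve any conflicts caused by refactoring, then record the port
        svn commit -m "Port bug fix r1234 from the release-1.0 branch"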

    Read the article

  • Subversion: svn status displays tons of undesired .metadata files

    - by FarmBoy
    I'm trying to set up Subversion on Ubuntu Linux. It seems to be working, except that when I made one change and tried svn status, I found about 100 files had been changed, in the .metadata directory. My ~/.subversion/config file currently contains the following line:

        global-ignores = *.o *.lo *.la *.al .libs *.so *.so.[0-9]* *.a *.pyc *.pyo *.rej *~ .*.swp .DS_Store

    What do I need to add to ignore the .metadata files? The directory under consideration is used by Eclipse for Python development using PyDev, if that matters.
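    A likely fix, assuming the line above really is the active global-ignores entry, is simply to append a .metadata pattern to it. Note that global-ignores only hides unversioned items; anything already under version control will keep showing up in svn status.

        global-ignores = *.o *.lo *.la *.al .libs *.so *.so.[0-9]* *.a *.pyc *.pyo *.rej *~ .*.swp .DS_Store .metadata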

    Read the article

  • Can my tortoise vs. hare race be improved?

    - by FredOverflow
    Here is my code for detecting cycles in a linked list:

        do {
            hare = hare.next();
            if (hare == back) return;
            hare = hare.next();
            if (hare == back) return;
            tortoise = tortoise.next();
        } while (tortoise != hare);
        throw new AssertionError("cyclic linkage");

    Is there a way to get rid of the code duplication inside the loop? Am I right in assuming that I don't need a check after making the tortoise take a step forward? As I see it, the tortoise can never reach the end of the list before the hare (contrary to the fable). Any other ways to simplify/beautify this code?
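    One possible shape for the loop, as a sketch only: pull the hare's step-plus-end-check into a small helper so it appears once. It assumes the same surroundings as the snippet above, i.e. a (hypothetical) Node type with next() and the sentinel field back.

        // Sketch: returns null when the hare reaches the end of the list (no cycle).
        private Node advanceOrNull(Node hare) {
            hare = hare.next();
            return hare == back ? null : hare;
        }

        void assertAcyclic(Node tortoise, Node hare) {
            do {
                if ((hare = advanceOrNull(hare)) == null) return;  // hare's first step
                if ((hare = advanceOrNull(hare)) == null) return;  // hare's second step
                tortoise = tortoise.next();
            } while (tortoise != hare);
            throw new AssertionError("cyclic linkage");
        }

    As for the second question: the hare is never behind the tortoise, so in an acyclic list the hare always reaches the end first, and no extra check after the tortoise's step is needed.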

    Read the article

  • Versioned cloud-based social code snippet management

    - by Chapso
    It seems a lot to ask, but I'm looking for a cloud-based solution to managing code snippets. I am looking for:

    - Tags
    - User accounts (I want to be able to see all of my snippets on a single page)
    - syntax highlighting
    - versioning - myself or others should be able to edit my snippets to improve them
    - straightforward UI with minimal advertising, if any

    Does anyone know of a solution which meets these requirements? If not, would anyone be interested in something like this? As a software engineer, after step zero (does it already exist), I'm perfectly willing to go on to step 1 (would other people use it? If so, make it).

    Read the article

  • user access management in j2ee web application

    - by kawtousse
    Hi everyone, I am working on a JSP/servlet project and I have to complete the access-management module for my JSPs, since I have more than one user, each with a different profile. I defined a table in my database which maps each profile to the URLs it is permitted to access, like this: id_profil: 1, url: http://localhost/...xyz.jsp, id_page: 1. Now I am trying to have the menu adapt to the id_profil of the logged-in user, so that pages allowed for one profile are hidden from the others. I have no idea so far how to achieve this, and it is very important for me. Thanks for your help.
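    One common approach is a servlet filter that checks the logged-in user's profile against the permitted-URL table before serving a page, so forbidden pages stay hidden even if someone types the URL directly; the menu can then be built from the same table. A minimal sketch, with hypothetical names throughout (ProfileAccessFilter, the id_profil session attribute, the isAllowed lookup):

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.*;

        // Sketch: deny access to any JSP that the user's profile is not mapped to.
        public class ProfileAccessFilter implements Filter {
            @Override
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                Integer profileId = (Integer) request.getSession().getAttribute("id_profil");
                if (profileId != null && isAllowed(profileId, request.getRequestURI())) {
                    chain.doFilter(req, res);
                } else {
                    ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                }
            }

            // Query the profil/url table here (e.g. via JDBC) and cache the result.
            private boolean isAllowed(int profileId, String uri) {
                return true; // placeholder
            }

            @Override public void init(FilterConfig config) {}
            @Override public void destroy() {}
        }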

    Read the article

  • Continuous integration with multiple branch development

    - by ryanprayogo
    In the project that I'm working on, we are using SVN with a 'Stable Trunk' strategy. What that means is that for each bug that is found, QA opens a bug ticket and assigns it to a developer. The developer then fixes that bug and checks it in on a branch (off trunk; let's call this the bug branch), and that branch will only contain fixes for that particular bug ticket. When we decide to do a release, for each bug fix that we want to ship to the customer, a developer merges the fixes from the relevant bug branches to trunk and we proceed with the normal QA cycle. The problem is that we use trunk as the codebase for our CI job (Hudson, specifically), and therefore all commits to the bug branches miss the daily build until they get merged to trunk when we decide to release the new version of the software. Obviously, that defeats the purpose of having CI. What is the proper way to fix this issue?

    Read the article

  • How to handle injecting dependencies into rich domain models?

    - by Arne
    In a web server project with a rich domain model (application logic is in the model, not in the services), how do you handle injecting the dependencies into the model objects? What are your experiences? Do you use some form of AOP, like Spring's @Configurable annotation? Load-time or build-time weaving? What problems did you encounter? Do you use manual injection? If so, how do you handle the different instantiation scenarios (creation of objects by a library [like Hibernate], creation of objects with "new", ...)? Or do you use some other way of injecting the dependencies?
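    For reference, a minimal sketch of the @Configurable route mentioned above, assuming Spring's <context:spring-configured/> is enabled and AspectJ load-time (or compile-time) weaving is set up; Order and PricingService are hypothetical names:

        import java.math.BigDecimal;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.beans.factory.annotation.Configurable;

        // Sketch: a rich domain object that gets its dependency injected even when it is
        // created with "new" or materialized by Hibernate, thanks to the weaved aspect.
        @Configurable
        public class Order {

            @Autowired
            private transient PricingService pricingService; // hypothetical domain service

            public BigDecimal total() {
                return pricingService.priceFor(this);
            }
        }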

    Read the article

  • Multiple plugin instance loading with MEF

    - by Dave
    In my last application, using MEF to load plugins went just fine, but now I'm running into a new issue. I have a solution for it that I explain at the end of this question, but I'm looking for other ways to do it. Let's say I have an interface called ApplianceInterface. I also have two plugins that inherit from ApplianceInterface, let's call them Blender and Processor. Now, I would like to have multiple Blenders and Processors in my application, but I am not sure how to instantiate them properly. Before, I would simply use the ImportMany attribute and upon calling ComposeParts, my application would load Blender and Processor. For example:

        [ImportMany(typeof(ApplianceInterface))]
        private IEnumerable<ApplianceInterface> Appliances { get; set; }

    and my Blender and Processor plugins would be attributed like this:

        [PartCreationPolicy(CreationPolicy.NonShared)]
        [Export(typeof(ApplianceInterface))]
        public class Blender : ApplianceInterface { ... }

    but what this ends up doing for me is populating Appliances with one Blender and one Processor. I need to be able to create an arbitrary number of Blender and Processor objects. Now, from the documentation I understand that [PartCreationPolicy(CreationPolicy.NonShared)] is what allows MEF to create a new instance each time, but is there a similar "magical" way to create a specific number of instances of something using MEF? Up until now, I've relied on [Import] and [ImportMany] to resolve the assemblies. Is my only option to use a global container, and then resolve the export manually using GetExportedValue<T>()? I have tried GetExportedValue<T>() and that implementation does work fine for me, but I was just curious if there is a better, more accepted way to do it.
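    One possible sketch, assuming a MEF build that ships ExportFactory<T> (it is part of System.ComponentModel.Composition in .NET 4.5; earlier desktop versions need the MEF 2 preview bits): import factories rather than instances, and call CreateExport() once per appliance you need, which respects the NonShared creation policy. ApplianceConsumer is a hypothetical name; ApplianceInterface is the contract from the question.

        using System.Collections.Generic;
        using System.ComponentModel.Composition;
        using System.Linq;

        public class ApplianceConsumer
        {
            // Sketch: one factory per discovered part (Blender, Processor, ...).
            [ImportMany]
            private IEnumerable<ExportFactory<ApplianceInterface>> ApplianceFactories { get; set; }

            public ApplianceInterface CreateAnother()
            {
                var factory = ApplianceFactories.First();   // in practice, pick by metadata
                ExportLifetimeContext<ApplianceInterface> context = factory.CreateExport();
                return context.Value;                       // a fresh NonShared instance each call
            }
        }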

    Read the article

  • ASP.NET MVC & Windsor.Castle: working with HttpContext-dependent services

    - by Igor Brejc
    I have several dependency injection services which are dependent on things like the HTTP context. Right now I'm configuring them as singletons in the Windsor container in the Application_Start handler, which is obviously a problem for such services. What is the best way to handle this? I'm considering making them transient and then releasing them after each HTTP request. But what is the best way/place to inject the HTTP context into them? The controller factory, or somewhere else?
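    A sketch of one way to wire this, assuming Castle Windsor 3.x's fluent API and that the PerWebRequest lifestyle module is registered in web.config; IMyContextDependentService and its implementation are hypothetical stand-ins for the context-dependent services:

        using System.Web;
        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        // Sketch (in Application_Start): context-dependent services live for one web
        // request instead of being application-wide singletons, and the current
        // HttpContext is supplied through a factory method at resolution time.
        var container = new WindsorContainer();
        container.Register(
            Component.For<HttpContextBase>()
                     .UsingFactoryMethod(() => new HttpContextWrapper(HttpContext.Current))
                     .LifestyleTransient(),
            Component.For<IMyContextDependentService>()          // hypothetical service
                     .ImplementedBy<MyContextDependentService>()
                     .LifestylePerWebRequest());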

    Read the article

  • StructureMap Configuration Per Thread/Request for the Full Dependency Chain

    - by Phil Sandler
    I've been using StructureMap for a few months now, and it has worked great on some smaller (green field) projects. Most of the configurations I have had to set up involve a single implementation per interface. In the cases where I needed to choose a specific implementation at runtime, I created a factory class that used ObjectFactory.GetNamedInstance<T>(). In the smaller projects, there were few enough of these cases that I was comfortable with the references to ObjectFactory. My understanding is that you want to limit these references as much as possible, and ideally reference the ObjectFactory only once. I am working to refactor a larger codebase to use IoC/StructureMap, and am finding that I may need many of these factory classes with ObjectFactory references to get what I need. Essentially, I am creating a "root service" with the ObjectFactory, so that everything in the dependency chain is managed by the container. The root service is created by name (e.g. "BuildCar", "BuildTruck"), and the services needed deeper in the dependency chain could also be constructed using the same name--so the "IAttachWheels" service could vary based on whether a car or a truck is being built. Since the class that depends on IAttachWheels is the same in both configurations, I don't think I can use ConstructedBy in the registry to choose the implementation. Also, to be clear, the IAttachWheels implementations need to be managed by the container as well, because the dependency chain runs fairly deep. I looked briefly at Profiles as an option, but read (here on StackOverflow) that changing profiles essentially changes implementations for all threads. Is there a feature similar to profiles that is thread/request specific? Is the factory class that references ObjectFactory the right way to go? Any thoughts would be appreciated.
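    For what it's worth, a compact sketch of the factory idea described above, using the ObjectFactory.GetNamedInstance<T>() call from the question; IRootService and the factory names are hypothetical stand-ins for the "BuildCar"/"BuildTruck" services. Keeping the single ObjectFactory reference behind one factory interface at least confines it to one place:

        using StructureMap;

        public interface IRootService { void Build(); }

        public interface IRootServiceFactory
        {
            IRootService Create(string configurationName);   // e.g. "BuildCar" or "BuildTruck"
        }

        public class StructureMapRootServiceFactory : IRootServiceFactory
        {
            public IRootService Create(string configurationName)
            {
                // The named instance pulls in the whole dependency chain (IAttachWheels
                // and friends) that was registered under that name.
                return ObjectFactory.GetNamedInstance<IRootService>(configurationName);
            }
        }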

    Read the article

  • What are the advantages of a rebase over a merge in git?

    - by eSKay
    In this article, the author explains rebasing with this diagram: Rebase: If you have not yet published your branch, or have clearly communicated that others should not base their work on it, you have an alternative. You can rebase your branch, where instead of merging, your commit is replaced by another commit with a different parent, and your branch is moved there. while a normal merge would have looked like this: So, if you rebase, you are just losing a history state (which would be garbage collected sometime in the future). So, why would someone want to do a rebase at all? What am I missing here?
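    For reference, a minimal sketch of the two commands behind the workflows described above (branch names are hypothetical):

        git checkout feature
        git merge master     # merge: keeps both lines of history plus an extra merge commit
        # ...or, before the branch has been published anywhere...
        git rebase master    # rebase: replays the feature commits on top of master, giving a
                             # linear history; the old commits are eventually garbage collected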

    Read the article

  • How to manage IoC containers in tests?

    - by frosty
    I'm very new to testing and IoC containers and have two projects: MySite.Website (MVC) MySite.WebsiteTest Currently I have an IoC container in my website. Should I recreate another IoC container for my test? Or is there a way to use the IoC in both?

    Read the article

  • T4 Template Interception

    - by JeffN825
    I'm wondering if anyone out there knows of any T4 template based method interception systems? We are beginning to write mobile applications (currently with MonoTouch for iOS). We have a very nice core set of DI/IoC functionality and I'd like to leverage this in development for the new platform. Since runtime code generation (Reflection.Emit) is not supported, I'm hoping to use T4 templates to implement the dynamic interception functionality (+ TinyIoC as a container for resolution). We are currently using Castle Windsor (and intend to continue doing so for our SL and full .NET development), but all of the Windsor-specific ties are completely encapsulated, so given a suitable T4 solution, it shouldn't be hard to implement an adapter that uses a T4-based implementation instead of Windsor.

    Read the article

  • How can I get git to work with a remote server?

    - by Adrienne
    I am the CM person for a small company that just started using Git. We have two Git repositories currently hosted on a Windows box that is our all-purpose Windows server. But we just set up a dedicated server for our CM software, an Ubuntu Linux server named "Callisto". So I created a test Git repository on Callisto and gave its directory all of the proper permissions recursively. I had the sysadmin create a login for me on Callisto, and I created a key to use for logging in via SSH. I set up my key to use a passphrase; I don't know if that could be contributing to my problems. Anyway, I know my SSH login works because I tested it through PuTTY. But even after hours of trials and head scratching, I can't get my Windows Git bash (msysGit) to talk to Callisto for the purposes of pushing or pulling Callisto's Git repository files. I keep getting "Fatal error. The remote end hung up unexpectedly." And I've even gotten the error that Git doesn't recognize the test repository on Callisto as a Git repository. I read online that the "Fatal error...hung up unexpectedly" message is usually a problem with the server connection or permissions. So what am I missing or overlooking here? And why doesn't a pull using the git:// protocol work, since that only needs read-only access? Group and public permissions for the Git repository's directory on Callisto are set to read and execute, but not write. If anyone could help, I would be so grateful. Thank you.
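    A sketch of a setup that usually works for this kind of SSH-only arrangement (the paths and user name below are hypothetical). Note also that the git:// protocol needs a separate git-daemon process listening on the server, so a read-only git:// pull can fail even with correct file permissions if no daemon is running.

        # on Callisto: create a bare repository that the team pushes to
        git init --bare /srv/git/test.git

        # on the Windows client (Git Bash)
        git remote add callisto ssh://adrienne@callisto/srv/git/test.git
        git push callisto master
        git ls-remote callisto     # quick check that the SSH login and repository path work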

    Read the article

  • Is there a database with git-like qualities?

    - by Mat
    I'm looking for a database where multiple users can contribute and commit new data; other users can then pull that data into their own database repository, all in a git-like manner. A transcriptional database, if you like; does such a thing exist? My current thinking is to dump the database to a single file as SQL, but that could well get unwieldy once it is of any size. Another option is to dump the database and use the filesystem, but again it gets unwieldy once of any size.

    Read the article

  • What git gotchas have you been caught by?

    - by Bob Aman
    The worst one I've been caught by was with git submodules. I had a submodule for a project on github. The project was unmaintained, and I wanted to submit patches, but couldn't, so I forked. Now the submodule was pointing at the original library, and I needed it to point at the fork instead. So I deleted the old submodule and replaced it with a submodule for the new project in the same commit. Turns out that this broke everyone else's repositories. I'm still not sure what the correct way of handling this situation is, but I ended up deleting the submodule, having everyone pull and update, and then I created the new submodule, and had everyone pull and update again. It took the better portion of a day to figure that out. What have other people done to accidentally screw up git repositories in non-obvious ways, and how did you resolve it?
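    For the record, a sketch of one way to repoint an existing submodule at a fork without deleting and re-adding it (the submodule name and URL are hypothetical):

        git config --file .gitmodules submodule.libfoo.url git://github.com/you/libfoo.git
        git submodule sync libfoo
        git commit -am "Point the libfoo submodule at the fork"

        # everyone else then runs
        git pull
        git submodule sync
        git submodule update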

    Read the article

  • Keeping track of dependency revisions

    - by Samaursa
    I have a project with several dependencies that live in various repositories. Each time I commit changes to my project, I make sure I write down the revision numbers of all the dependent repositories, so that in the event I ever have to come back to this revision (let's call it 5), I immediately know which revisions of the dependent repositories revision 5 is guaranteed to work with, can update the dependencies to the specified revisions, and can compile and run the project. So for example I might have:

        Dep1 @ Revision 10
        Dep2 @ Revision 20
        Dep3 @ Revision 10
        Proj @ Revision 35

    And let's say that when Proj was on revision 17, the Dep1 revision was 5, the Dep2 revision was 13 and the Dep3 revision was 3. So in my SVN logs, I recorded something like this: !! Works with Dep1 Rev 5, Dep2 Rev 13, Dep3 Rev 3. To me this seems primitive and makes me believe that there is a better way to do it. Now in one of my other questions, the Ivy dependency manager has been recommended. I have not looked at it in detail yet (it seems complicated, and yet another thing I must learn). To me it seems like the log of SVN (and Mercurial etc.) could have been split into Log and Dependencies (if any), where the latter could be switched off if there were no dependencies (unless of course I am unaware of an easier/better solution). This would allow for a cleaner log that maybe even warned at each new commit to check the previously defined dependencies again and make sure they have not changed. So, I was wondering how everyone manages these situations, and if you have any tips, techniques, programs, or suggestions that you can offer. Thank you.
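    One alternative worth noting, as a sketch only and assuming the dependencies live in Subversion 1.5+ repositories (the URLs below are hypothetical): pin them with svn:externals at fixed revisions, so checking out any revision of Proj automatically records and fetches the matching Dep revisions instead of relying on the log message.

        # contents of externals.txt -- one pinned dependency per line:
        #   -r 10 https://example.com/repo/Dep1/trunk Dep1
        #   -r 20 https://example.com/repo/Dep2/trunk Dep2
        #   -r 10 https://example.com/repo/Dep3/trunk Dep3

        svn propset svn:externals -F externals.txt .
        svn commit -m "Pin dependency revisions for this release"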

    Read the article
