Search Results

Search found 2823 results on 113 pages for 'perforce branch spec'.


  • Procedural level generation for a platformer game (tile-based) using player physics

    - by Notbad
    I have been searching for information about how to build a 2D world generator (tile-based) for a platformer game I am developing. The levels should look like dungeons with a ceiling and a floor, and they will have a high probability of being made of horizontal rooms, but sometimes they can have exits to a room above or below. Here is an example of what I would like to achieve; I'm referring only to the caves part. I know level design won't be that great when generated, but I think it is possible to have something good enough for people to enjoy the procedural maps (note: Super Metroid spoiler!): http://www.snesmaps.com/maps/SuperMetroid/SuperMetroidMapNorfair.html

    After spending some time thinking about this, I have some ideas for creating the maps that I would like to share with you: 1) I have read about cellular automata and I would like to use them to carve the rooms, but instead of carving one tile at a time I would like to carve full columns of tiles. Of course, this carving system will have some restrictions, like how many tiles must be left for the floor and the ceiling, etc. This way I could get much cleaner rooms than with the usual automaton. 2) I want some branching in the rooms. It will have a low probability of happening, but I definitely want it. Thinking about carving, I came to the conclusion that I could use some sort of path-creation algorithm that the carving system would follow to create a path through the rooms. This could be more noticeable if we make the carving system carve columns with the height of a corridor or the height of a wide room (this would be a parameter of the system). This way, at some point, I could spawn a new automaton beside the main one to create branches. This new automaton would run side by side with the first one to create dead ends and islands (both paths created by the automata meet at some point or lead to the same room).

    It would be too long to explain all the tests I have done, so I will just summarize the problems, to see if anyone can shed some light on them (I don't mind sharing my successes, but I don't think they are too relevant): 1) Zone reachability: how can I make sure that the player will be able to reach all the zones I create (mainly when branches happen or vertical rooms are created)? When branches are created, I have to make sure there will be a way to get onto the newly created branch: a bifurcation the player can follow, either staying on the main path or jumping to a platform to reach the other way. On the other hand, if an island is created where the two branches meet, I need to make sure the player can get onto the island too. 2) When a branch is created and corridors are generated for each branch, how can I make them merge or repel to create an island, or just keep them as separate corridors? 3) When a branch creates an island, because both corridors merge at some point or lead to the same room, is there any way to detect this and randomize where to create the platforms needed to get onto the island? These platforms could be created at the start of the island or at the end. I guess part of the problem could be solved using some sort of graph following the created paths, but I'm a bit lost in this sea of procedural content creation :). I don't expect a full solution to the problem, just some information to get me moving forward again. Thanks in advance.
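
    For the zone-reachability question, a plain flood fill over the carved tiles gives a first answer: carve the map, flood-fill from the spawn tile, and either discard or patch any zone the fill never reaches. Island platform placement falls out of the same pass, since unreached tiles adjacent to reached ones are natural anchor points for platforms. A minimal sketch in Java, assuming a boolean grid where true means a carved/open tile (jump physics is ignored here; for a platformer you would widen the neighbor set to every tile a jump can reach):

        import java.util.ArrayDeque;
        import java.util.Deque;

        class Reachability {
            // Breadth-first flood fill: returns which open tiles are reachable
            // from (startX, startY) by stepping one tile at a time.
            static boolean[][] reachable(boolean[][] open, int startX, int startY) {
                int w = open.length, h = open[0].length;
                boolean[][] seen = new boolean[w][h];
                Deque<int[]> queue = new ArrayDeque<int[]>();
                queue.add(new int[] { startX, startY });
                seen[startX][startY] = true;
                int[][] moves = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
                while (!queue.isEmpty()) {
                    int[] cur = queue.poll();
                    for (int[] m : moves) {
                        int nx = cur[0] + m[0], ny = cur[1] + m[1];
                        if (nx >= 0 && nx < w && ny >= 0 && ny < h
                                && open[nx][ny] && !seen[nx][ny]) {
                            seen[nx][ny] = true;
                            queue.add(new int[] { nx, ny });
                        }
                    }
                }
                return seen; // any open tile with seen == false is unreachable
            }
        }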

    Read the article

  • SQL Server: Writing CASE expressions properly when NULLs are involved

    - by Mladen Prajdic
    We've all written a CASE expression (yes, it's an expression, not a statement) every now and then. But did you know there are actually two formats you can write a CASE expression in? This actually bit me when I was trying to add some new functionality to an old stored procedure: in some rare cases the stored procedure just didn't work correctly. After a quick look it turned out to be a CASE expression problem when dealing with NULLs. In the first format we make simple "equals to" comparisons to a value:

        SELECT CASE <value>
               WHEN <equals this value> THEN <return this>
               WHEN <equals this value> THEN <return this>
               -- ... more WHENs here
               ELSE <return this>
               END

    The second format is much more flexible, since it allows for complex conditions. USE THIS ONE!

        SELECT CASE
               WHEN <value> <compared to> <value> THEN <return this>
               WHEN <value> <compared to> <value> THEN <return this>
               -- ... more WHENs here
               ELSE <return this>
               END

    Now that we know both formats, and you know which one to use (the second one, if that hasn't been clear enough), here's an example of how the first format WILL make your evaluation logic WRONG. Run the following code for different values of @i. Just comment out any two of the three "SELECT @i =" statements.

        DECLARE @i INT
        SELECT @i = -1   -- first result
        SELECT @i = 55   -- second result
        SELECT @i = NULL -- third result

        SELECT @i AS OriginalValue,
               -- first CASE format. DON'T USE THIS!
               CASE @i
                    WHEN -1 THEN '-1'
                    WHEN NULL THEN 'We have a NULL!'
                    ELSE 'We landed in ELSE'
               END AS DontUseThisCaseFormatValue,
               -- second CASE format. USE THIS!
               CASE
                    WHEN @i = -1 THEN '-1'
                    WHEN @i IS NULL THEN 'We have a NULL!'
                    ELSE 'We landed in ELSE'
               END AS UseThisCaseFormatValue

    When the value of @i is -1, everything works as expected, since both formats go into the -1 WHEN branch. When the value of @i is 55, everything again works as expected, since both formats go into the ELSE branch. When the value of @i is NULL, the problems become evident. The first format doesn't go into the WHEN NULL branch, because it makes an equality comparison between two NULLs. Because NULL is an unknown value, NULL = NULL evaluates to UNKNOWN, not true. That is why the first format falls into the ELSE branch, while the second format correctly handles the explicit IS NULL comparison. Please use the second, more explicit format. Your future self will be very grateful when they don't have to hunt down these kinds of bugs.

    Read the article

  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    Recently I was tasked with adding support for versioned hardware packet specifications to one of our libraries. First, a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons and a two- or four-digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code and DD is the device ID. These packet specs obviously differ from one command to the next, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX}, which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision which adds one or more extended bytes onto the end of the packet, like this: {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed-width for this example. We have now added a new field onto the end which could be used to, say, specify the color or flash rate of the text/buttons. Currently this Java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the packet string ("{SOH}AADDTEXTE{ETX}") using the values from the class. In practice we create the args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN bus.

    Some of our other command specifications can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears the text from all of the modules behind a specific controller device. We may overload this packet with option fields that have multiple meanings, like {SOH}AAOC{ETX}, where OC is literal text telling the controller to only clear text on a specific module type and leave the others alone; or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a specific device. Currently, in the method which generates the packet string, we evaluate fields on the args class to determine which spec to use when formatting the packet. For this example, it would be along the lines of:

        if m_DeviceID != null then use {SOH}AADD{ETX}
        else if m_ClearOCs == true then use {SOH}AAOC{ETX}
        else use {SOH}AA{ETX}

    I had considered using XML or a database to store String.format format strings, linked to firmware version numbers in some table. We would load them at startup and pass in the version number of the firmware we are currently talking to (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic way we select which version of the command to use. I then considered using a rule engine to build expressions which could be interpreted at runtime, to evaluate the args class's state and select the appropriate format string, but my brief look at rule engines for Java scared me away with their complexity. While it seems like a viable solution, it also seems overly complex. So this is why I am here.
    I wouldn't say design is my strongest skill, and I'm having trouble figuring out the best way to approach this problem. I probably won't be able to radically change the args classes, but if the trade-off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on best practices / design methodologies / APIs or other resources which I could use to accomplish the following (see the sketch after this list): logic to determine which set of commands to use for a given firmware version; of those commands, which version of each command to use (based on the args class's state); keeping the rules logic decoupled from the application, so as to avoid needing releases for every firmware version; and being simple enough that I don't need weeks of study and trial and error to implement effectively.
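
    For what it's worth, a sketch of one data-driven shape for this: each command version carries a guard predicate evaluated against the args object plus a format routine, and the first matching entry wins. All names here are illustrative, not your real API; the registrations themselves could be loaded from a config file to keep the rules out of the application:

        import java.util.ArrayList;
        import java.util.List;

        interface PacketSpec<A> {
            boolean applies(int firmwareVersion, A args); // guard: version + runtime state
            String format(A args);                        // builds the packet string
        }

        class SpecRegistry<A> {
            private final List<PacketSpec<A>> specs = new ArrayList<PacketSpec<A>>();

            void register(PacketSpec<A> spec) { specs.add(spec); }

            String build(int firmwareVersion, A args) {
                for (PacketSpec<A> s : specs) {           // first match wins, so register
                    if (s.applies(firmwareVersion, args)) // the most specific specs first
                        return s.format(args);
                }
                throw new IllegalStateException("no spec for version " + firmwareVersion);
            }
        }

    The guards encode the same rules as the if/else chain (device ID present, clear-OCs flag, firmware version), but they live in one registry per command rather than inside each args class, which keeps the selection logic replaceable without touching the args classes.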

    Read the article

  • How to test Rails 3 Engines with Cucumber & RSpec?

    - by cowboycoded
    I apologize if this question is slightly subjective... I am trying to figure out the best way to test Rails 3 engines with Cucumber & RSpec. In order to test the engine, a Rails 3 app is necessary. Here is what I am currently doing:

    Add a Rails test app to the root of the gem (myengine) by running: rails new /myengine/rails_app
    Add Cucumber to /myengine/rails_app/features as you would in a normal Rails app
    Require the Rails engine gem (using :path => "/myengine") in /myengine/rails_app/Gemfile
    Add spec to the root directory of the gem: /myengine/spec
    Include the fixtures in /myengine/spec/fixtures and add the following to the Cucumber env.rb:

        Fixtures.reset_cache
        fixtures_folder = File.join(Rails.root, 'spec', 'fixtures')
        fixtures = Dir[File.join(fixtures_folder, '*.yml')].map { |f| File.basename(f, '.yml') }
        Fixtures.create_fixtures(fixtures_folder, fixtures)

    Do you see any problems with setting it up like this? The tests run fine, but I am a bit hesitant to put the features inside the test Rails app. I originally tried putting the features at the root of the gem, with the test Rails app inside features/support, but for some reason my engine would not initialize when I ran the tests, even though I could see the app loading everything else when Cucumber ran. If anyone is working with Rails engines and using Cucumber and RSpec for testing, I would be interested to hear your setup.

    Read the article

  • Solutions for working with multiple branches in ASP.Net

    - by Corey McKinnon
    At work, we are often working on multiple branches of our product at one time. For example, right now we have a maintenance branch, a branch with code just going to QA, and a branch for a new major initiative that won't be merged for some time. Our web project is set up to use IIS, so every time we switch to a different branch we have to go into IIS Admin and change the path on the virtual directory, then reset IIS, and sometimes even restart Visual Studio to avoid getting build errors. Is there any way to simplify this, other than not having our web project set up as a virtual directory? I'm not sure we want to make that change at this point. What do you do to make this easier, assuming you do this? Corey

    @RedWolves, virtual machines would definitely work, but I'm not sure it would be any simpler, especially for some of the other developers on my team, which is partly why I'm looking for more simplicity. @Dan, we're not able to change source control providers, unfortunately. @pix0r, that's something I'll try when I get back to work. Thanks for the suggestion. @Haacked, I'll have to give that a try too, but I think we have some issues with why that won't work (I can't remember exactly why right now; this application was originally written in .NET 1.1, pre-Cassini, and I can't remember if we tried it when we upgraded to 2.0 or not). Thanks all for the responses so far.
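
    One low-tech option that avoids touching IIS on every switch: point the virtual directory at one fixed path, make that path an NTFS junction, and repoint the junction per branch. A sketch assuming Windows Vista/Server 2008 or later, with illustrative paths:

        rem remove the old junction (this does not delete the branch's files)
        rmdir C:\inetpub\wwwroot\app
        rem point the fixed path at whichever branch you want to work on
        mklink /J C:\inetpub\wwwroot\app C:\branches\qa\WebProject
        iisreset

    IIS keeps seeing the same physical path, so no virtual directory settings change; only the junction target does.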

    Read the article

  • Android Apps not working in emulator

    - by Mohit Deshpande
    None of my apps work in the emulator. I am running Ubuntu 9.10, and every time I try to access my UI the app crashes. All I get is a "Sorry! The application ... has stopped unexpectedly" message. This happens for EVERY app.

        package com.mohit.helloandroid;

        import android.app.TabActivity;
        import android.content.Intent;
        import android.content.res.Resources;
        import android.os.Bundle;
        import android.widget.TabHost;

        public class HelloAndroid extends TabActivity {
            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                Resources res = getResources(); // Resource object to get drawables
                TabHost tabHost = getTabHost(); // The activity's TabHost
                TabHost.TabSpec spec;           // Reusable tab spec
                Intent intent;
                intent = new Intent().setClass(this, HelloAndroid.class);
                spec = tabHost.newTabSpec("artists")
                              .setIndicator("Artists", res.getDrawable(R.drawable.tab_artists))
                              .setContent(intent);
                tabHost.addTab(spec);
            }
        }

    I don't know how this code could possibly throw a message like that.
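
    For what it's worth, the first thing to check is the stack trace in adb logcat rather than the generic crash dialog. One likely cause with this particular code: TabActivity.getTabHost() requires the layout passed to setContentView to contain a TabHost carrying the framework's reserved IDs, and the activity dies at startup if R.layout.main is still a plain starter layout. A minimal sketch of the layout TabActivity expects (the three @android:id values are mandatory; everything else is illustrative):

        <TabHost xmlns:android="http://schemas.android.com/apk/res/android"
            android:id="@android:id/tabhost"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent">
            <LinearLayout
                android:orientation="vertical"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent">
                <TabWidget
                    android:id="@android:id/tabs"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content" />
                <FrameLayout
                    android:id="@android:id/tabcontent"
                    android:layout_width="fill_parent"
                    android:layout_height="fill_parent" />
            </LinearLayout>
        </TabHost>

    Note also that the tab's intent targets HelloAndroid itself, so the TabActivity embeds itself inside its own tab; pointing setClass at a separate content activity is the usual pattern.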

    Read the article

  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling features or fixes that they added to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches:

    master - the branch I actually build and deploy from
    feature_* - feature branches where I work or have worked on new things, which I then merge to master when complete
    vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their SVN repo
    vendor - my local branch that I merge vendor-svn into; I then push this (vendor) branch to the public Git repo (GitHub)

    So, my flow is something like this:

        git checkout vendor-svn
        git svn rebase
        git checkout vendor
        git merge vendor-svn
        git push origin vendor

    Now, the question comes here: I need to review each commit that they made (preferably individually, since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all changes and commit them without me being able to see whether they conflict with what I need. So, what's the best way to do this? Git seems like a great tool for handling forks of projects, since you can pull and push from multiple repos; I'm just not experienced enough with it to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)
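
    For what it's worth, a sketch of one way to review the vendor commits individually before they land on master (<sha> is a placeholder for a commit from the listing):

        git log --reverse --oneline master..vendor   # what master is missing, oldest first
        git checkout master
        git cherry-pick <sha>                        # apply one reviewed commit at a time

        # or stage the whole merge without committing, to inspect it first:
        git merge --no-commit --no-ff vendor
        git diff --cached                            # review the combined result
        # then either 'git commit' to keep it, or 'git reset --merge' to back out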

    Read the article

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:

        git checkout master
        mvn -B release:prepare release:perform

    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository, as well as the next commit that bumps the version to the next SNAPSHOT. However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin), it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated. After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 ID. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does

        git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861

    instead of

        git checkout -f master

    As a result, the changes that the Maven release plugin makes are not actually on any branch (certainly not on "master"), and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise the changes seem to get lost, because no branch label points to them. Has anybody gotten the Hudson + Git + Maven release plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin? Thanks in advance.
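
    One possible workaround is a shell build step that re-attaches the detached HEAD to a real local master before the release goals run. A sketch; treat the GIT_COMMIT variable, which the Git plugin exports in at least its newer versions, as an assumption to verify in your setup:

        # re-attach the detached HEAD to a local master branch before releasing
        git checkout -f master
        git reset --hard $GIT_COMMIT
        mvn -B release:prepare release:perform

    With master checked out, the release plugin's version-bump commit lands on a branch the push can actually update.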

    Read the article

  • What guidelines should be followed when using an unstable/testing/stable branching scheme?

    - by Elliot
    My team is currently using feature branches while doing development. For each user story in our sprint, we create a branch and work on it in isolation. Hence, according to Martin Fowler, we practice Continuous Building, not Continuous Integration. I am interested in promoting an unstable/testing/stable scheme, similar to that of Debian, so that code is promoted from unstable -> testing -> stable. Our definition of done, I'd recommend, is when unit tests pass (TDD always), minimal documentation is complete, automated functional tests pass, and the feature has been demo'd and accepted by the PO. Once accepted by the PO, the story will be merged into the testing branch. Our test developers spend most of their time in this branch, banging on the software and continuously running our automated tests. This scares me, however, because commits from another incomplete story may now make it into the testing branch. Perhaps I'm missing something, because this seems like an undesired consequence. So, if moving to a code promotion strategy to solve our problems with feature branches, what strategy/guidelines do you recommend? Thanks.
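
    For what it's worth, if testing only ever receives whole-story merges (rather than tracking an ongoing integration branch), commits from an incomplete story cannot leak in: the merge itself is the promotion event. A sketch assuming Git and illustrative branch names:

        git checkout testing
        git merge --no-ff story-123     # promote only the story the PO accepted
        git checkout stable
        git merge --no-ff testing       # promote once testing has soaked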

    Read the article

  • How to do this Python / MySQL manipulation (match) more efficiently?

    - by NJTechie
    Following is my data:

    Company Table:
        ID  Company  Address      City    State  Zip    Phone
        1   ABC      123 Oak St   Philly  PA     17542  7329878901
        2   CDE      111 Joe St   Newark  NJ     08654
        3   GHI      211 Foe St   Brick   NJ     07740  7321178901
        4   JAK      777 Wall     Ocean   NJ     07764  7322278901
        5   KLE      87 Ilk St    Plains  NY     07654  7376578901
        6   AB       1 W.House    SField  PA     87656  7329878901

    Branch Office Table:
        ID  Address      City    State  Zip    Phone
        1   323 Alk St   Philly  PA     17542  7329832221
        1   171 Joe St   Newark  NJ     08654
        3   287 Foe St   Brick   NJ     07740  7321178901
        3   700 Wall     Ocean   NJ     07764  7322278901
        1   89 Blk St    Surrey  NY     07154  7376222901

    File to be Matched (in MySQL):
        ID  Company  Address      City    State  Zip    Phone
        1   ABC      123 Oak St   Philly  PA     17542  7329878901
        2   AB       171 Joe St   Newark  NJ     08654
        3   GHI      211 Foe St   Brick   NJ     07740  7321178901
        4   JAK      777 Wall     Ocean   NJ     07764  7322278901
        5   K        87 Ilk St    Plains  NY     07654  7376578901

    Resulting File:
        ID  Company  Address      City    State  Zip    Phone       appendedID
        1   ABC      123 Oak St   Philly  PA     17542  7329878901  [Original record, field always empty]
        1   ABC      171 Joe St   Newark  NJ     08654              1 [Company Table]
        1   ABC      323 Alk St   Philly  PA     17542  7329832221  1 [Branch Office Table]
        1   AB       1 W.House    SField  PA     87656  7329878901  6 [Partial firm and State, Zip match]
        2   CDE      111 Joe St   Newark  NJ     08654
        3   GHI      211 Foe St   Brick   NJ     07740  7321178901
        3   GHI      700 Wall     Ocean   NJ     07764  7322278901  3
        3   GHI      287 Foe St   Brick   NJ     07740  7321178901  3
        4   JAK      777 Wall     Ocean   NJ     07764  7322278901
        5   KLE      87 Ilk St    Surrey  NY     07654  7376578901
        5   KLE      89 Blk St    Surrey  NY     07154  7376222901  5

    Requirements: 1) I have to match each firm in the 'File to be Matched' against the Company and Branch Office tables (MySQL). 2) If there are multiple exact/partial matches, the ID from the Company / Branch Office table is inserted as a new row in the resulting file. 3) Not all firms will match perfectly; in that case I have to match on partial company names (like 5/8ths of the company name) and any of the address fields, and insert them into the resulting file. Please help me find the most efficient solution to this problem.
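
    As a starting point, the exact matches can be done with plain SQL joins, leaving only the fuzzy company-name pass to Python. A minimal matching sketch, assuming the tables have already been read out of MySQL into lists of dicts; the 0.625 threshold (the 5/8 rule) and the field names are illustrative assumptions:

        import difflib

        def name_similarity(a, b):
            # ratio() returns 0.0-1.0; 0.625 approximates the "5/8th of the name" rule
            return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def match_record(record, companies):
            hits = []
            for c in companies:
                if record["Company"] == c["Company"] and record["Zip"] == c["Zip"]:
                    hits.append((c["ID"], "exact"))
                elif (name_similarity(record["Company"], c["Company"]) >= 0.625
                      and record["State"] == c["State"]
                      and record["Zip"] == c["Zip"]):
                    hits.append((c["ID"], "partial"))
            return hits

    Each hit then becomes one appended row in the resulting file, with the matched ID in the appendedID column.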

    Read the article

  • Running RSpec files from Ruby code

    - by Brian D.
    I'm trying to run RSpec tests straight from Ruby code. More specifically, I'm running some MySQL scripts, loading the Rails test environment, and then I want to run my RSpec tests (which is the part I'm having trouble with)... I'm trying to do this with a rake task. Here is my code so far:

        require "spec"
        require "spec/rake/spectask"

        RAILS_ENV = 'test'

        namespace :run_all_tests do
          desc "Run all of your tests"
          puts "Resetting test database..."
          system "mysql --user=root --password=dev < C:\\Brian\\Work\\Personal\\BrianSite\\database\\BrianSite_test_CreateScript.sql"
          puts "Filling database tables with test data..."
          system "mysql --user=root --password=dev < C:\\Brian\\Work\\Personal\\BrianSite\\database\\Fill_Test_Tables.sql"
          puts "Starting rails test environment..."
          task :run => :environment do
            puts "RAILS_ENV is #{RAILS_ENV}"
            # Run rspec test files here...
            require "spec/models/blog_spec.rb"
          end
        end

    I thought the require "spec/models/blog_spec.rb" would do it, but the tests aren't running. Anyone know where I'm going wrong? Thanks for any help.
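
    A bare require only loads the spec file; nothing tells the RSpec runner to execute its examples. A minimal sketch using the Spec::Rake::SpecTask this file already requires (the glob is an assumption about where the specs live):

        Spec::Rake::SpecTask.new('run_all_tests:specs') do |t|
          t.spec_files = FileList['spec/**/*_spec.rb']
        end

    Then have the :run task depend on (or invoke) run_all_tests:specs after the MySQL steps, instead of requiring the spec file directly.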

    Read the article

  • Moving from SVN to Hg: branching and backup

    - by rorycl
    My company runs SVN right now and we are very familiar with it. However, because we do a lot of concurrent development, merging can become very complicated. We've been playing with Hg and we really like the ability to make fast and effective clones on a per-feature basis. We've got two main issues we'd like to resolve before we move to Hg:

    Branches for erstwhile SVN users: I'm familiar with the "4 ways to branch in Mercurial" as set out in Steve Losh's article. We think we should "materialise" the branches, because I think the dev team will find this the most straightforward way of migrating from SVN. Consequently, I guess we should follow the "branching with clones" model, which means that separate clones for branches are made on the server. While this means that every clone/branch needs to be made on the server and published separately, this isn't too much of an issue for us, as we are used to checking out SVN branches, which come down as separate copies. I'm worried, however, that merging changes and following history may become difficult between branches in this model.

    Backup: If programmers on our team make local clones of a branch, how do they back up the local clone? We're used to seeing SVN commit messages like this on a feature branch: "Interim commit: db function not yet working". I can't see a way of doing this easily in Hg.

    Advice gratefully received. Rory
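
    On the backup question, those "interim commit" check-ins map naturally onto local commits pushed to a private clone on the server, so half-finished work never touches the blessed repo. A sketch with illustrative paths:

        hg clone main-repo feature-x                           # per-feature clone
        cd feature-x
        hg commit -m "Interim commit: db function not yet working"
        hg init ssh://dev@server//srv/hg/backups/feature-x     # one-time: empty backup repo
        hg push ssh://dev@server//srv/hg/backups/feature-x     # back up outside the main repo

    When the feature is finished, the same changesets get pushed to the real repository; the backup clone can simply be deleted.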

    Read the article

  • SVN marking file production ready

    - by dan.codes
    I am kind of new to SVN; I haven't used it in detail, basically just checking out trunk and committing, then exporting and deploying. I am working on a big project now with several developers, and we are looking for the best deployment process. The one thing we are hung up on is the best way to tag, branch, and so on. We are used to CVS, where all you have to do is commit a file and tag it as production-ready, and if not tagged, that code will not get deployed. I see that SVN handles tagging differently than CVS. I figure I am looking at this and making it overly complex. It seems the only way to work on a project and commit files without them ending up in the production code is to do it in a branch, and then merge those changes when you are ready for them to be deployed. I am assuming you could also be working on other code that should be deployed, so you would have to switch between working copies, because otherwise you would be working on a branch that isn't getting mixed in with the trunk or production branch. This process seems overly complex, and I was wondering if anyone could give me what you think is the best process for managing this.
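
    For reference, a sketch of the usual Subversion flow, where a tag (a cheap svn copy) plays the role of CVS's "production ready" label; URLs are illustrative, and the ^/ shorthand needs Subversion 1.6 or later:

        svn copy ^/trunk ^/branches/feature-x -m "Start feature x"
        # ...develop and commit on the branch, out of production's way...
        svn checkout ^/trunk wc-trunk
        cd wc-trunk
        svn merge --reintegrate ^/branches/feature-x   # fold finished work into trunk
        svn commit -m "Merge feature x"
        svn copy ^/trunk ^/tags/release-1.2 -m "Tag the production release"

    Deployment then exports from the tag, so only explicitly promoted code ever ships.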

    Read the article

  • git-svn: reset tracking for master

    - by digitala
    I'm using git-svn to work with an SVN repository. My working copies have been created using git svn clone -s http://foo.bar/myproject so that my working copy follows the default directory scheme for SVN (trunk, tags, branches). Recently I've been working on a branch which was created using git svn branch myremotebranch and checked out using git checkout --track -b mybranch myremotebranch. I needed to work from multiple locations, so from the branch I git svn dcommit-ed files to the SVN repository quite regularly. After finishing my changes, I switched back to master, executed a merge, committed the merge, and tried to dcommit the successful merge to the remote trunk. It seems as though after the merge the remote tracking for master has switched to the branch I was working on:

        # git checkout master
        # git merge mybranch
        ... (successful)
        # git add .
        # git commit -m '...'
        # git svn dcommit
        Committing to http://foo.bar/myproject/branches/myremotebranch ...
        #

    Is there a way I can update master so that it follows remotes/trunk as it did before the merge? I'm using git 1.7.0.5, if that's any help.
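
    One known remedy, since git-svn picks its dcommit target from the SVN metadata of the nearest ancestor commit, and a true merge pulls the branch's metadata onto master: re-point master at trunk and redo the merge as a squash. A sketch (the ref name depends on your clone layout):

        git checkout master
        git reset --hard remotes/trunk    # put master back on the SVN trunk history
        git merge --squash mybranch       # bring over the branch work without its metadata
        git commit -m "Merge mybranch"
        git svn dcommit                   # now commits to .../trunk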

    Read the article

  • Git: changes not reflecting on other checkouts - huh?

    - by Chad Johnson
    Okay, so, I have my branches (git branch -a):

        * chat
          master
          remotes/origin/HEAD -> origin/master
          remotes/origin/chat

    I make changes (still with the 'chat' branch checked out), commit, and push. I go to my server, on which I have a clone of the repository, and I do a fetch: git fetch. Then I switch to the chat branch: git checkout --track -b chat origin/chat. And I even do a pull, just to make sure everything is up to date: git pull. And my changes from my other computer are NOT. THERE. What the heck am I doing wrong? If I had hair, I would have pulled it out. Thankfully I am bald. When I try a 'git commit' again, I get this:

        # On branch chat
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #   modified:   app/controllers/chat_controller.rb
        #   modified:   app/views/dashboard/index.html.erb
        #   modified:   app/views/dashboard/layout.js.erb
        #   modified:   app/views/layouts/dashboard.html.erb
        #   deleted:    app/views/project/.tmp_edit.html.erb.55742~
        #   deleted:    app/views/project/.tmp_edit.html.erb.83482~
        #   modified:   public/stylesheets/dashboard/layout.css
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #   .loadpath
        #   .project
        #   config/database.yml
        #   config/environments/development.yml
        #   config/environments/production.yml
        #   config/environments/test.yml
        #   log/
        no changes added to commit (use "git add" and/or "git commit -a")
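
    That status output is the tell: the edits were never staged, so the earlier commit and push had nothing new to send. A sketch of the sequence that should land the work on the server:

        git add -A                    # stage the modified files first
        git commit -m "chat changes"
        git push origin chat          # push the chat branch explicitly
        # then, on the server:
        git fetch
        git checkout chat
        git pull origin chat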

    Read the article

  • Tagging in Subversion - how do I make the decision about continuing to work on my trunk vs. the new

    - by Howiecamp
    I'm running TortoiseSVN to manage a project. Obviously the principles around tagging apply to any implementation of SVN, but in this question I'll be referring to some TortoiseSVN-specific dialog boxes and messages. My working directory and the Subversion repository structure both have a Source root directory and the Trunk, Tags, and Branches directories underneath. (I couldn't figure out how to do a multilevel indented hierarchy in markdown without using bullets, so if someone could edit and fix this I'd appreciate it.) I'm working out of the Trunk directory in my working copy, and it's pointing at the Trunk directory in the repo. I want to apply a tag "Release1", so I click the "Branch/tag..." menu option and set the repo path to my [repo_path]/bla/Source/Tags/Release1 tag. This dialog box gives me the option to "Switch my working copy to new branch/tag". I understand that if this option is left unchecked, the new "Release1" branch under /Tags will be created, but my working copy will remain on the previous "Trunk" path. If I do check this option (or use the Switch command), I understand that my working copy will switch to the new "Release1" branch under "/Tags". Where I'm missing a concept is how to make this decision. It doesn't seem like I would want to switch my working directory to the newly created tag, since by definition (?) I want that tag to be a point-in-time snapshot of my code. If I don't switch the working directory, I'll continue working off Trunk, and when I'm ready to take another snapshot I'll make another tag. And so on... Am I understanding this right, or am I stating something incorrectly in the previous paragraph (e.g. the statement about not wanting to switch to the tag, since the tag should represent a point-in-time snapshot), or otherwise missing something about how to make this decision?

    Read the article

  • Am I doing it right?

    - by LuckySlevin
    I have a situation. I have to create a Sports Club system in Java. There should be a class for keeping track of the club name, president name, and the branches the club has. For each sports branch there should also be a class keeping track of a list of players. Each player should have a name, number, position, and salary. So, I came up with three separate classes:

        public class Team {
            String clubName;
            String preName;
            Branch[] branches;
        }

        public class Branch {
            Player[] players;
        }

        public class Player {
            String name;
            String pos;
            int salary;
            int number;
        }

    The problems are creating the Branch[] in another class, and the same for the Player[]. Is there any simpler way to do this? For example, I want to add info for only the club name, president name, and branches of the club; in this situation, won't I have to enter players, names, salaries, etc., since they are nested in each other? I hope I could be clear. For further questions you can ask.
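
    For what it's worth, a sketch of one simpler shape: growable lists plus small helper methods, so a Team can be created with just the club and president names, and branches and players can be added later as the information arrives. The names follow the question; everything else is illustrative (each public class goes in its own .java file):

        import java.util.ArrayList;
        import java.util.List;

        public class Team {
            String clubName;
            String preName;
            List<Branch> branches = new ArrayList<Branch>();

            public Team(String clubName, String preName) {
                this.clubName = clubName;
                this.preName = preName;
            }

            public void addBranch(Branch b) { branches.add(b); }
        }

        public class Branch {
            String name;
            List<Player> players = new ArrayList<Player>();

            public Branch(String name) { this.name = name; }
            public void addPlayer(Player p) { players.add(p); }
        }

        // Player stays as in the question; usage (inside some method):
        // Team team = new Team("Tigers", "Jane Doe");
        // team.addBranch(new Branch("basketball"));

    Nothing forces you to populate players up front: an empty list is a valid starting state, unlike a fixed-size array you have to size in advance.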

    Read the article

  • git push says everything up to date when it definitely is not

    - by Wolf
    I have a public repository. No one else has forked, pulled, or done anything else to it. I made some minor changes to one file, successfully committed them, and tried to push. It says 'Everything up-to-date'. There are no branches. I'm very, very new to Git and I don't understand what on earth is going on. git remote show origin tells me:

        HEAD branch: master
        Remote branch:
          master tracked
        Local ref configured for 'git push':
          master pushes to master (up to date)

    Any ideas what I can do to make this understand that it's NOT up to date? Thanks

    Updates:

    git status:

        # On branch master
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #   histmarkup.el
        #   vendor/yasnippet-0.6.1c/snippets/
        no changes added to commit (use "git add" and/or "git commit -a")

    git branch -a:

        * master
          remotes/origin/master

    git fsck:

        dangling tree 105cb101ca1a4d2cbe1b5c73eb4a238e22cb4998
        dangling tree 85bd0461f0fcb1618d46c8a80d3a4a7932de34bb

    Update 2: I re-opened the modified file, and the modifications I KNOW I had made were gone. So I added them again, went through the rigamarole of git status, git add filename, git commit -m "(message)", and git push origin master, and all of a sudden it works the way it's supposed to.

    Read the article

  • A question of long-running and disruptive branches

    - by Matt Enright
    We are about to begin prototyping a new application that will share some existing infrastructure assemblies with an existing application, and will also involve a significant subset of the existing domain model. Parts of the domain model will likely undergo some serious changes for this new application. The endgame for all of this, once the new application has been fully specified and is launch-ready, is to re-unify the models of the two applications (as well as share a database, link functionality, etc.), but for the duration of development and prototyping we will be using a separate database, so that we can change things without worrying about the impact on development or use of the existing application. Since it is a prototype, there will be a pretty long window during which serious changes or re-architecting can occur, as product management experiments with different workflows, surveys different customer bases, and we try to keep up. We have already made a Subversion branch, so as not to impact concurrent development on the mature application, and are toying with two potential ways of moving forward:

    1) Use the SVN branch as the sole mechanism of separation. Make our changes to the existing domain models, and evaluate their impact on the existing application (and make the requisite changes to ProjectA) once we have established that our long-running side branch is stable enough for re-entry to trunk.

    2) "Fork" the shared code (temporarily): copy ProjectA.Entities to NewProject.Entities, and treat all of the NewProject code as self-contained. When the perturbations around the model have died down and we feel satisfied, manually re-integrate the changes (as granular or sweeping as warranted) back into ProjectA.Entities, updating ProjectA to use the improved models at each step (this can take place either before or after the Subversion merge has occurred). The Subversion merge would then not handle recombination of any of the heavy changes here.

    Note: the "fork" method only applies to the code we see significant changes in store for, and whose modification would break ProjectA. Shared infrastructure stuff, for example, we would just modify in place (on our branch) and let the merge sort out. Development is hard, go shopping. Naturally, after not coming to an agreement, we're turning it over to the oracle of power that is SO. Any experience with either of these methods, pain points to watch out for, or something new entirely?

    Read the article

  • Entity Framework - Store parent reference on child relationship (one -> many)

    - by contactmatt
    I have a setup like this:

        [Table("tablename...")]
        public class Branch
        {
            public Branch()
            {
                Users = new List<User>();
            }

            [Key]
            public int Id { get; set; }
            public string Name { get; set; }
            public List<User> Users { get; set; }
        }

        [Table("tablename...")]
        public class User
        {
            [Key]
            public int Id { get; set; }
            public string Username { get; set; }
            public string Password { get; set; }

            [ForeignKey("ParentBranch")]
            public int? ParentBranchId { get; set; } // Is this possible?
            public Branch ParentBranch { get; set; } // ???
        }

    Is it possible for the User to know what parent branch it belongs to? The code above is not working. Entity Framework version 5.0, .NET 4.0, C#
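
    For what it's worth, yes: a child-to-parent reference like this is supported in EF 5 code first. A sketch of how the User side is commonly wired up; marking navigation properties virtual enables lazy loading, and the attribute placement below is one accepted variant (treat this as a sketch to verify against your model, not a definitive fix):

        [Table("tablename...")]
        public class User
        {
            [Key]
            public int Id { get; set; }
            public string Username { get; set; }
            public string Password { get; set; }

            public int? ParentBranchId { get; set; }

            [ForeignKey("ParentBranchId")] // names the FK property from the navigation side
            public virtual Branch ParentBranch { get; set; }
        }

    On the Branch side, declaring the collection as virtual ICollection<User> Users is the conventional counterpart.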

    Read the article

  • TFS 2010 SDK: Smart Merge - Programmatically Create your own Merge Tool

    - by Tarun Arora
    The information available in the Merge window in Team Foundation Server 2010 is very important for decision making during the merging process. However, at present the merge window shows very limited information; more often than not you want to know the work items, files modified, code reviewer notes, policies overridden, etc. associated with a changeset. Our friends at Microsoft are working hard to change the game again with vNext, but because at present the merge window is a modal dialog, you have to cancel the merge process and go back one changeset at a time to check the additional information you need. If you can relate to what I am saying, you will enjoy this blog post! I will show you how to programmatically create your own merging window using the TFS 2010 API. A few screenshots of the WPF custom merging application we will be creating should get you excited. Let's start coding...

    1. Get All Team Project Collections for the TFS Server

    You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically

        public static ReadOnlyCollection<CatalogNode> GetAllTeamProjectCollections()
        {
            TfsConfigurationServer configurationServer =
                TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/"));

            CatalogNode catalogNode = configurationServer.CatalogNode;
            return catalogNode.QueryChildren(new Guid[] { CatalogResourceTypes.ProjectCollection },
                                             false, CatalogQueryOptions.None);
        }

    2. Get All Team Projects for the selected Team Project Collection

    You can read more on connecting to TFS programmatically on my blog post => How to connect to TFS Programmatically

        public static ReadOnlyCollection<CatalogNode> GetTeamProjects(string instanceId)
        {
            ReadOnlyCollection<CatalogNode> teamProjects = null;

            TfsConfigurationServer configurationServer =
                TfsConfigurationServerFactory.GetConfigurationServer(new Uri("http://xxx:8080/tfs/"));

            CatalogNode catalogNode = configurationServer.CatalogNode;
            var teamProjectCollections = catalogNode.QueryChildren(new Guid[] { CatalogResourceTypes.ProjectCollection },
                                                                   false, CatalogQueryOptions.None);

            foreach (var teamProjectCollection in teamProjectCollections)
            {
                if (string.Compare(teamProjectCollection.Resource.Properties["InstanceId"], instanceId, true) == 0)
                {
                    teamProjects = teamProjectCollection.QueryChildren(new Guid[] { CatalogResourceTypes.TeamProject },
                                                                       false, CatalogQueryOptions.None);
                }
            }

            return teamProjects;
        }

    3. Get All Branches within a Team Project programmatically

    I will be passing the name of the Team Project for which I want to retrieve all the branches. The version control service exposes the method QueryRootBranchObjects; you pass it a recursion type => None, One, or Full. Full implies you are interested in all branches under that root branch.

        public static List<BranchObject> GetParentBranch(string projectName)
        {
            var branches = new List<BranchObject>();

            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<teamProjectName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            var allBranches = versionControl.QueryRootBranchObjects(RecursionType.Full);

            foreach (var branchObject in allBranches)
            {
                if (branchObject.Properties.RootItem.Item.ToUpper().Contains(projectName.ToUpper()))
                {
                    branches.Add(branchObject);
                }
            }

            return branches;
        }

    4. Get All Branches associated to the Parent Branch programmatically

    Now that we have the parent branch, it is important to retrieve all child branches of that parent branch. Let's see how we can achieve this using the TFS API.

        public static List<ItemIdentifier> GetChildBranch(string parentBranch)
        {
            var branches = new List<ItemIdentifier>();

            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            var i = new ItemIdentifier(parentBranch);
            var allBranches = versionControl.QueryBranchObjects(i, RecursionType.None);

            foreach (var branchObject in allBranches)
            {
                foreach (var childBranch in branchObject.ChildBranches)
                {
                    branches.Add(childBranch);
                }
            }

            return branches;
        }

    5. Get Merge Candidates between two branches programmatically

    Now that we have the parent and child branches we want to merge between, we will use the method GetMergeCandidates in the namespace Microsoft.TeamFoundation.VersionControl.Client => http://msdn.microsoft.com/en-us/library/bb138934(v=VS.100).aspx

        public static MergeCandidate[] GetMergeCandidates(string fromBranch, string toBranch)
        {
            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            return versionControl.GetMergeCandidates(fromBranch, toBranch, RecursionType.Full);
        }

    6. Get changeset details programmatically

    Now that we have the changeset ID we are interested in, we can get the details of the changeset. The Changeset object contains the properties => http://msdn.microsoft.com/en-us/library/microsoft.teamfoundation.versioncontrol.client.changeset.aspx
    - Changes: Gets or sets an array of Change objects that comprise this changeset.
    - CheckinNote: Gets or sets the check-in note of the changeset.
    - Comment: Gets or sets the comment of the changeset.
    - PolicyOverride: Gets or sets the policy override information of this changeset.
    - WorkItems: Gets an array of work items that are associated with this changeset.

        public static Changeset GetChangeSetDetails(int changeSetId)
        {
            var tfs = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri("http://<ServerName>:8080/tfs/<CollectionName>"));
            var versionControl = tfs.GetService<VersionControlServer>();

            return versionControl.GetChangeset(changeSetId);
        }

    7. Possibilities

    In future posts I will try to extend this idea to explore further possibilities, but a few features that I am sure will further help during the merge decision-making process would be:
    - View changed files
    - Compare a modified file with the current/previous version
    - Merge preview
    - Last merge date
    Any other features that you can think of?

    Read the article

  • Windows Server 2012 Branchcache vs. DFS-R

    - by TheCleaner
    Warning, subjective question ahead! But hopefully a good one that won't get closed.

    SCENARIO: I have a branch office that currently has no on-premise server. They access everything, including a DC, across a 12 Mbps WAN link (MPLS). The link isn't saturated, averaging around 20% utilization, and the circuit is very stable with a high SLA and excellent uptime. However, large file transfers (mainly reads, not writes) from the file server across the WAN can be slow. We don't currently utilize DFS.

    RESEARCH DONE: I'm aware of WAN acceleration, using either dedicated hardware (Riverbed) or a dedicated software VM (Silver Peak), for example. But the pricing is outside of our current budget, and the need isn't quite there yet from our perspective (since the issue is mainly in a "pull" scenario, not push/pull). I'm mainly looking at deploying a Windows server at this branch office and utilizing either DFS-R or BranchCache. Looking at a feature comparison, and assuming we are looking at a "hosted BranchCache server" and not simply distributed mode, it would appear there are benefits to both, even if both are "hosted" on a server.

    QUESTIONS I ACTUALLY HAVE:
    In what scenarios does each of these technologies shine, and where do you choose one over the other?
    With a hosted BranchCache server, can you set "pre-fetching" of certain folders/files on the central file server so that they are immediately accessible locally at the branch? Do you have to do this on a schedule (if it is possible at all)?
    With DFS-R, my concern (apparently solved with 3rd-party apps) is file locking and making sure a file gets updated properly during a write operation (i.e., if both copies are accessed and both are written to, which file takes precedence and what happens to the changes?). Ideally, it would seem, you would lock any alternate replicas of the data, but is it really that big of an issue?
    Does BranchCache lock the central file for editing? Does BranchCache only transmit the deltas of what has changed back to the central file?
    Would either technology be ill-advised if the branch office server were also going to be a domain controller?

    Read the article

  • Appropriate USB enclosure to use with an Ultra ATA drive

    - by Topdown
    I have a friend with a broken iBook, and we wish to recover the hard disk. I haven't seen the drive itself; however, the spec lists it as an "Ultra ATA drive". Could you please advise whether this is 100% compatible with any standard USB IDE 2.5-inch enclosure? Full spec: iBook 12" 1GHz (AP), 256MB DDR266 SDRAM, built-in keyboard, Mac OS X, Bluetooth module, 40GB Ultra ATA drive, Combo drive (DVD-ROM/CD-RW).

    Read the article

  • How to set up a Git repo on a server with a working dir (non-bare)

    - by OrangeTux
    I want to configure a Git repo for a website. Multiple users will have a clone of the repo on their local machine, and at the end of each day they push their work to the server. I can set up a bare repo, but I want a working dir/non-bare repository: the idea is that the working dir of the repository will be the root folder of the website, so at the end of each day all changes are visible directly. But I can't find a way to do this. Initializing the server repo with git init gives the following error when a client tries to push some files:

        git push origin master
        [email protected]'s password:
        Counting objects: 3, done.
        Writing objects: 100% (3/3), 227 bytes, done.
        Total 3 (delta 0), reused 0 (delta 0)
        remote: error: refusing to update checked out branch: refs/heads/master
        remote: error: By default, updating the current branch in a non-bare repository
        remote: error: is denied, because it will make the index and work tree inconsistent
        remote: error: with what you pushed, and will require 'git reset --hard' to match
        remote: error: the work tree to HEAD.
        remote: error:
        remote: error: You can set 'receive.denyCurrentBranch' configuration variable to
        remote: error: 'ignore' or 'warn' in the remote repository to allow pushing into
        remote: error: its current branch; however, this is not recommended unless you
        remote: error: arranged to update its work tree to match what you pushed in some
        remote: error: other way.
        remote: error:
        remote: error: To squelch this message and still keep the default behaviour, set
        remote: error: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
        To ssh://[email protected]/home/orangetux/www/
        ! [remote rejected] master -> master (branch is currently checked out)
        error: failed to push some refs to 'ssh://[email protected]/home/orangetux/www/'

    So I'm wondering whether this is the right way to set up a Git repo for a website. If so, how do I do it? If not, what is a better way to set up a Git repo for the development of a website?

    EDIT: you can't push to a non-bare repository. Okay, clear. But what's the way to solve my problem? Create a bare repository on the server and keep a clone of it on the same server in the htdocs folder? That looks a bit clumsy to me; to see the result of a commit, I'd have to pull into that clone every time.
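
    For what it's worth, the usual pattern here is a bare repo plus a post-receive hook that checks the pushed branch out into the web root, so nobody ever pushes into a checked-out working tree. A minimal sketch with illustrative paths:

        git init --bare /home/orangetux/repos/www.git

    Then, in /home/orangetux/repos/www.git/hooks/post-receive (marked executable):

        #!/bin/sh
        # refresh the site's files from the freshly pushed master
        GIT_WORK_TREE=/home/orangetux/www git checkout -f master

    Developers push to the bare repo, and the hook updates the website's files on every push; no second clone in htdocs is needed.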

    Read the article
