Search Results

Search found 1621 results on 65 pages for 'maven scm'.

  • Best version control for a lone developer

    - by Stephen
    I'm a lone developer at the moment; please share your experiences on what makes a good version control setup for a lone developer. My constraints are: I work on multiple machines and need to keep them synced up, and sometimes I work offline. I'm currently using Subversion (just the client, against a remote server), and that is working OK. I'm interested in the Mercurial and Git DVCSs, but none of their use cases seems to fit my situation. EDIT: I've migrated my active development to Fossil (http://www.fossil-scm.org/) after trialing it with a client. I really like the autosync feature for my repositories (it reduces accidental forks), the documentation support (both wiki and embedded/versioned) that lets me document the code and the project in different spaces, the easy-to-configure issue tracker, the nice access control, the skinnable web interface, and the helpful community.
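    (For anyone weighing Fossil for the same multi-machine use case, the setup the EDIT describes comes down to a few commands; a minimal sketch, with an illustrative server URL:)

        # clone the remote repository and open a working checkout
        fossil clone https://example.com/myproject myproject.fossil
        fossil open myproject.fossil

        # autosync pulls/pushes around every commit and update,
        # which is what reduces accidental forks
        fossil settings autosync on

    Repeating the clone/open on each machine keeps them all synced whenever you're online; offline commits sync the next time the server is reachable.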

    Read the article

  • What is the difference between ClearCase and VSS when labeling a release?

    - by raj
    Hi, we are using ClearCase as our SCM. I don't have much experience with ClearCase. Now we are about to release our code to production, and I want to label my code as I have done with VSS in my previous projects. But in ClearCase, labeling is not as easy as in VSS: ClearCase asks me to create a label type before I can label a folder in the VOB. I don't understand the concept of creating a label type. Any guidance on this will be highly appreciated.
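    (For context, the label type is a ClearCase quirk: a label's name and metadata must be declared once per VOB before the label can be attached to elements. The usual two-step sequence, with an illustrative label name and path, looks something like this:)

        # 1. declare the label type once in the VOB (-nc = no comment)
        cleartool mklbtype -nc RELEASE_1_0

        # 2. attach the label recursively to a folder tree
        cleartool mklabel -recurse RELEASE_1_0 /vobs/myvob/src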

    Read the article

  • How do negated patterns work in .gitignore?

    - by chrisperkins
    I am attempting to use a .gitignore file with negated patterns (lines starting with !), but it's not working the way I expect. As a minimal example, I have the following directory structure:

        C:/gittest
        -- .gitignore
        -- aaa/
           -- bbb/
              -- file.txt
           -- ccc/
              -- otherfile.txt

    and in my .gitignore file, I have this:

        aaa/
        !aaa/ccc/

    My understanding (based on http://ftp.sunet.se/pub//Linux/kernel.org/software/scm/git/docs/gitignore.html) is that the file aaa/ccc/otherfile.txt should not be ignored, but in fact git is ignoring everything under aaa. Am I misunderstanding this sentence: "An optional prefix ! which negates the pattern; any matching file excluded by a previous pattern will become included again."? BTW, this is on Windows with msysgit 1.7.0.2.
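    (This is the standard gotcha with negated patterns: git does not descend into a directory that is itself excluded, so nothing inside an ignored directory can be re-included. Excluding the directory's contents instead of the directory makes the negation work; a sketch of the corrected .gitignore:)

        aaa/*
        !aaa/ccc/

    With this, aaa/bbb/ is still ignored (it matches aaa/*), while aaa/ccc/ is re-included and aaa/ccc/otherfile.txt is tracked.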

    Read the article

  • Multiple developers on a Titanium project

    - by Cybear
    I'm making an iPhone app with Appcelerator Titanium, and I want to share the source code with a few more programmers. I will use an SCM repository, which at some point might be open to the general public. Now my question is: are there any files which I should not commit to the repository? In the project root I can tell that tiapp.xml and manifest contain the app GUID; is there any reason for me to keep that private? (This value is also shown in many places in the build/ folder.) I've added everything in the Resources/ folder. If I skip the build/iphone/build/ folder, will developers still be able to build the project? Side question: when another programmer downloads this code, it seems to me that (s)he has to have the same directory structure as I do? Any workarounds for this?
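    (For what it's worth, a hedged sketch of an ignore list I've seen suggested for Titanium projects; the exact derived paths vary by Titanium version, and the build output is regenerated on each machine:)

        # derived build output -- Titanium regenerates this per machine
        build/iphone/build/

        # OS/editor noise
        .DS_Store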

    Read the article

  • Capistrano 3, Rails 4, database configuration does not specify adapter

    - by Kazmin
    When I run cap production deploy, it fails like this:

        DEBUG [4ee8fa7a] Command: cd /home/deploy/myapp/releases/releases/20131025212110 && (RVM_BIN_PATH=~/.rvm/bin RAILS_ENV= ~/.rvm/bin/myapp_rake assets:precompile )
        DEBUG [4ee8fa7a] rake aborted!
        DEBUG [4ee8fa7a] database configuration does not specify adapter

    You can see that RAILS_ENV= is actually empty, and I'm wondering why that might be happening. I assume this is the reason for the later error that I don't have a database configuration. The deploy.rb file is below:

        set :application, 'myapp'
        set :repo_url, '[email protected]:developer/myapp.git'
        set :branch, :master
        set :deploy_to, '/home/deploy/myapp/releases'
        set :scm, :git
        set :devpath, "/home/deploy/myapp_development"
        set :user, "deploy"
        set :use_sudo, false
        set :default_env, { rvm_bin_path: '~/.rvm/bin' }
        set :keep_releases, 5

        namespace :deploy do
          desc 'Restart application'
          task :restart do
            on roles(:app), in: :sequence, wait: 5 do
              # Your restart mechanism here, for example:
              within release_path do
                execute "bundle exec thin restart -O -C config/thin/production.yml"
              end
            end
          end

          after :restart, :clear_cache do
            on roles(:web), in: :groups, limit: 3, wait: 10 do
              within release_path do
              end
            end
          end

          after :finishing, 'deploy:cleanup'
        end
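    (One common explanation for the empty RAILS_ENV is simply that :rails_env is never set anywhere Capistrano 3 looks, so the variable expands to nothing when the rake command line is built. A guess at the fix, placed in the stage file:)

        # config/deploy/production.rb -- assumption: stage-specific
        # settings belong here rather than in deploy.rb
        set :rails_env, 'production'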

    Read the article

  • How do people manage changes to common library files stored across multiple (Mercurial) repositories?

    - by mckoss
    This is perhaps not a question unique to Mercurial, but that's the SCM I've been using most lately. I work on multiple projects and tend to copy source code for libraries or utilities from a previous project to get a leg up on starting a new project. The problem comes in when I want to merge all the changes I made in my latest project back into a "master" copy of those shared library files. Since the files stored in disjoint repositories will have distinct version histories, Mercurial won't be able to perform an intelligent merge if I just copy the files back to the master repo (or even between two independent projects). I'm looking for an easy way to preserve the change history so I can merge library files back to the master with a minimum of external record keeping (which is one of the reasons I'm using SVN less: its merges require remembering when copies were made across branches). Perhaps I need to do a bit more up-front organization of my repository to prepare for a future merge back to a common master.
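    (One kind of up-front organization that addresses exactly this: Mercurial's subrepositories, where the shared library lives in its own repository and is pinned into each project, so all of its history accumulates in one place. A sketch, with illustrative paths and URLs:)

        # from the root of a project repository
        hg clone https://example.com/hg/mylib lib/mylib
        echo "lib/mylib = https://example.com/hg/mylib" > .hgsub
        hg add .hgsub
        hg commit -m "track mylib as a subrepository"

    Changes made under lib/mylib in any project can then be pushed back to the shared repository with their history intact, and merged from there into the other projects.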

    Read the article

  • Git + Capistrano = Automatic Release Notes Generator?

    - by Matt Rogish
    We use git (GitHub) and capistrano (like 99% of the Rails shops out there) to deploy our app to production. What I'd like to do is, after every cap * deploy, generate a text file containing all the git commit comments since the last deploy. I can then take that list of commit comments, clean it up, and put it somewhere for consumption. git log (http://book.git-scm.com/3_reviewing_history_-_git_log.html) has plenty of options for fetching log messages, but I don't see an easy way in capistrano to get the current and previous commits, or even the date/time of the last deployment, so that I can pass them to git log. Thoughts? I can't be the first one doing this... Thanks!
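    (A hedged starting point: with checkout- or copy-style deploys, Capistrano 2 writes a REVISION file containing the deployed SHA into each release directory, so both endpoints for git log can be read off the server. Paths and the release name below are illustrative:)

        # run on the server, or from a cap task via capture()
        prev=$(cat /var/www/myapp/releases/20100501120000/REVISION)
        curr=$(cat /var/www/myapp/current/REVISION)
        git log --pretty=format:'%h %an %s' "$prev..$curr" > release_notes.txt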

    Read the article

  • Is git svn rebase required before git svn dcommit?

    - by allyourcode
    I'm reading about using git as an svn client here: http://learn.github.com/p/git-svn.html That page suggests that you do git svn rebase before git svn dcommit, which makes perfect sense; it's like doing svn update before doing svn commit. Then, I started looking at the documentation for git svn dcommit (I was wondering what the 'd' is about): http://www.kernel.org/pub/software/scm/git/docs/git-svn.html You have to scroll down a bit to see the documentation on dcommit, which says this: Commit each diff from a specified head directly to the SVN repository, and then rebase or reset (depending on whether or not there is a diff between SVN and head). This confuses me, because if you do as the first page says, there will be no changes to pull down from svn once the first part of dcommit finishes. I'm also confused by the part that talks about reset; isn't git reset for removing changes from the staging area? Why would rebase or reset follow (the first part of) a dcommit?
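    (For reference, the recommended sequence is just the two commands below; the rebase-first step matters because dcommit can stop partway through if SVN has revisions your local branch hasn't seen:)

        git svn rebase    # fetch new SVN revisions, replay local commits on top
        git svn dcommit   # push each local commit to SVN, then rebase/reset
                          # so your branch matches what SVN actually recorded

    The trailing rebase/reset inside dcommit isn't about pulling new work; it re-points your branch at the rewritten commits (each one gains a git-svn-id line), which is why it happens even when nobody else has committed.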

    Read the article

  • Why is Capistrano acting up like this?

    - by Matt
    I am having an issue with my deploy. I ran cap deploy and got this:

        Warning: Permanently added 'github.com,207.97.227.239' (RSA) to the list of known hosts.
        ** [174.143.150.79 :: out] Permission denied (publickey).
        ** fatal: The remote end hung up unexpectedly
        command finished
        *** [deploy:update_code] rolling back
        * executing "rm -rf /home/deploy/transprint/releases/20110105034446; true"
        servers: ["174.143.150.79"]
        [174.143.150.79] executing command

    Here is my deploy.rb:

        set :application, "transprint"
        set :domain, "174.149.150.79"
        set :user, "deploy"
        set :use_sudo, false
        set :scm, :git
        set :deploy_via, :remote_cache
        set :app_path, "production"
        set :rails_env, 'production'
        set :repository, "[email protected]:myname/something.git"
        set :scm_username, 'deploy'
        set :deploy_to, "/home/deploy/#{application}"

        role :app, domain
        role :web, domain
        role :db, domain, :primary => true

    Please help.
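    (The usual suspect when GitHub says "Permission denied (publickey)" mid-deploy: with deploy_via :remote_cache, the git clone runs on the server, and the server has no key GitHub recognizes. A hedged sketch of the common fix, added to deploy.rb, assuming Capistrano 2 and a local ssh-agent:)

        # forward your local ssh-agent to the server, so the server-side
        # git clone authenticates to GitHub with your own key
        ssh_options[:forward_agent] = true

    The alternative is to generate a key pair on the server and add its public key to the GitHub account, or as a deploy key on the repository.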

    Read the article

  • Why is Harvest being purchased at all?

    - by Mike Caron
    Does your work environment use Harvest SCM? I've now used it at two different locations and find it appalling. In one situation I wrote a conversion script so I could use CVS locally and then import changes to the Harvest system daily while I was sleeping. The company was fanatical about using Harvest, despite 80% of the programmers crying out for something different. It was needlessly complicated, slow, and heavy. It is now a job requirement for me that Harvest is not in use where I work. Has anyone else used Harvest before? What's your experience? As bad as mine? Did you employ other workarounds? Why is this product still purchased today?

    Read the article

  • Git: What is a tracking branch?

    - by jerhinesmith
    Can someone explain a "tracking branch" as it applies to git? Here's the definition from git-scm.com: "A 'tracking branch' in Git is a local branch that is connected to a remote branch. When you push and pull on that branch, it automatically pushes and pulls to the remote branch that it is connected with. Use this if you always pull from the same upstream branch into the new branch, and if you don't want to use 'git pull' explicitly." Unfortunately, being new to git and coming from SVN, that definition makes absolutely no sense to me. I'm reading through "The Pragmatic Guide to Git" (great book, by the way), and it seems to suggest that tracking branches are a good thing, and that after creating your first remote (origin, in this case) you should set up your master branch to be a tracking branch. Unfortunately, it doesn't cover why a tracking branch is a good thing or what benefits you get by making your master branch track your origin repository. Can someone please enlighten me (in English)?
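    (Two concrete commands may make the definition less abstract; branch names here are illustrative:)

        # create a local branch "feature" that tracks origin/feature;
        # plain "git pull" and "git push" on it then need no arguments
        git checkout --track origin/feature

        # or push the current branch and mark it as tracking in one step
        git push -u origin master

    The benefit is exactly that argument-free pull/push: git remembers which remote branch each local branch corresponds to, instead of you retyping it every time.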

    Read the article

  • How to integrate an open source C program instead of calling its executable through a system call?

    - by ihamer
    I have an executable (the fossil SCM) that my program invokes externally through the Windows ::CreateProcess call; stdout and stderr are then captured. Since the source code for fossil is available, I would prefer to build a static library out of it and issue calls directly. Currently, communication to fossil is done through command-line parameters, and communication back is through the process return code, stdout, and stderr. Fossil writes to stdout/stderr through printf and fprintf calls. What is the best way to solve this with minimal alteration of the fossil source? Is there a reliable and cross-platform way to intercept stdout/stderr and send them into a memory buffer?

    Read the article

  • Is there a tool to build and test a local change on multiple platforms?

    - by Ben
    A company I used to work for was plagued by build breakages, so they made a tool that would zip up a developer's local changes (which it detected from SCM) and send them to a remote server for a test build. The remote server would update its copy of the source from the repository and then apply the changes it received from the developer. It would then build and test the changes. We actually targeted multiple platforms, so it would do the above for each of those platforms. When it was done, if everything was green, the developer was reasonably confident they could submit the change without breaking the "real" build. Are there any tools out there that do something similar?

    Read the article

  • Binding PropertyName of CollectionViewSource SortDescription in Xaml

    - by Faisal
    Here is my XAML that gives the CollectionViewSource its sort property name:

        <CollectionViewSource Source="{Binding Contacts}"
                              x:Key="contactsCollection"
                              Filter="CollectionViewSource_Filter">
            <CollectionViewSource.SortDescriptions>
                <scm:SortDescription PropertyName="DisplayName" />
            </CollectionViewSource.SortDescriptions>
        </CollectionViewSource>

    The XAML above works fine, but the problem I have is that I don't know how to give a variable value to the SortDescription PropertyName. I have a property in my view model that tells which property to sort on, but I am not able to bind this property to the SortDescription's PropertyName field. Is there any way?

    Read the article

  • How to handle an images folder with many images

    - by Billy
    I'm developing a new ASP.NET website with 200k images in an /Images/ folder. Many operations in Visual Studio are slow because it accesses the folder; adding a web service takes 10 minutes. The images are not checked into SCM (SVN). How should I structure the code tree to improve performance in VS? It would also be neat if developers didn't all need to copy 200k images to their local disks to be able to develop on the site. Images as DB blobs are not an option.

    Read the article

  • Capistrano update causes C: to be placed in the current directory (cygwin)

    - by user321775
    When I run cap deploy:update in a directory on my local machine (via cygwin), "C:" magically appears in the directory. Sure enough, I can cd to it and it's my Windows C: drive. Now I'm afraid to delete it, but I definitely don't want it in this directory (a rails project under /home/username/blah/blah). Here's my config/deploy.rb file:

        # custom options
        set :application, "xyz.com"
        set :repository, "ssh://[email protected]:yyyy/home/git/xxx"
        set :user, "myname"
        set :runner, user
        set :use_sudo, false
        server "xxx.xxx.xxx.xxx:yyyy", :app, :web, :db, :primary => true

        # deploy to
        set :deploy_to, "/home/myname/public_html/xyz"

        # repository
        set :scm, :git
        set :deploy_via, :copy

        # ssh options
        default_run_options[:pty] = true
        ssh_options[:paranoid] = false
        ssh_options[:port] = yyyy

        # start passenger
        namespace :deploy do
          task :start do ; end
          task :stop do ; end
          task :restart, :roles => :app, :except => { :no_release => true } do
            run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
          end
        end

    Anyone see the problem? And does anyone know a safe way of getting rid of the C: drives that have already shown up (this has happened in a few directories)?

    Read the article

  • hg unshelve not working

    - by shanebonham
    Our team is just getting started with Mercurial. One of the first things we've started to play with is hg shelve. Locally, I have no problem shelving changes; it all works perfectly from what I can tell. However, when I try to unshelve, I get the "restoring backup files" message, but when I run hg diff there are no changes, and my changes are missing from the code. If I do hg unshelve -i I can see the diff, but again, trying to unshelve seems to have no effect. I've been trying to test it with some very simple changes that shouldn't be a problem in terms of conflicts, e.g. adding a test comment. I should note that I've tried hg unshelve -f, after which it says "unshelve completed", but again, my changes are not restored. Any ideas what I am doing wrong? If it matters: Mercurial Distributed SCM (version 1.5.1+20100405).
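    (In case it helps others reproduce this, a minimal sanity-check session with the shelve extension would look something like the following; if step 4 prints nothing, you're seeing the same problem:)

        hg shelve       # 1. stash the uncommitted changes
        hg diff         # 2. expect no output now
        hg unshelve     # 3. restore them
        hg diff         # 4. expect the original changes back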

    Read the article

  • Hudson, is it possible to make a plugin configuration non-visible depending on job type?

    - by Haju
    With the plugin (an SCM plugin) I'm working on, the problem is that it doesn't work in any job/project type other than Freestyle project. I'd like to hide the plugin configuration on the project configuration page for other job/project types (maven, matrix, etc.), because it seems to distract people. I wonder if there's a "right" way of doing this, or any way at all? Currently the project type is checked first thing in the checkout method, and if it doesn't match, the build fails instantly, but this is not a completely satisfactory solution, since it causes more work for the end user.
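    (A hedged sketch of the descriptor-level approach, assuming a Hudson core recent enough that SCMDescriptor#isApplicable(AbstractProject) exists; MySCM and the display name are placeholders for the plugin's own classes:)

        import hudson.Extension;
        import hudson.model.AbstractProject;
        import hudson.model.FreeStyleProject;
        import hudson.scm.SCMDescriptor;

        // nested inside your SCM implementation class (MySCM)
        @Extension
        public static class DescriptorImpl extends SCMDescriptor<MySCM> {
            public DescriptorImpl() {
                super(MySCM.class, null); // no repository browser
            }

            @Override
            public String getDisplayName() {
                return "My SCM";
            }

            // returning false removes this SCM from the configuration page
            // of non-freestyle job types (maven, matrix, ...), so users
            // never see its options there
            @Override
            public boolean isApplicable(AbstractProject project) {
                return project instanceof FreeStyleProject;
            }
        }

    With this in place, the checkout-time type check (and the instant build failure) can go away.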

    Read the article

  • How to import a .class file in a .java file?

    - by Namratha
    Hi, what I need to do is as follows: I have a Bigloo Scheme program (*.scm), and using the Bigloo framework's JVM back end, a class file is generated. I want to use this .class file from a .java file. That is, I need to import this .class file, as I want to use some functions defined in the Scheme file, which is now a .class file. How do I do it in Eclipse? I have created two packages, one for the Java code and one for the .class file. Then I import the package containing the .class file, but I am not able to use the functions from the .class file in the .java file. Are there any settings to be done? Please let me know how this can be done.
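    (A hedged pointer, since Bigloo setups vary: loose .class files generally can't live in a source package; they have to go on the Java build path. Assuming the Bigloo module was compiled into a class foo.MyModule whose exported functions show up as static methods, the usage would look roughly like this; all names here are illustrative:)

        // In Eclipse: Project > Properties > Java Build Path > Libraries >
        // Add Class Folder, pointing at the folder containing foo/MyModule.class.
        // Then the exported Scheme functions can be called as static methods:
        int result = foo.MyModule.my_function(42);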

    Read the article

  • When open-sourcing a live Rails app, is it dangerous to leave the session key secret in source control?

    - by rspeicher
    I've got a Rails app that's been running live for some time, and I'm planning to open source it in the near future. I'm wondering how dangerous it is to leave the session key store secret in source control while the app is live. If it's dangerous, how do people usually handle this problem? I'd guess that it's easiest to just move the string to a text file that's ignored by the SCM, and read it in later. Just for clarity, I'm talking about this:

        # Your secret key for verifying cookie session data integrity.
        # If you change this key, all old sessions will become invalid!
        # Make sure the secret is at least 30 characters and all random,
        # no regular words or you'll be exposed to dictionary attacks.
        ActionController::Base.session = {
          :key    => '_application_session',
          :secret => '(long, unique string)'
        }

    And while we're on the subject, is there anything else in a default Rails app that should be protected when open sourcing a live app?
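    (The read-it-from-an-ignored-file approach the question sketches is indeed the common pattern; a minimal version for this Rails 2-era config, with an illustrative file name, would be:)

        # the secret lives in config/session_secret, which is added to
        # .gitignore / svn:ignore and never committed
        ActionController::Base.session = {
          :key    => '_application_session',
          :secret => File.read("#{RAILS_ROOT}/config/session_secret").strip
        }

    Database credentials in config/database.yml deserve the same treatment.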

    Read the article

  • Cloud Computing: publication of part 3 of the Syntec Numérique white paper

    - by Eric Bezille
    A client/supplier vision gathered around a draft contractual framework. At the Cloud Computing World Expo held at the CNIT last week, I attended the presentation of Syntec Numérique's new installment on Cloud Computing and the "new models" it brings about: economic models, contracts, client-supplier relationships, and the organization of the IT department. What makes this white paper stand out from those already existing in the field is that it set out to bring together all the players, clients (through CRIP) and suppliers, around a framework for contractual formalization, building on the e-SCM model. An accelerated shift to becoming a service provider, and the end of siloed IT? While Cloud Computing makes it possible to accelerate IT's transformation into a service provider (following on from ITIL v3), it also highlights the challenge this disruptive model poses for CIOs, requiring cross-functional skills to guarantee the qualities expected of a Cloud Computing service: on-demand "self-service" deployment, standardized access over the network, management of pools of shared resources, and an "elastic" service that can be grown or shrunk quickly according to measurable demand. It should be clear from this that Cloud Computing goes well beyond simple server virtualization. As Constantin Gonzales rightly describes in his blog ("Three Enterprise Principles for Building Clouds"), what matters is adherence to the standard for the service's access interface. How the service is then implemented (in the cloud) is the supplier's burden and responsibility: it is up to the supplier to optimize it as well as possible to stay competitive, while guaranteeing the expected service levels. For the service provider, of course, that implementation has to be mastered; it rests essentially on integrating and automating the necessary layers and components... over the long run... while handling the evolution of each of those elements. For the client, the reversibility of the solution must always be ensured through adherence to standards... a point also addressed in the Syntec white paper, which recalls the points to watch and takes stock of the progress of standards around Cloud Computing. Happy reading...

    Read the article

  • What is a reasonable workflow for designing webapps?

    - by Evan Plaice
    It has been a while since I have done any substantial web development, and I'd like to take advantage of the latest practices, but I'm struggling to visualize a workflow that incorporates everything. Here's what I'm looking to use: the CakePHP framework, jsmin (JavaScript Minify), SASS (Syntactically Awesome StyleSheets), and Git. CakePHP: pretty self-explanatory; make modifications and update the source. jsmin: when you modify a script, do you manually run jsmin to output the new minified code, or would it be better to run a pre-commit hook that automatically generates jsmin output for the JavaScript files that have changed? (Assume that I have no knowledge of implementing commit hooks; see the hook sketch just below.) SASS: I really like what SASS has to offer, but I'm also aware that SASS code isn't supported by browsers by default, so at some point the SASS code needs to be transformed into normal CSS. At what point in the workflow is this done? Git: I'm terrified to admit it, but the last time I did any substantial web development I didn't use SCM source control (that is, I did use source control, but it consisted of a very detailed change log with backups). I have since had plenty of experience using Git (as well as Mercurial and SVN) for desktop development, but I'm wondering how best to implement it for web development. Is it common practice to set up a remote repository on the web host so I can push changes directly to the production server, or is there some cross-platform (Windows/Linux) tool that makes it easy to upload only changed files to the production server? Are there web hosting companies that make it easy to set up a remote repository? Do I need SSH access, etc.? I already know how to accomplish this on my own testing server with a remote repository and a separate remote-tracking branch, but I've never done it on a remote production web hosting server before, so I'm not aware of the options yet. Extra: I was considering implementing a JavaScript framework in which the separate JavaScript files used on a page are compiled into a single file per page on the production server, to limit the number of file downloads needed per page. Does something like this already exist? Is there an open source project out in the wild that implements something similar that I could use and contribute to? Considering how paranoid web devs are about performance (and the fact that the number of file requests on a website is a big performance hit), I'm guessing that some wizard hacker on the net has already addressed this issue.
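    (Since the jsmin question assumes no knowledge of hooks, here is a hedged sketch of a Git pre-commit hook covering both the jsmin and SASS steps; it assumes jsmin is a stdin-to-stdout filter on your PATH and that the Ruby sass gem compiles a sass/ directory into css/:)

        #!/bin/sh
        # save as .git/hooks/pre-commit and make it executable

        # re-minify every staged .js file (skip already-minified ones)
        for f in $(git diff --cached --name-only | grep '\.js$' | grep -v '\.min\.js$'); do
            jsmin < "$f" > "${f%.js}.min.js"
            git add "${f%.js}.min.js"
        done

        # regenerate CSS from SASS and stage the result
        sass --update sass:css && git add css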

    Read the article

  • Where Are You on the Visualization Maturity Curve?

    - by Celine Beck
    The old phrase “A picture is worth a thousand words” is as true now as ever. Giving the right users access to the right product data, at the right time, can deliver significant benefits to a business. This is especially evident with increasing technical and product complexity, elongated supply chains, and growing pressure to bring innovative products to market faster. With this in mind, it is easy to understand why visualization is an integral part of any successful product lifecycle management (PLM) strategy. At a bare minimum, knowledge workers use multiple individual documents of different formats and structures, and leverage visualization solutions to access information; but the real value of visualization is fully reaped when it is connected to enterprise applications like PLM and tied to the appropriate business context. The picture below illustrates this visualization maturity curve and the transformational effect that visualization can have on PLM processes and performance, as we presented during the last Oracle Open World (check out the post about AutoVue Key Highlights from Oracle Open World 2012 for more information). Organizations are likely to see a greater positive impact on business performance when visualization is connected to enterprise systems, allowing access to information coming from multiple sources, such as PLM, supply chain management (SCM), and enterprise resource planning (ERP). This allows organizations to reach higher levels of collaboration and optimize decision-making, as users benefit from in-context access to visual information. For instance, within a PLM system, a design engineer can access a product assembly and review only the digital annotations specific to the engineering change request he is reviewing, rather than all historical annotations. The last stage on the curve is what we call augmented business visualization (ABV). ABV is an innovative framework which lets structured data (from Oracle’s Agile PLM, for instance) interact with unstructured data (documents, designs, 3D models, etc.). With this new level of integration, information coming from multiple sources can be presented in a highly visual fashion; color can be used to identify parts with specific characteristics (for example, pending quality issues), and you can take actions directly from within the context of documents and designs, maximizing user productivity. Those who had the chance to attend our PLM session during Oracle Open World already got a sneak peek of our latest augmented business visualization for Oracle’s Agile PLM. The solution generated a lot of wows. Stephen Porter, CEO at Zero Wait State, indicated in a post entitled “The PLM State: the Manhattan Project-Oracle’s Next Big Secret Weapon” that “this kind of synergy between visualization and PLM could qualify as a powerful weapon differentiating Agile PLM from other solutions.” If you are interested in learning more about ABV for Oracle’s Agile PLM and hearing about real examples of visualization use at every stage of the visualization maturity curve, don’t miss our Visual Decision Making to Optimize New Product Development and Introduction session during the Oracle Value Chain Summit (Feb. 4-6, 2013, San Francisco). We look forward to seeing you there!

    Read the article

  • Oracle OpenWorld Call for MDM Papers

    - by david.butler(at)oracle.com
    As the MDM Track owner, I would like to invite everyone to respond to the Oracle OpenWorld (October 2-6, Moscone Center, San Francisco) Call for Papers (https://oracleus.wingateweb.com/portal/cfp/). The Call for Papers is open now through Sunday, March 27. This is an outstanding opportunity for organizations familiar with MDM to tell their story to a very large, knowledgeable, and intensely interested community. Opportunities for feedback and networking abound. I would love to see MDM papers on: business drivers; business benefits; quantified ROI stories; business process optimization; implementation styles; implementation lessons learned; using master data as a service; data governance best practices; end-to-end data quality experiences; support for SOA; Chart of Accounts issues fixed; how to leverage reference data; improving EPM and/or BI across the board; operationalizing a data warehouse; support for cloud computing; compliance success stories; architecture, scalability, and mixed-workload RAC platform performance examples; industry-specific value propositions (Financial Services, Retail, Telecom, Manufacturing, High Tech Manufacturing, Public Sector, Health Care, …); and line-of-business-specific value propositions (CRM, ERP, PLM, SCM, …); etc. In fact, given that MDM positively impacts all areas of operations and analytics, there are no limits to the ideas you may have for an OpenWorld presentation. When you follow the submission process, be sure to use “Master Data Management” for either the Primary or Optional track. Add “Master Data Management” as an Optional track if you are adding MDM content to a presentation on one of the following tracks: Agile; Customer Relationship Management; Oracle E-Business Suite; Product Lifecycle Management; Siebel; Sourcing and Procurement; Supply Chain Management; or one of the 18 available industry tracks. If Cloud Computing is included, please add “Cloud Computing” as a Cross-Stream Track. And don’t forget to make “MDM” a Tag, along with Business Intelligence, Cloud, CRM, Data Integration, Data Migration, Data Warehousing, EPM, or Service-Oriented Architecture whenever your content includes these items. I will personally review each submission. I hope you all keep me very busy over the next few weeks.

    Read the article
