Search Results

Search found 3443 results on 138 pages for 'official repositories'.

Page 3/138

  • Hosting Bazaar shared repositories

    - by Kishor Kundan
    What I want: We operate in a small team of nine people, including developers, QA and designers, and I want to set up version control. We have an Ubuntu server (Server Edition) and I would like to host all our repositories there, but I don't even know whether that is possible. What I have done: We have installed Bazaar on all machines and are using Bazaar Explorer as our GUI front-end; the command line isn't comfortable for all members. We have gone through the manual, but with our inexperience it hasn't been very helpful. Team: the designers are on Windows, and the developers and QA are on Ubuntu. I have googled around and am really struggling to find a good tutorial for this setup, so any links, guides or leads towards accomplishing it would be very helpful. When posting links or answers, please keep our inexperience in mind. Thank you!
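
    A minimal sketch of the kind of setup being asked about, assuming the server is reachable over SSH and that the /srv/bzr path and project names are placeholders:

        # On the Ubuntu server: create a shared, trees-less repository and a trunk branch
        sudo mkdir -p /srv/bzr/myproject
        bzr init-repo --no-trees /srv/bzr/myproject
        bzr init /srv/bzr/myproject/trunk

        # On each workstation (Bazaar Explorer accepts the same URL):
        bzr checkout bzr+ssh://user@server/srv/bzr/myproject/trunk myproject
        bzr commit -m "describe the change"   # after editing
        bzr update                            # pick up everyone else's commits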

    Read the article

  • Generic Repositories with DI & Data Intensive Controllers

    - by James
    Usually, I consider a large number of parameters to be an alarm bell that there may be a design problem somewhere. I am using a generic repository for an ASP.NET application and have a controller with a growing number of parameters.

        public class GenericRepository<T> : IRepository<T> where T : class
        {
            protected DbContext Context { get; set; }
            protected DbSet<T> DbSet { get; set; }

            public GenericRepository(DbContext context)
            {
                Context = context;
                DbSet = context.Set<T>();
            }

            // ...methods excluded to keep the question readable
        }

    I am using a DI container to pass the DbContext into the generic repository. So far this has met my needs and there are no other concrete implementations of IRepository<T>. However, I had to create a dashboard which uses data from many entities, plus a form containing a couple of dropdown lists. Using the generic repository, this makes the parameter requirements grow quickly. The controller ends up being something like:

        public HomeController(IRepository<EntityOne> entityOneRepository,
                              IRepository<EntityTwo> entityTwoRepository,
                              IRepository<EntityThree> entityThreeRepository,
                              IRepository<EntityFour> entityFourRepository,
                              ILogError logError,
                              ICurrentUser currentUser)
        {
        }

    It has about six IRepositories, plus a few others, to bring in the required data and the dropdown list options. In my mind this is too many parameters. From a performance point of view there is only one DbContext per request and the DI container will serve the same DbContext to all of the repositories, but from a code standards/readability point of view it's ugly. Is there a better way to handle this situation? It's a real-world project with real-world time constraints, so I will not dwell on it too long, but from a learning perspective it would be good to see how such situations are handled by others.

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-request features (pull requests are basically exactly what we want as far as code reviews and feature discussions go). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it. I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source: the organization's repository. So I have a few questions: Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed; in that case we would want to institute such a policy for those developers only. So, which is better: all developers in a single repository, or a fork for everyone? Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even forks only for less-privileged devs), will the pull-request feature of hub operate on the pull requests from the upstream master repository (the organization's repository), or does it behave differently? EDIT: I did some testing with issues, forks, and pull requests and found the following. If you create an issue on your organization's repository, fork the repository from your organization to your own GitHub account, make some changes, merge them into your fork's master branch, and then run hub -i <issue #>, you get the error "User is not authorized to modify the issue". So, apparently, that workflow won't work.
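
    A rough sketch of the per-developer fork workflow being weighed up, assuming the organization is acme and the repository app (both placeholders):

        # One-time setup on a developer machine
        git clone git@github.com:devuser/app.git      # the developer's fork
        cd app
        git remote add upstream git@github.com:acme/app.git

        # Day to day: branch, stay current with the organization repo, push to the fork
        git checkout -b fix-login-timeout
        git fetch upstream
        git rebase upstream/master
        git push origin fix-login-timeout
        # then open a pull request against acme/app (via the web UI or hub)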

    Read the article

  • Looking for tips on managing complexity with SCM repositories

    - by Philip Regan
    I am a solo developer in my department and I have a lot of individual projects, all created and managed by me. I started using SVN at ProjectLocker via Versions on the Mac a couple of years ago, when the variety of projects started getting unwieldy. Scenario 1: I now have a process of reasonable complexity that can be broken up into multiple smaller applications, and they all share files. In one phase there is a single shared file, a constants file, that is shared between a Cocoa app and an iPhone app framework. In the second phase, the iPhone app framework will be used to create individual apps of the same ilk (controller classes and whatnot will all be the same) but with different content in each. The problem I am running across is that the file from the first phase lives in one repository with the application that started it, while the app framework is in a second, separate repository. Scenario 2: I have another application framework that partially relies on code from an open source project. This is all internal, non-commercial work, but again, the application framework is going to be used to create a variety of unique products and processes. So now I have an internally managed repository and an externally managed one out of my control. I make little changes to the open source code to meet the needs of my framework when there is an update I download, but I never commit back into the external repository (though, now that I think about it, I don't think I'm committing it to mine either. Oops). The problem: I have all of this set up on my production Mac quite nicely, but duplicating and subsequently maintaining that environment on my laptop has been challenging. For Scenario 1, I've thought of merging these two projects into the same repository because they are, for all intents and purposes, inextricably linked. But for Scenario 2, I think I'm stuck just managing files as best I can. The question: I'm wondering if anyone has any tips on how to manage either of these situations, as well as other complex SCM scenarios, when it comes to linking various files from various repositories together. My familiarity with SVN only comes from my work with Versions. It's been great, but I'm a little out of my depth here.
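
    For Scenario 1, one common Subversion answer is an svn:externals definition, which pulls a directory from another repository into a checkout; a sketch with placeholder URLs and paths:

        # In the root of the app-framework working copy, pull the shared constants
        # directory in from the other repository as an external
        svn propset svn:externals "shared https://svn.example.com/constants/trunk" .
        svn commit -m "Reference shared constants via svn:externals"
        svn update    # subsequent updates fetch the external into ./shared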

    Read the article

  • mismatch of version of libkdcraw20 and libkdcraw-data

    - by naveen jankar
    I'm using Ubuntu 12.04 LTS. When installing digiKam, I'm getting a mismatch between the versions of libkdcraw20 and libkdcraw-data that it wants and the versions available in the repositories. It wants version 4.8.5-0ubuntu0.2 (in other words, the latest version according to Synaptic), but the available one is 4.8.5-0ubuntu0.3 in both cases. Is there a work-around? Or how do I ask the Ubuntu maintainers to rectify this? Addendum: In Synaptic I selected digiKam to be installed. It downloaded all the dependencies, but the two packages in question were not found; the message it gave was "W: Failed to fetch security.ubuntu.com/ubuntu/pool/main/libk/libkdcraw/… 404 Not Found [IP: 91.189.91.13 80]". I searched for the two files in the repositories and found that the version available there is ubuntu0.3 instead of ubuntu0.2.
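
    A 404 on a specific .deb usually just means the local package index is older than the archive (apt is still asking for the superseded ubuntu0.2 build); a hedged first step before anything else:

        sudo apt-get update            # refresh the package lists so apt requests the current ubuntu0.3 packages
        sudo apt-get install digikam   # retry the install against the refreshed index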

    Read the article

  • Repositories for CentOS that don't suck?

    - by Keyo
    I'm used to the Ubuntu/Debian repositories and they are great: I can apt-get just about any package and it'll be there. I have not found this on CentOS. I called my hosting company and they suggested I install Atomic Turtle, since it's compatible with cPanel. This didn't work when I tried to install git:

        yum install git
        ...
        No package git available

    Repeat the same thing for just about any package; the default repositories are pathetic. So perhaps there are other repositories I can use. Can anyone suggest any? Edit: The problem was cPanel excluding some git dependencies in yum.conf. See http://www.cmdln.org/2010/05/07/install-git-on-centos-cpanel-server/
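
    For reference, the exclusion the asker eventually found lives in /etc/yum.conf; a hedged sketch of checking for it and installing around it (the exact exclude patterns vary by cPanel setup):

        grep -i '^exclude' /etc/yum.conf           # cPanel commonly excludes packages it manages itself
        yum --disableexcludes=main install git     # bypass the [main] excludes for this one install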

    Read the article

  • Unable to checkout svn repositories

    - by lucaghera
    I have an Ubuntu 12.04 machine where Apache 2 is set up with SSL certificates. On the same machine there is an SVN server. It all worked fine until the upgrade to 12.04. Now I'm able to access the SVN repositories via a web browser and also via an Eclipse plugin (Subversive), but I'm not able to access them from the command line. When I try to check out a repo from a Mac OS X client it returns:

        svn: E120171: Unable to connect to a repository at URL 'https://IP/svn/repo_name'
        svn: E120171: Error running context: An error occurred during SSL communication

    If I try to check out a repo from an Ubuntu client it returns:

        svn: OPTIONS of 'https://IP/svn/repo_name': SSL handshake failed: SSL error: A TLS warning alert has been received. (https://IP)
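
    One frequently reported cause of "A TLS warning alert has been received" with the GnuTLS-based client on 12.04 is the server returning an unrecognized_name warning because the SSL virtual host's ServerName does not match the name the client connects to; a hedged sketch of what to check (hostnames are placeholders):

        grep -R "ServerName" /etc/apache2/sites-enabled/    # does the SSL vhost declare the name clients use?
        # if not, add e.g. "ServerName svn.example.com" to the SSL vhost, then
        sudo service apache2 reload
        svn ls https://svn.example.com/svn/repo_name        # retry from a client using that name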

    Read the article

  • I can't add PPA repositories!

    - by Karthik Krishna
    For example, after running this command:

        sudo add-apt-repository ppa:tualatrix/ppa

    I get the following output:

        Traceback (most recent call last):
          File "/usr/bin/add-apt-repository", line 125, in <module>
            ppa_info = get_ppa_info_from_lp(user, ppa_name)
          File "/usr/lib/python2.7/dist-packages/softwareproperties/ppa.py", line 80, in get_ppa_info_from_lp
            curl.perform()
        pycurl.error: (6, "Couldn't resolve host 'launchpad.net'")

    Why is this? I just installed Ubuntu 12.04 LTS and it works fine. I have updated the system and installed all the required packages. But as soon as I try to add more package sources, such as PPAs, I am not able to; so far I have not been able to add a single PPA. I am working behind a proxy.
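
    The traceback shows curl inside add-apt-repository failing to resolve launchpad.net, which fits the proxy detail: sudo drops the proxy environment variables by default. A sketch of passing them through (proxy host and port are placeholders):

        export http_proxy=http://proxy.example.com:8080
        export https_proxy=$http_proxy
        # sudo strips most environment variables, so hand them to the command explicitly:
        sudo http_proxy=$http_proxy https_proxy=$https_proxy add-apt-repository ppa:tualatrix/ppa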

    Read the article

  • Is remmina 1.0 in the standard universe repositories?

    - by jackweirdy
    I just noticed that the copy of Remmina I have on my machine (running Ubuntu 12.04) is 9.99.1, which is up to date according to apt. The Remmina website says that the most recent version is 1.0, which uses FreeRDP. I'd like to use FreeRDP instead of rdesktop because of the improved MS RemoteApp support. To cut to the chase: is version 1.0 of Remmina in the repos, or do I have to install it manually? (I've had a quick browse but haven't found anything.) Added:
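
    A quick way to answer the "is it in the repos" part from the command line; apt-cache reports the newest version the enabled archives can install:

        apt-cache policy remmina   # the "Candidate:" line is the version the standard repositories offer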

    Read the article

  • Organizing Git repositories with common nested sub-modules

    - by André Caron
    I'm a big fan of Git sub-modules. I like to be able to track a dependency along with its version, so that you can roll back to a previous version of your project and have the corresponding version of the dependency to build safely and cleanly. Moreover, it's easier to release our libraries as open source projects, as the history for the libraries is separate from that of the applications that depend on them (and which are not going to be open sourced). I'm setting up the workflow for multiple projects at work, and I was wondering what it would be like if we took this approach to a bit of an extreme instead of having a single monolithic project. I quickly realized there is a potential can of worms in really using sub-modules. Suppose a pair of applications, studio and player, and dependent libraries core, graph and network, where the dependencies are as follows: core is standalone; graph depends on core (sub-module at ./libs/core); network depends on core (sub-module at ./libs/core); studio depends on graph and network (sub-modules at ./libs/graph and ./libs/network); player depends on graph and network (sub-modules at ./libs/graph and ./libs/network). Suppose that we're using CMake and that each of these projects has unit tests and all the rest. Each project (including studio and player) must be able to be compiled standalone to perform code metrics, unit testing, etc. The thing is, after a recursive git submodule fetch you get the following directory structure:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/graph/
        studio/libs/graph/libs/         (sub-module depth: 2)
        studio/libs/graph/libs/core/
        studio/libs/network/
        studio/libs/network/libs/       (sub-module depth: 2)
        studio/libs/network/libs/core/

    Notice that core is cloned twice in the studio project. Aside from wasting disk space, I have a build system problem because I'm building core twice and I potentially get two different versions of core. Question: How do I organize sub-modules so that I get the versioned dependency and standalone build without getting multiple copies of common nested sub-modules? Possible solution: If the library dependency is somewhat of a suggestion (i.e. in a "known to work with version X" or "only version X is officially supported" fashion) and potential dependent applications or libraries are responsible for building with whatever version they like, then I could imagine the following scenario: Have the build system for graph and network tell them where to find core (e.g. via a compiler include path). Define two build targets, "standalone" and "dependency", where "standalone" is based on "dependency" and adds the include path pointing to the local core sub-module. Introduce an extra dependency: studio on core. Then studio builds core, sets the include path to its own copy of the core sub-module, and builds graph and network in "dependency" mode. The resulting folder structure looks like:

        studio/
        studio/libs/                    (sub-module depth: 1)
        studio/libs/core/
        studio/libs/graph/
        studio/libs/graph/libs/         (empty folder, sub-modules not fetched)
        studio/libs/network/
        studio/libs/network/libs/       (empty folder, sub-modules not fetched)

    However, this requires some build system magic (I'm pretty confident this can be done with CMake) and a bit of manual work when versions are updated (updating graph might also require updating core and network to get a compatible version of core in all projects). Any thoughts on this?
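
    A sketch of how the flattened layout in the "possible solution" might be set up from the command line (repository URLs are placeholders); the key point is that studio tracks core directly and never recurses into graph's or network's own libs/:

        # In the studio super-project
        git submodule add git@example.com:libs/core.git    libs/core
        git submodule add git@example.com:libs/graph.git   libs/graph
        git submodule add git@example.com:libs/network.git libs/network
        git commit -m "Track core, graph and network as first-level sub-modules"

        # Fetch only the first level (no --recursive), so the nested libs/ folders stay empty
        git submodule update --init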

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When I attempt to browse my Mercurial repositories it usually takes several refreshes before the repository list is displayed. The configuration is as follows: Windows Server 2003 (a dedicated machine hosted by http://www.server4you.com/); the site has anonymous password protection with self-signed SSL; Mercurial 1.5.3; Python 2.6.5; Python for Windows 32 extensions 214 py2.6; isapi-wsgi 0.4.2. The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy below). Other problems with the repository server: before doing a clone/push/etc. I have to browse the repositories first, otherwise hg on my local machine cannot locate the site. I also have one repository with a large changeset that, after a minute or so, throws the error "abort: error: An existing connection was forcibly closed by the remote host" (I will ask another question for that problem). What can I do to start tracking down this problem?

    hgwebdir_wsgi.py

        # Configuration file location
        hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config'

        # Global settings for IIS path translation
        path_strip = 0   # Strip this many path elements off (when using url rewrite)
        path_prefix = 0  # This many path elements are prefixes (depends on the
                         # virtual path of the IIS application).

        import sys

        # Adjust python path if this is not a system-wide install
        #sys.path.insert(0, r'c:\path\to\python\lib')

        # Enable tracing. Run 'python -m win32traceutil' to debug
        if hasattr(sys, 'isapidllhandle'):
            import win32traceutil

        # To serve pages in local charset instead of UTF-8, remove the two lines below
        import os
        os.environ['HGENCODING'] = 'UTF-8'

        import isapi_wsgi
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Example tweak: Replace isapi_wsgi's handler to provide better error message
        # Other stuff could also be done here, like logging errors etc.
        class WsgiHandler(isapi_wsgi.IsapiWsgiHandler):
            error_status = '500 Internal Server Error'  # less silly error message

        isapi_wsgi.IsapiWsgiHandler = WsgiHandler

        # Only create the hgwebdir instance once
        application = hgwebdir(hgweb_config)

        def handler(environ, start_response):
            # Translate IIS's weird URLs
            url = environ['SCRIPT_NAME'] + environ['PATH_INFO']
            paths = url[1:].split('/')[path_strip:]
            script_name = '/' + '/'.join(paths[:path_prefix])
            path_info = '/'.join(paths[path_prefix:])
            if path_info:
                path_info = '/' + path_info
            environ['SCRIPT_NAME'] = script_name
            environ['PATH_INFO'] = path_info
            return application(environ, start_response)

        def __ExtensionFactory__():
            return isapi_wsgi.ISAPISimpleHandler(handler)

        if __name__ == '__main__':
            from isapi.install import *
            params = ISAPIParameters()
            HandleCommandLine(params)

    hgweb.config

        [paths]
        / = C:\Public\Mercurial\Repositories\*

        [web]
        allow_archive = bz2 gz zip ; Allows archive downloads.
        allow_push = ########      ; Users that are allowed to push.

    Read the article

  • Managing Files/Folder in Content Repositories or File Systems with Oracle ADF and WebCenter

    - by Shay Shmeltzer
    One more entry in a set of entries (1, 2, 3) about the capabilities that WebCenter adds to ADF applications. WebCenter is basically the new portal framework in the Oracle stack, and one key thing that portals do is work with content, allowing you to compose and publish content from files as well as save and store content. In this demo you'll see how, using a set of taskflows provided by WebCenter, you can add file management, creation and viewing capabilities to a regular ADF application. To try this out you don't need any fancy content management system; we'll just use your file system for now. All you need is the WebCenter extension installed in JDeveloper, and then you can follow the demo on your own JDeveloper instance. You'll define a connection to your content repository, and then you'll be able to add a bunch of pre-built WebCenter taskflows to your page. Suddenly you can upload, download, create and view documents directly from your application. Check it out:

    Read the article

  • Safe project development - free repositories

    - by friko
    Some time ago we started a private hobby project. We created a project on javaforge.com, created an SVN repository and started developing our app. Right now we are really far along with the project, but somehow we never worried about whether it is really safe on a free development service like JavaForge. I mean, what if our project were to earn some money and the source code became valuable? Could it be stolen, or could somebody take it over? We want to be sure that we are not wasting our time, and want to be really sure about our project's safety. Is it possible to safely develop a project in such a free repository? We would also like to start using Redmine, so if you know any safe place for moving our project, please take this into consideration. Thanks a lot.

    Read the article

  • Warning about unavailable repositories

    - by richzilla
    An icon with an exclamation mark has recently appeared on my panel. The message I get when hovering over it is that the update information is out of date, and that this may be caused by an unavailable repository or a network issue. My network connection appears to be OK. The message advises me to check for updates manually. When I do this, I get the following messages:

        W:Failed to fetch http://ppa.launchpad.net///ubuntu/dists/maverick/main/source/Sources.gz 404 Not Found
        W:Failed to fetch http://ppa.launchpad.net///ubuntu/dists/maverick/main/binary-amd64/Packages.gz 404 Not Found
        E:Some index files failed to download, they have been ignored, or old ones used instead.

    Any idea what's going on?
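
    The 404s point at a source line whose PPA owner and name are empty (note the ppa.launchpad.net/// path), so the archive being asked for does not exist; a sketch of tracking down and disabling that line:

        # Find the malformed entry
        grep -rn "ppa.launchpad.net" /etc/apt/sources.list /etc/apt/sources.list.d/
        # Comment out or delete the line with the empty ///ubuntu path, then refresh
        sudo apt-get update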

    Read the article

  • Repositories and the Save Method

    One of the questions I've been getting lately goes like this: Should a Repository class have a Save method? And the standard answer: it depends. It's hard to keep track of all the different approaches to implementing the repository pattern these days, but I assume that when someone asks me this type of question they are thinking of using code like this:

        var employee = new Employee();
        employeeRepository.Save(employee);

        var account = new Account();
        accountRepository.Save(account);

    This...

    Read the article

  • How to remove corrupted repositories?

    - by istimsak abdulbasir
    I was in the process of updating 11.04 and came across an error message saying:

        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_natty-security_main_i18n_Translation-en
        E: The package lists or status file could not be parsed or opened.

    I tried to remove the damaged repository by going to Ubuntu Software Center, but there was no option to remove a repository. Then I tried Synaptic, but I got the same error message stated above. I cannot find Software Sources in 11.04. How do I remove the repository from the command line, since that seems to be my only option?
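
    Strictly speaking, the error points at a corrupted downloaded package list rather than at the repository itself; a commonly suggested fix is to clear the cached lists and let apt rebuild them:

        sudo rm -rf /var/lib/apt/lists/*   # discard the (corrupted) cached package lists
        sudo apt-get update                # re-download fresh lists from the configured repositories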

    Read the article

  • Hosted bug tracking system with mercurial repositories (Summary of options & request for opinions)

    - by Mark Booth
    The question: What hosted Mercurial repository/bug-tracking system or systems have you used? Would you recommend it to others? Are there serious flaws, either in the repository hosting or in the bug-tracking features, that would make it difficult to recommend? Do you have any other experiences with it, or opinions of it, that you would like to share? If you have used other, non-Mercurial hosted repository/bug-tracking systems, how do they compare? (If I understand correctly, the best format for this type of community-wiki style question is one answer per option, if you have experience of several.)

    Background: I have been looking into options for setting up a bug/issue tracking database and found some valuable advice in this thread and this. But then I got to thinking that a hosted solution might not only solve the problem of tracking bugs, but might also solve the problem we have accessing our Mercurial source code repositories while at customer sites around the world. Since we currently have no way to serve Mercurial repositories over SSL, when I am at a customer site I have to connect my laptop via VPN to my work network and access the repositories over a Samba share (even if it is just to sync twice a day). This is excruciatingly slow on high-latency networks and can be impossible with some customers' firewalls. Even if we could run a Trac or Redmine server here (thanks turnkey), I'm not sure it would be much quicker, as our internet connection is over-stretched as it is. What I would like is for developers to be able to push/pull to/from a remote repository, for servicing engineers to be able to pull from a remote repository, and for customers (both internal and external) to be able to submit bug/issue reports.

    Initial options: The two options I found were Assembla and Jira. Looking at Assembla I thought the 'group' price looked reasonable, but after enquiring I found that each workspace could only contain a single repository. Since each of our products might have up to a dozen repositories (mostly for libraries) which need to be managed separately for each product, I could see it getting expensive really quickly. On the plus side, it appears that 'users' are just workspace members, so you can have as many client users (people who can only submit support tickets and track their own tickets) as you like without using up your user allocation. Jira only charges based on the number of users; unfortunately client users also count towards this if you want them to be able to track their tickets. If you only want clients to be able to submit untracked issues, you can let them submit anonymously, but that doesn't feel very professional to me.

    More options: Looking through the MercurialHosting page that @Paidhi suggested, I've added the options which appear to offer private repositories, along with another that I found with a web search. Prices are as per their websites today (29th March 2010); corrections welcome in the future. Anyway, here is my summary, according to the information given on their websites:

    Assembla, http://www.assembla.com/, looks to be a reasonable price, but suffers from only one repository per workspace, so three projects with 6 repos each would use up most of the spaces associated with a $99/month professional account (20 spaces). Bug tracking is based on Trac. Mercurial+Trac support was announced in a blog entry in 2007, but they only list SVN and Git on their Features web page. Cost: $24, $49, $99 & $249/month for 40, 40, unlimited, unlimited users and 1, 10, 20, 100 workspaces. SSL-based push/pull? Website https login.

    BitBucket, http://bitbucket.org/plans/, is primarily a Mercurial hosting site for open source projects, with SSL support, but they have an integrated bug tracker and they are cheap for private repositories. It has its own issue tracker, but also integrates with Lighthouse & FogBugz. Cost: $0, $5, $12, $50 & $100/month for 1, 5, 15, 25 & 150 private repositories. SSL-based push/pull. No https on website login, but it supports OpenID, so you can choose an OpenID provider with https login.

    Codebase HQ, http://www.codebasehq.com/, supports Hg and is almost as cheap as BitBucket. Cost: £5, £13, £21 & £40/month for 3, 15, 30 & 60 active projects, unlimited repositories, unlimited users (except 10 users at £5/month) and 0.5, 2, 4 & 10GB. SSL-based push/pull? Website https login?

    Firefly, http://www.activestate.com/firefly/, by ActiveState looks interesting, but the website is a little light on details, such as whether you can have only one repository per project or not. Cost: $9, $19 & $39/month for 1, 5 & 30 private projects, with a 0.5, 1.5 & 3 GB storage limit. SSL-based push/pull? Website https login.

    Jira, http://www.atlassian.com/software/jira/, isn't limited by the number of repositories you can have, but by 'user'. It could work out quite expensive if we want client users to be able to track their issues, since a full user account would need to be created for each of them. Also, while there is a Mercurial extension to support Jira, there is no 'advanced integration' for Mercurial from Atlassian Fisheye. Cost: $150, $300, $400, $500, $700/month for 10, 25, 50, 100, 100+ users. SSL-based push/pull? Website https login.

    Kiln & FogBugz On Demand, http://fogcreek.com/Kiln/IntrotoOnDemand.html, integrates Kiln's Mercurial DVCS features with FogBugz, where the combined package is much cheaper than the component parts. Also, the FogBugz integration is supposedly excellent. Cost: £30/developer/month ($5/d/m more than either on their own). SSL-based push/pull?

    SourceRepo, http://sourcerepo.com/, also supports Hg and is even cheaper than BitBucket & Codebase. Cost: $4, $7 & $13/month for 1, unlimited & unlimited repositories/Trac/Redmine instances and 500MB, 1GB & 3GB storage. SSL-based push/pull. Website https login.

    Edit (29th March 2010) & bounty: I split this question into sections, made the questions themselves more explicit, added other options from the research I have done since my first posting, and made this community wiki, since I now understand what CW is for. *8') Also, I've added a bounty to encourage people to offer their opinions. At the end of the bounty period, I will award the bounty to whoever writes the best review (good or bad), irrespective of the number of up/down votes it gets. Given that it's probably more important to avoid bad providers than to find the absolute best one, 'bad reviews' could be considered more important than good ones.

    Read the article

  • Official end of support date for Internet Explorer 6 on Windows XP

    - by scunliffe
    If I read the docs on Windows Service Pack support policies, the specific Internet Explorer lifecycle support page, and the Wikipedia page, I've deduced that IE6 support ends/ended at:

        Windows 2000: ended (date unknown)
        Windows XP SP0 (RTM): ended (Home: 30-Aug-2003, Pro: 30-Sep-2004)
        SP1: ended (Home: 11-Jul-2004, Pro: 11-Jul-2004)
        SP2: Home: 13-Jul-2010, Pro: 13-Jul-2010
        SP3 (released: April 21, 2008): Home: ???, Pro: ???

    What isn't clear is the Windows XP SP3 scenario. In "human" terms, when is the end of support for IE6 on Windows XP SP3? E.g. if there is never a Windows XP SP4... or, heaven forbid, an SP4 is released. I realize this doesn't force people to upgrade etc.; however, I'm trying to get a "semi" official word on when IE6 moves into the "not supported" category. I'm not interested in philosophical answers, e.g. "big enterprise won't upgrade but they will expect support into 2017" stuff... I just want the "clear answer" in terms of official Microsoft support.

    Read the article

  • How do I get security updates for restricted/partner packages?

    - by laramichaels
    I want to perform just security updates on Ubuntu 12.04 LTS, keeping the rest of the system unchanged. I need to do this from the command line, not the GUI update manager. I have implemented the solution described here, which seems to work great for this purpose; I merely substituted 'precise' for 'lucid', given that I am on 12.04. My question is: by using apt pinning as described in that answer, will I still receive security updates for packages distributed through the "other" repositories (partner, restricted, multiverse, etc.)? Or will it only get me updates for the packages in the "core" distribution? Thanks! ~l
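
    A hedged way to check part of this on the machine itself: the security pocket is listed per component in the sources, so you can see directly which components a pin on precise-security still covers (the partner archive is separate and has no -security pocket of its own):

        grep -h "precise-security" /etc/apt/sources.list /etc/apt/sources.list.d/*.list
        # On a default install this typically lists main, restricted, universe and multiverse,
        # so those components do receive updates through the security pocket.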

    Read the article

  • Pulling and pushing between two google code repositories

    - by Kim L
    I'll start by quoting Google's blog: "Project owners can now create multiple repositories for their project, and they can choose to make any of those new repositories a clone of any of the project's other repositories. These project clones share the same commit access permissions as the original project and make it easier for project members to work together on new features. A common pattern in the Mercurial world is to place each 'official' branch into a separate repository with naming conventions like 'project-crew', 'project-stable', and so on." I've done exactly this. I have my default repository, and I've cloned it to a repo named "dev". I intend to use the default repository as my stable repo and the dev repo as my primary development repo. Now I'm just wondering how I should go about pulling and pushing between the default and the dev repositories.
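
    A hedged sketch using Mercurial's [paths] aliases; the dev clone's URL is a placeholder for whatever its Checkout page reports:

        # In the working copy of the default (stable) repository, add to .hg/hgrc:
        #   [paths]
        #   dev = https://code.google.com/p/myproject.dev/
        hg pull dev     # bring finished work from the dev clone into the stable clone
        hg update       # update the working directory
        hg push dev     # or push stable fixes back out to the dev clone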

    Read the article

  • Linq to Sql, Repositories, and Asp.Net MVC ViewData: How to remove redundancy?

    - by Dr. Zim
    Linq to SQL creates objects which are IQueryable and full of relations. Html helpers require specific interface objects like IEnumerable<SelectListItem>. What I think could happen: reuse the objects from Linq to SQL without all the baggage, i.e., return POCOs from the Linq to SQL objects without additional domain model classes? Extract objects that easily convert to (or are) Html helper objects like the SelectListItem enumeration? Is there any way to do this without breaking separation of concerns? Some neat OOP trick to bridge the two needs? For example, if this were within a repository, the SelectListItem wouldn't be there. The select new is a nice way to cut an object out of the Linq to SQL result without the baggage, but it's still referencing a class that shouldn't be referenced:

        IEnumerable<SelectListItem> result =
            (from record in db.table
             select new SelectListItem
             {
                 Selected = record.selected,
                 Text = record.Text,
                 Value = record.Value
             }).AsEnumerable();

    Read the article

  • Downloading Python 2.5.4 (from official website) in order to install it

    - by brilliant
    I was quite hesitant about whether I should post this question here on StackOverflow or on SuperUser, but finally decided to post it here, as Python is more a programming language than a piece of software. I've recently been using the Python 2.5.4 that is installed on my computer, but at the moment I am not at home (and won't be for about two weeks), so I need to install the same version of Python on another computer. This computer has Windows XP installed, just like the one I have at home. The reason I need Python 2.5.4 is that I am using Google App Engine, and I was told that it only supports Python 2.5. However, when I went to the official Python page for the download, I discovered that certain things have changed, and I don't quite remember where exactly on that site I downloaded Python 2.5.4 for my computer at home. I found this page: http://www.python.org/download/releases/2.5.4/ Here is how it looks (if you can't see it here, please check it out at this address: http://brad.cwahi.net/some_pictures/python_page.jpg). A few things here are not clear to me. It says: "For x86 processors: python-2.5.4.msi; For Win64-Itanium users: python-2.5.4.ia64.msi; For Win64-AMD64 users: python-2.5.4.amd64.msi". First of all, I don't know which processor I am using, whether mine is x86 or not; and also, I don't know whether I am a Win64-Itanium or a Win64-AMD64 user. Are Itanium and AMD64 also processors? Later it says: "Windows XP and later already have MSI; many older machines will already have MSI installed." I guess this is my case, but then I am totally puzzled as to which link I should click, as it now seems that I don't need those three previous links (since MSI is already installed on Windows XP), yet there is no fourth link provided for those who use Windows XP or older machines. Of course, there are these words after that: "Windows users may also be interested in Mark Hammond's win32all package, available from Sourceforge," but it seems to me that this is something additional rather than the main file. So my question is simple: where on the official Python website can I download Python 2.5.4; precisely, which link should I click?

    Read the article
