Search Results

Search found 4805 results on 193 pages for 'repository'.


  • Ubuntu 12.04 transmission-daemon and zfsonlinux: bad file descriptor and corrupt pieces

    - by Ivailo Karamanolev
     I'm running Ubuntu 12.04 with zfsonlinux and transmission-daemon. The issue: sporadic "Bad File Descriptor" and "Piece #xxx is corrupt" errors. After I recheck the torrent, everything seems fine. This happens only while downloading; once the torrent is in seeding mode, the errors stop. It also only happens after the torrent client has been running for some time. I installed zfsonlinux from the official stable PPA (https://launchpad.net/~zfs-native/+archive/stable). I previously tried running transmission-daemon from the Ubuntu repository, but I have since switched to building the latest transmission from source with the latest libevent (all stable releases), and the same thing happens. I've seen bug reports (https://trac.transmissionbt.com/ticket/4147) for this issue, but none of them seem to have a solution. How can I fix these errors, or at least understand where they come from and what I can do to rectify the issue?

    Read the article

  • fatal: http://myserverip/home/git/example.git/info/refs not found: did you run git update-server-info

    - by bobobobo
     I followed this example to set up a git repository on my server. It worked, and I successfully pushed my code to it. But now, how do I pull or clone? Using the docs, I tried git clone http://REMOTE_SERVER/home/git/example.git but I'm getting: fatal: http://myserverip/home/git/example.git/info/refs not found: did you run git update-server-info on the server? I ran git update-server-info, but nothing changed. Edit: Ah, hold on. I changed it to git clone ssh://REMOTE_SERVER/home/git/example.git and I'm getting somewhere: it wants my user/pass. But how do I make the server public, so that no login is required?
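     A minimal sketch of the usual fix for dumb-HTTP cloning, assuming the repository really lives at /home/git/example.git on the server: git update-server-info has to be run inside that repository, and the bundled post-update hook keeps its output current after every push.

         # on the server, inside the bare repository
         cd /home/git/example.git
         git update-server-info        # regenerates info/refs for HTTP clients

         # keep info/refs up to date automatically after each push
         # (older git versions ship the hook as hooks/post-update,
         # newer ones as hooks/post-update.sample)
         mv hooks/post-update.sample hooks/post-update
         chmod +x hooks/post-update

     Once that works, HTTP clones need no login; ssh:// URLs, by contrast, always authenticate.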

    Read the article

  • preparing to migrate MOSS 2007 SP1 content into new server with MOSS 2007 SP2 with the same server name

    - by Albert Widjaja
     To all the experts here: I'd like to perform a MOSS server content migration from the existing single-instance server (SQL Server 2008, Windows Server 2008, SharePoint 2007 SP1, all in one) while keeping the server name. Here's the outline from my migration plan document. I have created the new Windows Server 2008 R2 + SQL Server 2008 SP1 + MOSS 2007 SP2 server; after that:
     1. Back up MOSS_ContentDB from SQL Server 2008 SSMS (is this enough to cover all top-level sites and their content in the library and document repository?)
     2. Back up all top-level sites from the CA site using the CA site backup and restore tools.
     3. Turn off the old MOSSDEV01 (the current server).
     4. Rename the new server (TempServ01) to MOSSDEV01 (so that user bookmarks and other links inside the MOSS sites keep working).
     5. Restore the DB.
     6. Restore the site collections and their content from the backup.
     Is that the correct way to do it? I haven't run the server configuration wizard yet to recreate the same port numbers for the SSP, CA and the SQL Server DB. Any ideas and suggestions would be greatly appreciated. Thanks.

    Read the article

  • Fragile XenServer won't create new VDIs anymore

    - by thoiz_vd
     I'm getting increasingly frustrated with XenServer. Currently I'm using 5.6 FP1 and it seems to be very fragile: canceling any VDI-related action almost guarantees trouble. This time I tried to create a new VM using a snapshot as a template, with the fast disk-cloning option disabled. It would have taken much longer than I was able and willing to keep my XenCenter open, so I canceled it. That took ages too, so I had to choose "Quit Anyway." Since then, I seem unable to create any new VDI: new attempts at cloning halt with "The attempt to clone the VDI failed," and creating new VMs based on built-in templates hangs at "Provisioning." I'm in need of some advice on how to solve this. What I have done so far is run xe vdi-list, which returned nothing that looks odd to me, but I'm no expert. I assume that an incomplete VDI is somehow blocking my Storage Repository; however, how to deal with that remains unclear.
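     For hunting down a leftover VDI from the CLI, a sketch along these lines may help (the UUIDs are placeholders; verify carefully before destroying anything):

         # find the UUID of the affected storage repository
         xe sr-list

         # list the VDIs on that SR and look for a nameless or half-created one
         xe vdi-list sr-uuid=<sr-uuid>

         # if an orphan is clearly identifiable, remove it and rescan the SR
         xe vdi-destroy uuid=<vdi-uuid>
         xe sr-scan uuid=<sr-uuid>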

    Read the article

  • Problem with initializing a type with WindsorContainer

    - by the_drow
     public ApplicationView(string[] args)
     {
         InitializeComponent();
         string configFilePath = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "log4net.config");
         FileInfo configFileInfo = new FileInfo(configFilePath);
         XmlConfigurator.ConfigureAndWatch(configFileInfo);
         IConfigurationSource configSource = ConfigurationManager.GetSection("ActiveRecord") as IConfigurationSource;
         Assembly assembly = Assembly.Load("Danel.Nursing.Model");
         ActiveRecordStarter.Initialize(assembly, configSource);

         WindsorContainer windsorContainer = ApplicationUtils.GetWindsorContainer();
         windsorContainer.Kernel.AddComponentInstance<ApplicationView>(this);
         windsorContainer.Kernel.AddComponent(typeof(ApplicationController).Name, typeof(ApplicationController));
         controller = windsorContainer.Resolve<ApplicationController>(); // exception is thrown here

         OnApplicationLoad(args);
     }

     The stack trace is this:

     Castle.MicroKernel.ComponentActivator.ComponentActivatorException was unhandled
       Message="ComponentActivator: could not instantiate Danel.Nursing.Scheduling.Actions.DataServices.NurseAbsenceDataService"
       Source="Castle.MicroKernel"
       StackTrace:
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateInstance(CreationContext context, Object[] arguments, Type[] signature)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
         at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
         at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
         at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
         at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
         at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
         at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
         at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
         at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.ResolveServiceDependency(CreationContext context, ComponentModel model, DependencyModel dependency)
         at Castle.MicroKernel.Resolvers.DefaultDependencyResolver.Resolve(CreationContext context, ISubDependencyResolver parentResolver, ComponentModel model, DependencyModel dependency)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateConstructorArguments(ConstructorCandidate constructor, CreationContext context, Type[]& signature)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.Instantiate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.InternalCreate(CreationContext context)
         at Castle.MicroKernel.ComponentActivator.AbstractComponentActivator.Create(CreationContext context)
         at Castle.MicroKernel.Lifestyle.AbstractLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Lifestyle.SingletonLifestyleManager.Resolve(CreationContext context)
         at Castle.MicroKernel.Handlers.DefaultHandler.Resolve(CreationContext context)
         at Castle.MicroKernel.DefaultKernel.ResolveComponent(IHandler handler, Type service, IDictionary additionalArguments)
         at Castle.MicroKernel.DefaultKernel.ResolveComponent(IHandler handler, Type service)
         at Castle.MicroKernel.DefaultKernel.get_Item(Type service)
         at Castle.Windsor.WindsorContainer.Resolve(Type service)
         at Castle.Windsor.WindsorContainer.Resolve<T>()
         at Danel.Nursing.Scheduling.ApplicationView..ctor(String[] args) in E:\Agile\Scheduling\Danel.Nursing.Scheduling\ApplicationView.cs:line 65
         at Danel.Nursing.Scheduling.Program.Main(String[] args) in E:\Agile\Scheduling\Danel.Nursing.Scheduling\Program.cs:line 24
         at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
         at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
         at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
         at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
         at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
         at System.Threading.ThreadHelper.ThreadStart()
       InnerException: System.ArgumentNullException
         Message="Value cannot be null.\r\nParameter name: types"
         Source="mscorlib"
         ParamName="types"
         StackTrace:
           at System.Type.GetConstructor(BindingFlags bindingAttr, Binder binder, Type[] types, ParameterModifier[] modifiers)
           at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.FastCreateInstance(Type implType, Object[] arguments, Type[] signature)
           at Castle.MicroKernel.ComponentActivator.DefaultComponentActivator.CreateInstance(CreationContext context, Object[] arguments, Type[] signature)
         InnerException:

     It actually says that the type that I'm trying to initialize does not exist, I think.
     This is the concrete type that it complains about:

     namespace Danel.Nursing.Scheduling.Actions.DataServices
     {
         using System;
         using Helpers;
         using Rhino.Commons;
         using Danel.Nursing.Model;
         using NHibernate.Expressions;
         using System.Collections.Generic;
         using DateUtil = Danel.Nursing.Scheduling.Actions.Helpers.DateUtil;
         using Danel.Nursing.Scheduling.Actions.DataServices.Interfaces;

         public class NurseAbsenceDataService : AbstractDataService<NurseAbsence>, INurseAbsenceDataService
         {
             NurseAbsenceDataService(IRepository<NurseAbsence> repository)
                 : base(repository)
             { }
             //...
         }
     }

     The AbstractDataService only holds the IRepository for now. Anyone got an idea why the exception is thrown?

    Read the article

  • speeding up cvs pserver over WAN

    - by Aleksandar Ivanisevic
     I have a remote location with 5 developers who use a CVS (pserver) repository in the main office over the WAN, but the bandwidth I have is only a couple of hundred kbps, so CVS operations are fairly slow. Is there any way to speed this up? I can rsync a local copy of the CVS root(s) over every couple of minutes, so this can handle the updates, but obviously not the commits. Is there any way to tell CVS to update from one server, but push to another one? Any other ideas?

    Read the article

  • javax.naming.InvalidNameException using Oracle BPM and weblogic when accessing directory

    - by alfredozn
     We are getting this exception when we start our cluster (2 managed servers, 1 admin); we have deployed only the EARs corresponding to OBPM 10.3.1 SP1 on a WebLogic 10.3. When the server cluster starts, one of the managed servers (the first to start) gets overloaded and runs out of connections to the directory DB because of this repeated error. It looks like the engine is trying to get the info from the LDAP server, but I don't know why it is building a wrong query.

     fuego.directory.DirectoryRuntimeException: Exception [javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'].
         at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85)
         at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:163)
         at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:110)
         at fuego.directory.hybrid.ldap.Repository.selectById(Repository.java:38)
         at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipantsInternal(MSADGroupValueProvider.java:124)
         at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipants(MSADGroupValueProvider.java:70)
         at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:149)
         at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:152)
         at fuego.directory.hybrid.ldap.LDAPResult.getValue(LDAPResult.java:76)
         at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.setInfo(LDAPOrganizationGroupAccessor.java:352)
         at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:121)
         at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:114)
         at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.fetchGroup(LDAPOrganizationGroupAccessor.java:94)
         at fuego.directory.hybrid.HybridGroupAccessor.fetchGroup(HybridGroupAccessor.java:146)
         at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at fuego.directory.provider.DirectorySessionImpl$AccessorProxy.invoke(DirectorySessionImpl.java:756)
         at $Proxy66.fetchGroup(Unknown Source)
         at fuego.directory.DirOrganizationalGroup.fetch(DirOrganizationalGroup.java:275)
         at fuego.metadata.GroupManager.loadGroup(GroupManager.java:225)
         at fuego.metadata.GroupManager.find(GroupManager.java:57)
         at fuego.metadata.ParticipantManager.addNestedGroups(ParticipantManager.java:621)
         at fuego.metadata.ParticipantManager.buildCompleteRoleAssignments(ParticipantManager.java:527)
         at fuego.metadata.Participant$RoleTransitiveClousure.build(Participant.java:760)
         at fuego.metadata.Participant$RoleTransitiveClousure.access$100(Participant.java:692)
         at fuego.metadata.Participant.buildRoles(Participant.java:401)
         at fuego.metadata.Participant.updateMembers(Participant.java:372)
         at fuego.metadata.Participant.<init>(Participant.java:64)
         at fuego.metadata.Participant.createUncacheParticipant(Participant.java:84)
         at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.loadItems(JdbcProcessInstancePersMgr.java:1706)
         at fuego.server.persistence.Persistence.loadInstanceItems(Persistence.java:838)
         at fuego.server.AbstractInstanceService.readInstance(AbstractInstanceService.java:791)
         at fuego.ejbengine.EJBInstanceService.getLockedROImpl(EJBInstanceService.java:218)
         at fuego.server.AbstractInstanceService.getLockedROImpl(AbstractInstanceService.java:892)
         at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:743)
         at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:730)
         at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:144)
         at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:162)
         at fuego.server.AbstractInstanceService.unselectAllItems(AbstractInstanceService.java:454)
         at fuego.server.execution.ToDoItemUnselect.execute(ToDoItemUnselect.java:105)
         at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
         at fuego.transaction.TransactionAction.startNestedTransaction(TransactionAction.java:527)
         at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:548)
         at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
         at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
         at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:62)
         at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)
         at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:261)
         at fuego.ejbengine.ItemExecutionBean$1.execute(ItemExecutionBean.java:223)
         at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
         at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
         at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
         at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
         at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
         at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
         at fuego.ejbengine.ItemExecutionBean.processMessage(ItemExecutionBean.java:209)
         at fuego.ejbengine.ItemExecutionBean.onMessage(ItemExecutionBean.java:120)
         at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
         at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
         at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
         at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
         at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
         at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
         at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
         at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
         at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
     Caused by: javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'
         at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2979)
         at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
         at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826)
         at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
         at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
         at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
         at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
         at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
         at fuego.jndi.FaultTolerantLdapContext.search(FaultTolerantLdapContext.java:612)
         at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:136)
         ... 67 more

    Read the article

  • Using Subversion with SQL Server Management Studio

    - by Mike
     I am a member of a team with 3 developers. We have started using Redmine here for project management and issue tracking and LOVE it. I have seen elsewhere how nicely Redmine can work when a back-end repository is set up for a project; there is nice integration all around. This shop is currently .NET and SQL Server 2005. I am thinking about recommending a move to Subversion for our VCS (so that we can integrate with Redmine). I have seen a product called VisualSVN which makes it possible to use Visual Studio with Subversion, so that covers .NET. But the other big question is whether it is possible to configure SQL Server Management Studio to somehow use Subversion for its VCS. Has anyone done this? This shop is currently using SourceGear Fortress.

    Read the article

  • Elastic Beanstalk using PHP and Git on a Windows client

    - by ntidote
     I am trying to set up AWS Elastic Beanstalk for PHP using Git, on a Windows client machine. I am done with the prerequisite installations and credentials setup, and I am following http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/create_deploy_PHP.sdlc.html. The following step does not work (I use Git Bash for git-related commands): "From your Git repository directory, type the following command: git aws.config" This gives the error: git: 'aws.config' is not a git command. Please suggest how to deal with this issue.
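     A likely explanation (an assumption based on the Elastic Beanstalk command line tools of that era, not confirmed here): git aws.config is a git alias that only exists after the AWS DevTools repository setup script has been run for the repository. On Windows that means something like:

         rem from a Windows command prompt, inside the Git repository directory;
         rem the unzip location of the Elastic Beanstalk CLI is illustrative
         C:\aws-tools\AWSDevTools\Windows\AWSDevTools-RepositorySetup.bat

     After the setup script registers the aliases, git aws.config should be recognized.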

    Read the article

  • Cron tips for not running cron jobs on holidays (the Monday of a three-day weekend)

    - by Poul
     We have about a hundred machines set up, with each machine running cron jobs like starting and stopping services and archiving these services' log files at the end of the day to a centralized repository. One headache we have is the three-day weekend (we're closed on holidays): we don't want the services starting up on those days and connecting to our business partners' machines. We currently handle this by manually commenting out the most critical jobs and letting a bunch of errors happen all day. Not ideal. Basically, if a job has '1-5' set in the day field, we want this to mean 'work days', not 'Monday to Friday'. We have a database that keeps track of which days are indeed work days. So, is it possible to override cron's day-matching algorithm, or is there some other way to easily set up a cron job so it doesn't run on a Monday holiday? Thanks!
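     A common pattern, sketched below, is to leave '1-5' in the crontab but route every critical job through a small guard script; the workday lookup against the database is left as a hypothetical is-workday helper:

         #!/bin/sh
         # /usr/local/bin/workday-guard -- run the given command only on work days.
         # "is-workday" is a hypothetical helper that queries the holiday database
         # and exits 0 on a work day, non-zero otherwise.
         if is-workday "$(date +%F)"; then
             exec "$@"
         fi

     A crontab entry then becomes, for example:

         0 6 * * 1-5 /usr/local/bin/workday-guard /opt/services/start-services.sh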

    Read the article

  • Customizing post-commit messages in svn for different users

    - by Suresh
     I have an svn repository that users can access (read/write) using their system accounts OR via tunneling over SSH with svnserve. I also have a post-commit hook that sends mail to specific users for different projects via svnnotify; the typical command is svnnotify <params> --to-regex-map <list of email IDs> <regex>. For users who have accounts on the system, the notification email is sent from user@machine.domain, which is fine. For users coming in via tunneling, the email also gets sent from user@machine.domain, which is a fake address, since these users don't have an account; the only reason I specify a tunnel user id is to keep track of who made which update. So my question (finally) is: is there a way to pass a parameter (the "true" email address) to svnserve so that when the post-commit mail is sent, it can be sent "from" the correct email address? P.S. This is my first post here; if I haven't provided sufficient information, apologies. I'm happy to provide more details.
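     Rather than passing anything through svnserve, one workable approach (a sketch; the lookup file and its format are assumptions) is to resolve the committer to a real address inside the post-commit hook and hand it to svnnotify via its --from option:

         #!/bin/sh
         REPOS="$1"
         REV="$2"

         # map the svn author (system or tunnel user) to a real email address;
         # /etc/svn-authors is a hypothetical "user = email" lookup file
         AUTHOR=$(svnlook author -r "$REV" "$REPOS")
         FROM=$(awk -F' = ' -v u="$AUTHOR" '$1 == u { print $2 }' /etc/svn-authors)

         svnnotify --repos-path "$REPOS" --revision "$REV" \
             --from "${FROM:-$AUTHOR@machine.domain}" \
             --to-regex-map <list of email IDs> <regex>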

    Read the article

  • Is it possible to manually specify an alternative Procfile on Heroku?

    - by BillyBBone
     I have a repository which can be deployed in two modes: one is a front-end web application, while the other is a data-manipulating process which runs non-stop, 24x7. The application runs on Django and connects to a Postgres database. For architectural reasons that I won't go into, I'd like to deploy the app in front-end mode as one Heroku application, and deploy the same app (i.e. the same git repo) in data-agent mode as another Heroku application. Both applications will share the same Postgres connection string, and thus the data agent will feed the front-end app. Is it possible to maintain two separate Procfiles in one repo? This would let the 3 appropriate dynos start in front-end mode, and spin up another process entirely in the other mode.
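     Heroku only reads the file named Procfile at the repository root, but a single Procfile can declare both process types, and each app can scale just the one it needs (a sketch; process names and commands are illustrative):

         web: gunicorn myproject.wsgi
         agent: python manage.py run_data_agent

     Then, for example:

         # the front-end app runs only web dynos
         heroku ps:scale web=3 agent=0 --app my-frontend-app
         # the data-agent app runs only the agent process
         heroku ps:scale web=0 agent=1 --app my-agent-app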

    Read the article

  • Capistrano deploying to different servers with different authentication methods

    - by marimaf
     I need to deploy to 2 different servers, and these 2 servers have different authentication methods (one is my university's server and the other is an Amazon Web Services (AWS) server). I already have Capistrano running for my university's server, but I don't know how to add the deployment to AWS, since for that one I need to add SSH options, for example to use the .pem file, like this: ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")] ssh_options[:forward_agent] = true I have browsed Stack Overflow, and no post mentions how to deal with different authentication methods (this and this). I found a post that talks about 2 different keys, but that one refers to a server and a git repository, both using different pem files, which is not the case here. I also got to this tutorial, but couldn't find what I need. I don't know if this is relevant to what I am asking: I am working on a Rails app with Ruby 1.9.2p290 and Rails 3.0.10, and I am using an svn repository. Any help is welcome. Thanks a lot.
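     One way to keep both targets in one project (a sketch using capistrano-ext's multistage support; stage names and hostnames are assumptions) is to give each stage its own file carrying its own ssh_options:

         # config/deploy.rb
         set :stages, %w(university aws)
         require 'capistrano/ext/multistage'

         # config/deploy/aws.rb
         server "ec2-xx-xx-xx-xx.compute-1.amazonaws.com", :app, :web, :db, :primary => true
         ssh_options[:keys] = [File.join(ENV["HOME"], ".ssh", "test.pem")]
         ssh_options[:forward_agent] = true

         # config/deploy/university.rb
         server "deploy.university.example", :app, :web, :db, :primary => true
         # the authentication already in place for this server stays here

     Deployment then becomes cap aws deploy or cap university deploy.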

    Read the article

  • Problem running mercurial against symlinked .hgrc file under Cygwin/Windows 7

    - by emptyset
     This is not a question about handling symlinks in a Mercurial repository. I have a setup at work where I keep my dotfiles in a separate directory (.configuration) that I can use to sync my dotfiles between Cygwin/Windows and Linux, and then use symlinks instead of dotfiles in the home directory. So, I have the symlink ~/.hgrc -> .configuration/.hgrc in my home directory. After setting this up, Mercurial complains thus: $ hg st hg: config error at C:\Users\aaf\.hgrc:1: '!<symlink>ÿþ.configuration/.hgrc' Removing the symlink and replacing it with the actual file works, so the contents of the .hgrc file are not at fault. I can live with that, I suppose, but I'd like to know why this happens. All other tools I've configured the same way work great with symlinked dotfiles.
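     A plausible explanation (an assumption, not confirmed here): Cygwin emulates symlinks as small marker files containing '!<symlink>' plus a UTF-16 byte-order mark (the ÿþ in the error), and a native Windows build of Mercurial reads that marker file literally instead of following the link. One workaround that avoids the symlink entirely is Mercurial's own %include directive in a real one-line ~/.hgrc:

         %include .configuration/.hgrc

     Alternatively, an NTFS-native symlink created with mklink is followed by native Windows programs:

         mklink C:\Users\aaf\.hgrc C:\Users\aaf\.configuration\.hgrc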

    Read the article

  • setting up eclim to support php

    - by tipu
     I have the PDT plugin installed with my eclim, using:

         DISPLAY=:1 ./eclipse/eclipse -nosplash -consolelog -debug \
             -application org.eclipse.equinox.p2.director \
             -repository http://download.eclipse.org/releases/helios \
             -installIU org.eclipse.php.feature.group

     I compiled eclim with the -D argument for PHP:

         ant -Declipse.home=/home/tipu/downloads/eclipse -Dplugins=php

     But creating a project gives me:

         java.lang.IllegalArgumentException: Unable to find nature for alias 'php'. Supported aliases include: javascript=org.eclipse.wst.jsdt.core.jsNature, java=org.eclipse.jdt.core.javanature
         while executing command (port: 9091): -editor vim -command project_create -f "/home/tipu/phpproj2/" -n php

     Any thoughts on how to fix this?

    Read the article

  • Why is it that Xcode cannot push my changes?

    - by Justin Case
     I am writing an iOS application in Xcode. I associated a remote repository with it. I finished writing a view controller file and then went to File - Source Control - Commit. I wrote a commit message. Oddly, every time I typed a space, an error popped up that read "1 of 2 files will be committed." I then tried to push the commit by clicking File - Source Control - Push. However, I get an error that notes that I have unsaved changes. Why? Didn't I just commit?

    Read the article

  • archiva/jetty with nginx ssl proxy: getting http responses

    - by numb3rs1x
     I've been banging my head against this for a while now. I have an Archiva repository server that I'm trying to proxy through nginx with SSL offloading. Archiva has a built-in Jetty server that is listening on port 8008 on localhost. I'm able to get to the Archiva server through the proxy, but it wants to return http responses and not https responses. I thought that setting the following headers was supposed to tell the server to respond with https: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; I also tried "proxy_redirect default;". It seems that the Jetty/Archiva server is not recognizing these, or something more is needed. I've been scouring forums and, as far as I can tell, everything is set as it should be. I'm not sure where else to check at this point. Has anyone had any success with this?
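     For reference, a minimal location block for this kind of offloading looks something like the sketch below (the port comes from the question; the rest is illustrative):

         location / {
             proxy_pass http://127.0.0.1:8008;
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Forwarded-Proto https;
         }

     Note that X-Forwarded-Proto only helps if the backend honors it: Jetty needs its forwarded-header handling turned on (the "forwarded" connector option in Jetty versions of that era) before it will generate https URLs, so the fix may lie on the Jetty/Archiva side rather than in nginx.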

    Read the article

  • Are SANs unreliable?

    - by chaos
    So at the place where I wear one of my various hats, this one representing a development rather than admin role, there's been an initiative to move to SANs. So far, I have been spectacularly unimpressed. First it was this behavior where, when MySQL databases are on the SAN, the first few tables that anything tries to hit after the system boots come up as nonexistent and MySQL has to be restarted before it realizes they're actually there. Then today, on multiple systems (including the primary SVN repository, ever-so-wonderfully) we get SAN mounts spewing IO errors and the filesystems going into read-only, which is the kind of behavior I expect from directly mounted naked disks, not fault-tolerant managed storage. Right now, I'm at the point where if I were putting together a project and somebody said "hey we should use SANs", my response would be "GTFO". So basically I want to know whether my experience is typical or even common, or whether I'm having some kind of freakishly bad luck with SANs. The systems these SANs are attached to are all CentOS machines, if that's relevant.

    Read the article

  • Git on Windows Server

    - by Chris
     I have my Git repository hosted at github.com. I would like to push updates to github.com and then log into my Windows server and do a git pull to get my changes. Is this optimal? It seems like whenever I try to do a git pull on the server, the files appear to have been modified somehow since the last pull, and so I am unable to get the update, as git says I need to commit my local (Windows server) changes. How can I use git the way I want to? Or is there a better way?
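     If the working tree on the server is purely a deployment target, a common pattern (a sketch; it assumes any local edits on the server can be thrown away) is to fetch and hard-reset instead of pulling:

         # on the Windows server, inside the repository
         git fetch origin
         git reset --hard origin/master    # discards all local modifications!

     Line-ending translation is a frequent source of phantom "modified" files on Windows, so checking the repository's core.autocrlf setting (git config core.autocrlf) is also worth a look.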

    Read the article

  • MySQL started to consume about 40% of system CPU time and got unresponsive suddenly

    - by Alex
     I use Debian 6.0.3 x86_64 and MySQL 5.5.20-1~dotdeb.0-log from the Dotdeb repository. The MySQL process started to consume a lot of "sy" (system) CPU time several hours ago, according to this graph. I was not able to connect to the running mysqld process and had to kill it. I found nothing useful in the logs. My setup seems to be quite common (I assume Dotdeb just redistributes stock MySQL versions) and I have never seen anything like this before. What is the possible root cause of this? How can I prevent this situation in the future?

    Read the article

  • Bootstrap a debian build environment and build source packages with no root privileges

    - by Erwan Queffélec
     On Debian squeeze, I am trying to do the following:
     - fetch source packages from the wheezy source repository
     - bootstrap a squeeze chroot for several architectures
     - build the packages for several architectures (i386, amd64 + all and any)
     I want the fetching, bootstrapping and build operations to be scriptable, repeatable, and run as a normal user. For the environment setup, I want to make as little use of the root account as possible (install the necessary dependencies, and maybe some visudo stuff). If possible, I would like to avoid using a VM (pbuilder with User Mode Linux). So far I have tried several things with pbuilder (requires root) and debootstrap (requires root) with little success. Here is an example script of what I want to do (it does not work):

         #!/bin/bash
         set -e
         set -x
         this=`readlink -f $0`
         this_dir=`dirname $this`
         archs='i386 amd64 any all'
         pushd $this_dir/src
         # I actually want the following line to work with a repo that
         # is not in /etc/apt/sources.list but that is another question
         apt-get source cyrus-imapd-2.4
         popd
         for arch in $archs
         do
             build_dir=$this_dir/build/$arch/
             pbuilder --create --configfile $build_dir/pbuilderrc --buildresult $build_dir/
             pbuilder --build --configfile $build_dir/pbuilderrc --buildresult \
                 $build_dir/ $this_dir/src/*.dsc
             # of course I want to use the .dput.cf in /home/myuser/
             # and not in /root/
             dput $build_dir/$arch/*.changes
         done

    Read the article

  • How to know the root device size of a public AMI?

    - by red23jordan
     Since I do not want to pay money for my testing, the free-tier size limit is 10 GB. I can find the root device size for some default AMIs, such as the Amazon Linux AMI 2012.03: "The Amazon Linux AMI 2012.03 is an EBS-backed, PV-GRUB image. It includes Linux 3.2, AWS tools, and repository access to multiple versions of MySQL, PostgreSQL, Python, Ruby, and Tomcat. Root Device Size: 8 GB" The last row displays 8 GB. However, when I find an AMI on the Community page, it does not show the root device size. Does anyone know how to check it for an AMI that is not provided by default, such as CentOS, so that I can make sure it is under 10 GB and still use it for free?
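     One way to check (a sketch using the EC2 API tools of that era; the AMI ID and region are placeholders) is to describe the image and read the volume size out of its block device mapping:

         ec2-describe-images ami-12345678 --region us-east-1
         # for an EBS-backed AMI, the BLOCKDEVICEMAPPING line includes the
         # snapshot ID and the volume size in GB, roughly like:
         # BLOCKDEVICEMAPPING  /dev/sda1  snap-xxxxxxxx  8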

    Read the article

  • svnserve accepts only local connections

    - by stiv
     I've installed svnserve on the Linux box konrad. On konrad I can check out from svn: steve@konrad:~$ svn co svn://konrad A konrad/build.xml On my local Windows PC I can ping konrad, but checkout doesn't work: C:\Projects>svn co svn://konrad svn: E730061: Unable to connect to a repository at URL 'svn://konrad' svn: E730061: Can't connect to host 'konrad': No connection could be made because the target machine actively refused it. My Linux firewall is disabled: konrad# iptables -L Chain INPUT (policy ACCEPT) target prot opt source destination Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination and the Windows firewall is also off (I can't send a screenshot here, so believe me). How can I fix this? Any ideas?
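     Since the connection is actively refused while local checkouts work, a plausible cause (an assumption) is that svnserve is bound to the loopback interface only. A quick check, and a restart listening on all interfaces (the repository path is illustrative):

         # see how svnserve was launched and what address it is bound to
         ps ax | grep svnserve
         netstat -tlnp | grep 3690

         # restart it listening on all interfaces
         svnserve -d -r /var/svn --listen-host 0.0.0.0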

    Read the article

  • How to install two packages that write the same file

    - by Joel E Salas
     Sirs, a conundrum. I have two packages that each create /usr/bin/ffprobe. One of them is ffmpeg from the Debian Multimedia repository, while the other is ffmbc 0.7-rc5 built from source. The hand-rolled one is business-critical, and we used to just install it from source wherever it was necessary; I can only assume it clobbered the ffmpeg file, and there were never any ill effects. In theory, it should be acceptable for our ffmbc package to overwrite the file from the ffmpeg package. Is there an easy way to reconcile this?
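     dpkg has a mechanism for exactly this case: a diversion, which moves ffmpeg's copy aside so both packages can coexist. A sketch (it assumes the locally built package is literally named ffmbc), typically run from the ffmbc package's preinst:

         dpkg-divert --add --package ffmbc --rename \
             --divert /usr/bin/ffprobe.ffmpeg /usr/bin/ffprobe

     With the diversion in place, ffmpeg's binary lands at /usr/bin/ffprobe.ffmpeg and ffmbc's version owns /usr/bin/ffprobe.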

    Read the article

  • wso2 ESB: server configuration CRITICAL

    - by nuvio
     My scenario: I have server_1 (192.168.10.1) with WSO2 ESB and server_2 (192.168.10.2) with GlassFish v3 + web services. Problem: I am trying to create a proxy in the ESB for the Java web services, but the created proxy does not respond properly. The log says "Unable to sendViaPost"; using http or https does not change the result. I think I should configure axis2.xml, but I am having trouble and don't know what to do. What is the configuration for my scenario? Please help me! EDIT: To be clear, I can directly consume the web service on the GlassFish server; it works normally, and both the port and the URL are accessible. Only when I create a "Pass Through Proxy" in the ESB does it not work. I don't think it is a matter of proxy configuration; I never had problems while deployed locally, and the problems started once I uploaded the ESB to a remote server. I really would need someone to point me at the correct procedure for installing the ESB on a remote host: configuration of axis2.xml and carbon.xml, ports, transport receivers, etc.
     P.S. I had a look at the official (WSO2 ESB and Carbon) guides with no luck, but I am missing something...
     Endpoint of the Java web service: http://192.168.10.2:8080/HelloWorld/Hello?wsdl
     ESB proxy endpoint: http://192.168.10.1:8280/services/HelloProxy
     The following is my axis2.xml configuration, please check it:

     <transportReceiver name="http" class="org.apache.synapse.transport.nhttp.HttpCoreNIOListener">
         <parameter name="port" locked="false">8280</parameter>
         <parameter name="non-blocking" locked="false">true</parameter>
         <parameter name="bind-address" locked="false">192.168.10.1</parameter>
         <parameter name="WSDLEPRPrefix" locked="false">https://192.168.10.1:8280</parameter>
         <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter>
         <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
     </transportReceiver>

     <!-- the non blocking https transport based on HttpCore + SSL-NIO extensions -->
     <transportReceiver name="https" class="org.apache.synapse.transport.nhttp.HttpCoreNIOSSLListener">
         <parameter name="port" locked="false">8243</parameter>
         <parameter name="non-blocking" locked="false">true</parameter>
         <parameter name="bind-address" locked="false">192.168.10.1</parameter>
         <parameter name="WSDLEPRPrefix" locked="false">https://192.168.10.1:8243</parameter>
         <!--<parameter name="priorityConfigFile" locked="false">location of priority configuration file</parameter>-->
         <parameter name="httpGetProcessor" locked="false">org.wso2.carbon.transport.nhttp.api.NHttpGetProcessor</parameter>
         <parameter name="keystore" locked="false">
             <KeyStore>
                 <Location>repository/resources/security/wso2carbon.jks</Location>
                 <Type>JKS</Type>
                 <Password>wso2carbon</Password>
                 <KeyPassword>wso2carbon</KeyPassword>
             </KeyStore>
         </parameter>
         <parameter name="truststore" locked="false">
             <TrustStore>
                 <Location>repository/resources/security/client-truststore.jks</Location>
                 <Type>JKS</Type>
                 <Password>wso2carbon</Password>
             </TrustStore>
         </parameter>
         <!--<parameter name="SSLVerifyClient">require</parameter> supports optional|require or defaults to none -->
     </transportReceiver>

    Read the article
