Search Results

Search found 4585 results on 184 pages for 'master subdocument'.


  • Git - post-receive hook with git pull "Failed to find a valid git directory"

    - by ludicco
    It's very weird, but after setting up a git repository I created a post-receive hook containing:

        echo "--initializing hook--"
        cd ~/websites/testing
        echo "--prepare update--"
        git pull
        echo "--update completed--"

    The hook does run, but it never manages to run git pull properly:

        6bfa32c..71c3d2a  master -> master
        --initializing hook--
        --prepare update--
        fatal: Not a git repository: '.'
        Failed to find a valid git directory.
        --update completed--

    So I'm asking myself: how can I make the hook update the clone from post-receive? The user running the processes is the same, and everything lives inside that user's home folder, so I really don't understand it. If I manually run "cd ~/websites/testing" and then "git pull", it works without any problem. Any help on this would be much appreciated. Thanks a lot.
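    A minimal sketch of the usual fix: git runs hooks with GIT_DIR set to the receiving repository, so even after the cd, "git pull" still looks for '.' as its git directory and fails. Clearing GIT_DIR before pulling typically resolves this; the path below simply follows the question's layout.

        #!/bin/bash
        # post-receive: hooks inherit GIT_DIR from the repo being pushed to,
        # which breaks "git pull" when run from any other directory.
        echo "--initializing hook--"
        unset GIT_DIR                   # forget the receiving repo's environment
        cd ~/websites/testing || exit 1
        echo "--prepare update--"
        git pull
        echo "--update completed--"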

    Read the article

  • How to setup Hadoop cluster so that it accepts mapreduce jobs from remote computers?

    - by drasto
    There is a computer I use for Hadoop map/reduce testing. This computer runs 4 Linux virtual machines (using Oracle VirtualBox). Each of them has Cloudera's Hadoop (distribution c3u4) installed and serves as a node of the Hadoop cluster. One of those 4 nodes is the master node, running the namenode and jobtracker; the others are slave nodes. Normally I use this cluster from the local network for testing. However, when I try to access it from another network I cannot send any jobs to it. The computer running the Hadoop cluster has a public IP and can be reached over the internet for other services. For example, I can reach the HDFS (namenode) administration site and the map/reduce (jobtracker) administration site (on ports 50070 and 50030 respectively) from the remote network. It is also possible to use Hue. Ports 8020 and 8021 are both allowed. What is blocking my map/reduce job submissions from reaching the cluster? Is there some setting I must change first in order to be able to submit map/reduce jobs remotely? Here is my mapred-site.xml file:

        <configuration>
          <property>
            <name>mapred.job.tracker</name>
            <value>master:8021</value>
          </property>
          <!-- Enable Hue plugins -->
          <property>
            <name>mapred.jobtracker.plugins</name>
            <value>org.apache.hadoop.thriftfs.ThriftJobTrackerPlugin</value>
            <description>Comma-separated list of jobtracker plug-ins to be activated.</description>
          </property>
          <property>
            <name>jobtracker.thrift.address</name>
            <value>0.0.0.0:9290</value>
          </property>
        </configuration>

    And this is in the /etc/hosts file:

        192.168.1.15 master
        192.168.1.14 slave1
        192.168.1.13 slave2
        192.168.1.9  slave3
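    One likely culprit, sketched below: mapred.job.tracker is the private hostname "master", so a remote client either cannot resolve it or resolves it to an unroutable 192.168.1.x address. A hedged client-side check, assuming the Hadoop CLI is installed on the remote machine and "cluster.example.org" stands in for the cluster's public name:

        # From the remote network: is the jobtracker RPC port actually reachable?
        telnet cluster.example.org 8021

        # Submit a job while overriding the filesystem and jobtracker addresses
        # (generic options parsed by "hadoop jar" for ToolRunner-based jobs):
        hadoop jar hadoop-examples.jar wordcount \
            -fs hdfs://cluster.example.org:8020 \
            -jt cluster.example.org:8021 \
            /input /output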

    Read the article

  • PostgreSQL 9.1 Database Replication Between Two Production Environments with Load Balancer

    - by littleK
    I'm investigating different solutions for database replication between two PostgreSQL 9.1 databases. The setup will include two production servers on the cloud (Amazon EC2 X-Large instances), with an elastic load balancer. What is the typical database implementation for this type of setup? Master-master replication (with Bucardo or rubyrep)? Or perhaps only one shared database between the two environments, with shared-disk failover? I've been getting some ideas from http://www.postgresql.org/docs/9.0/static/different-replication-solutions.html. Since I don't have a lot of experience with database replication, I figured I would ask the experts. What would you recommend for the described setup?
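    For reference, 9.1 also ships built-in streaming replication (master plus hot standby rather than master-master), which is often the simplest starting point behind a load balancer that sends writes to one node. A minimal sketch, assuming a replication role named "repl" and a standby at 10.0.0.2:

        # postgresql.conf on the master
        wal_level = hot_standby
        max_wal_senders = 3

        # pg_hba.conf on the master: let the standby stream WAL
        host  replication  repl  10.0.0.2/32  md5

        # postgresql.conf on the standby
        hot_standby = on

        # recovery.conf on the standby
        standby_mode = 'on'
        primary_conninfo = 'host=10.0.0.1 user=repl password=secret'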

    Read the article

  • Error Message: Primary Slave Drive - ATAPI Incompatible

    - by Jaleel Dwight
    So I recently switched towers and now, all of a sudden, I get this error message as soon as I boot the system: "Primary Slave Drive - ATAPI Incompatible". This has been really frustrating, since there is NO primary slave drive even installed in my computer! I have one hard drive installed, which is recognized as the primary master, and a DVD drive, which is recognized as the secondary master. My old system ran flawlessly. I'm beginning to think my motherboard is bugging out, because I fail to understand how it can tell me that a nonexistent drive is incompatible. Any suggestions?

    Read the article

  • backup and restoration of a freeipa infrastructure

    - by Sirex
    I'm finding the documentation on IPA server backup and restoration sadly lacking, and with something this centrally critical it's not a matter I'm happy shooting in the dark on. Could some kind soul more knowledgeable in the matter please attempt to provide an idiot-proof guide to backing up and restoring IPA server(s)? Particularly the main server (the cert-signing one).

    We're looking towards rolling out IPA in a two-server setup (1 master, 1 replica). I'm using DNS SRV records to handle failover, so the loss of the replica isn't a big deal: I could make a new one and force a resync. It's losing the master that bothers me. The thing I'm really struggling with is locating a step-by-step procedure for backing up and restoring the master server. I'm aware that a whole-VM snapshot is the recommended way of doing IPA server backup, but that isn't an option for us at this time. I'm also aware that FreeIPA 3.2.0 includes some sort of built-in backup command, but that isn't in the IPA version in CentOS, and I don't expect it will be for some time yet.

    I've been trying many different methods, but none of them seem to restore cleanly. Amongst others, I've tried:

        a command similar to:
        db2ldif.pl -D "cn=directory manager" -w - -n userroot -a /root/userroot.ldif

        the script from here to produce three ldif files: one for the domain
        ({domain}-userroot), and two for the ipa server (ipa-ipaca and ipa-userroot)

    Most of the restores I've tried have been of the form:

        ldif2db.pl -D "cn=directory manager" -w - -n userroot -i userroot.ldif

    which seems to work and reports no errors, but totally borks the IPA install on the machine: I can no longer log in with either the admin password from the backed-up server or the one I set on installation before attempting the ldif2db command (I'm installing ipa-server, running ipa-server-install, then attempting the restore). I'm not overly bothered about losing the CA, having to rejoin the domain, losing replication and so on (although it'd be awesome if that could be avoided), but if the main server drops I'd really like to avoid having to re-enter all the user/group information. I guess if I lost the main server I could promote the other one and replicate in the other direction, but I've not tried that either. Has anyone done that?

    tl;dr: Can someone provide an idiot's guide to backing up and restoring an IPA server (preferably on CentOS 6), clear enough that I'd feel confident it will actually work when the dreaded time comes? Crayons are optional, but appreciated ;-) I can't be the only person struggling with this, seeing how widely used IPA is, surely?
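    For what it's worth, a minimal nightly export sketch built from the same 389-ds tools mentioned above. This captures directory data only (users/groups), not the CA or Kerberos master keys, and the instance path is an assumption that will differ per install:

        # Assumes a 389-ds instance named EXAMPLE-COM; adjust to your realm.
        INST=/usr/lib64/dirsrv/slapd-EXAMPLE-COM
        $INST/db2ldif.pl -D "cn=directory manager" -w - -n userRoot \
            -a /backup/userroot-$(date +%F).ldif
        $INST/db2ldif.pl -D "cn=directory manager" -w - -n ipaca \
            -a /backup/ipaca-$(date +%F).ldif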

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion which it would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great, and quite possibly I'll decompose this question into more manageable issues, but here's my first shot.

    Workflows / branching models: below are the three main descriptions of this I have seen, but they partially contradict each other, or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?

        gitworkflows(7) Manual Page
        (nvie) A successful Git branching model
        (reinh) A Git Workflow for Agile Teams

    Merging vs rebasing (tangled vs sequential history): the bids on this are as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until the task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose (see the sketch after the link list below). It has other drawbacks, however. Also, many haven't realized the useful property of merging: that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).

    I am looking for a natural workflow: sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (or, even worse, merged or something is based on it), then the remaining option is cherry-picking, with its associated perils. What simple rules do you use for cases like this? Included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature to start another one, feeling like the code they just wrote is not there anymore.

    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.

    How to decompose into topical branches? We realize that it would be awesome to assemble a finished integration from topic branches, but often the work of our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it can not be taken out of there again, per the question above. How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this. Any suggestions? Integration branches, an illustration please?

    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!
    Below is a list of related topics on stackoverflow I have checked out:

        What are some good strategies to allow deployed applications to be hotfixable?
        Workflow description for git usage for in-house development
        Git workflow for corporate Linux kernel development
        How do you maintain development code and production code? (thanks for this PDF!)
        git releases management
        Git Cherry-pick vs Merge Workflow
        How to cherry-pick multiple commits
        How do you merge selective files with git-merge?
        How to cherry pick a range of commits and merge into another branch
        ReinH Git Workflow
        git workflow for making modifications you'll never push back to origin
        Cherry-pick a merge
        Proper Git workflow for combined OS and Private code?
        Maintaining Project with Git
        Why cant Git merge file changes with a modified parent/master.
        Git branching / rebasing good practices
        When will "git pull --rebase" get me in to trouble?
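    The sketch promised above: finishing a topic branch with a deliberate merge commit, so history records where the topic began and ended (branch names are examples):

        git checkout master
        git merge --no-ff topic/login-fix   # force a merge commit even when
                                            # a fast-forward would be possible
        git branch -d topic/login-fix
        git push origin master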

    Read the article

  • BackupExec 12 to Bacula questions

    - by LVDave
    We have a 128-node compute cluster for environmental modeling, with a master/head node which we currently back up with a Windows 2003 system running BackupExec 12 and a single HP LTO3 tape drive. We have recently ordered an Overland NEO200s 12-slot library, and are considering migrating the backup server off Windows to CentOS 5. The master/head node is RHEL5, with the compute nodes currently being migrated from a mix of RHEL3/4/5 to CentOS 5. I'm fairly familiar with RH/CentOS, but have no experience with Bacula. We've tentatively settled on Bacula, as our cluster vendor recommended it. My questions are: 1) Does Bacula support an Overland NEO200s/LTO3 library? 2) Can Bacula catalog/restore tapes written by BackupExec? 3) I've heard of Amanda, but am even more unfamiliar with it than with Bacula. Any assistance would be appreciated. Dave Frandin
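    On question 1: Bacula drives SCSI libraries generically through the mtx changer script, so support mostly comes down to Linux seeing the changer as a SCSI generic device rather than Bacula knowing the NEO200s by name. A hedged bacula-sd.conf fragment (device nodes are examples; check lsscsi -g for yours):

        Autochanger {
          Name = "NEO200s"
          Device = "LTO3-Drive"
          Changer Command = "/usr/libexec/bacula/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg3        # the library's SCSI generic node
        }
        Device {
          Name = "LTO3-Drive"
          Media Type = LTO3
          Archive Device = /dev/nst0       # the tape drive itself
          Autochanger = yes
          AutomaticMount = yes
        }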

    Read the article

  • Apache gives empty reply

    - by Jorge Bernal
    It happens randomly, and only on Moodle installations. Apache doesn't add a line to the logs when this happens, and I don't know where to look.

        koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001
        curl: (52) Empty reply from server
        koke@escher:~/Code/eboxhq/moodle[master]$ curl -I http://training.ebox-technologies.com/login/signup.php?course=WNA001
        HTTP/1.1 200 OK

    The Apache conf is quite straightforward and works perfectly in the other vhosts:

        <VirtualHost *:80>
          ServerAdmin [email protected]
          DocumentRoot /srv/apache/training.ebox-technologies.com/htdocs
          ServerName training.eboxhq.com
          ErrorLog /var/log/apache2/training.ebox-technologies.com-error.log
          CustomLog /var/log/apache2/training.ebox-technologies.com-access.log combined
          <FilesMatch "\.(ico|gif|jpe?g|png|js|css)$">
            ExpiresActive On
            ExpiresDefault "access plus 1 week"
            Header add Cache-Control public
          </FilesMatch>
        </VirtualHost>

    Using Apache 2.2.9, PHP 5.2.6 and Moodle 1.9.5+ (Build: 20090722). Any ideas welcome :)
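    An empty reply with no log entry frequently means the handling child died mid-request; a segfaulting PHP will do exactly this. Two hedged checks:

        # Crashed children are reported in Apache's *global* error log,
        # not the vhost's ErrorLog:
        grep -i "segmentation fault" /var/log/apache2/error.log

        # Reproduce under a single foreground worker to see the failure directly:
        apache2ctl -X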

    Read the article

  • Git pull auto complete OSX

    - by vodkhang
    Following the instructions on this site http://denis.tumblr.com/post/71390665/adding-bash-completion-for-git-on-mac-os-x-leopard I got git auto-completion working on Mac OS. However, when I type git pull origin ma (for master) and then hit tab, it takes a long time for the completion to produce git pull origin master. I think it connects to the server to get the branches, but I am not sure. Is there any way to make it faster and only use the branches on the local machine?

        cd /tmp
        git clone git://git.kernel.org/pub/scm/git/git.git
        cd git
        git checkout v`git --version | awk '{print $3}'`
        cp contrib/completion/git-completion.bash ~/.git-completion.bash
        cd ~
        rm -rf /tmp/git
        echo -e "source ~/.git-completion.bash" >> .profile
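    A quick hedged test of that theory: completing a refspec for git pull asks the remote for its refs, while listing remote-tracking branches is purely local.

        # If this is slow, the completion delay is network round-trips:
        time git ls-remote origin >/dev/null

        # The same branch names, available instantly from local refs:
        git for-each-ref --format='%(refname:short)' refs/remotes/origin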

    Read the article

  • Jenkins swarm-plugin jar file, won't run in background

    - by JeanMertz
    We're working on an automation script for our Jenkins slaves on a local Unix server. To connect the slaves to the Jenkins master, we use the swarm plugin. Setting up the master was easy, and connecting clients is also easy, with a single command. However, I am trying to get the slave command (a Java application) to run in the background without stalling the current process, and this doesn't seem to work. I've created an init.d file and added it with update-rc.d, but that doesn't work:

        #!/bin/bash
        /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar -executors 4

    I've also tried to run it with an ampersand (&) to start the process in the background, but that doesn't work either because, from looking at the source, the jar file actually boots another process that is then started in the foreground. Any ideas on how to make this jar file start without stopping the bootstrap script?
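    A hedged init-script body that detaches the client from the boot shell: nohup plus output redirection, with the PID recorded so a stop action can kill it later. If the client really re-executes itself in the foreground, setsid gives it a fresh session so nothing stays tied to the script. Log and PID paths are assumptions:

        #!/bin/bash
        # Start the swarm client fully detached from the calling shell.
        nohup setsid /usr/bin/java \
            -jar /root/swarm-client-1.7-jar-with-dependencies.jar \
            -executors 4 >/var/log/swarm-client.log 2>&1 &
        echo $! > /var/run/swarm-client.pid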

    Read the article

  • Point dns server to root dns servers [duplicate]

    - by Dhaksh
    This question already has an answer here:

        What is a glue record? (3 answers)
        Why does DNS work the way it does? (4 answers)

    I have set up a custom authoritative-only DNS server using BIND9, in a master and slave arrangement. Assume the DNS servers are:

        ns1.customdnsserver.com [192.168.91.129]  ==> Master
        ns2.customdnsserver.com [192.168.91.130]  ==> Slave

    Now I will host a few shared-hosting websites on my own web server, and link the above nameservers to the domains in that shared hosting. My question is: how do I tell the root DNS servers about my own authoritative-only DNS server? So that when someone queries for the domain www.example.com, and the domain's website is hosted in my shared hosting, I want the root servers to point the query to my own DNS server, so that www.example.com gets resolved to an IP address.
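    Short answer, as the linked duplicates explain: you never talk to the root servers directly. You register your nameservers (with glue records, since they live inside the domains they serve) at the registrar, and delegation then flows down from the .com servers. A hedged verification once that's in place:

        # Walk the delegation chain from the root down to your servers:
        dig +trace www.example.com

        # Ask a .com server directly what NS records and glue it hands out:
        dig @a.gtld-servers.net example.com NS +norecurse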

    Read the article

  • Prevent zsh from trying to expand everything

    - by Attila O.
    Having recently switched from bash, I noticed that zsh tries to expand every command or argument that looks like it contains a wildcard. So the following lines no longer work:

        git diff master{,^^}
        zsh: no matches found: master^^

        scp remote:~/*.txt .
        zsh: no matches found: remote:~/*.txt

    The only way to make the above commands work is to quote the arguments, which is quite annoying. Q: How do I configure zsh to still try to expand wildcards, but if there are no matches, just pass the argument on as-is? EDIT: Possibly related: scp with zsh : no matches found
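    For completeness, the behaviour comes from zsh's nomatch option; turning it off restores the bash-style fallback of passing the unmatched word through unchanged. A one-line ~/.zshrc snippet:

        # If a glob matches nothing, pass it to the command verbatim instead
        # of aborting with "no matches found" (bash-like behaviour):
        setopt nonomatch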

    Read the article

  • CENTOS: unsupported dictionary type: sqlite in POSTFIX

    - by Ferdinand
        Oct 30 09:24:15 postfix postfix/smtpd[1622]: fatal: unsupported dictionary type: sqlite
        Oct 30 09:24:16 postfix postfix/master[1165]: warning: process /usr/libexec/postfix/smtpd pid 1622 exit status 1
        Oct 30 09:24:16 postfix postfix/master[1165]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    I'm trying to use SQLite with Postfix, but I get the error above. I'm using CentOS 6.4 x64, and I have sqlite and sqlite-devel installed. I'm assuming the Postfix from BASE (the CentOS repo) comes without SQLite support? I haven't been able to recompile with SQLite support using this: http://www.postfix.org/SQLITE_README.html Is there another way to get it to work?
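    A hedged sketch of the rebuild route: first confirm which map types the installed binary supports, then rebuild from source with the arguments the SQLITE_README describes (the flags below assume sqlite-devel put headers and libs in the default locations):

        # What dictionary types does the installed postfix actually support?
        postconf -m

        # From an unpacked postfix source tree:
        make makefiles CCARGS='-DHAS_SQLITE' AUXLIBS='-lsqlite3 -lpthread'
        make
        make install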

    Read the article

  • How to proceed setting up a secondary mysql linux slave?

    - by Algorist
    I have a MySQL database master and slave in production, and I want to set up an additional MySQL slave. There is around 15 terabytes of data in the database, in a mix of MyISAM and InnoDB tables. I am thinking of the options below:

        1. Shut down the master database and copy the MySQL data folder to the
           secondary slave. Can InnoDB tables be copied like this?
        2. Run FLUSH TABLES WITH READ LOCK, scp the files to the new slave, and
           unlock the tables. This works for MyISAM tables; can I do the same
           for InnoDB tables too?

    Thanks for looking at the question.
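    A hedged note on those options: InnoDB files can only be copied safely while mysqld is fully shut down (a read lock is not enough, since InnoDB keeps dirty pages and an active log), so option 1 is the baseline unless you use a hot-backup tool such as Percona XtraBackup. Shutting down the existing slave instead of the master yields the same files without production downtime. Sketch, with hosts and binlog coordinates (captured via SHOW MASTER STATUS / SHOW SLAVE STATUS before the copy) as placeholders:

        # On the (cleanly stopped) existing slave:
        rsync -a /var/lib/mysql/ newslave:/var/lib/mysql/

        # On the new slave, after starting mysqld:
        mysql -e "CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl',
                  MASTER_PASSWORD='...', MASTER_LOG_FILE='mysql-bin.000123',
                  MASTER_LOG_POS=456; START SLAVE;"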

    Read the article

  • MySQL replication/connection failing over SSL

    - by Marcel Tjandraatmadja
    I set up two MySQL servers, where one replicates from the other. They both work perfectly, but once I turn on SSL I get the following error:

        ERROR 2026 (HY000): SSL connection error

    I get the same error running this from the command line:

        mysql --ssl=1 --ssl-ca=/etc/mysql/certificates/ca-cert.pem \
              --ssl-cert=/etc/mysql/certificates/client-cert.pem \
              --ssl-key=/etc/mysql/certificates/client-key.pem \
              --user=slave --password=slavepassword --host=master.url.com

    Both MySQL servers are running version 5.0.77. One difference: MySQL on the master server was compiled for x86_64, while on the slave it was compiled for i686. Both machines are running CentOS 5, and I generated the certificates as per this page. Any ideas for finding a solution?
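    Two hedged checks worth running before digging deeper: whether the client certificate actually verifies against the configured CA, and whether SSL is really compiled in and enabled on the master (5.0-era builds were linked against either OpenSSL or the bundled yaSSL, and mismatches are a classic source of ERROR 2026):

        # Does the client cert verify against the CA the server was given?
        openssl verify -CAfile /etc/mysql/certificates/ca-cert.pem \
            /etc/mysql/certificates/client-cert.pem

        # On the master: is SSL support present and switched on?
        mysql -e "SHOW VARIABLES LIKE '%ssl%';"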

    Read the article

  • How to disallow use of disable_functions?

    - by Gaia
    I'm obviously not the first one to have this problem, but I cannot find an answer to this situation. I want to lock down PHP a bit, more specifically the use of disable_functions. The environment is CentOS 6.2 / PHP 5.3.3 (fcgid) / Apache 2.2.15.

        1. What's the proper Apache config (AllowOverride, etc.) to prevent any
           PHP setting from being changed via .htaccess? All other overrides are
           OK (the current setting is AllowOverride All).
        2. What's the proper config to forbid effective use of disable_functions
           anywhere but the master php.ini? That is: forbid disable_functions in
           /home/myvhost/etc/php5/php.ini or any directory within that vhost's
           public_html, so the only effective disable_functions comes from the
           master php.ini.
        3. If #2 is not possible, what's the proper config to disallow a vhost
           owner from effectively using any php.ini but the vhost's main one
           (/home/myvhost/etc/php5/php.ini)?

    Thanks
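    A hedged observation plus sketch: under fcgid, php_value lines in .htaccess are ignored anyway (they only work under mod_php), so the real control point is which php.ini the php-cgi binary loads, and that is set by the PHPRC environment variable in the fcgid wrapper script. Pinning every vhost to the system ini might look like this (paths are assumptions):

        #!/bin/bash
        # fcgid wrapper: force the system php.ini directory regardless of any
        # per-vhost ini the account owner drops into their home directory.
        export PHPRC=/etc
        exec /usr/bin/php-cgi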

    Read the article

  • DNS delegation on same server with DDNS and second slave server

    - by Austin
    I have two servers running BIND: the first is set up as the master of two zones, and the second as a slave for those zones. The zones are example.com and ddns.example.com. I have DDNS running, and thousands of device entries are dynamically created in ddns.example.com. I wanted to keep DDNS separate from the main example.com, so I created a separate zone that the DHCP servers update. Considering these zones are hosted on the same server, is it possible to have delegation working from example.com to ddns.example.com? For example, if my workstation's search domain is example.com and it points at 10.1.10.1 for DNS, I would like to be able to resolve hostname.ddns. As it is, I can resolve hostname.ddns.example.com, but I would like to be able to resolve just hostname.ddns. Alternatively, if the workstation's search domain is ddns.example.com, what settings do I need to change to resolve web, ftp, etc., which are all hosts in the parent example.com zone? Does the ddns.example.com zone need to forward to the example.com zone? Again, all the zones are set up on the same server, with a second server set up as a slave. named.conf:

        zone "example.com" IN {
            type master;
            file "example.com";
            allow-update { none; };
        };
        zone "ddns.example.com" IN {
            type master;
            file "ddns.example.com";
            allow-update { key dhcp-update; };
        };

    example.com zone file:

        $ORIGIN .
        $TTL 86400
        example.com  IN SOA  ns1.example.com. hostmaster.example.com. (
                             serial, refresh, retry, etc. )
                     NS      ns1.example.com.
                     NS      ns2.example.com.
        $ORIGIN example.com.
        ns1    A  10.1.10.1
        ns2    A  10.1.10.2
        web    A  10.1.15.30
        ftp    A  10.1.15.31
        host3  A  10.1.15.32
        $ORIGIN ddns.example.com
               NS ns1
               NS ns2
        ns1    A  10.1.10.1
        ns2    A  10.1.10.2
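    On the short-name side, this is resolver behaviour rather than delegation: the client's search list appends suffixes until a lookup succeeds, so hostname.ddns only needs example.com in the search path. A hedged /etc/resolv.conf for the workstations:

        nameserver 10.1.10.1
        nameserver 10.1.10.2
        # "hostname.ddns" -> hostname.ddns.example.com, "web" -> web.example.com
        search example.com
        # or, to try the DDNS zone first for bare hostnames:
        # search ddns.example.com example.com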

    Read the article

  • Spring MVC and 406 Error XML request

    - by Asp1de
    Hi, I have a problem when running my code outside Eclipse. This is my Equinox environment:

        Framework is launched.
        id  State     Bundle
        0   ACTIVE    org.eclipse.osgi_3.7.0.v20110221
        1   ACTIVE    org.eclipse.equinox.common_3.6.0.v20110506
        2   ACTIVE    org.eclipse.update.configurator_3.3.100.v20100512
        3   RESOLVED  catalina-config_1.0.0 Master=20
        4   ACTIVE    org.springframework.osgi.catalina.start.osgi_1.0.0 Fragments=62
        5   ACTIVE    com.springsource.javax.activation_1.1.1
        6   ACTIVE    com.springsource.javax.annotation_1.0.0
        7   ACTIVE    com.springsource.javax.ejb_3.0.0
        8   ACTIVE    com.springsource.javax.el_1.0.0
        9   ACTIVE    com.springsource.javax.mail_1.4.0
        10  ACTIVE    com.springsource.javax.persistence_1.0.0
        11  ACTIVE    com.springsource.javax.servlet_2.5.0
        12  ACTIVE    com.springsource.javax.servlet.jsp_2.1.0
        13  ACTIVE    com.springsource.javax.servlet.jsp.jstl_1.1.2
        14  ACTIVE    com.springsource.javax.xml.bind_2.0.0
        15  ACTIVE    com.springsource.javax.xml.rpc_1.1.0
        16  ACTIVE    com.springsource.javax.xml.soap_1.3.0
        17  ACTIVE    com.springsource.javax.xml.stream_1.0.1
        18  ACTIVE    com.springsource.javax.xml.ws_2.1.1
        19  ACTIVE    com.springsource.org.aopalliance_1.0.0
        20  ACTIVE    com.springsource.org.apache.catalina_6.0.18 Fragments=3, 22
        21  ACTIVE    com.springsource.org.apache.commons.logging_1.1.1
        22  RESOLVED  com.springsource.org.apache.coyote_6.0.18 Master=20
        23  ACTIVE    com.springsource.org.apache.el_6.0.18
        24  ACTIVE    com.springsource.org.apache.juli.extras_6.0.18
        25  ACTIVE    com.springsource.org.apache.log4j_1.2.15 Fragments=33
        26  ACTIVE    com.springsource.org.apache.taglibs.standard_1.1.2
        27  ACTIVE    org.springframework.osgi.commons-el.osgi_1.0.0.SNAPSHOT
        28  ACTIVE    data_1.0.0
        29  ACTIVE    Api_1.0.0
        30  ACTIVE    connector_1.0.0
        31  ACTIVE    core_1.0.0
        32  ACTIVE    org.springframework.osgi.jasper.osgi_5.5.23.SNAPSHOT
        33  RESOLVED  com.springsource.org.apache.log4j.config_1.0.0 Master=25
        34  ACTIVE    testController_1.0.0
        35  ACTIVE    org.eclipse.core.contenttype_3.4.100.v20100505-1235
        36  ACTIVE    org.eclipse.core.jobs_3.5.0.v20100515
        37  ACTIVE    org.eclipse.equinox.app_1.3.0.v20100512
        38  ACTIVE    org.eclipse.equinox.preferences_3.3.0.v20100503
        39  ACTIVE    org.eclipse.equinox.registry_3.5.0.v20100503
        40  ACTIVE    org.eclipse.osgi.services_3.2.100.v20100503
        41  ACTIVE    osgi.core_4.3.0.201102171602
        42  ACTIVE    dataImplementation_1.0.0
        43  ACTIVE    org.springframework.osgi.servlet-api.osgi_2.4.0.SNAPSHOT
        44  ACTIVE    org.springframework.aop_3.1.1.RELEASE
        45  ACTIVE    org.springframework.asm_3.1.1.RELEASE
        46  ACTIVE    org.springframework.beans_3.1.1.RELEASE
        47  ACTIVE    org.springframework.context_3.1.1.RELEASE
        48  ACTIVE    org.springframework.context.support_3.1.1.RELEASE
        49  ACTIVE    org.springframework.core_3.1.1.RELEASE
        50  ACTIVE    org.springframework.expression_3.1.1.RELEASE
        51  ACTIVE    org.springframework.osgi.extensions.annotations_1.2.1
        52  ACTIVE    org.springframework.osgi.core_1.2.1
        53  ACTIVE    org.springframework.osgi.extender_1.2.1
        54  ACTIVE    org.springframework.osgi.io_1.2.1
        55  ACTIVE    org.springframework.osgi.mock_1.2.1
        56  ACTIVE    org.springframework.osgi.web_1.2.1
        57  ACTIVE    org.springframework.osgi.web.extender_1.2.1
        58  ACTIVE    org.springframework.oxm_3.1.1.RELEASE
        59  ACTIVE    org.springframework.transaction_3.1.1.RELEASE
        60  ACTIVE    org.springframework.web_3.1.1.RELEASE
        61  ACTIVE    org.springframework.web.servlet_3.1.1.RELEASE
        62  RESOLVED  tomcat-configuration-fragment_1.0.0 Master=4

    My controller is:

        @RequestMapping(value = "/test1", method = RequestMethod.GET, produces = "application/json")
        public @ResponseBody Person test1() {
            logger.info(" <--- Test 1 ---> \n");
            Person p = new Person("a", "b", "c");
            return p;
        }

        @RequestMapping(value = "/test2", method = RequestMethod.GET, produces = "application/xml")
        public @ResponseBody Person test3() {
            logger.info(" <--- Test 1 ---> \n");
            Person p = new Person("a", "b", "c");
            return p;
        }

        @RequestMapping(value = "/test2", method = RequestMethod.GET, headers = "Accept=*/*")
        public @ResponseBody Person test4() {
            logger.info(" <--- Test 1 ---> \n");
            Person p = new Person("a", "b", "c");
            return p;
        }

        @RequestMapping(value = "/parent", method = RequestMethod.GET, headers = "Accept=application/xml")
        public @ResponseBody Parent test2() {
            logger.info(" <--- Test 1 ---> \n");
            Parent p = new Parent("a", "b");
            return p;
        }

    If I run test 1 (the JSON request) it works perfectly, but when I run tests 2, 3 and 4 the browser gives me back this error: (406) The resource identified by this request is only capable of generating responses with characteristics not acceptable according to the request "accept" headers (). Could someone help me? PS: if I run the bundles inside Eclipse it works perfectly. I generate the bundles with Maven.
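    One hedged thing to check: Spring's XML path here relies on the JAXB-based converter (Jaxb2RootElementHttpMessageConverter), which only handles classes JAXB can see, so Person and Parent need @XmlRootElement, and the JAXB API/impl bundles must actually resolve in the OSGi runtime. A classpath difference of that kind between launching inside and outside Eclipse would produce exactly this 406. Sketch:

        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement        // without this, the JAXB converter cannot
        public class Person {  // marshal Person, and content negotiation
            // fields and getters/setters as before
        }                      // falls through to a 406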

    Read the article

  • How do I handle mysql replication in EC2 using private IPs?

    - by chris
    I am trying to set up a mysql master/slave configuration in two EC2 instances. However, every time I reboot an instance, the IP address (and hostname) changes. I could assign an Elastic IP address, but would prefer to use the internal IP address. I can't be the first person to do this, but I can't seem to find a solution. There are a lot of "getting started" guides, but none of them mention how to handle changing IP addresses. So what are the best practices to manage master/slave replication in EC2?
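    A common trick, hedged: assign Elastic IPs after all, but configure replication with each instance's public DNS name rather than an IP. From inside EC2 that name resolves to the private address (so replication traffic stays internal), yet it keeps working across reboots. The hostname below is illustrative:

        -- On the slave, point at the master's public DNS name, not a raw IP:
        CHANGE MASTER TO
          MASTER_HOST = 'ec2-203-0-113-25.compute-1.amazonaws.com',
          MASTER_USER = 'repl';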

    Read the article

  • Fatal error on "mode 120000" file during git -> svn migration

    - by Oliver
    Following the instructions at http://code.google.com/p/support/wiki/ImportingFromGit I'm trying to migrate a git repository to svn, but during the "git rebase master tmp" step it fails with the following error after applying the first few patches:

        $ git rebase master tmp
        First, rewinding head to replay your work on top of it...
        Applying: Imported
        Applying: Cleaned up the readme file
        Applying: fix problem with versions
        fatal: unable to write file foobar mode 120000
        Patch failed at 0003 fix problem with versions

        When you have resolved this problem run "git rebase --continue".
        If you would prefer to skip this patch, instead run "git rebase --skip".
        To restore the original branch and stop rebasing run "git rebase --abort".

    I understand that 120000 may refer to a symlink, but Subversion has supported symlinks for a long time now. Subversion installed is 1.6.5, Git is 1.6.3.3, running on Ubuntu Linux. The system is not running out of disk space, and this operation is taking place within my home directory, so permissions should not be an issue.
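    Mode 120000 is indeed a symlink entry. A couple of hedged checks to see what git is trying to write and what might be in the way at that path:

        # What does the entry point at in the branch being replayed?
        git show tmp:foobar          # prints the symlink target

        # Is something unexpected already sitting at that path?
        ls -ld foobar
        git ls-files -s foobar       # 120000 = symlink, 100644 = regular file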

    Read the article

  • Is this a bug in Profiler or Entity Framework?

    - by AjarnMark
    Using Entity Framework 4 with stored procedures and SQL Server 2008 SP1... When running SQL Server Profiler (TSQL_SPs template), the lines that show my stored procedure call and its statements say that they executed in DatabaseID = 1 (Master) but it is actually happening in my application database (ID = 8). The procedures execute properly and return the data, and they only exist in my application database, so why does Profiler mark those lines as being in Master? Is this a bug in Profiler? Is it a bug in EF4? Note that running the same code against a SQL 2000 instance, Profiler correctly shows the application's database ID.

    Read the article

  • homebrew in mac lion

    - by user975352
    I'm a beginner on Mac OS X Lion (10.7.2); I don't know Macs well, only Ubuntu. I installed Homebrew on my Mac and ran the command below:

        $ brew install git

    and then:

        $ brew update
        error: Could not resolve host: github.com; nodename nor servname provided, or not known while accessing https://github.com/mxcl/homebrew.git/info/refs
        fatal: HTTP request failed
        Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What's happening on my Mac? How do I resolve this? Would you help me?
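    The failure is name resolution rather than Homebrew itself: github.com isn't resolving at all. Some hedged checks (the proxy host in the last line is a placeholder for whatever your network actually uses):

        # Can the system resolve github.com at all?
        dig github.com

        # Which resolvers is OS X actually using?
        scutil --dns

        # If your network requires an HTTP proxy, git has to be told about it:
        git config --global http.proxy http://proxy.example.com:8080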

    Read the article

  • How to retain background colors when pasting between documents in Excel

    - by Iain Fraser
    I have a script that programmatically generates Excel spreadsheets, cleaning up ugly reports that are given to us by another organisation. (For interest's sake: I'm using PHPExcel to generate the "clean" reports.) We get these reports every week for an event that happens every couple of months. The reports contain a list of attendees, along with a group ID that tells us which attendees belong together. To help the event organisers, I've taken the group ID and generated a unique colour code from it (based on a hash of the ID, truncated to 6 characters). This unique colour code is set as the background colour of a cell in each row, which helps organisers quickly identify group members visually. Trouble is, when the organisers copy rows from the week's report into the master report (which contains all attendees, not just the ones who signed up this week), all the colour codes snap to the master template's colour palette. Thank you very much for your time. All the best, Iain

    Read the article

  • Best way to duplicate databases nightly?

    - by Margaret
    Hey all. We just got two new servers running Windows Server 2008. The intent is to make the machines pretty much identical, copying the content of the master to the slave on a nightly basis, so that if anything fails, the second copy can stand in immediately. It doesn't need up-to-the-minute mirroring, though I suppose that wouldn't hurt if performance is not affected. The two machines will, amongst other things, each run an instance of SQL Server 2008. The aim is to duplicate the databases on the master down to the slave nightly. Unless I'm misunderstanding, the slave database in a mirrored pair requires the primary to be present to work correctly; I'm hoping for some solution where the second machine can be up and running with minimal downtime if the first one falls over. Am I misunderstanding mirroring? Is that the best way to do things, or should I use some other mechanism? If so, what?
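    Your reading of mirroring is right: the mirror copy is unusable while it is a mirror (it sits in a restoring state), and failover is designed around the coupled pair. For a plain nightly stand-in copy, scheduled backup and restore (manual log shipping, in effect) matches the stated goal. A hedged T-SQL sketch, with names and paths illustrative:

        -- On the master, each night:
        BACKUP DATABASE AppDb
            TO DISK = '\\slave\backups\AppDb.bak' WITH INIT;

        -- On the slave: restore over the previous copy, ready for use.
        RESTORE DATABASE AppDb
            FROM DISK = 'C:\backups\AppDb.bak'
            WITH REPLACE, RECOVERY;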

    Read the article

  • Clonezilla, NLite and PCs with different hardware specs

    - by r2b2
    Hi, I used Clonezilla to create images from a master computer and restore them to other workstations with the same specs. The problem now is that there are new computers whose hardware specs differ from the ones I maintain (mostly the video card is different). Is it possible to create a customized Windows XP installer using nLite and integrate all the potential video card and motherboard drivers? If I then use this nLite ISO to install onto a master computer, which I will later clone and restore to other workstations, will Windows XP still pick up the correct drivers? During XP installation, does the installer transfer all the drivers to the computer's hard disk? Thanks! Ryan
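    On the driver question: XP only picks the right driver if it can find a matching .inf at plug-and-play time, so the usual approach is to stage every candidate driver on disk and point setup at it. nLite's driver integration does this for the installer; for a cloned image, a hedged sysprep.inf fragment (paths relative to %SystemDrive%, examples only):

        ; sysprep.inf
        [Unattended]
        OemPnPDriversPath = "Drivers\Video\Intel;Drivers\Video\NVIDIA;Drivers\Chipset"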

    Read the article
