Search Results

Search found 5011 results on 201 pages for 'grand master t'.


  • Ideas for scaling out database architecture

    - by andrew
    We're looking to scale out our existing database architecture and need some advice on which way to go. We currently have two web servers behind a load balancer that both read and write to a single master database, which replicates to a slave. Ideally, I'd like each of the web servers to point to its own master DB and have the data between the two synchronised, but from what I've read, any kind of master-master or ring replication is discouraged. I'm looking for a general "what do other people do" kind of answer; the database vendor isn't a concern at the moment, but we'd like to stay with MySQL or convert to MSSQL. Any ideas would be gratefully received. Many thanks, Andrew
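
    For reference, the common alternative to master-master is a single writable master with one or more read slaves: both web servers send writes to the master and spread reads across the slaves. A minimal sketch of attaching an extra read slave in MySQL 5.x (hostname, credentials and log coordinates are placeholders):

        -- run on the new slave once it holds a copy of the master's data
        CHANGE MASTER TO
            MASTER_HOST='master.example.com',
            MASTER_USER='repl',
            MASTER_PASSWORD='secret',
            MASTER_LOG_FILE='mysql-bin.000001',
            MASTER_LOG_POS=4;
        START SLAVE;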

    Read the article

  • Can't connect to samba

    - by Rick
    I have a follow-up question to "Windows 7, connecting to Samba shares" (linked above). I am running Samba 3.0.23d on FreeBSD 7.1-RELEASE. I changed the policies as described there, but I still cannot connect to the Samba server from Windows 7 or Server 2008. I feel it is a problem with recognizing the new machines on the network: the Windows machines can see the Samba server, but cannot connect to it or view any of the files. After changing the security policies, the Samba server asked for a network ID and password but would not allow the machines to connect; it reported an unknown username or bad password. Here is my current config file. There is no sign of encryption anywhere; should I just add the line? I'm not sure what that would do elsewhere.

        Workgroup = WWOFFSET
        server string = WWO File Server (%v)
        security = server
        username map = /usr/local/etc/smb.users
        hosts allow = 10. 127.
        # If you want to automatically load your printer list rather
        # than setting them up individually then you'll need this
        ; load printers = yes
        # you may wish to override the location of the printcap file
        ; printcap name = /etc/printcap
        # on SystemV system setting printcap name to lpstat should allow
        # you to automatically obtain a printer list from the SystemV spool
        # system
        ; printcap name = lpstat
        # It should not be necessary to specify the print system type unless
        # it is non-standard. Currently supported print systems include:
        # bsd, cups, sysv, plp, lprng, aix, hpux, qnx
        ; printing = cups
        # Uncomment this if you want a guest account, you must add this to /etc/passwd
        # otherwise the user "nobody" is used
        ; guest account = pcguest
        # this tells Samba to use a separate log file for each machine
        # that connects
        log file = /var/log/samba/log.%m
        # Put a capping on the size of the log files (in Kb).
        max log size = 50
        # Use password server option only with security = server
        # The argument list may include:
        #   password server = My_PDC_Name [My_BDC_Name] [My_Next_BDC_Name]
        # or to auto-locate the domain controller/s
        #   password server = *
        ; password server = <NT-Server-Name>
        password server = SERVER0
        # Use the realm option only with security = ads
        # Specifies the Active Directory realm the host is part of
        ; realm = MY_REALM
        # Backend to store user information in. New installations should
        # use either tdbsam or ldapsam. smbpasswd is available for backwards
        # compatibility. tdbsam requires no further configuration.
        ; passdb backend = tdbsam
        ; passdb backend = smbpasswd
        # Using the following line enables you to customise your configuration
        # on a per machine basis. The %m gets replaced with the netbios name
        # of the machine that is connecting.
        # Note: Consider carefully the location in the configuration file of
        # this line. The included file is read at that point.
        ; include = /usr/local/etc/smb.conf.%m
        # Most people will find that this option gives better performance.
        # See the chapter 'Samba performance issues' in the Samba HOWTO Collection
        # and the manual pages for details.
        # You may want to add the following on a Linux system:
        #   SO_RCVBUF=8192 SO_SNDBUF=8192
        socket options = TCP_NODELAY
        # Configure Samba to use multiple interfaces
        # If you have multiple network interfaces then you must list them
        # here. See the man page for details.
        ; interfaces = 192.168.12.2/24 192.168.13.2/24
        # Browser Control Options:
        # set local master to no if you don't want Samba to become a master
        # browser on your network. Otherwise the normal election rules apply
        ; local master = no
        # OS Level determines the precedence of this server in master browser
        # elections. The default value should be reasonable
        ; os level = 33
        # Domain Master specifies Samba to be the Domain Master Browser. This
        # allows Samba to collate browse lists between subnets. Don't use this
        # if you already have a Windows NT domain controller doing this job
        ; domain master = yes
        # Preferred Master causes Samba to force a local browser election on startup
        # and gives it a slightly higher chance of winning the election
        ; preferred master = yes
        # Enable this if you want Samba to be a domain logon server for
        # Windows95 workstations.
        ; domain logons = yes
        # if you enable domain logons then you may want a per-machine or
        # per user logon script
        # run a specific logon batch file per workstation (machine)
        ; logon script = %m.bat
        # run a specific logon batch file per username
        ; logon script = %U.bat
        # Where to store roving profiles (only for Win95 and WinNT)
        # %L substitutes for this servers netbios name, %U is username
        # You must uncomment the [Profiles] share below
        ; logon path = \\%L\Profiles\%U
        # Windows Internet Name Serving Support Section:
        # WINS Support - Tells the NMBD component of Samba to enable it's WINS Server
        ; wins support = yes
        # WINS Server - Tells the NMBD components of Samba to be a WINS Client
        # Note: Samba can be either a WINS Server, or a WINS Client, but NOT both
        ; wins server = w.x.y.z
        # WINS Proxy - Tells Samba to answer name resolution queries on
        # behalf of a non WINS capable client, for this to work there must be
        # at least one WINS Server on the network. The default is NO.
        ; wins proxy = yes
        # DNS Proxy - tells Samba whether or not to try to resolve NetBIOS names
        # via DNS nslookups. The default is NO.
        dns proxy = no
        # charset settings
        ; display charset = ASCII
        ; unix charset = ASCII
        ; dos charset = ASCII
        # These scripts are used on a domain controller or stand-alone
        # machine to add or delete corresponding unix accounts
        ; add user script = /usr/sbin/useradd %u
        ; add group script = /usr/sbin/groupadd %g
        ; add machine script = /usr/sbin/adduser -n -g machines -c Machine -d /dev/null -s /bin/false %u
        ; delete user script = /usr/sbin/userdel %u
        ; delete user from group script = /usr/sbin/deluser %u %g
        ; delete group script = /usr/sbin/groupdel %g
        unix extensions = no
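
    For context, Windows 7 defaults to NTLMv2-only authentication, which the old security = server pass-through mode in Samba 3.0.x generally cannot validate. A hedged sketch of the usual workaround (not a tested fix for this exact setup) is to authenticate locally with encrypted passwords:

        security = user
        encrypt passwords = yes
        # then create each Windows user in Samba's own password database:
        #   smbpasswd -a <username>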

    Read the article

  • Logging Apparently Not Working - Can't Configure Replication

    - by square_eyes
    I'm using a tutorial to set up replication from a master to a single slave. When I run the command SHOW MASTER STATUS; I get Empty set (0.00 sec). I have done a lot of research and believe that the log file I have set up isn't being used properly. The master is a Windows server, and I am administering it through Workbench. The log file (log-bin.log) I have configured in the options exists and is accessible by 'System', but when I open it as a text file it doesn't have any data, so I'm assuming it's not getting used properly. How can I get the logging happening so I can finish setting up this replication? Edit: Purely for reference, here is the tutorial. I followed all the steps and double-checked everything up until SHOW MASTER STATUS.
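
    For reference, SHOW MASTER STATUS returns an empty set whenever binary logging is not enabled at all, so the server is probably ignoring the configured option (for example because it sits in the wrong section, or a different .ini file is being read). A minimal sketch of what the server needs (names are placeholders):

        [mysqld]
        server-id = 1
        log-bin   = mysql-bin

    After a restart, SHOW VARIABLES LIKE 'log_bin'; should report ON; if it still says OFF, the server is not reading that option file.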

    Read the article

  • Bonding: works only from one link

    - by Crazy_Bash
    I would like to set up bonding with 4 links, but only one of them is active: eth4 is always active, and the others simply don't work. These are my configs:

        DEVICE="eth2"
        BOOTPROTO="none"
        MASTER=bond0
        SLAVE=yes
        USERCTL=no
        NM_CONTROLLED="no"
        ONBOOT="yes"

        DEVICE="eth3"
        BOOTPROTO="none"
        MASTER=bond0
        SLAVE=yes
        USERCTL=no
        NM_CONTROLLED="no"
        ONBOOT="yes"

        DEVICE="eth4"
        BOOTPROTO="none"
        MASTER=bond0
        SLAVE=yes
        USERCTL=no
        NM_CONTROLLED="no"
        ONBOOT="yes"

        DEVICE="eth5"
        BOOTPROTO="none"
        MASTER=bond0
        SLAVE=yes
        USERCTL=no
        NM_CONTROLLED="no"
        ONBOOT="yes"

        DEVICE=bond0
        IPADDR=<ip>
        BROADCAST=<ip>
        NETWORK=<ip>
        GATEWAY=<ip>
        NETMASK=<ip>
        USERCTL=no
        BOOTPROTO=none
        ONBOOT=yes
        NM_CONTROLLED=no

    /etc/modprobe.d/bonding.conf:

        alias bond0 bonding
        options bond0 mode=4 miimon=100 updelay=200 #downdelay=200 xmit_hash_policy=layer3+4 lacp_rate=1

    Linux: Linux 3.0.0+ #1 SMP Fri Oct 26 07:55:47 EEST 2012 x86_64 x86_64 x86_64 GNU/Linux

    What I've tried: downdelay=200, xmit_hash_policy=layer3+4, lacp_rate=1
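
    For reference, mode=4 is 802.3ad (LACP), which only passes traffic on all links if the switch ports are configured as a matching LACP port-channel; without that, traffic typically stays on a single link. Note also that if the options line really does contain the inline #, everything after it (including xmit_hash_policy and lacp_rate) is being ignored as a comment. A diagnostic sketch:

        # shows per-slave LACP negotiation state and which aggregator each
        # slave joined; slaves stuck outside the aggregator point at the switch
        cat /proc/net/bonding/bond0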

    Read the article

  • How to reset mysql's replication settings completely, without reinstalling it?

    - by user38060
    I set up MySQL replication by adding references to binlogs, relay logs etc. in my.cnf, restarted mysql, and it worked. I wanted to change it, so I deleted all binlog-related files (including log-bin.index), removed the binlog statements from my.cnf, restarted the server (works), set master to '', purged the master logs, then RESET SLAVE, STOP SLAVE, and stopped the master. Now, to set up replication again, I added the binlog statements back. But then I hit this problem when restarting with sudo mysqld (the only way to see mysql's startup errors):

        /usr/sbin/mysqld: File '/etc/mysql/var/log-bin.index' not found (Errcode: 13)

    Because indeed, this file does not exist! (I deleted it while trying to set up a new replication system.) Hmm, if I change the config line to log-bin-index = log-bin.index I get a different error:

        [ERROR] Can't generate a unique log-filename /etc/mysql/var/bin.(1-999)
        [ERROR] MSYQL_BIN_LOG::open failed to generate new file name.
        [ERROR] Aborting

    The first time I set up replication on this system, I didn't need to create this file. I did the same thing: added references to a previously non-existing file, and mysql created it. Same with the relay logs, etc. I don't know why mysql insists on trying to read the old folder. Should I just reinstall the whole package again? That seems like overkill. My my.cnf:

        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0

        [mysqld]
        user = mysql
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        skip-external-locking
        bind-address = IP
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 8
        myisam-recover = BACKUP
        table_cache = 64
        sort_buffer = 64K
        net_buffer_length = 2K
        query_cache_limit = 1M
        query_cache_size = 16M
        slow_query_log_file = /etc/mysql/var/mysql-slow.log
        long_query_time = 1
        log-queries-not-using-indexes
        expire_logs_days = 10
        max_binlog_size = 100M
        server-id = 3
        log-bin = /etc/mysql/var/bin.log
        log-slave-updates
        log-bin-index = /etc/mysql/var/log-bin.index
        log-error = /etc/mysql/var/error.log
        relay-log = /etc/mysql/var/relay.log
        relay-log-info-file = /etc/mysql/var/relay-log.info
        relay-log-index = /etc/mysql/var/relay-log.index
        auto_increment_increment = 10
        auto_increment_offset = 3
        master-host = HOST
        master-user = USER
        master-password = PWD
        replicate-do-db = DBNAME
        collation_server = utf8_unicode_ci
        character_set_server = utf8
        skip-character-set-client-handshake

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]
        #no-auto-rehash

        [myisamchk]
        key_buffer_size = 16M
        sort_buffer_size = 8M

        [mysqlhotcopy]
        interactive-timeout

        !includedir /etc/mysql/conf.d/

    Update: Changing all the /etc/mysql/var/xxx paths in the binlog and relay log statements to local paths has somehow solved the problem. I thought it was AppArmor causing it at first, but when I added /etc/mysql/* rw, to AppArmor's config and restarted it, it still couldn't read the full path.
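
    For reference, Errcode 13 is EACCES ("permission denied"), not "file missing", which fits the AppArmor suspicion: in AppArmor profiles a single * does not cross a directory separator, so /etc/mysql/* rw, covers files directly under /etc/mysql but not files under /etc/mysql/var/. A hedged sketch of checks:

        # decode the errno
        perror 13
        # a profile rule that would also cover the subdirectory (sketch):
        #   /etc/mysql/var/** rwk,
        # confirm which profiles are enforcing
        sudo aa-status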

    Read the article

  • Win7 + SysPrep + CloneZilla = lost settings

    - by oshirowanen
    I've just cloned a PC (Win7 + SysPrep + CloneZilla), and after starting up the new clone, Win7 wants to be reactivated, the video driver seems to be missing (the Aero effect has disappeared on the clone), and the Internet connection settings seem to be missing too. All of those were set on the master before running SysPrep. Why have these settings been lost? The master PC had Windows 7 activated using a volume license key, and the clone PC is the same make and model as the master computer.
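
    For reference, this is largely sysprep working as designed: the /generalize pass deliberately clears the activation state and strips installed device drivers so the image can move between machines. A sketch of the relevant invocation and the setting commonly used to keep drivers (the unattend file name is an assumption):

        rem /generalize resets activation and removes device-specific state
        sysprep /generalize /oobe /shutdown /unattend:unattend.xml
        rem in unattend.xml, setting PersistAllDeviceInstalls to true in the
        rem Microsoft-Windows-PnpSysprep component keeps the installed drivers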

    Read the article

  • Excel 2007: Named range problems when linking workbooks

    - by Mike
    I've 30+ workbooks, each with 5 specific worksheets (formatted the same). Each worksheet's data needs to be linked to a master workbook, so that I end up with 5 master workbooks and all the specific data in one long table format, $A$2:$I$750. (Are you still with me? ;)) I don't have access to a database, so I'm having to link the sheets to their master workbook directly. I've highlighted the data I need, named the range, and then tried referencing this from my master workbook. I get the #VALUE error symbol when I try to link (=[WorkbookName]!MyNamedRange) to a cell that doesn't match the top-left cell of my range. Example: MyNamedRange is always =$A$2:$I$43 on one specific sheet. On my master workbook it works if it's referenced at A2, but I get #VALUE if it's referenced at A1 or A44. Any ideas? I'm trying to link my data into one continuous table so I can run a pivot on it, and other things. Can it be done like this, or should I just copy and paste? I'm trying to keep things 'linked' so I do not need to spend time C&Ping all day. Many thanks, Mike.
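
    For reference, this looks like Excel's implicit intersection: a multi-cell named range entered as a plain formula in a single cell only resolves when the formula's own row overlaps the range (rows 2-43 here) and returns #VALUE! otherwise, which matches the A2-works / A1-and-A44-fail symptom. A hedged workaround sketch is to address cells in the range explicitly with INDEX, which works from any cell:

        =INDEX([WorkbookName.xlsx]!MyNamedRange, 1, 1)

    This returns the range's top-left cell; filling down and across with ROW()/COLUMN() arithmetic walks the rest of the range.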

    Read the article

  • ExtJS: how to make a nested child using XTemplate when we don't know how deep it is?

    - by Ebo the gordon
    First, sorry if my English is bad. In my script, the variable tplData below is dynamic (let's say it is generated from a database), so every child can have another child, and so on. Now I'm stuck on how to iterate over it.

        var tplData = [{
            name : 'Naomi White'
        }, {
            name : 'Yoko Ono'
        }, {
            name : 'John Smith',
            child : [{
                name : 'Michael (John\'s son)',
                child : [{
                    name : 'Brad (Michael\'s son, John\'s grand son)'
                }, {
                    name : 'Brid (Michael\'s son, John\'s grand son)',
                    child : [{
                        name : 'Buddy (Brid\'s son, Michael\'s grand son)'
                    }]
                }, {
                    name : 'Brud (Michael\'s son, John\'s grand son)'
                }]
            }]
        }];

        var myTpl = new Ext.XTemplate(
            '<tpl for=".">',
                '<div style="background-color: {color}; margin: 10px;">',
                    '<b> Name :</b> {name}<br />',
                    // how to make this repeat for every child (while it has one)?
                    '<tpl if="typeof child !=\'undefined\'">',
                        '<b> Child : </b>',
                        '<tpl for="child">',
                            '{name} <br />',
                        '</tpl>',
                    '</tpl>',
                '</div>',
            '</tpl>'
        );
        myTpl.compile();
        myTpl.overwrite(document.body, tplData);
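
    For reference, XTemplate has no built-in recursion, but a template can call back into itself from an inline code block via a member function. A sketch (untested, ExtJS 3-style; names are illustrative):

        var nodeTpl = new Ext.XTemplate(
            '<tpl for=".">',
                '<div style="margin-left: 10px;">',
                    '<b>Name:</b> {name}',
                    // recurse into children, however deep they go
                    '<tpl if="typeof child !=\'undefined\'">',
                        '{[this.nest(values.child)]}',
                    '</tpl>',
                '</div>',
            '</tpl>',
            {
                nest: function(children) {
                    // re-apply this same template to the child array
                    return nodeTpl.applyTemplate(children);
                }
            }
        );
        nodeTpl.overwrite(document.body, tplData);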

    Read the article

  • Import a puppet manifest from the node itself?

    - by bobinabottle
    I have a somewhat unique situation. Our systems team manages our main puppet master, and the development team is fine with everything; however, they are thinking of using it to control some elements on their desktop machines, whilst still being connected to our central puppet master. Since we don't want the changes they make to go into our puppet master, is there a way of puppet importing a manifest from the node directly? As in: on the developer machine, they put a file "/root/development.pp" or something, and then on our puppet master we put something like:

        node "developermachine" {
            # Do the majority of normal things
            # import "/root/development.pp"
        }

    We have a few different options we could take regarding security of write access to the puppet manifests, but if puppet were to support something like this it would probably be the cleanest for us. Any help is appreciated :)
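
    For reference, the master can only compile manifests it can read itself, so pulling a file off the node at compile time isn't really supported. A hedged sketch of a common workaround: leave the central agent untouched and run the developer's manifest in a separate, masterless pass on the desktop machine:

        # applies the local manifest directly, never touching the central
        # master; could be run by hand or from cron
        puppet apply /root/development.pp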

    Read the article

  • Is there a way to batch create DNS slave zones on a new slave DNS server?

    - by Josh
    I currently have a DNS server which is serving as the master DNS server for a number of our domains. I want to set up a brand new secondary DNS server. Is there any way I can automatically have BIND on the new server act as a secondary for all the domains on the primary server? In case it matters, I have Webmin on the primary server. I believe Webmin has an option to create a zone as a secondary on another server when creating a new master zone, but I don't know of any way to batch-create secondary zones for a number of existing master zones. Maybe I'm missing something. Is there a way to "batch create" DNS slave zones on a brand new slave DNS server for all the DNS zones on an existing master?
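
    For reference, BIND of this era has no "mirror everything" mode, but slave zone stanzas are uniform enough to generate with a few lines of shell. A sketch, assuming a zones.txt with one zone name per line; the master IP and file layout are placeholders:

        #!/bin/sh
        # emit a slave stanza for every zone listed in zones.txt
        while read zone; do
          printf 'zone "%s" {\n    type slave;\n    file "slaves/%s.db";\n    masters { 192.0.2.1; };\n};\n' "$zone" "$zone"
        done < zones.txt >> /etc/named.conf.slaves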

    Read the article

  • Puppet inventory service using puppetdb

    - by Oli
    I have 3 servers set up: a puppet master using Passenger (puppet-server1), Dashboard using Passenger (puppet-server2), and puppetdb (puppet-server3). I cannot get the inventory service working in the dashboard. The puppet master is able to sign certs and hand out manifests, the nodes have checked in to the dashboard OK, and puppetdb appears to be working; its log files are as follows:

        2012-12-13 17:53:10,899 INFO [command-proc-74] [puppetdb.command] [8490148f-865a-45c8-b5b5-2c8824d753dd] [replace facts] puppet-server3.test.net
        2012-12-13 17:53:11,041 INFO [command-proc-74] [puppetdb.command] [dfcc5168-06df-41d4-9a97-77b4cd3f4a2b] [replace catalog] puppet-server3.test.net
        2012-12-13 17:55:28,600 INFO [command-proc-74] [puppetdb.command] [b2cc0a96-0404-49f5-96ad-19c778508d3d] [replace facts] puppet-client2.test.net
        2012-12-13 17:55:28,729 INFO [command-proc-74] [puppetdb.command] [4dc4b8f3-06df-4dad-a89a-92ac80447b99] [replace catalog] puppet-client2.test.net

    The puppet master has the following configured in puppet.conf:

        [master]
        certname = puppet-server1.test.net
        storeconfigs = true
        storeconfigs_backend = puppetdb
        reports = store, http
        reporturl = http://puppet-server2.test.net/reports/upload

    The puppet master has the following configured in auth.conf:

        # access for puppet dashboard facts
        path /facts
        auth yes
        method find, search
        allow dashboard

    The puppet dashboard has this configured in /usr/share/puppet-dashboard/config/settings.yml:

        # Hostname of the inventory server.
        inventory_server: 'puppet-server3.test.net'
        # Port for the inventory server.
        inventory_port: 8081

    The inventory is on, as I see a link to the inventory in the dashboard server, but I am getting this error:

        Inventory
        Could not retrieve facts from inventory service: SSL_connect SYSCALL returned=5 errno=0 state=SSLv3 read finished A

    Clearly an SSL error, but I have followed the documentation and have no idea how to fix this. Can anyone help please? Oli
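
    For reference, "SYSCALL returned=5 errno=0" usually means the far end dropped the connection mid-handshake, which with PuppetDB most often indicates it rejected the client certificate the dashboard presented. A diagnostic sketch (the certificate paths are assumptions; point them at whatever the dashboard actually uses):

        openssl s_client -connect puppet-server3.test.net:8081 \
          -cert /path/to/dashboard.cert.pem \
          -key /path/to/dashboard.private_key.pem \
          -CAfile /path/to/ca_cert.pem

    If the handshake fails here too, the certificate whitelist / CA configuration on the PuppetDB side is the place to look.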

    Read the article

  • PowerDNS: multiple supermasters and transferring domains

    - by blauwblaatje
    Hi, I've got a setup with multiple supermasters (BIND) and multiple superslaves (PowerDNS). It all seems to work just fine; pdns is updated when I add or change a domain. But when I want to migrate a domain from one master to another, pdns doesn't like it: it tells me the new server isn't a master for this domain, even though I deleted the domain on the old server. Now, I think part of the problem is that pdns doesn't get an update when a domain is deleted, which would also explain a lot of dead domains in my pdns. It looks like the slave is constantly polling a server and getting RCODE=5 back; the master isn't aware of the domain, and the slave thinks the master still serves that domain. Anyone familiar with this problem?
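
    For reference, RCODE=5 is REFUSED: the old master no longer knows the zone, but the superslave still has it provisioned with the old master's address, and the supermaster mechanism only ever creates zones; nothing re-points or deletes them. With the generic SQL backends, a hedged cleanup sketch (table and column names per the stock gmysql schema; the IP and zone are placeholders):

        -- re-point a migrated zone at its new master...
        UPDATE domains SET master = '192.0.2.2' WHERE name = 'example.com';
        -- ...or drop a dead zone entirely
        DELETE FROM records WHERE domain_id = (SELECT id FROM domains WHERE name = 'example.com');
        DELETE FROM domains WHERE name = 'example.com';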

    Read the article

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
        IRC  product            10     0         1         product_id >= 147377 AND product_id < 162085
        IRC  post_order_survey  0      0         1         1=1
        IRC  mk_heartbeat       0      0         1         1=1
        IRC  mailing_list       0      0         1         1=1
        IRC  honey_pot_log      0      0         1         1=1
        IRC  product            12     0         1         product_id >= 176793 AND product_id < 191501
        IRC  product            18     0         1         product_id >= 265041
        IRC  orders             26     0         1         order_id >= 694472
        IRC  orders_product     6      0         1         op_id >= 935375
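
    One detail worth noting: mk_heartbeat is maatkit's own constantly-updated heartbeat table, which many people simply exclude from checksumming. For the rest, a hedged next step is to ask mk-table-sync what it would change without executing anything:

        # --print shows the statements instead of running them, which reveals
        # whether the flagged rows are real differences or checksum artifacts
        mk-table-sync --print --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx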

    Read the article

  • Injection of an EJB into a web Java class under JBoss 7.1.1

    - by Dobbo
    I am trying to build a website using JBoss 7.1.1 and RESTEasy. I have managed to construct and deploy an EAR with both a WAR and an EJB-JAR contained within:

        voyager-app.ear
            META-INF/MANIFEST.MF
            META-INF/application.xml
            META-INF/jboss-app.xml
            lib/voyager-lib.jar
            voyager-adm.war
            voyager-ejb.jar
            voyager-web.war

    So far things are very simple. voyager-adm.war and voyager-lib.jar are empty (just the manifest file), but I know that I'm going to have code for them shortly. There is just one Stateful EJB, HarbourMasterBean (with just a local interface), and a few database entity beans in the EJB jar file:

        voyager-ejb.jar
            META-INF/MANIFEST.MF
            META-INF/persistence.xml
            com/nutrastat/voyager/db/HarbourMasterBean.class
            com/nutrastat/voyager/db/HarbourMasterLocal.class
            com/nutrastat/voyager/db/PortEntity.class
            com/nutrastat/voyager/db/ShipEntity.class

    As far as I can tell the EJBs deploy correctly, because the database units are created and the log shows the publication of some HarbourMaster references:

        java:global/voyager-app/voyager-ejb/harbour-master!com.nutrastat.voyager.db.HarbourMasterLocal
        java:app/voyager-ejb/harbour-master!com.nutrastat.voyager.db.HarbourMasterLocal
        java:module/harbour-master!com.nutrastat.voyager.db.HarbourMasterLocal
        java:global/voyager-app/voyager-ejb/harbour-master
        java:app/voyager-ejb/harbour-master
        java:module/harbour-master

    The problem lies in getting the HarbourMaster EJB injected into my web bean. The reference to it is always null, no matter what I try.

        voyager-web.war
            META-INF/MANIFEST.MF
            WEB-INF/web.xml
            WEB-INF/classes/com/nutrastat/voyager/web/
            WEB-INF/classes/com/nutrastat/voyager/web/Ships.class
            WEB-INF/classes/com/nutrastat/voyager/web/VoyagerApplication.class

    Ships.java:

        @Path("fleet")
        public class Ships {

            protected transient final Logger log;

            @EJB
            private HarbourMasterLocal harbourMaster;

            public Ships() {
                log = LoggerFactory.getLogger(getClass());
            }

            @GET
            @Path("ships")
            @Produces({"text/plain"})
            public String listShips() {
                if (log.isDebugEnabled())
                    log.debug("Harbour master value: " + harbourMaster);
                return "Harbour Master: " + harbourMaster;
            }
        }

    web.xml:

        <web-app xmlns="http://java.sun.com/xml/ns/javaee"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                                     http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
                 version="3.0">
            <display-name>Voyager Web Application</display-name>
            <listener>
                <listener-class>
                    org.jboss.resteasy.plugins.server.servlet.ResteasyBootstrap
                </listener-class>
            </listener>
            <servlet>
                <servlet-name>Resteasy</servlet-name>
                <servlet-class>
                    org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher
                </servlet-class>
                <init-param>
                    <param-name>javax.ws.rs.Application</param-name>
                    <param-value>com.nutrastat.voyager.web.VoyagerApplication</param-value>
                </init-param>
            </servlet>
            <servlet-mapping>
                <servlet-name>Resteasy</servlet-name>
                <url-pattern>/*</url-pattern>
            </servlet-mapping>
        </web-app>

    I have been searching the web for an answer, and several places, both on StackOverflow and elsewhere, suggest it can be done and that the problem lies with configuration. But they post only snippets, and I'm never sure if I'm doing things correctly. Many thanks for any help you can provide. Dobbo
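
    For reference, @EJB injection is only guaranteed for container-managed components; a plain JAX-RS resource class instantiated by RESTEasy on JBoss 7 may never receive it. Two hedged options (sketches, not verified against this deployment): make Ships an EJB itself by annotating it @Stateless so the container manages it, or look the bean up explicitly using one of the JNDI names JBoss printed at deployment:

        // inside Ships's constructor, after the logger is assigned,
        // as a fallback to @EJB injection
        try {
            harbourMaster = (HarbourMasterLocal) new javax.naming.InitialContext()
                .lookup("java:app/voyager-ejb/harbour-master!com.nutrastat.voyager.db.HarbourMasterLocal");
        } catch (javax.naming.NamingException e) {
            log.error("HarbourMaster lookup failed", e);
        }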

    Read the article

  • DNS Replication on Server 2008 R2

    - by Aaron
    Hi there, I have been trying out public-facing-only DNS servers with Server 2008 R2 Web; I wanted to set up at least 2 in a master/slave replication. Using Microsoft DNS, I am able to add the domains as primary zones on the master DNS server (ns1), add the records OK, and have them visible publicly. On ns2 I can then add the same domains as secondary zones and get them to replicate / zone-transfer fine. Is there a way inside of Windows to have the slave(s) automatically synchronise all the changes from the master? For example, it's OK if I have manually added the domains onto each of the NSes, but if I add a new zone on the master I have to add it on the slave before it replicates. I installed Simple DNS, and it has a 'Super Master/Slave' feature which takes care of exactly this: if you add a new domain into the primary zone, it is automatically created and kept in sync on ns2. But I would have to buy a licence. All this is non-Active-Directory, if that helps. Can anyone advise if it is possible to do this using Microsoft DNS? Many thanks in advance!
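
    For reference, Microsoft DNS has no supermaster-style auto-provisioning, but secondary zone creation scripts cleanly with dnscmd. A sketch (batch-file syntax; the zone list file and master IP are placeholders):

        rem create a secondary zone on ns2 for every zone named in zones.txt
        for /f %%z in (zones.txt) do dnscmd ns2 /ZoneAdd %%z /Secondary 192.0.2.1

    Run periodically, or after adding zones on ns1, this keeps the slave's zone list in step; zones that already exist just make dnscmd return an ignorable error.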

    Read the article

  • How to recover from failed Mysql schema update, with replication?

    - by OmerGertel
    I have two MySQL servers configured with master-slave replication. Before we deploy a new application version, we: 1) STOP SLAVE; 2) take a MySQL dump of the slave. However, if a mistake is made during the deployment of the new schema version (a table is dropped by mistake, for example), having the slave intact doesn't help. Our service is write-intensive, so we can't turn it back on until we have a working master. If we now load the MySQL dump back into the master, it will take a long time, during which our service remains down. What is the best practice for recovering from such a mistake? How can I set up the system so I can easily promote the slave, turn on our service, and only then tend to the broken database? Mainly, I'm worried about re-syncing the slave and the master after changes are done on the slave.
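
    For reference, the usual answer is exactly the promotion described. A sketch of the moving parts (MySQL 5.x; details vary by setup):

        -- on the slave, once it has replayed everything it received:
        STOP SLAVE;
        RESET SLAVE;   -- forget the old master
        -- binary logging must already be enabled on this host so the
        -- rebuilt ex-master can later attach to it as a slave

    Then the application is repointed at the promoted host, and the broken ex-master is rebuilt at leisure from a dump of the new master and attached with CHANGE MASTER TO. Keeping log-bin permanently on on the slave is the key preparation step.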

    Read the article

  • How to set up Heartbeat to run a service on only one node

    - by Jon Skarpeteig
    I have two Ubuntu 12.04 servers which run MySQL in a master-master setup, with mmm as manager. How can I set up Heartbeat to make sure that mmm only runs on one node at a time?

    Edit, to explain more clearly. My setup:

        ---------VIP (10.0.0.123)------
        |                             |
        Node1                         Node2

    Both Node1 and Node2 run:

        MySQL
        Multi-Master Replication Manager for MySQL (mmm)
        Heartbeat

    I only want a single write-enabled MySQL node, and I can only have one mmm running at a time, or else I'll get collisions between the managers.
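
    For reference, this is what heartbeat's resource management is for: resources listed in haresources (v1-style configuration) are started on one node at a time and failed over as a set. A sketch, assuming the mmm monitor ships with an init script called mysql-mmm-monitor (name varies by distribution):

        # /etc/ha.d/haresources (identical on both nodes):
        # preferred node, the VIP, then the service to run alongside it
        node1 10.0.0.123 mysql-mmm-monitor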

    Read the article

  • Subdocument in Word won't save

    - by ChrisW
    Because I know Word has a history of not liking very large documents (my supervisor specifically told me not to use LaTeX... grr), I decided to learn the master document / subdocument feature of Word when writing my PhD thesis. I have the title page, table of contents etc. in the master document, and each chapter as a separate document. However, when I save the master document, it appears to save all the chapter documents apart from one (Chapter 4), for which it brings up the Save Document dialog box, helpfully with "Chapter4.docx" in the "Save as" box (n.b. Chapter4.docx is not open). Clicking Save does nothing, and doesn't make the dialog box go away. Saving as a different document means that my changes aren't reflected in the same document. There must be some reason Word doesn't like this particular document, but I've got no idea why; there's nothing special in it that isn't in any of the other chapters. I have tried closing all documents, renaming Chapter4.docx, opening the master document, expanding all documents, OKing the warning that Chapter4.docx does not exist, and inserting the 'new' document, but even when I save the master document it still won't save the new Chapter4 document. If anyone knows any reason why Word is acting like this (or if I'm doing anything stupid), I'll be eternally grateful. (P.S. Sorry for the long rambling message. It's late, I've been working on my PhD for 4.5 years, I really really want to throw this computer out the window, and I hope people are kind enough not to downvote this question because of its rambling nature!)

    Update: With Word closed, I've tried to delete Chapter4.docx (having made a backup!), but I get a warning that it can't be deleted because it's open in Microsoft Word... These files are on a network drive, and the same problems are happening on 2 different computers. I could log in to the filestore through ssh and force the file to be deleted, but I'm curious to know why this is happening!

    Read the article

  • T-SQL for autogrowth of multiple data files

    - by ddono25
    I can't seem to figure out the problems with my script to alter a SQL Server 2008 database's file growth. There are two data files and a log file, all of which need to have autogrowth ON. Does this look completely wrong? Thanks!

        USE MASTER
        GO
        ALTER DATABASE BigDB
        MODIFY FILE
        ( NAME = BIGDBPPE,
          FILENAME = "H:\MSSQL\Data\BigDB.mdf",
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 2000MB)

        USE MASTER
        GO
        ALTER DATABASE BigDB
        MODIFY FILE
        ( NAME = BIGDBPPE1,
          FILENAME = "K:\MSSQL\Data\BigDB_data1.ndf",
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 2000MB)

        USE MASTER
        GO
        ALTER DATABASE BigDB
        MODIFY FILE
        ( NAME = BIGDBPPE_log,
          FILENAME = "O:\MSSQL\Data\BigDB_log.ldf",
          MAXSIZE = UNLIMITED,
          FILEGROWTH = 200MB)
        GO
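
    For what it's worth, two things stand out (a hedged reading, not a tested fix): T-SQL string literals take single quotes (double quotes are treated as identifiers under the default QUOTED_IDENTIFIER ON), and FILENAME doesn't need to be specified at all just to change growth settings. A sketch of the first file's statements rewritten; the other two follow the same pattern:

        USE master;
        GO
        ALTER DATABASE BigDB
        MODIFY FILE (NAME = BIGDBPPE, FILEGROWTH = 2000MB);
        GO
        ALTER DATABASE BigDB
        MODIFY FILE (NAME = BIGDBPPE, MAXSIZE = UNLIMITED);
        GO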

    Read the article

  • Puppet claims to be unable to resolve domains even though the domain resolves properly

    - by gparent
    I have a fairly simple puppet setup: one master and one node, both running Debian Squeeze 6.0.4. I have DNS entries for the two machines, client and master respectively, and both DNS entries resolve correctly to the right IPs on both machines. On my client, I have this configuration:

        [main]
        server = master.example.org
        logdir=/var/log/puppet
        vardir=/var/lib/puppet
        ssldir=/var/lib/puppet/ssl
        rundir=/var/run/puppet
        factpath=$vardir/lib/facter
        pluginsync=true
        templatedir=/var/lib/puppet/templates

    Key exchange seems to fail, according to this message in /var/log/syslog:

        localhost puppet-agent[11364]: Could not request certificate: getaddrinfo: Name or service not known

    Why is resolution not working only for puppet?
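
    For reference, getaddrinfo goes through the system resolver (nsswitch and the hosts file, then resolv.conf), so it can fail even when direct DNS queries succeed. A diagnostic sketch:

        # resolves the same way puppet's getaddrinfo call does
        getent hosts master.example.org
        # compare with a pure DNS lookup
        dig +short master.example.org
        # if getent fails while dig works, inspect /etc/nsswitch.conf
        # and /etc/resolv.conf on the client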

    Read the article

  • MySQL replication: slow resyncing of slave after an error

    - by James Hackett
    I have a slave that hit an error about a month or so ago and got way behind the master. I fixed the error, and the slave is now playing catch-up with the master, but it's going very slowly: about 1.3x real time. We were using less than 10% of the DB's resources when these writes first happened, so the speed of the server shouldn't be an issue. Are there any settings I can change to help the slave catch up with the master?
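
    For reference, catch-up speed is usually bound by the slave's single replication SQL thread and by durability-related fsyncs rather than raw capacity. A hedged sketch of temporary, durability-reducing tweaks often used just for catch-up (revert them afterwards):

        SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- flush once a second, not per commit
        SET GLOBAL sync_binlog = 0;                     -- let the OS flush the binlog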

    Read the article

  • Ganglia: divide colors by roles

    - by com
    Sorry for a silly question; I am still a newbie to Ganglia. In Ganglia I monitor a few important metrics for MySQL (seconds behind master, etc.). In addition, I have a few bunches of MySQL servers (every bunch has its own tasks, but all of the bunches should be checked for seconds behind master). I am interested in whether it is possible to show all the metrics on one page, with different colors for different bunches. Right now, in the metric "seconds behind master", I see all MySQL servers with colors that reflect different states (red is critical, gray is OK). Can I set a color on a graph according to its bunch? Thanks!

    Read the article

  • What is the best way to change the replication scheme of 2 currently replicated slaves?

    - by mmattax
    I have MySQL replication set up in production as follows:

        DB1 -> DB2
        DB1 -> BAK

    where DB2 and BAK are slaves to DB1. All 3 servers are in sync (0 seconds behind the master) and have 30+ GB of data. I'd like to put the servers in a new master-slave configuration as follows:

        DB1 -> DB2 -> BAK

    What is the best way to change the master host on BAK? Is there a way to avoid having to stop the slave thread on DB2 and take a mysqldump for BAK (a 5-6 hour process)?
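
    For reference, BAK already has the data, so a dump can usually be avoided. A hedged sketch (MySQL 5.x; it assumes DB2 runs with log-bin and log-slave-updates, without which DB2 writes no binlog events for BAK to read):

        -- 1. stop both slaves at the same master position: on DB2 and BAK,
        --    STOP SLAVE; then use START SLAVE UNTIL ... to bring the one
        --    that is behind up to the other's exact coordinates
        -- 2. on DB2: SHOW MASTER STATUS;   -- note File and Position
        -- 3. on BAK:
        CHANGE MASTER TO
            MASTER_HOST = 'db2.example.com',
            MASTER_LOG_FILE = '<File from step 2>',
            MASTER_LOG_POS = <Position from step 2>;
        START SLAVE;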

    Read the article

  • Rebasing a branch which is public

    - by Dror
    I'm failing to understand how to use git-rebase, so consider the following example. Let's start a repository in ~/tmp/repo:

        $ git init

    Then add a file foo:

        $ echo "hello world" > foo

    which is then added and committed:

        $ git add foo
        $ git commit -m "Added foo"

    Next, I started a remote repository. In ~/tmp/bare.git I ran:

        $ git init --bare

    In order to link repo to bare.git I ran:

        $ git remote add origin ../bare.git/
        $ git push --set-upstream origin master

    Next, let's branch, add a file, and set an upstream for the new branch b1:

        $ git checkout -b b1
        $ echo "bar" > foo2
        $ git add foo2
        $ git commit -m "add foo2 in b1"
        $ git push --set-upstream origin b1

    Now it is time to switch back to master and change something there:

        $ echo "change foo" > foo
        $ git commit -a -m "changed foo in master"
        $ git push

    At this point, in master the file foo contains "change foo", while in b1 it is still "hello world". Finally, I want to sync b1 with the progress made in master:

        $ git checkout b1
        $ git fetch origin
        $ git rebase origin/master

    At this point git st returns:

        # On branch b1
        # Your branch and 'origin/b1' have diverged,
        # and have 2 and 1 different commit each, respectively.
        #   (use "git pull" to merge the remote branch into yours)
        # nothing to commit, working directory clean

    At this point the content of foo in the branch b1 is "change foo" as well. So what does this warning mean? I expected I should do a git push, but git suggests doing git pull... According to this answer, this is more or less it, and in his comment @FrerichRaabe explicitly says that I don't need to do a pull. What's going on here? What is the danger, and how should one proceed? How should the history be kept consistent? What is the interplay between the case described above and the following citation:

        Do not rebase commits that you have pushed to a public repository.

    taken from the Pro Git book. I guess it is somehow related, and if not I would love to know why. What's the relation between the above scenario and the procedure I described in this post?
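
    For reference, the rebase rewrote b1's commits on top of origin/master, so the local b1 and the previously pushed origin/b1 now have different histories; that is exactly the situation the Pro Git warning is about. If nobody else has built work on origin/b1, the usual next step is a forced push (a sketch; --force-with-lease requires git >= 1.8.5, and plain --force works everywhere):

        $ git push --force-with-lease origin b1

    Pulling instead would merge the old, pre-rebase b1 back in and duplicate its commits, which is what blindly following the "diverged" hint would cause here.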

    Read the article
