Search Results

Search found 5011 results on 201 pages for 'grand master t'.

Page 66/201 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Jenkins swarm-plugin jar file won't run in background

    - by JeanMertz
    We're working on an automation script for our Jenkins slaves on a local Unix server. To connect the slaves to the Jenkins master, we use the swarm plugin. Setting up the master was easy, and connecting clients is also easy with a single command. However, when I try to get the slave command (a Java application) to run in the background without stalling the current process, it doesn't seem to work. I've created an init.d file and registered it with update-rc.d, but that doesn't work:

      #!/bin/bash
      /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar -executors 4

    I've also tried running it with an ampersand (&) to start the process in the background, but that doesn't work either because - from looking at the source - the jar file actually boots another process that is then started in the foreground. Any ideas on how to make this jar file start without stopping the bootstrap script?
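
    One commonly suggested first step - a sketch only, not verified against the swarm client's re-exec behaviour - is to fully detach the JVM with nohup and redirect its output; the jar path comes from the question, while the log and PID file locations are assumptions:

      #!/bin/bash
      # Detach the swarm client from the shell so the init script can return
      nohup /usr/bin/java -jar /root/swarm-client-1.7-jar-with-dependencies.jar \
          -executors 4 >/var/log/swarm-client.log 2>&1 &
      echo $! > /var/run/swarm-client.pid   # PID of the bootstrap JVM, not any child it spawns

    Because the question notes that the jar re-launches itself in the foreground, the recorded PID may not be the long-running process; prefixing the java command with setsid(1) is a further option.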

    Read the article

  • Point dns server to root dns servers [duplicate]

    - by Dhaksh
    This question already has answers here: "What is a glue record?" (3 answers) and "Why does DNS work the way it does?" (4 answers). I have set up a custom authoritative-only DNS server using bind9, in a master and slave arrangement. Assume the DNS servers are:

      ns1.customdnsserver.com [192.168.91.129] ==> Master
      ns2.customdnsserver.com [192.168.91.130] ==> Slave

    Now I will host a few shared hosting websites on my own web server, where I will link the above nameservers to my domains. My question is: how do I tell the root DNS servers about my own authoritative-only DNS server, so that when someone queries for the domain www.example.com, and that domain's website is hosted on my shared hosting, the query is directed to my own DNS server and www.example.com gets resolved to an IP address?
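
    For reference, once the registrar has published the nameserver (glue) records at the registry, the delegation can be checked with a query like the one below; the domain comes from the question, and the TLD server named is just one of the public .com servers:

      # Ask a .com TLD server who is authoritative for the zone (no recursion)
      dig +norecurse @a.gtld-servers.net customdnsserver.com NS
      # The glue A records for ns1/ns2 should appear in the ADDITIONAL section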

    Read the article

  • Prevent zsh from trying to expand everything

    - by Attila O.
    Having recently switched from bash, I noticed that zsh tries to expand every command or argument that looks like it contains wildcards. So the following lines no longer work:

      git diff master{,^^}
      zsh: no matches found: master^^

      scp remote:~/*.txt .
      zsh: no matches found: remote:~/*.txt

    The only way to make the above commands work is to quote the arguments, which is quite annoying. Q: How do I configure zsh to still try to expand wildcards, but if there are no matches, just pass the argument on as-is? EDIT: Possibly related: scp with zsh: no matches found
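
    For what it's worth, this behaviour is governed by zsh's NOMATCH option; a minimal sketch of the bash-like fallback, added to ~/.zshrc:

      # Pass globs with no matches through to the command unchanged
      # instead of aborting with "no matches found"
      unsetopt nomatch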

    Read the article

  • CENTOS: unsupported dictionary type: sqlite in POSTFIX

    - by Ferdinand
      Oct 30 09:24:15 postfix postfix/smtpd[1622]: fatal: unsupported dictionary type: sqlite
      Oct 30 09:24:16 postfix postfix/master[1165]: warning: process /usr/libexec/postfix/smtpd pid 1622 exit status 1
      Oct 30 09:24:16 postfix postfix/master[1165]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    I'm trying to use sqlite with postfix, but I get the error above. I'm using CentOS 6.4 x64, and I have sqlite and sqlite-devel installed too. I'm assuming the postfix package from BASE (the CentOS repo) comes without sqlite support? I've not been able to recompile it with sqlite support using this: http://www.postfix.org/SQLITE_README.html Is there another way to get it to work?
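
    As a quick check, the lookup-table types compiled into a given Postfix build can be listed directly; if sqlite is missing from this output, no configuration change will enable it:

      # Print the dictionary/map types this postfix binary supports
      postconf -m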

    Read the article

  • How to proceed with setting up a secondary MySQL Linux slave?

    - by Algorist
    I have a MySQL database master and slave in production, and I want to set up an additional slave. There is around 15 terabytes of data in the database, in a mix of MyISAM and InnoDB tables. I am thinking of the options below:

    1. Shut down the master database and copy the MySQL data folder to the secondary slave. Can InnoDB tables be copied like this?
    2. Run FLUSH TABLES WITH READ LOCK, scp the files to the new slave, and unlock the tables. This is possible for MyISAM tables, but can I do the same for InnoDB tables too?

    Thanks for looking at the question.
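
    For a dataset this size, a third option often suggested - an assumption here, since it requires installing Percona XtraBackup and checking version compatibility - is a hot copy that handles InnoDB without a long lock:

      # Take a hot backup on the master, then make it consistent
      innobackupex /backups
      innobackupex --apply-log /backups/<timestamp>
      # Copy the result to the new slave's datadir and start replication
      # from the binlog position recorded in xtrabackup_binlog_info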

    Read the article

  • MySQL replication/connection failing over SSL

    - by Marcel Tjandraatmadja
    I set up two MySQL servers where one replicates from the other. They both work perfectly, but once I turn on SSL I get the following error:

      ERROR 2026 (HY000): SSL connection error

    I get the same error running from the command line like so:

      mysql --ssl=1 --ssl-ca=/etc/mysql/certificates/ca-cert.pem --ssl-cert=/etc/mysql/certificates/client-cert.pem --ssl-key=/etc/mysql/certificates/client-key.pem --user=slave --password=slavepassword --host=master.url.com

    Both MySQL servers are running version 5.0.77. One difference is that MySQL on the master server was compiled for x86_64, while on the slave server it was compiled for i686. Both machines are running CentOS 5, and I generated the certificates as per this page. Any ideas for finding a solution?
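
    Two checks that often narrow this down - standard MySQL and OpenSSL commands, with the certificate paths taken from the question:

      # Confirm each build actually has SSL compiled in and enabled
      mysql -e "SHOW VARIABLES LIKE '%ssl%'"    # have_ssl/have_openssl should say YES
      # Confirm the client certificate chains to the same CA the master uses
      openssl verify -CAfile /etc/mysql/certificates/ca-cert.pem \
          /etc/mysql/certificates/client-cert.pem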

    Read the article

  • How to disallow use of disable_functions?

    - by Gaia
    I'm obviously not the first one to have this problem, but I cannot find an answer to this situation. I want to lock down PHP a bit, more specifically the use of disable_functions. The environment is CentOS 6.2/PHP 5.3.3 fcgid/Apache 2.2.15:

    1. What's the proper Apache config (AllowOverride, etc.) to prevent any PHP setting from being changed via .htaccess? All other overrides are OK (the current setting is AllowOverride All).
    2. What's the proper config to forbid effective use of disable_functions in all but the master php.ini (as in, forbid use of disable_functions in /home/myvhost/etc/php5/php.ini or any directory within that vhost's public_html; another way to say this: the only effective disable_functions comes from the master php.ini)?
    3. If #2 is not possible, at least what's the proper config to disallow a vhost owner from effectively using any php.ini but the vhost's main one (/home/myvhost/etc/php5/php.ini)?

    Thanks
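
    On point 1, disable_functions is a php.ini-only directive, so .htaccess and ini_set() cannot touch it regardless of AllowOverride; what matters is which php.ini the fcgid processes read. A sketch of pinning that through the fcgid wrapper script - all paths here are assumptions:

      #!/bin/sh
      # Force php-cgi to read only the administrator-controlled php.ini
      PHPRC=/etc/php5              # directory containing the master php.ini
      export PHPRC
      export PHP_FCGI_CHILDREN=4
      exec /usr/bin/php-cgi

    Keeping vhost owners from substituting their own wrapper (and so their own PHPRC) would then address #3, and effectively #2.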

    Read the article

  • DNS delegation on same server with DDNS and second slave server

    - by Austin
    I have two servers running BIND; the first is set up as the master for two zones and the second as a slave for those zones. The zones are example.com and ddns.example.com. I have DDNS running, and thousands of device entries are dynamically created in ddns.example.com. I wanted to keep DDNS separate from the main example.com zone, so I created a separate zone that the DHCP servers update. Considering these zones are hosted on the same server, is it possible to have delegation working from example.com to ddns.example.com? For example, if my workstation's search domain is example.com and it points at 10.1.10.1 for DNS, I would like to be able to resolve hostname.ddns. As it is, I can resolve hostname.ddns.example.com, but I would like to be able to resolve just hostname.ddns. Alternatively, if the workstation's search domain is ddns.example.com, what settings do I need to change to be able to resolve web, ftp, etc., which are all hosts in the parent example.com zone? Does the ddns.example.com zone need to forward to the example.com zone? Again, all the zones are set up on the same server, with a second server set up as a slave.

    named.conf:

      zone "example.com" IN {
          type master;
          file "example.com";
          allow-update { none; };
      }

      zone "ddns.example.com" IN {
          type master;
          file "ddns.example.com";
          allow-update { key dhcp-update; };
      }

    example.com zone file:

      $ORIGIN .
      $TTL 86400
      example.com  IN SOA  ns1.example.com. hostmaster.example.com. (
                           serial, refresh, retry, etc. )
                   NS      ns1.example.com.
                   NS      ns2.example.com.
      $ORIGIN example.com.
      ns1          A       10.1.10.1
      ns2          A       10.1.10.2
      web          A       10.1.15.30
      ftp          A       10.1.15.31
      host3        A       10.1.15.32
      $ORIGIN ddns.example.com
                   NS      ns1
                   NS      ns2
      ns1          A       10.1.10.1
      ns2          A       10.1.10.2
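
    On the hostname.ddns part of the question, short-name resolution like this is normally a function of the client's resolver search list rather than of BIND delegation; a sketch of the client side, using the addresses from the question:

      # /etc/resolv.conf on a workstation
      nameserver 10.1.10.1
      search example.com ddns.example.com
      # "hostname.ddns" is retried as hostname.ddns.example.com, and bare
      # "web" or "ftp" are retried under example.com as well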

    Read the article

  • Spring MVC and 406 error on XML requests

    - by Asp1de
    Hi, I have a problem when running my code outside Eclipse. This is my Equinox environment:

      Framework is launched.
      id   State      Bundle
      0    ACTIVE     org.eclipse.osgi_3.7.0.v20110221
      1    ACTIVE     org.eclipse.equinox.common_3.6.0.v20110506
      2    ACTIVE     org.eclipse.update.configurator_3.3.100.v20100512
      3    RESOLVED   catalina-config_1.0.0 Master=20
      4    ACTIVE     org.springframework.osgi.catalina.start.osgi_1.0.0 Fragments=62
      5    ACTIVE     com.springsource.javax.activation_1.1.1
      6    ACTIVE     com.springsource.javax.annotation_1.0.0
      7    ACTIVE     com.springsource.javax.ejb_3.0.0
      8    ACTIVE     com.springsource.javax.el_1.0.0
      9    ACTIVE     com.springsource.javax.mail_1.4.0
      10   ACTIVE     com.springsource.javax.persistence_1.0.0
      11   ACTIVE     com.springsource.javax.servlet_2.5.0
      12   ACTIVE     com.springsource.javax.servlet.jsp_2.1.0
      13   ACTIVE     com.springsource.javax.servlet.jsp.jstl_1.1.2
      14   ACTIVE     com.springsource.javax.xml.bind_2.0.0
      15   ACTIVE     com.springsource.javax.xml.rpc_1.1.0
      16   ACTIVE     com.springsource.javax.xml.soap_1.3.0
      17   ACTIVE     com.springsource.javax.xml.stream_1.0.1
      18   ACTIVE     com.springsource.javax.xml.ws_2.1.1
      19   ACTIVE     com.springsource.org.aopalliance_1.0.0
      20   ACTIVE     com.springsource.org.apache.catalina_6.0.18 Fragments=3, 22
      21   ACTIVE     com.springsource.org.apache.commons.logging_1.1.1
      22   RESOLVED   com.springsource.org.apache.coyote_6.0.18 Master=20
      23   ACTIVE     com.springsource.org.apache.el_6.0.18
      24   ACTIVE     com.springsource.org.apache.juli.extras_6.0.18
      25   ACTIVE     com.springsource.org.apache.log4j_1.2.15 Fragments=33
      26   ACTIVE     com.springsource.org.apache.taglibs.standard_1.1.2
      27   ACTIVE     org.springframework.osgi.commons-el.osgi_1.0.0.SNAPSHOT
      28   ACTIVE     data_1.0.0
      29   ACTIVE     Api_1.0.0
      30   ACTIVE     connector_1.0.0
      31   ACTIVE     core_1.0.0
      32   ACTIVE     org.springframework.osgi.jasper.osgi_5.5.23.SNAPSHOT
      33   RESOLVED   com.springsource.org.apache.log4j.config_1.0.0 Master=25
      34   ACTIVE     testController_1.0.0
      35   ACTIVE     org.eclipse.core.contenttype_3.4.100.v20100505-1235
      36   ACTIVE     org.eclipse.core.jobs_3.5.0.v20100515
      37   ACTIVE     org.eclipse.equinox.app_1.3.0.v20100512
      38   ACTIVE     org.eclipse.equinox.preferences_3.3.0.v20100503
      39   ACTIVE     org.eclipse.equinox.registry_3.5.0.v20100503
      40   ACTIVE     org.eclipse.osgi.services_3.2.100.v20100503
      41   ACTIVE     osgi.core_4.3.0.201102171602
      42   ACTIVE     dataImplementation_1.0.0
      43   ACTIVE     org.springframework.osgi.servlet-api.osgi_2.4.0.SNAPSHOT
      44   ACTIVE     org.springframework.aop_3.1.1.RELEASE
      45   ACTIVE     org.springframework.asm_3.1.1.RELEASE
      46   ACTIVE     org.springframework.beans_3.1.1.RELEASE
      47   ACTIVE     org.springframework.context_3.1.1.RELEASE
      48   ACTIVE     org.springframework.context.support_3.1.1.RELEASE
      49   ACTIVE     org.springframework.core_3.1.1.RELEASE
      50   ACTIVE     org.springframework.expression_3.1.1.RELEASE
      51   ACTIVE     org.springframework.osgi.extensions.annotations_1.2.1
      52   ACTIVE     org.springframework.osgi.core_1.2.1
      53   ACTIVE     org.springframework.osgi.extender_1.2.1
      54   ACTIVE     org.springframework.osgi.io_1.2.1
      55   ACTIVE     org.springframework.osgi.mock_1.2.1
      56   ACTIVE     org.springframework.osgi.web_1.2.1
      57   ACTIVE     org.springframework.osgi.web.extender_1.2.1
      58   ACTIVE     org.springframework.oxm_3.1.1.RELEASE
      59   ACTIVE     org.springframework.transaction_3.1.1.RELEASE
      60   ACTIVE     org.springframework.web_3.1.1.RELEASE
      61   ACTIVE     org.springframework.web.servlet_3.1.1.RELEASE
      62   RESOLVED   tomcat-configuration-fragment_1.0.0 Master=4

    My controller is:

      @RequestMapping(value = "/test1", method = RequestMethod.GET, produces = "application/json")
      public @ResponseBody Person test1() {
          logger.info(" <--- Test 1 ---> \n");
          Person p = new Person("a", "b", "c");
          return p;
      }

      @RequestMapping(value = "/test2", method = RequestMethod.GET, produces = "application/xml")
      public @ResponseBody Person test3() {
          logger.info(" <--- Test 1 ---> \n");
          Person p = new Person("a", "b", "c");
          return p;
      }

      @RequestMapping(value = "/test2", method = RequestMethod.GET, headers = "Accept=*/*")
      public @ResponseBody Person test4() {
          logger.info(" <--- Test 1 ---> \n");
          Person p = new Person("a", "b", "c");
          return p;
      }

      @RequestMapping(value = "/parent", method = RequestMethod.GET, headers = "Accept=application/xml")
      public @ResponseBody Parent test2() {
          logger.info(" <--- Test 1 ---> \n");
          Parent p = new Parent("a", "b");
          return p;
      }

    If I run test 1 (the JSON request) it works perfectly, but when I run tests 2, 3 and 4 the browser gives me back this error: (406) The resource identified by this request is only capable of generating responses with characteristics not acceptable according to the request "accept" headers (). Could someone help me? PS: if I run the bundles inside Eclipse, everything works perfectly. I generate the bundles with Maven.
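
    For what it's worth, a 406 that appears only for the XML endpoints usually means no message converter could marshal Person to XML. With Spring's JAXB-based converter, the returned type needs JAXB annotations - a sketch, assuming Person is a simple bean (the field names are hypothetical):

      import javax.xml.bind.annotation.XmlRootElement;

      @XmlRootElement                  // lets Jaxb2RootElementHttpMessageConverter handle it
      public class Person {
          private String first, middle, last;
          public Person() { }          // JAXB requires a no-arg constructor
          public Person(String f, String m, String l) { first = f; middle = m; last = l; }
          // getters and setters omitted for brevity
      }

    Outside Eclipse, it is also worth confirming that the bundle exporting javax.xml.bind (bundle 14 above) is actually wired to the web bundle, since a missing JAXB provider degrades to the same 406.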

    Read the article

  • How do I handle mysql replication in EC2 using private IPs?

    - by chris
    I am trying to set up a MySQL master/slave configuration on two EC2 instances. However, every time I reboot an instance, the IP address (and hostname) changes. I could assign an Elastic IP address, but I would prefer to use the internal IP address. I can't be the first person to do this, but I can't seem to find a solution. There are a lot of "getting started" guides, but none of them mention how to handle changing IP addresses. So what are the best practices for managing master/slave replication in EC2?
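
    One pattern that comes up often - a sketch under assumptions, since the hostname below is hypothetical and would need to be kept current by a boot-time script or a private DNS zone - is to point replication at a stable name rather than at the instance's private IP:

      -- On the slave: reference the master by name, so a reboot only requires a DNS update
      CHANGE MASTER TO
          MASTER_HOST = 'mysql-master.internal.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '...';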

    Read the article

  • Fatal error on "mode 120000" file during git -> svn migration

    - by Oliver
    Following the instructions on this website: http://code.google.com/p/support/wiki/ImportingFromGit I'm trying to migrate a git repository to svn, but during the "git rebase master tmp" step it fails with the following error after applying the first few patches:

      $ git rebase master tmp
      First, rewinding head to replay your work on top of it...
      Applying: Imported
      Applying: Cleaned up the readme file
      Applying: fix problem with versions
      fatal: unable to write file foobar mode 120000
      Patch failed at 0003 fix problem with versions
      When you have resolved this problem run "git rebase --continue".
      If you would prefer to skip this patch, instead run "git rebase --skip".
      To restore the original branch and stop rebasing run "git rebase --abort".

    I understand that 120000 may refer to a symlink, but Subversion has supported symlinks for a long time now. The Subversion installed is 1.6.5, Git is 1.6.3.3, running on Ubuntu Linux. The system is not running out of disk space, and this operation is taking place within my home directory, so permissions should not be an issue.
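
    Two quick diagnostics that may help pin down what git is trying to write - standard git/coreutils commands, with "foobar" taken from the error message above:

      # List the symlinks (mode 120000) recorded on the branch being replayed
      git ls-tree -r tmp | grep '^120000'
      # See whether something in the working tree already blocks that path
      ls -ld foobar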

    Read the article

  • Is this a bug in Profiler or Entity Framework?

    - by AjarnMark
    Using Entity Framework 4 with stored procedures and SQL Server 2008 SP1... When running SQL Server Profiler (the TSQL_SPs template), the lines that show my stored procedure call and its statements say they executed in DatabaseID = 1 (master), but they are actually happening in my application database (ID = 8). The procedures execute properly and return the data, and they only exist in my application database, so why does Profiler mark those lines as being in master? Is this a bug in Profiler? Is it a bug in EF4? Note that when running the same code against a SQL 2000 instance, Profiler correctly shows the application's database ID.

    Read the article

  • homebrew in mac lion

    - by user975352
    I'm a beginner with Mac Lion (10.7.2). I don't know much about Macs, though I know Ubuntu well. I installed Homebrew on my Mac and ran the command below:

      $ brew install git

    and then:

      $ brew update
      error: Could not resolve host: github.com; nodename nor servname provided, or not known while accessing https://github.com/mxcl/homebrew.git/info/refs
      fatal: HTTP request failed
      Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What's happening on my Mac? How do I resolve this? Would you help me?
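
    The error itself is a DNS lookup failure rather than a Homebrew problem; two standard OS X checks make a reasonable starting point:

      # Can the machine resolve github.com at all?
      ping -c 1 github.com
      # Which DNS servers is the system configured to use?
      scutil --dns | grep nameserver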

    Read the article

  • Best way to duplicate databases nightly?

    - by Margaret
    Hey all, we just got two new servers that are running Windows Server 2008. The intent is to make the machines pretty much identical, copying the content of the master to the slave on a nightly basis, so that if anything fails, the second copy can stand in immediately. It doesn't need to be up-to-the-minute mirroring, though I suppose that wouldn't hurt if performance is not affected. The two machines will, amongst other things, each be running an instance of SQL Server 2008. The aim is to duplicate the databases on the master down to the slave on a nightly basis. Unless I'm misunderstanding, the mirror database in a mirrored pair requires the principal to be present to work correctly; I'm hoping for some solution where we have a second machine that can be up and running with minimal downtime if the first one falls over. Am I misunderstanding mirroring? Is that the best way to do things, or should I use some other mechanism? If so, what?
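
    If plain nightly copies are enough, the simplest mechanism is a scheduled backup/restore; a minimal T-SQL sketch, with the database name and paths as placeholders:

      -- On the master, each night:
      BACKUP DATABASE AppDb TO DISK = N'\\slave\backups\AppDb.bak' WITH INIT;
      -- On the slave, after the copy completes:
      RESTORE DATABASE AppDb FROM DISK = N'D:\backups\AppDb.bak'
          WITH REPLACE, RECOVERY;

    SQL Server's built-in log shipping automates essentially this loop and keeps the slave a short restore away from service.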

    Read the article

  • How to retain background colors when pasting between documents in Excel

    - by Iain Fraser
    I have a script that programmatically generates Excel spreadsheets - cleaning up ugly reports that are given to us by another organisation. For interest's sake: I'm using PHPExcel to generate the "clean" reports. We get these reports every week for an event that happens every couple of months. The reports contain a list of attendees along with a group ID that lets us know which attendees belong together. To help the event organisers, I've taken the event ID and generated a unique colour code (based on a hash of the event ID, truncated to 6 characters). This unique colour code is set as the background colour of a cell in each row, which helps organisers quickly identify group members visually. Trouble is, when the organisers copy rows from a week's report into the master report (which contains all attendees, not just the ones who signed up this week), all the colour codes snap to the master template's colour palette.

    Thank you very much for your time.
    All the best,
    Iain
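
    For reference, a sketch of how such a colour might be derived and applied in PHPExcel - the cell reference and variable names are illustrative, not from the original script:

      // Derive a 6-hex-digit colour from the event ID and paint the cell with it
      $rgb = strtoupper(substr(md5($eventId), 0, 6));
      $sheet->getStyle('F2')->getFill()
            ->setFillType(PHPExcel_Style_Fill::FILL_SOLID)
            ->getStartColor()->setRGB($rgb);

    The snapping the question describes happens at paste time inside Excel, so the fix may lie in how rows are pasted rather than in how the file is generated.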

    Read the article

  • Clonezilla, nLite and PCs with different hardware specs

    - by r2b2
    Hi, I used Clonezilla to create and restore images from a master computer to other workstations with the same specs. The problem now is that there are new computers whose hardware specs differ from the ones I maintain (mostly the video card is different). Is it possible to create a customized Windows XP installer using nLite and integrate all the potential video card and motherboard drivers? If I then use this nLite ISO to install to a master computer, which I will later clone and restore to other workstations, will Windows XP still pick up the correct driver set? During XP installation, does the installer transfer all the drivers to the computer's hard disk? Thanks! Ryan

    Read the article

  • Postfix TLS issue

    - by HTF
    I'm trying to enable TLS on Postfix but the daemon is crashing:

      Sep 16 16:00:38 core postfix/master[1689]: warning: process /usr/libexec/postfix/smtpd pid 1694 killed by signal 11
      Sep 16 16:00:38 core postfix/master[1689]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    CentOS 6.3 x86_64

      # postconf -n
      alias_database = hash:/etc/aliases
      alias_maps = hash:/etc/aliases
      broken_sasl_auth_clients = yes
      command_directory = /usr/sbin
      config_directory = /etc/postfix
      daemon_directory = /usr/libexec/postfix
      data_directory = /var/lib/postfix
      debug_peer_level = 2
      disable_vrfy_command = yes
      home_mailbox = Maildir/
      html_directory = no
      inet_interfaces = all
      inet_protocols = all
      local_recipient_maps =
      mail_owner = postfix
      mailbox_command =
      mailq_path = /usr/bin/mailq.postfix
      manpage_directory = /usr/share/man
      mydestination = $myhostname, localhost.$mydomain, localhost
      mydomain = domain.com
      myhostname = mail.domain.com
      mynetworks = 127.0.0.0/8
      myorigin = $mydomain
      newaliases_path = /usr/bin/newaliases.postfix
      queue_directory = /var/spool/postfix
      readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
      relay_domains =
      sample_directory = /usr/share/doc/postfix-2.6.6/samples
      sendmail_path = /usr/sbin/sendmail.postfix
      setgid_group = postdrop
      smtp_tls_note_starttls_offer = yes
      smtp_tls_session_cache_database = btree:/var/lib/postfix/smtpd_tls_cache.db
      smtp_use_tls = yes
      smtpd_delay_reject = yes
      smtpd_error_sleep_time = 1s
      smtpd_hard_error_limit = 20
      smtpd_helo_required = yes
      smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_hostname, reject_invalid_hostname, permit
      smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_pipelining, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_destination reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_local_domain = $myhostname
      smtpd_sasl_path = private/auth
      smtpd_sasl_security_options = noanonymous
      smtpd_sasl_type = dovecot
      smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
      smtpd_soft_error_limit = 10
      smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
      smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
      smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
      smtpd_tls_loglevel = 1
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_timeout = 3600s
      smtpd_use_tls = yes
      tls_random_source = dev:/dev/urandom
      unknown_local_recipient_reject_code = 550
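
    A signal 11 (segfault) at smtpd startup often points to a binary/library mismatch rather than a configuration error; two standard checks that may narrow it down:

      # Which SSL libraries is smtpd actually linked against?
      ldd /usr/libexec/postfix/smtpd | grep -i ssl
      # Have the postfix or openssl package files been altered or mixed?
      rpm -V postfix openssl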

    Read the article

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    We will be putting a new Windows 2008 SE server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations master DC as a backup DC, since the other member servers, running 2003, have either accounting/SQL or Exchange on them. Eventually all the Win2k servers will be decommissioned, but until then I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?

    Read the article

  • distributed, fault-tolerant network block device

    - by gucki
    I'm looking for a distributed, fault-tolerant network storage system which exposes block devices (not filesystems) on the clients.

    - A client's block device should write simultaneously to several storage nodes
    - A client's block device should not fail as long as not all storage nodes backing it went down
    - The master should automatically redistribute storages' data when a storage node fails or gets added/removed
    - A single master (which is for metadata only) is fine

    So ideally the architecture would be very similar to moosefs (http://www.moosefs.org/) but instead of exposing a real filesystem mounted using a fuse client it'd expose block devices on the clients. I know of iscsi and drbd but both don't seem to offer what I'm looking for. Or am I missing something?

    Read the article

  • How to clear a zone from a broken Bind/Named server

    - by Cerin
    I tried adding a new zone for "mydomain4.com" to my Named DNS server. However, when I went to restart it, I received the unhelpful error message:

      Error in named configuration:
      zone mydomain4.com/IN: loaded serial 3
      zone mydomain3.com/IN: loaded serial 2
      zone mydomain2.com/IN: loaded serial 2
      zone mydomain1.com/IN: loaded serial 2
      zone mydomain0.com/IN: loaded serial 6
      zone localhost.localdomain/IN: loaded serial 0
      zone localhost/IN: loaded serial 0
      zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
      zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
      zone 0.in-addr.arpa/IN: loaded serial 0
      zone mydomain/IN: loaded serial 2010092201
      dns_rdata_fromtext: db.10.157.10:27: near '*.mydomain4.com.': bad name (check-names)
      zone 10.157.10.in-addr.arpa/IN: loading from master file db.10.157.10 failed: bad name (check-names)
      zone 10.157.10.in-addr.arpa/IN: not loaded due to errors.
      _default/10.157.10.in-addr.arpa/IN: bad name (check-names)

    I'm confused by this, since I thought I created the new zone identically to how I created the other 4 zones. However, since I need this DNS server up, I tried deleting the new zone file at /var/named/chroot/var/named/mydomain4.com.db. Upon trying to restart again, I received a new unhelpful error:

      Error in named configuration:
      zone mydomain4.com/IN: loading from master file mydomain4.com.db failed: file not found
      zone mydomain4.com/IN: not loaded due to errors.
      _default/mydomain4.com./IN: file not found
      zone mydomain3.com/IN: loaded serial 2
      zone mydomain2.com/IN: loaded serial 2
      zone mydomain1.com/IN: loaded serial 2
      zone mydomain0.com/IN: loaded serial 6
      zone localhost.localdomain/IN: loaded serial 0
      zone localhost/IN: loaded serial 0
      zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
      zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
      zone 0.in-addr.arpa/IN: loaded serial 0
      zone mydomain/IN: loaded serial 2010092201
      dns_rdata_fromtext: db.10.157.10:27: near '*.mydomain4.com.': bad name (check-names)
      zone 10.157.10.in-addr.arpa/IN: loading from master file db.10.157.10 failed: bad name (check-names)
      zone 10.157.10.in-addr.arpa/IN: not loaded due to errors.
      _default/10.157.10.in-addr.arpa/IN: bad name (check-names)

    Obviously, named still thinks the zone file is being used, but I can't find where. I've tried doing:

      grep -lir "mydomain4" /

    but it doesn't find any files containing that text. How do I purge this domain from named's configs? Also, how do I figure out what caused the original error?
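
    Reading the two dumps together: the file-not-found lines come from a zone "mydomain4.com" stanza still present in named.conf (deleting the zone file does not remove the zone definition), while the original check-names failure points at line 27 of the reverse zone file, not at the new zone at all. A sketch of how to inspect and, if the wildcard record there is intentional, relax the check - the file path comes from the question:

      # Show the record BIND is rejecting
      sed -n '27p' /var/named/chroot/var/named/db.10.157.10

      # Per-zone override in named.conf if the name really should be allowed:
      zone "10.157.10.in-addr.arpa" IN {
          type master;
          file "db.10.157.10";
          check-names ignore;
      };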

    Read the article

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04, instead of a single master server. I cloned the master server and recreated it into two VMs. I am trying to follow the instructions in the OpenLDAP documentation here: http://www.openldap.org/doc/admin24/replication.html and it talks about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config dumps out a load of config information. When I try to connect using a browser and the admin bind credentials:

      ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

      # extended LDIF
      #
      # LDAPv3
      # base <> (default) with scope subtree
      # filter: (objectclass=*)
      # requesting: ALL
      #

      # search result
      search: 2
      result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
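
    On Ubuntu, access to cn=config is normally granted to the local root identity over the ldapi socket rather than to the directory's admin DN, so this is worth trying before changing any ACLs:

      # Read cn=config as the system root via SASL EXTERNAL
      sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config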

    Read the article

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters with one puppet master acting as a CA and clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests. See this question for more info.. multiple puppet masters. However, there are a couple of things I have had to do to get this working correctly and have an error which I'll get to. First of all, to get inventory working for a puppet-client (PC) connecting to its designated puppet-master (PM), I had to copy the CA certs on PM1 to the PM2 ca directory. I ran this command: scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/. Once i have done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile section of my rack.conf VH file on the PM2. Once I had done this, inventory started to work. Does this sound an acceptable way to do things? Secondly, in the puppet.conf file, I am setting the designated PM server for that client. Unless there is a better way, this is how it'll work in my production setup. So PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and then I sign the cert on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error: Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112 However, if I change the PC2 puppet.conf file and specify server = PM1 and the rerun puppet agent --test, i do not get any errors. I can then revert the change in the puppet.conf file back to server = PM2 and everything seems to run normally. Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error? Cheers, Oli

    Read the article

  • I can't remove a background image in PowerPoint 2007

    - by computergeek
    Hello everyone! I have a PowerPoint 2007 document with an annoying background graphic in one of the slides. I know about the slide master, but this little bugger is not something I can delete from the slide master. Argh! The graphic is a little pumpkin at the bottom of the slide plus my company logo at the top. Here is a copy of the document: http://www.violetonresumes.com/HelpMeRemoveThisGraphic.pptx I've already lost two hours of my life on this one. Any help would be greatly appreciated. Cheers

    Read the article

  • Why does the EFI shell not detect my Windows DVD?

    - by Oliver Salzburg
    I'm currently looking into (U)EFI for the first time and am already really confused. I insert the Windows Server 2008 R2 Enterprise disc into the DVD-ROM drive and boot into the EFI shell. The shell automatically lists all detected devices, which are:

      blk0 :CDRom - Alias (null)
           Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)/CDROM(Entry0)
      blk1 :BlockDevice - Alias (null)
           Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)

    To my understanding, it should have already detected the filesystem on blk0 and mounted it as fs0. Why is that not happening? If I insert a USB drive, it gets mounted just fine. The board is an Intel S5520HC, in case that makes a difference.
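
    For what it's worth, firmware typically only auto-maps filesystems it has drivers for (usually FAT), so a disc appears as fsX only when the firmware can actually read a filesystem on it. Two standard EFI shell commands worth trying - the fs0 alias name in the second is an assumption:

      # Rescan devices and rebuild the fs/blk mappings
      map -r
      # In the v1 shell, try mapping the block device to a filesystem alias by hand
      mount blk0 fs0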

    Read the article

  • Trying to create a git repo that does an automatic checkout every time someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive looks something like this:

      #!/bin/sh
      GIT_DIR=/home/<user>/<test_repo> git pull origin master

    But when I push to origin from another computer, I get the error:

      remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance. Edit, to clarify: the repo is a website in progress, and I'd like to have a version of it available at all times that is fully up to date.
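
    Two things usually bite here: hooks run with GIT_DIR already set (to origin's own directory), and GIT_DIR must point at a .git directory, not at a working tree. A sketch of a corrected hook, keeping the placeholder path from the question:

      #!/bin/sh
      # Drop the GIT_DIR inherited from the hook environment, then pull inside the checkout
      unset GIT_DIR
      cd /home/<user>/<test_repo> || exit 1
      git pull origin master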

    Read the article
