Search Results

Search found 5414 results on 217 pages for 'otn j master'.

  • Fatal error on "mode 120000" file during git -> svn migration

    - by Oliver
    Following the instructions from http://code.google.com/p/support/wiki/ImportingFromGit, I'm trying to migrate a git repository to svn, but the "git rebase master tmp" step fails with the following error after applying the first few patches:

        $ git rebase master tmp
        First, rewinding head to replay your work on top of it...
        Applying: Imported
        Applying: Cleaned up the readme file
        Applying: fix problem with versions
        fatal: unable to write file foobar mode 120000
        Patch failed at 0003 fix problem with versions
        When you have resolved this problem run "git rebase --continue".
        If you would prefer to skip this patch, instead run "git rebase --skip".
        To restore the original branch and stop rebasing run "git rebase --abort".

    I understand that 120000 may refer to a symlink, but Subversion has supported symlinks for a long time now. Subversion is 1.6.5 and Git is 1.6.3.3, running on Ubuntu Linux. The system is not running out of disk space, and the operation takes place within my home directory, so permissions should not be an issue.
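
    A hedged diagnostic sketch (not from the original post): before blaming the svn side, confirm that "foobar" really is tracked as a symlink in the branch being replayed, and see what it points at:

        # mode 120000 marks a symlink; show the tree entry and its target
        git ls-tree tmp -- foobar
        git cat-file blob tmp:foobar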

  • Is this a bug in Profiler or Entity Framework?

    - by AjarnMark
    Using Entity Framework 4 with stored procedures and SQL Server 2008 SP1... When running SQL Server Profiler (TSQL_SPs template), the lines that show my stored procedure call and its statements say that they executed in DatabaseID = 1 (Master) but it is actually happening in my application database (ID = 8). The procedures execute properly and return the data, and they only exist in my application database, so why does Profiler mark those lines as being in Master? Is this a bug in Profiler? Is it a bug in EF4? Note that running the same code against a SQL 2000 instance, Profiler correctly shows the application's database ID.
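
    A quick, hedged way to double-check the ID-to-name mapping Profiler is reporting (a sketch; run it against the same instance the trace targets):

        sqlcmd -Q "SELECT DB_NAME(1) AS id1, DB_NAME(8) AS id8;"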

  • homebrew in mac lion

    - by user975352
    I'm a beginner on Mac OS X Lion (10.7.2); I know Ubuntu well, but not the Mac. I installed Homebrew on my Mac and ran the command below:

        $ brew install git

    and then:

        $ brew update
        error: Could not resolve host: github.com; nodename nor servname provided, or not known while accessing https://github.com/mxcl/homebrew.git/info/refs
        fatal: HTTP request failed
        Error: Failed while executing git pull origin refs/heads/master:refs/remotes/origin/master

    What's happening on my Mac, and how do I resolve this? Would you help me?
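
    A hedged first step (not from the original post): the error is DNS resolution failing for github.com rather than a Homebrew problem as such, so verify name resolution before touching brew:

        ping -c 1 github.com        # does the name resolve at all?
        nslookup github.com
        scutil --dns | head         # inspect the resolver configuration OS X is using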

  • Best way to duplicate databases nightly?

    - by Margaret
    Hey all, we just got two new servers running Windows Server 2008. The intent is to make the machines pretty much identical, copying the content of the master to the slave on a nightly basis, so that if anything fails, the second copy can stand in immediately. It doesn't need to be up-to-the-minute mirroring, though I suppose that wouldn't hurt if performance is not affected. The two machines will, amongst other things, each run an instance of SQL Server 2008. The aim is to duplicate the databases on the master down to the slave on a nightly basis. Unless I'm misunderstanding, the mirror database in a mirrored pair requires the principal to be present to work correctly; I'm hoping for some solution where we have a second machine that can be up and running with minimal downtime if the first one falls over. Am I misunderstanding mirroring? Is that the best way to do things, or should I use some other mechanism? If so, what?
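
    A minimal sketch of the nightly backup/restore alternative to mirroring (server, share, and database names are hypothetical; the restore needs exclusive access to the slave copy):

        rem On the master, scheduled nightly:
        sqlcmd -S MASTER1 -Q "BACKUP DATABASE AppDB TO DISK = '\\SLAVE1\backups\AppDB.bak' WITH INIT"
        rem Then on the slave:
        sqlcmd -S SLAVE1 -Q "RESTORE DATABASE AppDB FROM DISK = 'D:\backups\AppDB.bak' WITH REPLACE"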

  • How to retain background colors when pasting between documents in Excel

    - by Iain Fraser
    I have a script that programmatically generates Excel spreadsheets - cleaning up ugly reports that are given to us by another organisation. For interest's sake: I'm using PHPExcel to generate the "clean" reports. We get these reports every week for an event that happens every couple of months. The reports contain a list of attendees along with a group ID that lets us know which attendees belong together. To help the event organisers, I've taken the event ID and generated a unique colour code (based on the hash of the event ID, truncated to 6 characters). This unique colour code is set as the background colour of a cell in each row, which helps organisers quickly identify group members visually. Trouble is, when the organisers copy the rows from the week's report into the master report (which contains all attendees, not just the ones that signed up this week), all the colour codes snap to the master template's colour palette. Thank you very much for your time. All the best, Iain
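
    A sketch of the colour-code derivation described above (the event ID "EVT42" is made up): truncating a hash to 6 hex digits yields an RGB value directly:

        echo -n "EVT42" | md5sum | cut -c1-6    # gives something like "a3f09c"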

  • Clonezilla , NLite and PCs with different hardware specs..

    - by r2b2
    Hi, I used Clonezilla to create and restore images from a master computer to other workstations with the same specs. But the problem now is that there are new computers whose hardware specs differ from the ones I maintain (mostly the video card is different). Is it possible to create a customized Windows XP installer using nLite and integrate all potential video-card and motherboard drivers? If I then use this nLite ISO to install to a master computer, which I will later clone and restore to other workstations, will Windows XP still pick up the correct driver set? During XP installation, does the installer transfer all the drivers to the computer's hard disk? Thanks! Ryan

  • Postfix TLS issue

    - by HTF
    I'm trying to enable TLS on Postfix but the daemon is crashing:

        Sep 16 16:00:38 core postfix/master[1689]: warning: process /usr/libexec/postfix/smtpd pid 1694 killed by signal 11
        Sep 16 16:00:38 core postfix/master[1689]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    CentOS 6.3 x86_64.

        # postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        disable_vrfy_command = yes
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        local_recipient_maps =
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        mydomain = domain.com
        myhostname = mail.domain.com
        mynetworks = 127.0.0.0/8
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relay_domains =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_tls_session_cache_database = btree:/var/lib/postfix/smtpd_tls_cache.db
        smtp_use_tls = yes
        smtpd_delay_reject = yes
        smtpd_error_sleep_time = 1s
        smtpd_hard_error_limit = 20
        smtpd_helo_required = yes
        smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_hostname, reject_invalid_hostname, permit
        smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_pipelining, reject_non_fqdn_recipient, reject_unknown_recipient_domain, reject_invalid_hostname, reject_non_fqdn_hostname, reject_non_fqdn_sender, reject_unknown_sender_domain, reject_unauth_destination reject_rbl_client cbl.abuseat.org, reject_rbl_client bl.spamcop.net, permit
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions = permit_mynetworks, reject_non_fqdn_sender, reject_unknown_sender_domain, permit
        smtpd_soft_error_limit = 10
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550
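
    A hedged debugging sketch for the signal-11 crash (not from the original post): a bad or unreadable keypair is a common reason for smtpd dying at startup, so verify the TLS files parse before digging deeper:

        postfix check
        openssl x509 -in /etc/postfix/ssl/smtpd.crt -noout -subject -dates
        openssl rsa  -in /etc/postfix/ssl/smtpd.key -noout -check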

  • Win2008 DC in a Windows 2000 domain: can I keep the old DC?

    - by gravyface
    We will be putting a new Windows 2008 SE server into a single-domain network with two domain controllers, both running Windows 2000 Server. The functional level of the domain is mixed mode/2000. Until a second 2008 DC can be purchased, I'd like to leave the current Win2k operations-master DC as a backup DC, as the other member servers running 2003 have either accounting/SQL or Exchange on them. Eventually all the W2k servers will be decommissioned, but until then I need another DC for redundancy. Following the standard process for adding a new DC, can I leave the old operations-master DC (or the other backup DC) running after I transfer the FSMO roles to the new server? Will this cause any issues?
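
    Once the roles have been transferred, a quick check from any DC confirms which server now holds them (a hedged aside, not from the original post):

        netdom query fsmo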

  • distributed, fault-tolerant network block device

    - by gucki
    I'm looking for a distributed, fault-tolerant network storage system which exposes block devices (not filesystems) on the clients:

      - A client's block device should write simultaneously to several storage nodes
      - A client's block device should not fail as long as not all storage nodes backing it went down
      - The master should automatically redistribute the storage nodes' data when a node fails or gets added/removed
      - A single master (which is for metadata only) is fine

    So ideally the architecture would be very similar to MooseFS (http://www.moosefs.org/), but instead of exposing a real filesystem mounted using a FUSE client it would expose block devices on the clients. I know of iSCSI and DRBD, but neither seems to offer what I'm looking for. Or am I missing something?

  • How to clear a zone from a broken Bind/Named server

    - by Cerin
    I tried adding a new zone for "mydomain4.com" to my named DNS server. However, when I went to restart it, I received the unhelpful error message:

        Error in named configuration:
        zone mydomain4.com/IN: loaded serial 3
        zone mydomain3.com/IN: loaded serial 2
        zone mydomain2.com/IN: loaded serial 2
        zone mydomain1.com/IN: loaded serial 2
        zone mydomain0.com/IN: loaded serial 6
        zone localhost.localdomain/IN: loaded serial 0
        zone localhost/IN: loaded serial 0
        zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
        zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
        zone 0.in-addr.arpa/IN: loaded serial 0
        zone mydomain/IN: loaded serial 2010092201
        dns_rdata_fromtext: db.10.157.10:27: near '*.mydomain4.com.': bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: loading from master file db.10.157.10 failed: bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: not loaded due to errors.
        _default/10.157.10.in-addr.arpa/IN: bad name (check-names)

    I'm confused by this, since I thought I created the new zone identically to how I created the other four zones. However, since I need this DNS server up, I tried deleting the new zone file at /var/named/chroot/var/named/mydomain4.com.db. Upon trying to restart again, I received a new unhelpful error:

        Error in named configuration:
        zone mydomain4.com/IN: loading from master file mydomain4.com.db failed: file not found
        zone mydomain4.com/IN: not loaded due to errors.
        _default/mydomain4.com./IN: file not found
        zone mydomain3.com/IN: loaded serial 2
        zone mydomain2.com/IN: loaded serial 2
        zone mydomain1.com/IN: loaded serial 2
        zone mydomain0.com/IN: loaded serial 6
        zone localhost.localdomain/IN: loaded serial 0
        zone localhost/IN: loaded serial 0
        zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
        zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
        zone 0.in-addr.arpa/IN: loaded serial 0
        zone mydomain/IN: loaded serial 2010092201
        dns_rdata_fromtext: db.10.157.10:27: near '*.mydomain4.com.': bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: loading from master file db.10.157.10 failed: bad name (check-names)
        zone 10.157.10.in-addr.arpa/IN: not loaded due to errors.
        _default/10.157.10.in-addr.arpa/IN: bad name (check-names)

    Obviously, named still thinks the zone file is being used, but I can't find where. I've tried:

        grep -lir "mydomain4" /

    but it doesn't find any files containing that text. How do I purge this domain from named's configs? Also, how do I figure out what caused the original error?
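
    A hedged reading of the errors (not from the original post): the check-names failure points at line 27 of the reverse zone file db.10.157.10 - a '*.mydomain4.com.' wildcard is not a valid hostname there - and the "file not found" error means the forward zone is still declared in the named.conf inside the chroot. A sketch of where to look (paths assume the chroot layout shown in the question):

        grep -rn "mydomain4" /var/named/chroot/etc /var/named/chroot/var/named
        sed -n '27p' /var/named/chroot/var/named/db.10.157.10
        named-checkconf -t /var/named/chroot /etc/named.conf    # re-validate after editing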

  • Ubuntu 11.04 and OpenLDAP - where is the config?

    - by Tom SKelley
    I've been asked to set up a multi-master LDAP environment on Ubuntu 11.04 instead of a single master server, so I cloned the master server and recreated it as two VMs. I am trying to follow the instructions in the OpenLDAP documentation (http://www.openldap.org/doc/admin24/replication.html), which talk about modifying the cn=config tree within LDAP. The subdirectory tree appears to be there at /etc/ldap/slapd.d/, and a slapcat -b cn=config dumps a load of config information. But when I try to connect using a browser and the admin bind credentials:

        ldapsearch -D '<adminDN>' -w <password> -b 'cn=config'

    I get:

        # extended LDIF
        #
        # LDAPv3
        # base <> (default) with scope subtree
        # filter: (objectclass=*)
        # requesting: ALL
        #

        # search result
        search: 2
        result: 32 No such object

    I don't see the config context when I connect via an LDAP browser either. I'm sure I'm missing something, but I can't see what it is!
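
    A hedged sketch of the usual access path (not from the original post): on Ubuntu's packaged slapd, cn=config is typically readable only by the local root identity via SASL EXTERNAL, not by the directory's admin DN:

        sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config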

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters, with one puppet master acting as a CA; clients are able to get a certificate from this CA server but use their designated puppet master to get their manifests (see this question for more info: multiple puppet masters). However, there are a couple of things I have had to do to get this working correctly, and I have an error, which I'll get to.

    First of all, to get inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 to the PM2 CA directory. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile & SSLCARevocationFile sections of my rack.conf VH file on PM2, and inventory started to work. Does this sound an acceptable way to do things?

    Secondly, in the puppet.conf file I am setting the designated PM server for each client. Unless there is a better way, this is how it'll work in my production setup: PC1 will talk to PM1 and PC2 will talk to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and I then sign the cert on the CA on PM1. When I then do a puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get this error:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change the PC2 puppet.conf file to specify server = PM1 and then rerun puppet agent --test, I do not get any errors. I can then revert the change in puppet.conf back to server = PM2 and everything seems to run normally. Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how can I fix this error? Cheers, Oli
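
    A hedged sketch of the ProxyPassMatch rule the question asks about, for the Apache vhost on PM2 (it assumes mod_proxy/mod_ssl are available and that the CA on PM1 listens on 8140 - both assumptions, not from the original post):

        # Forward CRL lookups to the CA master instead of serving them locally
        SSLProxyEngine on
        ProxyPassMatch ^/([^/]+/certificate_revocation_list/.*)$ https://puppet-master1.test.net:8140/$1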

  • I can't remove a background image in Powerpoint 2007

    - by computergeek
    Hello everyone! I have a PowerPoint 2007 document with an annoying background graphic in one slide. I know about the slide master, but this little bugger is not something that I can delete from the slide master. Argh!!!! The graphic is a little pumpkin at the bottom of the slide and my company logo at the top. Here is a copy of the document: http://www.violetonresumes.com/HelpMeRemoveThisGraphic.pptx I've already lost two hours of my life on this one. Any help would be greatly appreciated. Cheers

  • Why does the EFI shell not detect my Windows DVD?

    - by Oliver Salzburg
    I'm currently looking into (U)EFI for the first time and am already really confused. I insert the Windows Server 2008 R2 Enterprise disc into the DVD-ROM and boot into the EFI shell. The shell automatically lists all detected devices, which are:

        blk0 :CDRom - Alias (null)
              Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)/CDROM(Entry0)
        blk1 :BlockDevice - Alias (null)
              Acpi(PNP0A03,0)/Pci(1F|2)/Ata(Primary,Master)

    To my understanding, it should have already detected the filesystem on blk0 and mounted it as fs0. Why is that not happening? If I insert a USB drive, it gets mounted just fine. The board is an Intel S5520HC, in case that makes a difference.
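
    A hedged first step in the EFI shell (not from the original post): force a re-scan of the block devices after the disc is inserted, and see whether a filesystem mapping appears:

        Shell> map -r      # rebuild the mapping table
        Shell> map         # fs0: should now be listed if the firmware can read the disc
        Shell> fs0: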

  • Trying to create a git repo that does an automatic checkout everytime someone updates origin

    - by Dane Larsen
    Basically, I have a server with a git repo 'origin'. I'm trying to have another repo auto-pull from origin every time someone pushes code to it. I've been using the hooks in origin, specifically post-receive. So far, my post-receive looks something like this:

        #!/bin/sh
        GIT_DIR=/home/<user>/<test_repo>
        git pull origin master

    But when I push to origin from another computer, I get the error:

        remote: fatal: Not a git repository: '/home/<user>/<test_repo>'

    However, test_repo most definitely is a git repo. I can cd into it and run 'git pull origin master' and it works fine. Is there an easier way to do what I'm trying to do? If not, what am I doing wrong with this approach? Thanks in advance. Edit, to clarify: the repo is a website in progress, and I'd like to have a version of it available at all times that is fully up to date.
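
    A hedged sketch of the usual fix: inside a hook, GIT_DIR is already set in the environment, and pointing it at the work tree rather than its .git directory produces exactly this error. Clearing it and changing directory lets git discover the repo normally (placeholders kept from the question):

        #!/bin/sh
        # post-receive: update the live checkout after each push
        unset GIT_DIR
        cd /home/<user>/<test_repo> || exit 1
        git pull origin master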

  • BIND returns serverfail when querying for its authoriative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had some trouble coming up with a proper one. I have a little home server set up for internet sharing, samba, basic http, a DLNA media server and what not, and I happened to have a domain at hand, so I thought: why not direct it to this computer? I have BIND 9.8.0 installed and - afaik - configured properly. For a few days the public view did not work, and I really did not care, since the local view worked. But now suddenly even the local view fails. If I query the nameserver for anything in my domain, it returns the following error:

        $ nslookup andromeda.dafaces.com
        ;; Got SERVFAIL reply from ::1, trying next server
        ;; Got SERVFAIL reply from ::1, trying next server
        Server:         127.0.0.1
        Address:        127.0.0.1#53

        ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL

    Also, the public view points to the old IP address of the domain, probably because of the same error. Some information about the system:

        $ uname -a
        Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux
        $ named -v
        BIND 9.8.0

    And the named.conf file:

        # cat /etc/named.conf
        //
        // /etc/named.conf
        //
        include "/etc/rndc.key";
        #controls {
        #       inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; };
        #};
        options {
                directory "/var/named";
                pid-file "/var/run/named/named.pid";
                auth-nxdomain yes;
                datasize default;
                // Uncomment these to enable IPv6 connections support
                // IPv4 will still work:
                listen-on-v6 { any; };
                listen-on { any; };
                // Add this for no IPv4:
                // listen-on { none; };
                // Default security settings.
                // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; };
                // allow-recursion { any; };
                allow-query { any; };
                allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; };
                allow-update { key "dnskulcs"; };
                version none;
                hostname none;
                server-id none;
                zone-statistics yes;
                forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; };
        };
        view "local" {
                match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; };
                recursion yes;
                zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; };
                zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; };
                zone "." IN { type hint; file "root.hint"; };
                zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; };
                zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; };
        };
        view "public" {
                match-clients { any; };
                recursion no;
                zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; };
        };
        //zone "example.org" IN {
        //      type slave;
        //      file "example.zone";
        //      masters {
        //              192.168.1.100;
        //      };
        //      allow-query { any; };
        //      allow-transfer { any; };
        //};
        logging {
                channel xfer-log {
                        file "/var/log/named.log";
                        print-category yes;
                        print-severity yes;
                        print-time yes;
                        severity info;
                };
                category xfer-in { xfer-log; };
                category xfer-out { xfer-log; };
                category notify { xfer-log; };
        };

    All help would be highly appreciated!

    EDIT: Zone files:

        # cat /var/named/internal/dafaces.com.fw
        $ORIGIN .
        $TTL 3600       ; 1 hour
        dafaces.com     IN SOA  tressis.dafaces.com. postmaster.dafaces.com. (
                                2011032201 ; serial
                                28800      ; refresh (8 hours)
                                7200       ; retry (2 hours)
                                2419200    ; expire (4 weeks)
                                3600       ; minimum (1 hour)
                                )
                        NS      tressis.dafaces.com.
                        A       192.168.1.1
                        MX      10 mail.dafaces.com.
        $ORIGIN _tcp.dafaces.com.
        _http           SRV     0 5 80 www.dafaces.com.
        _ssh            SRV     0 5 22 tressis.dafaces.com.
        $ORIGIN dafaces.com.
        acrisius        A       192.168.1.230
        andromeda       A       192.168.1.7
        andromeda-win7  CNAME   andromeda
        aspasia         A       192.168.1.233
        athena          A       192.168.1.232
        callisto        A       192.168.1.102
        db              A       192.168.1.1
        management      A       192.168.1.1   ; web management for the router functions
        haley           A       192.168.1.5
        hoth            A       192.168.1.101
        mail            A       192.168.1.1
        satelite        A       192.168.1.20
        sony-player     A       192.168.1.103
                        TXT     "310f16de2d2712dfc4ae6e5c54f60f828e"
        torrent         A       192.168.1.1
        tracker         A       192.168.1.1
        tressis         A       192.168.1.1
        www             A       192.168.1.1
        zeus            A       192.168.1.231

    and

        # cat /var/named/external/dafaces.com.fw
        $ORIGIN .
        $TTL 3600
        dafaces.com     IN SOA  ns.dafaces.com. postmaster.dafaces.com. (
                                2011032405 ; serial
                                28800      ; refresh
                                7200       ; retry
                                2419200    ; expire
                                3600       ; minimum
                                )
                        NS      ns.dafaces.com.
                        NS      ns0.xname.org.
                        NS      ns1.xname.org.
                        NS      ns2.xname.org.
                        A       89.135.129.37
                        MX      10 mail.dafaces.com.
        $ORIGIN dafaces.com.
        ; Szolgaltatasok (services)
        _ssh._tcp       SRV     0 5 22 tressis
        _http._tcp      SRV     0 5 80 www
        ns              A       89.135.129.37
        hoth            A       89.135.129.37
        www             A       89.135.129.37
        mail            A       89.135.129.37
        db              A       89.135.129.37
        torrent         A       89.135.129.37
        tracker         A       89.135.129.37

    Edit: Ohh, hell, I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility that the public IPv4 address will change (but thank god, it is a very rare case), so I update the external IP address in the zone file daily with a shell script:

        # cat /etc/cron.daily/dnsupdate
        #!/bin/sh
        FILE="/var/named/external/dafaces.com.fw"
        SERIAL=$(date +%Y%m%d05)
        PUBLIC_IP=$(ifconfig internet | sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}")
        cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona
        mv /tmp/ujzona $FILE
        /etc/rc.d/named reload
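
    A hedged place to start (not from the original post): validate both zone files directly, and query with a trailing dot so the resolver's search domain is not appended (the "andromeda.dafaces.com.dafaces.com" in the output shows it being appended after the first SERVFAIL). The daily sed rewrite is also worth suspecting, since it replaces every dotted quad in the external file and would corrupt it if PUBLIC_IP ever came back empty:

        named-checkconf /etc/named.conf
        named-checkzone dafaces.com /var/named/internal/dafaces.com.fw
        named-checkzone dafaces.com /var/named/external/dafaces.com.fw
        nslookup andromeda.dafaces.com.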

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: separate zones by query source network and return different records for LAN clients compared to WAN clients.

    I've implemented this at home on a small Alix router with BIND 9.4: one view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records; querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN, as it only existed in the WAN view.

    Now I'm trying to re-implement this on a larger scale, and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list, and they suggest I copy all my views into my LAN view. I'm hoping someone here has a better solution, because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible.

    My configuration at home resembles this:

        acl lanClients {
            192.168.22.0/24;
            127.0.0.1;
        };
        view "intranet" {
            match-clients { lanClients; };
            recursion yes;
            notify no;
            // Standard zones
            zone "." { type hint; file "etc/root.hint"; };
            zone "domain1.tld" { type master; file "intranet/domain1.tld"; };
        };
        view "internet" {
            match-clients { !localnets; any; };
            recursion no;
            allow-transfer { slaveDNS; };
            include "master.zones";
        };

    Requests from the LAN for domain1.tld give local records; requests from the WAN give remote records. This works fine both at home and in my new BIND 9.7 setup on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records for domains in master.zones without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with BIND 9.7, I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration as at home for BIND 9.7.

  • How can I delete, break, or otherwise convert cross references to simple text in microsoft word 2013

    - by Mr Purple
    Cross-referencing figure and table captions is useful while editing a document, but the references can become confused when copying and pasting between large documents. I need to pass my document to a colleague who will collate it with others, and he has requested that I remove or delete any cross-referencing, so that my "correct" cross-references do not interfere (or get interfered with) by any other cross-references that may be in his master collated document. My document will be cut and pasted into the master, and no further complicated instructions after that point will be tolerated by my colleague. Is there a simple way to convert my cross-references to simple text? I am using Microsoft Word 2013.
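
    A hedged sketch of the standard approach: cross-references are Word fields, and fields can be unlinked into static text. Note this freezes every field in the selection (TOC, page numbers, and so on), not only cross-references:

        Ctrl+A            select the whole document
        Ctrl+Shift+F9     unlink the selected fields to plain text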

  • How to configure autofs5 timeout on per-filesystem basis?

    - by Norman Ramsey
    Because of a show-stopping bug in Debian autofs 4, I just upgraded to autofs 5. It is not honoring the timeout option in my auto.master file:

        /var/autofs/removable /etc/auto.removable --timeout=2

    I use this map for thumb drives and so on; I don't want a general default timeout of 2 seconds. I did some digging, and although the --timeout option worked in autofs 4, and it appears in some examples on the web, it is not actually sanctioned (or even mentioned) in the documentation for the auto.master file. So I don't feel I can report the problem as a bug. How can I get autofs 5 to time out after 2 seconds only on designated filesystems?

    Update: I am using a Debian-packaged autofs 5, version 5.0.4-3.2.

  • How do I create an ISO image from a directory structure on CentOS?

    - by tom smith
    I'm trying to figure out the exact mkisofs command to create an ISO with the following directory and file structure. I've tried different commands, but when I mount the ISO that is created, the directory tree has not been reproduced. The initial directory tree is:

        # master.iso:
        mount -o loop /apps/vmware/master.iso /mnt/vmtest
        ls /mnt/vmtest
        isolinux  ks.cfg  upgra32  upgra64  upgrade.sh
        ls /mnt/vmtest/isolinux
        boot.cat  initrd.img  isolinux.bin  isolinux.cfg  vmlinuz

    I've used different variations of the following mkisofs command without success:

        mkisofs -o '/foo/test.iso' -b 'isolinux.bin' -c 'boot.cat' -no-emul-boot -boot-load-size 4 -boot-info-table 'isolinux'

    How do I make an ISO that captures a directory's exact structure?
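
    A hedged sketch of a command that does reproduce the tree (the staging directory /tmp/master-root is made up): pass the root of the tree as the final argument, and reference the boot files by their path inside that tree:

        mkdir /tmp/master-root && cp -a /mnt/vmtest/. /tmp/master-root/
        mkisofs -o /foo/test.iso -R -J \
            -b isolinux/isolinux.bin -c isolinux/boot.cat \
            -no-emul-boot -boot-load-size 4 -boot-info-table \
            /tmp/master-root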

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to built-in Hiera. So this is what I tried - an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working):

        # /etc/puppet/hiera.yaml
        :backends:
          - yaml
        :hierarchy:
          - common
        :yaml:
          :datadir: /etc/puppet/hieradata

        # /etc/puppet/hieradata/common.yaml
        test::param: value

        # /etc/puppet/modules/test/manifests/init.pp
        class test ($param) {
          notice($param)
        }

        # /etc/puppet/manifests/site.pp
        include test

    If I apply it directly, it's fine:

        $ puppet apply /etc/puppet/manifests/site.pp
        Scope(Class[Test]): value

    If I go through the puppet master, it's not fine:

        $ puppet agent --test
        Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename>

    What am I missing?

    EDIT: I just left the office, but a thought struck me: I should probably restart the puppet master so it can see the new hiera.yaml. I'll try that on Monday; in the meantime, if anyone figures out some not-it problem, I'd appreciate it :)
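
    The EDIT's own hypothesis is the usual culprit: the master reads hiera.yaml once at startup, so it must be restarted after changes (a hedged sketch; the service name varies by platform):

        service puppetmaster restart    # or restart the Rack/Passenger app serving the master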

  • "Slave" user accounts in GNU/Linux

    - by Vi
    How can I make one user account be like root for some other user account, e.g. able to read, write, and chmod all of its files, chown from that account to the master and back, kill/ptrace all of its processes, and do everything else root can, but limited only to that particular slave account? Right now I simulate this by allowing the "master" user to "sudo -u slaveuser" and setting setfacl -dRm u:masteruser:rwx ~slaveuser. It is useful because I run most desktop programs under separate user accounts, but sometimes need to move files between them. If it requires some simple kernel patch, that is OK.

  • Bridging wireless and wired networks on Linux box

    - by nixnotwin
    I have the following setup:

        modem + router - - - - - Ubuntu box in master mode ......... wireless devices

    The Ubuntu machine connects to the Internet over the wired network. I have dhcp3-server, masquerading, and a wireless card in master mode on the Ubuntu box. The issue is that Ubuntu connects to the router behind NAT, and the wireless devices connect to the Ubuntu box behind a NAT too (though a different one), so my wireless devices are behind two NAT networks. The solution I am looking for: Ubuntu should forward DHCP requests to the modem+router, and should act as a switch or a bridge that allows wireless devices to connect to the wired network, so that the modem+router acts as the main router.
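
    A hedged sketch of the bridge being described (interface names eth0/wlan0 are assumptions; a wireless interface can normally only be bridged while in AP/master mode). With the bridge in place, dhcp3-server and masquerading on the Ubuntu box would be switched off so the modem+router serves DHCP directly:

        brctl addbr br0
        brctl addif br0 eth0     # wired side, towards the modem+router
        brctl addif br0 wlan0    # wireless side, in master mode
        ip link set br0 up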

  • Problem with intel chipset 4 serie and centos dealing with dual head

    - by Antoine
    I have a Fujitsu Lifebook S7220, and for a while now I've been trying to configure it for dual-head use with CentOS 5.4 x86_64. Every time I try, the X server crashes. I have an Intel Mobile 4 Series chipset (GMA 4500MHD, if I recall correctly). When I run lspci -v I get this:

        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller])
                Subsystem: Fujitsu Limited. Unknown device 1451
                Flags: bus master, fast devsel, latency 0, IRQ 177
                Memory at f2000000 (64-bit, non-prefetchable) [size=4M]
                Memory at d0000000 (64-bit, prefetchable) [size=256M]
                I/O ports at 1800 [size=8]
                Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable-
                Capabilities: [d0] Power Management version 3

        00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07)
                Subsystem: Fujitsu Limited. Unknown device 1451
                Flags: bus master, fast devsel, latency 0
                Memory at f2400000 (64-bit, non-prefetchable) [size=1M]
                Capabilities: [d0] Power Management version 3

    My question is: has anyone already had this problem, and how did you fix it? Thank you for your answer!
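
    A hedged sketch of enabling the second head at runtime with the intel driver (output names vary - check "xrandr -q" first; VGA/LVDS are assumptions):

        xrandr -q                                    # list the outputs the driver sees
        xrandr --output VGA --auto --right-of LVDS   # extend the desktop onto the external monitor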

  • Unresponsive virtual OS

    - by confusedGeek
    Hopefully someone has a suggestion on how to resolve this.

    Configuration:
      - Host: Win 2003 R2 with Virtual Server 2005 R2
      - Virtual1: Win 2003 R2 with SQL Server 2005
      - Virtual2: Win 2003 R2 with WSS 3.0

    Situation: This past weekend the power went out and took down the servers (no UPS - it's a desktop standing in as a dev testing server). Since the servers went down, Virtual2, after running WSS fairly heavily for an hour or two, becomes unresponsive via HTTP. If I log in via Virtual Server's remote control, I don't get anything beyond a background screen. The CPU counter on the Virtual Server master status page shows that it isn't doing anything. The only thing I have been able to do is turn off Virtual2, which loses any state changes. Shutdown commands issued from the Virtual Server master status page are ignored. After restarting Virtual2, the event logs and application logs don't indicate what caused the problem. Anyone have an idea as to how to repair the OS, or what the problem might be? Thanks ahead of time.
