Search Results

Search found 5011 results on 201 pages for 'grand master t'.

Page 67/201 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • BIND returns SERVFAIL when querying for its authoritative domain

    - by estol
    Hi there Serverfault folks! First of all: sorry about the title, I had some problem coming up with the proper title. I have a little home server set up, for internet sharing, samba, basic http, dlna mediaserver and what not, and I happend to have a domain at hand, so I thought why not direct it to this computer? I have a BIND 9.8.0 installed, and - afaik - configured it properly. For a few days, the public view did not worked, and I really did not cared, since the local view worked. But now suddenly, even the local view fails. If I try to query the nameserver for anything in my domain, it returns the following error: $ nslookup andromeda.dafaces.com ;; Got SERVFAIL reply from ::1, trying next server ;; Got SERVFAIL reply from ::1, trying next server Server: 127.0.0.1 Address: 127.0.0.1#53 ** server can't find andromeda.dafaces.com.dafaces.com: SERVFAIL Also, the public view points to the old ip address of the domain, probably because of the same error. Some information about the system: $ uname -a Linux tressis 2.6.37-ARCH #1 SMP PREEMPT Tue Mar 15 09:21:17 CET 2011 x86_64 AMD Athlon(tm) 64 X2 Dual Core Processor 5000+ AuthenticAMD GNU/Linux $ named -v BIND 9.8.0 And the named.conf file: # cat /etc/named.conf // // /etc/named.conf // include "/etc/rndc.key"; #controls { # inet 127.0.0.1 allow {localhost; } keys { "dnskulcs"; }; #}; options { directory "/var/named"; pid-file "/var/run/named/named.pid"; auth-nxdomain yes; datasize default; // Uncomment these to enable IPv6 connections support // IPv4 will still work: listen-on-v6 { any; }; listen-on { any; }; // Add this for no IPv4: // listen-on { none; }; // Default security settings. // allow-recursion { 127.0.0.1; ::1; 192.168.1.0/24; }; // allow-recursion { any; }; allow-query { any; }; allow-transfer { 127.0.0.1; ::1; 92.243.14.172; 87.98.164.164; 88.191.64.64; }; allow-update { key "dnskulcs"; }; version none; hostname none; server-id none; zone-statistics yes; forwarders { 213.46.246.53; 213.26.246.54; 8.8.8.8; 8.8.4.4; 192.188.242.65; 193.227.196.3; 2001:470:20::2; }; }; view "local" { match-clients { 192.168.1.0/24; 127.0.0.1; ::1; fec0:0:0:ffff::/64; }; recursion yes; zone "localhost" IN { type master; file "localhost.zone"; allow-transfer { any; }; }; zone "0.0.127.in-addr.arpa" IN { type master; file "127.0.0.zone"; allow-transfer { any; }; }; zone "." IN { type hint; file "root.hint"; }; zone "dafaces.com" IN { type master; file "internal/dafaces.com.fw"; allow-update { key "dnskulcs"; }; }; zone "1.168.192.in-addr.arpa" IN { type master; file "internal/dafaces.com.rev"; allow-update { key "dnskulcs"; }; }; }; view "public" { match-clients { any;}; recursion no; zone "dafaces.com" IN { type master; file "external/dafaces.com.fw"; allow-transfer { 87.98.164.164; 195.234.42.1; 88.191.64.64; }; }; }; //zone "example.org" IN { // type slave; // file "example.zone"; // masters { // 192.168.1.100; // }; // allow-query { any; }; // allow-transfer { any; }; //}; logging { channel xfer-log { file "/var/log/named.log"; print-category yes; print-severity yes; print-time yes; severity info; }; category xfer-in { xfer-log; }; category xfer-out { xfer-log; }; category notify { xfer-log; }; }; All help would be highly appreciated! EDIT: Zone files: # cat /var/named/internal/dafaces.com.fw $ORIGIN . $TTL 3600 ; 1 hour dafaces.com IN SOA tressis.dafaces.com. postmaster.dafaces.com. ( 2011032201 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 2419200 ; expire (4 weeks) 3600 ; minimum (1 hour) ) NS tressis.dafaces.com. 
A 192.168.1.1 MX 10 mail.dafaces.com. $ORIGIN _tcp.dafaces.com. _http SRV 0 5 80 www.dafaces.com. _ssh SRV 0 5 22 tressis.dafaces.com. $ORIGIN dafaces.com. acrisius A 192.168.1.230 andromeda A 192.168.1.7 andromeda-win7 CNAME andromeda aspasia A 192.168.1.233 athena A 192.168.1.232 callisto A 192.168.1.102 db A 192.168.1.1 management A 192.168.1.1 ; web management for the router functions haley A 192.168.1.5 hoth A 192.168.1.101 mail A 192.168.1.1 satelite A 192.168.1.20 sony-player A 192.168.1.103 TXT "310f16de2d2712dfc4ae6e5c54f60f828e" torrent A 192.168.1.1 tracker A 192.168.1.1 tressis A 192.168.1.1 www A 192.168.1.1 zeus A 192.168.1.231 and # cat /var/named/external/dafaces.com.fw $ORIGIN . $TTL 3600 dafaces.com IN SOA ns.dafaces.com. postmaster.dafaces.com. ( 2011032405; serial 28800; refresh 7200; retry 2419200; expire 3600; minimum ) NS ns.dafaces.com. NS ns0.xname.org. NS ns1.xname.org. NS ns2.xname.org. A 89.135.129.37 MX 10 mail.dafaces.com. $ORIGIN dafaces.com. ;Szolgaltatasok _ssh._tcp SRV 0 5 22 tressis _http._tcp SRV 0 5 80 www ns A 89.135.129.37 hoth A 89.135.129.37 www A 89.135.129.37 mail A 89.135.129.37 db A 89.135.129.37 torrent A 89.135.129.37 tracker A 89.135.129.37 Edit: Ohh, hell I almost forgot. Since the node is connected to the internet via a residential connection, there is a possibility, that the public ipv4 address will change(but thank god, it is a very rare case), so I daily update the external IP address in the zone file with a shellscript: # cat /etc/cron.daily/dnsupdate #!/bin/sh FILE="/var/named/external/dafaces.com.fw" SERIAL=$(date +%Y%m%d05) PUBLIC_IP=$(ifconfig internet |sed -n "/inet addr:.*255.255.255.255/{s/.*inet addr://; s/ .*//; p}") cat $FILE | sed --posix 's/^.* serial$/\t\t\t\t\t'$SERIAL'; serial/' | sed --posix 's/[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*/'$PUBLIC_IP'/' > /tmp/ujzona mv /tmp/ujzona $FILE /etc/rc.d/named reload
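    As a starting point, a rough sketch of sanity checks worth running on the BIND host (zone names and file paths are taken from the question above; anything else is an assumption):
      named-checkconf -z /etc/named.conf                              # parse named.conf and load every zone in every view
      named-checkzone dafaces.com /var/named/internal/dafaces.com.fw
      named-checkzone dafaces.com /var/named/external/dafaces.com.fw
      dig @127.0.0.1 andromeda.dafaces.com A +norecurse               # query the local view directly
      tail -n 50 /var/log/named.log                                   # look for the reason behind the SERVFAIL
    A zone that fails to load (for example because the daily sed rewrite mangled a record or the serial) is a common cause of SERVFAIL on an otherwise authoritative name.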

    Read the article

  • How can I delete, break, or otherwise convert cross references to simple text in Microsoft Word 2013?

    - by Mr Purple
    Cross-referencing figure and table captions is useful while editing a document, but the references can become confused when copying and pasting between large documents. I need to pass my document to a colleague who will collate it with others, and he has requested that I remove or delete any cross referencing so that my "correct" cross references do not interfere with, or get interfered with by, any other cross references that may be in my colleague's master collated document. My document will be cut and pasted into the master, and no further complicated instructions after that point will be tolerated by my colleague. Is there a simple way to convert my cross references to simple text? I am using Microsoft Word 2013.

    Read the article

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: Separate zones by query source network and return different records for LAN clients compared to WAN clients. I've implemented this at home on a small alix router with Bind 9.4. One view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records. Querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN as it only existed in the WAN view. Now I'm trying to re-implement this on a larger scale and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list and they suggest I copy all my views into my LAN view. I'm hoping someone here has a better solution because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible. My configuration at home resembles this. acl lanClients { 192.168.22.0/24; 127.0.0.1; }; view "intranet" { match-clients { lanClients; }; recursion yes; notify no; // Standard zones // zone "." { type hint; file "etc/root.hint"; }; zone "domain1.tld" { type master; file "intranet/domain1.tld"; }; }; view "internet" { match-clients { !localnets; any; }; recursion no; allow-transfer { slaveDNS; }; include "master.zones"; }; Requests from the LAN for domain1.tld give local records, requests from the WAN give remote records. This works fine both at home and in my new Bind 9.7 on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records from domains in master.zones, without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with Bind 9.7 I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration for Bind 9.7.
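    One way to see which view actually answers is to pin the query's source address with dig, run on the BIND host itself (the first address comes from the question's lanClients ACL; the public address is a placeholder):
      dig @127.0.0.1 domain1.tld A +short -b 127.0.0.1                         # should match the "intranet" view
      dig @<server-public-ip> domain1.tld A +short -b <server-public-ip>       # should match the "internet" view
    That at least confirms whether the match-clients logic or the missing zone definitions are the problem.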

    Read the article

  • How to configure the autofs5 timeout on a per-filesystem basis?

    - by Norman Ramsey
    Because of a show-stopping bug in Debian autofs 4, I just upgraded to autofs5. It is not honoring the timeout option in my auto.master file: /var/autofs/removable /etc/auto.removable --timeout=2 I use this map for thumb drives and so on; I don't want a general default timeout of 2 seconds. I did some digging and although the --timeout option worked in autofs 4, and it appears in some examples on the Web, it is not actually sanctioned (or even mentioned) in the documentation for the auto.master file. So I don't feel I can report the problem as a bug. How can I get autofs5 to timeout after 2 seconds only on designated filesystems? Update: I am using a Debian-packaged autofs5, version 5.0.4-3.2.
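    A hedged sketch of what the per-map setup is meant to look like, plus a way to check what autofs actually parsed (the first map entry is from the question; the second is only an illustration):
      # /etc/auto.master
      #   /var/autofs/removable  /etc/auto.removable  --timeout=2
      #   /home                  /etc/auto.home       --timeout=600
      /etc/init.d/autofs restart
      automount -m        # --dumpmaps: lists each configured map so you can confirm the entry was read
    If the packaged 5.0.4 still ignores the per-entry --timeout, the TIMEOUT value in /etc/default/autofs only sets the global default, so this would be worth raising against the Debian package after all.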

    Read the article

  • How do I create an ISO image from a directory structure on CentOS?

    - by tom smith
    I'm trying to figure out the exact mkisofs cmd to create the ISO with the following directory and file structure. I've tried different commands, but when I mount the ISO that is created the directory tree has not been reproduced. The initial directory tree is: master.iso:: mount -o loop /apps/vmware/master.iso /mnt/vmtest ls /mnt/vmtest isolinux ks.cfg upgra32 upgra64 upgrade.sh ls /mnt/vmtest/isolinux boot.cat initrd.img isolinux.bin isolinux.cfg vmlinuz I've used different variations of the following mkisofs command without success: mkisofs -o '/foo/test.iso' -b 'isolinux.bin' -c 'boot.cat' -no-emul-boot -boot-load-size 4 -boot-info-table 'isolinux' How do I make an ISO that captures a directory's exact structure?
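    For what it's worth, a sketch that reproduces the tree: pass the source directory as the final argument and give the El Torito boot files as paths relative to that directory (the source path /apps/vmware/master/ is an assumption):
      mkisofs -o /foo/test.iso -R -J \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        /apps/vmware/master/
      mount -o loop /foo/test.iso /mnt/vmtest && ls -R /mnt/vmtest    # verify the structure
    The common mistake is pointing -b/-c at bare filenames while handing mkisofs only the isolinux subdirectory, which flattens the tree.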

    Read the article

  • Hiera + Puppet classes

    - by Amadan
    I'm trying to figure out Puppet (3.0) and how it relates to built-in Hiera. So this is what I tried, an extremely simple example (I'll make a more complex hierarchy when I manage to get the simple one working): # /etc/puppet/hiera.yaml :backends: - yaml :hierarchy: - common :yaml: :datadir: /etc/puppet/hieradata # /etc/puppet/hieradata/common.yaml test::param: value # /etc/puppet/modules/test/manifests/init.pp class test ($param) { notice($param) } # /etc/puppet/manifests/site.pp include test If I directly apply it, it's fine: $ puppet apply /etc/puppet/manifests/site.pp Scope(Class[Test]): value If I go through puppet master, it's not fine: $ puppet agent --test Could not retrieve catalog from remote server: Error 400 on SERVER: Must pass param to Class[Test] at /etc/puppet/manifests/site.pp:1 on node <nodename> What am I missing? EDIT: I just left the office but a thought struck me: I should probably restart puppet master so it can see the new hiera.conf. I'll try that on Monday; in the meantime, if anyone figures out some not-it problem, I'd appreciate it :)
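    A plausible quick check, assuming the paths from the question: the master only reads hiera.yaml at startup, so restart it and then confirm the lookup works the way the master would perform it:
      service puppetmaster restart        # or /etc/init.d/puppetmaster restart, depending on the init setup
      hiera test::param -c /etc/puppet/hiera.yaml
      puppet agent --test
    If the hiera command prints "value", the data side is fine and the restart was indeed the missing piece.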

    Read the article

  • "Slave" user accounts in GNU/Linux

    - by Vi
    How do I make one user account act like root for some other user account, e.g. be able to read, write and chmod all of its files, chown from that account to the master and back, kill/ptrace all of its processes, and do everything else root can, but limited only to that particular slave account? Right now I'm simulating this by allowing the "master" user to "sudo -u slaveuser" and setting setfacl -dRm u:masteruser:rwx ~slaveuser. It is useful as I run most desktop programs in separate user accounts, but need to move files between them sometimes. If it requires some simple kernel patch, that is OK.
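    A minimal sketch of that simulation, using the master/slave names from the question (the sudoers line goes in via visudo; the shared path is hypothetical and assumed readable by the slave):
      # visudo:  masteruser ALL=(slaveuser) NOPASSWD: ALL
      setfacl -Rm  u:masteruser:rwx ~slaveuser      # existing files
      setfacl -dRm u:masteruser:rwx ~slaveuser      # default ACL for files created later
      sudo -u slaveuser cp /home/masteruser/shared/file.txt ~slaveuser/    # file lands owned by slaveuser
      sudo -u slaveuser kill -TERM 12345            # signal one of the slave's processes
    Full root-like ptrace rights confined to one account would still need capabilities or an LSM/kernel change; the ACL-plus-sudo combination only covers files and process signalling.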

    Read the article

  • Bridging wireless and wired networks on Linux box

    - by nixnotwin
    I have the following setup: modem + router - - - - - Ubuntu box in master mode ......... wireless devices. The Ubuntu machine connects to the Internet over the wired network. I have dhcp3-server, masquerading, and a wireless card in master mode on the Ubuntu box. The issue is that Ubuntu connects to the router behind NAT, and the wireless devices connect to the Ubuntu box behind a NAT too (though a different one), so my wireless devices are behind two NAT networks. The solution I am looking for is that Ubuntu should forward DHCP requests to the modem+router, and Ubuntu should act as a switch or a bridge that allows wireless devices to connect to the wired network. So the modem+router should act as the main router.
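    A hedged sketch with bridge-utils, assuming eth0 is the wired LAN port and wlan0 is the AP (master-mode) interface; once bridged, the modem+router hands out DHCP leases to the wireless clients directly:
      apt-get install bridge-utils
      /etc/init.d/dhcp3-server stop       # the router will serve leases instead
      iptables -t nat -F                  # drop the masquerading rules for the wireless side
      brctl addbr br0
      brctl addif br0 eth0
      brctl addif br0 wlan0               # bridging a wireless interface works when it is in AP/master mode
      ifconfig eth0 0.0.0.0 up
      ifconfig wlan0 0.0.0.0 up
      dhclient br0                        # the Ubuntu box itself now gets its address from the router
    If hostapd is what provides master mode, the same effect can be had by setting bridge=br0 in hostapd.conf.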

    Read the article

  • Problem with Intel 4 Series chipset and CentOS dealing with dual head

    - by Antoine
    I have a Fujitsu LIFEBOOK S7220, and I've been trying for a while to configure it for dual head with CentOS 5.4 x86_64. Every time I try, the X server crashes... I've got an Intel Mobile 4 Series chipset (GMA 4500MHD, if I recall correctly!). When I do an lspci -v I get this: 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) (prog-if 00 [VGA controller]) Subsystem: Fujitsu Limited. Unknown device 1451 Flags: bus master, fast devsel, latency 0, IRQ 177 Memory at f2000000 (64-bit, non-prefetchable) [size=4M] Memory at d0000000 (64-bit, prefetchable) [size=256M] I/O ports at 1800 [size=8] Capabilities: [90] Message Signalled Interrupts: 64bit- Queue=0/0 Enable- Capabilities: [d0] Power Management version 3 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) Subsystem: Fujitsu Limited. Unknown device 1451 Flags: bus master, fast devsel, latency 0 Memory at f2400000 (64-bit, non-prefetchable) [size=1M] Capabilities: [d0] Power Management version 3 My question is: has anyone else had this problem, and how did you fix it? Thank you for your answer!

    Read the article

  • Unresponsive virtual OS

    - by confusedGeek
    Hopefully someone has a suggestion on how to resolve this. Configuration: Host: Win 2003 R2 w/Virtual Server 2005 R2; Virtual1: Win 2003 R2 w/SQL Server 2005; Virtual2: Win 2003 R2 w/WSS 3.0. Situation: This past weekend the power went out and took down the servers (no UPS, it's a desktop standing in as a dev testing server). Since the servers went down, the Virtual2 server becomes unresponsive via HTTP after running WSS fairly heavily for an hour or two. If I log in via Virtual Server's remote control I don't get anything beyond a background screen. The CPU counter on the Virtual Server master status shows that it isn't doing anything. The only thing I have been able to do is turn off Virtual2, which loses any state changes. Shutdown commands issued from the Virtual Server master status are ignored. After restarting Virtual2 the event logs and application logs don't indicate what caused the problem. Anyone have an idea as to how to repair the OS, or maybe what could be the problem? Thanks ahead of time.

    Read the article

  • SCO Unix - finding and pulling data

    - by lxlxlxl
    We recently acquired a company that was running a legacy application on an SCO Unix box. I can tell that the app is served over HTTP, but its reporting and data export functions are limited, and I'd like to find a master DB or data source on the box that maybe I can migrate into something more useful. Any ideas how I could find that? Would there be something in Apache or the startup scripts that might point to a master DB? Do Unix applications have standard data source locations? I'm not sure if I'm wording this right, so apologies in advance.
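    There is no single standard location, but a rough hunt along these lines often turns up the data source (paths and search terms are guesses; older SCO userland may lack GNU grep options, hence find -exec):
      find / -name 'httpd.conf' -print 2>/dev/null                              # locate the web server config and DocumentRoot
      find /usr/local/apache -type f -exec grep -l 'DSN' {} \; 2>/dev/null      # scripts that mention a connection string
      find / \( -name '*.db' -o -name '*.dbf' \) -print 2>/dev/null             # flat-file or ISAM data files
      ps -ef | grep -i sql                                                      # any database engine actually running
    The startup scripts under /etc/rc2.d are also worth reading for whatever the application launches at boot.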

    Read the article

  • kloxo setup error

    - by ron
    Hi, I've just purchased a VPS from 2host; it's unmanaged. The support staff told me to install Kloxo. However I got the following errors in step 1: --begin-- root@vpshostingtips:~# wget http:// download.lxlabs . com/download/kloxo/production/kloxo-install-master.sh --2010-05-06 04:17:04-- http:// download.lxlabs . com/download/kloxo/production/kloxo-install-master.sh Resolving download.lxlabs.com... failed: Temporary failure in name resolution. wget: unable to resolve host address `download.lxlabs.com' root@vpshostingtips:~# --end-- Note: I split the hyperlinks intentionally to post here. Can somebody tell me what's the reason for the error? Sorry, I'm new to VPS hosting.
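    The error is name resolution on the VPS itself rather than anything Kloxo-specific, so a plausible first check (the resolver address below is only an example - use your provider's resolvers if you have them):
      cat /etc/resolv.conf                          # is there a working nameserver line at all?
      echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      getent hosts download.lxlabs.com              # should now resolve; then re-run the wget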

    Read the article

  • Server 2003 Functional Domain DFS Replication Problem (Files being moved to conflicted folder for no reason)

    - by Az
    We have 2 Windows 2003 servers configured with a DFS namespace and we are running into problems with the redirected profiles we have set up. Basically, one server is the FSMO master for all roles, and we have another DC that is the DFS namespace primary server. We have profile redirection set up using the \\dfsnamespace\userprofile formula. The FSMO master DC locks up occasionally (don't ask :), and when it does and we bring it back up, all of the user profiles hosted on the DFS namespace get overwritten when a user logs in. The current profile gets moved to the conflicting and deleted items folder. This strikes me as really odd, considering the whole point of using DFS was to provide some redundancy in case one server went down. Can anyone help? Thanks in advance! -Nate

    Read the article

  • Highly Available Web Application (LAMP)

    - by Anthony Rizzo
    I work for a small company that provides a web application for thousands of users. Earlier this year they had one server hosted at one company. We recently acquired another server in a different location with the hope of one day making this a redundant failover machine. I understand what to do with the MySQL replication (I plan on using a master-master replication setup) and rsync to sync the scripts and files; however, I am at a standstill about how to configure the failover. Ideally I would like both machines to accept requests, like round-robin DNS, but if one machine goes down I do not want requests to go to that machine. All of the solutions I have come across assume high availability of servers in the same location; these servers are in two completely different locations with different public IP addresses. Any help would be great. Thanks

    Read the article

  • How do I use price data in one table for a calculation that is stored in another table?

    - by shane
    I'm still learning PHP/MySQL but have learned quite a bit thanks to coders on Stack Overflow. I'm trying to set up a sort of room reservations system using two tables: SETUP: Room price table: Has prices for a type of room a client may want to rent as well as the dates (day of week) they wish to use it. Pricing varies based on day of the week and per room. I've set up a different table for each room type as each room type carries different pricing for each day of the week. So, there is an Alpha room table, a Bravo room table, etc. Within the Alpha table are headers for the days of the week with pricing pre-entered into the rows. Client info table: Has the name, address, date of room use, etc. for the specific client. EXAMPLE: Alpha-room price table: Sun = $100; Mon = $200; Tue = $300 and so on. Bravo-room price table: Sun = $100; Mon = $200; Tue = $300 and so on. Client data table: ClientName; date-of-room-use; address; day_subtotal; grand_total. QUESTION: I'm trying to find PHP code that will: look at the date of room use in the client data table, look up the associated cost for that date in the specific room pricing table, record that unit cost in the day subtotal of the client data table, and sum a grand total in the grand total row of the client data table (assuming the room may be used more than one day by the customer). I know this has something to do with a join, but I'm finding it difficult to grasp the concept and, if someone can demonstrate using this example, I think I will have a better understanding of how to work this sort of transaction. Thank you ALL in advance for your suggestions or alternative approaches.
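    The join gets much easier if the per-room tables are folded into one long-format price table (room_type, day_of_week, price). A hypothetical sketch of the lookup, run through the mysql CLI; every table and column name here is illustrative rather than taken from an existing schema:
      mysql -u user -p reservations -e "
        SELECT c.ClientName, c.date_of_room_use, p.price AS day_subtotal
        FROM   client_data AS c
        JOIN   room_prices AS p
               ON  p.room_type   = c.room_type
               AND p.day_of_week = DAYNAME(c.date_of_room_use);"
    Wrapping the same query in SUM(p.price) ... GROUP BY c.ClientName gives the grand total across all the days a client booked, which the PHP side can then write back with an UPDATE.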

    Read the article

  • Mirroring a MySQL server with a different configuration

    - by HTF
    I have to migrate a MySQL server to a different data centre, so I would like to create another MySQL slave server in the new DC and then promote it to a master later on. I previously used LVM snapshots and Percona XtraBackup for this purpose, but this time I've optimized the MySQL configuration file in a way that prevents me from using these methods. Old server (backup): innodb_log_file_size = 256M innodb_log_files_in_group = 3 New server (restore): innodb_log_file_size = 512M innodb_log_files_in_group = 2 The XtraBackup script and LVM snapshots copy the whole directory structure, so the MySQL server won't start because the InnoDB log sizes differ. Is there any solution to avoid downtime in this case? I can't use mysqldump as there are around 8000 databases, so I would have to take the server down for a couple of hours. I was also thinking of using the old settings with XtraBackup and then changing them once the new server is promoted to a master - less downtime, but I'm not sure if this will work? Thank you Regards
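    One approach that is usually workable, sketched with assumed paths: restore on the new server with the OLD log settings first, then let InnoDB rebuild its redo logs at the new size before replication is started:
      mysql -e "SET GLOBAL innodb_fast_shutdown = 0;"    # force a full flush on the next shutdown
      /etc/init.d/mysql stop
      mv /var/lib/mysql/ib_logfile* /root/ib_logfile_backup/
      # now switch my.cnf to innodb_log_file_size = 512M and innodb_log_files_in_group = 2
      /etc/init.d/mysql start                            # InnoDB recreates the redo logs at the new size
    The brief restart happens on the new slave only, so the master keeps serving and there is no downtime beyond the normal replication catch-up.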

    Read the article

  • Setting up Dependencies for Nagios

    - by hfranco
    I'm trying to set up dependencies for a router and several servers. What I want to do is set up the router as the Master Host so that if the router fails, all the other services on the servers won't alert. Unfortunately this is easier said than done. Is there an easy way to set up service dependencies for all services on a server against a Master Host (my router)? Nagios has some documentation, but it would be extremely time consuming to add a separate service dependency definition for every service. http://nagios.sourceforge.net/docs/3_0/objectdefinitions.html#servicedependency
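    The lightest-touch way to get "don't alert on the servers when the router is dead" is the parents directive on the host definitions rather than per-service dependencies; a sketch with assumed object names and paths:
      # append to e.g. /usr/local/nagios/etc/objects/hosts.cfg:
      #   define host {
      #       use        generic-host
      #       host_name  webserver01
      #       address    10.0.0.21
      #       parents    core-router    ; the router's host_name in Nagios
      #   }
      /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg    # verify the config, then restart Nagios
    With the router as parent, the servers behind it go UNREACHABLE instead of DOWN when it fails, and their host and service notifications are suppressed without writing a servicedependency per service.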

    Read the article

  • Update server version for postgres 9.1.2

    - by Nai
    I'm trying to run a PostGIS SQL script and I'm running into the following error. Am I correct to say that updating my server version will fix it? If so, how can I go about updating it? I'm on Mac OS X Lion and installed Postgres via Homebrew. Apparently I have an older version installed, which is 9.1.2, but installing PostGIS installed Postgres 9.2.1 onto my system. How can I point my Postgres server to the new one? nai@nyc /usr/local/share/postgis (git::master) $ psql -d template_postgis -f postgis.sql SET BEGIN psql:postgis.sql:49: ERROR: incompatible library "/usr/local/Cellar/postgresql/9.2.1/lib/postgis-2.0.so": version mismatch DETAIL: Server is version 9.1, library is version 9.2. nai@nyc /usr/local/share/postgis (git::master) $ psql psql (9.2.1, server 9.1.2) WARNING: psql version 9.2, server version 9.1. Some psql features might not work.
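    A hedged sketch of moving the data directory forward with pg_upgrade so the server, psql and the PostGIS library are all 9.2 (the data-directory and Cellar paths are assumptions, and the old 9.1 binaries must still be present under the Cellar):
      pg_ctl -D /usr/local/var/postgres stop                       # stop the old 9.1 server
      initdb /usr/local/var/postgres9.2                            # fresh 9.2 cluster
      pg_upgrade -b /usr/local/Cellar/postgresql/9.1.2/bin \
                 -B /usr/local/Cellar/postgresql/9.2.1/bin \
                 -d /usr/local/var/postgres \
                 -D /usr/local/var/postgres9.2
      pg_ctl -D /usr/local/var/postgres9.2 -l /tmp/pg92.log start
    After that, re-running postgis.sql against template_postgis should no longer hit the version-mismatch error.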

    Read the article

  • LINQ: How to sort query results by similarity/equality to the keyword

    - by aNui
    I want to do a search for musical instruments, each of which has the information Name, Category and Origin, as I asked in my earlier post. But now I want to sort/group the results by similarity/equality to the keyword. For example, if I have the list { Harp, Piano, Drum, Guitar, Guitarrón } and I query "p", the result should be { Piano, Harp }, but it shows Harp first because of the list's sequence; and if I add { Grand Piano } to the list and query "piano", the result should be like { Piano, Grand Piano }. Here's my code: static IEnumerable<MInstrument> InstrumentsSearch(IEnumerable<MInstrument> InstrumentsList, string query, MInstrument.Category[] SelectedCategories, MInstrument.Origin[] SelectedOrigins) { var result = InstrumentsList .Where(item => SelectedCategories.Contains(item.category)) .Where(item => SelectedOrigins.Contains(item.origin)) .Where(item => { if ( (" " + item.Name.ToLower()).Contains(" " + query.ToLower()) || item.Name.IndexOf(query) != -1 ) { return true; } return false; } ) .Take(30); return result.ToList<MInstrument>(); } Or the result may be like my old self-invented algorithm that I called "by order of occurrence"; that is just OK to me. Is there any way to do that? Please tell me. Thanks in advance.

    Read the article

  • 389 DS Architecture for Multiple Sites

    - by Kyle Flavin
    I'm looking to deploy 389 Directory Server in my environment to replace an existing iPlanet installation. I would be using it primarily to store user account data for authentication purposes. I have two physically separate data centers that I would like to share the same directory tree. My initial thinking is to set up 389 DS as follows: a master/consumer pair in data center A, a master/consumer pair in data center B, and a replication agreement between both masters to mirror the directory tree in both environments. Does this sound like a reasonable approach? Is there a better way to do it (i.e. four masters)? Is there documentation on best practices for setting up 389 DS in situations such as this? Thanks.
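    The two-masters-with-mutual-agreements layout described above is a common pattern; once it is in place, a hedged way to confirm replication status over LDAP (host, port and bind DN below are assumptions):
      ldapsearch -x -H ldap://dc-a-master:389 -D "cn=Directory Manager" -W \
        -b "cn=config" "(objectClass=nsds5ReplicationAgreement)" \
        nsds5replicaLastUpdateStatus nsds5replicaLastUpdateEnd
    Each agreement entry reports the outcome and time of its last replication update, which makes it easy to spot a one-way break between the data centres.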

    Read the article

  • What is the effect on LVM snapshot size when a file block is rewritten with its original contents?

    - by NevilleDNZ
    I'm exploring using LVM snapshots to take off-site incremental archives from a snapshot "master" file system. In essence: simply copy across only the files on the "master" that have changed since the last incremental copy to the "archive", then snapshot the "archive" to retain the incremental. I am a bit puzzled as to the block usage behaviour of the archive's own incremental snapshot. I'm expecting that LVM is not smart enough to know that the "file block" is actually unchanged, and that a new copy will be allocated and written for the fresh "archive" file system. Can anyone confirm this, or point me to a document/page that gives some hints? BTW: the OS disk cache, the hard disk's physical cache and the hard disk itself also wouldn't need to do any actual "disk writes", since writing the unchanged "disk block" is likewise unnecessary. Any pointers to discussion of this style of optimisation would also be interesting.
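    A small experiment along these lines should answer it directly; lvs reports a snapshot's copy-on-write usage in its Data% column (the volume group name and sizes are placeholders):
      lvcreate -L 1G -n archive vg0
      mkfs.ext3 /dev/vg0/archive && mount /dev/vg0/archive /mnt/archive
      dd if=/dev/urandom of=/mnt/archive/test.bin bs=1M count=100
      lvcreate -s -L 512M -n archive-snap /dev/vg0/archive
      cp /mnt/archive/test.bin /tmp/ && cp /tmp/test.bin /mnt/archive/test.bin    # rewrite identical contents
      sync; lvs      # if Data% for archive-snap grows, the unchanged blocks were copied anyway
    The expectation stated above is almost certainly right: the copy-on-write exception is taken when a block is written, regardless of whether the new contents equal the old.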

    Read the article

  • OS X Server 10.6 - how to restore default groups?

    - by Zoran Simic
    I set up my OS X server as an Open Directory master first, then (experimenting) I changed it to a standalone server, then set it back as an Open Directory master again. Now, all the default groups I saw before are gone (Domain Administrators, Domain Users etc.). Do you know how to restore these groups? Note that the groups are gone only from the Workgroup Manager UI; they do seem to still be there otherwise. id -G gives the usual list of groups. If I create an account and make its primary group 'staff', Workgroup Manager shows all the inherited groups properly (but not on the main list). If I create an account and associate it with a new group I just created, then the account has no inherited groups...

    Read the article

  • "remote file operation failed" on Hudson

    - by Aveen
    I am running a Windows slave for Hudson 1.337 (Linux master). When running a project on the Windows node, it fails with the following message: Building remotely on winTestSlave Checking out a fresh workspace because there's no workspace at C:\hudson\***\ejb remote file operation failed It did work yesterday and I have not upgraded Hudson or changed its configuration (or the slave's configuration) in any way. I establish the connection between the slave and the master by running the following command in a Cygwin prompt on the slave: java -jar slave.jar -jnlpUrl http://myserver/computer/winTestSlave/slave-agent.jnlp I saw the issue http://issues.hudson-ci.org/browse/HUDSON-5374 and did as instructed in the work-around, but that did not work. I also tried with a newer version of slave.jar (version 1.356), but that did not work either. Does anyone have any idea how to fix this? I really cannot find more information anywhere else!

    Read the article

  • Session persistence between multiple Rails / Unicorn servers with Redis as session_store on AWS

    - by d_ethier
    I've got 2 nginx EC2 instances pointing to 2 Unicorn EC2 instances in a round-robin load-balanced configuration. The two nginx instances are behind the Elastic Load Balancer. Both Unicorn instances have a Redis session_store configured, which is in a master/slave configuration with an Elastic IP attached to the master. I've tried configuring session stickiness on the load balancer, but sessions are lost on each page refresh. I'm using the redis-store gem for the session_store configuration and Redis support. Anyone have any ideas as to why this is not working?

    Read the article

  • trigger script on postfix delivery errors

    - by edovino
    I'm trying to get Postfix to run a script on soft (4xx) and hard (5xx) delivery errors, but I'm not sure where to start. If I understand things correctly, I could insert (pipe-based) filters in the master.cf file, there's a whole 'milter' infrastructure available, and finally I suppose I could simply grep through the mail.info logs. So - any advice? Should I go the 'handle it via master.cf' route, and if so, what daemon should I intercept? 'bounce'? The grep-the-logs route is probably simplest, but I can't help but feel that there is a better way. Any advice appreciated!
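    The log-watching route really is the simplest and doesn't require touching master.cf; a rough sketch (the log path and handler script names are assumptions - Postfix logs one status= per delivery attempt):
      tail -F /var/log/mail.info | while read line; do
          case "$line" in
              *" status=bounced"*)  /usr/local/bin/on-hard-error.sh "$line" ;;    # 5xx, permanent failure
              *" status=deferred"*) /usr/local/bin/on-soft-error.sh "$line" ;;    # 4xx, will be retried
          esac
      done
    For the master.cf route, the usual hook is not a daemon at all but a pipe(8) transport or a policy/milter service; the bounce(8) daemon itself isn't designed to be wrapped by a script.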

    Read the article
