Search Results

Search found 4753 results on 191 pages for 'master slave'.


  • Reverse lookup SERVFAIL

    - by Quan Tran
    I just set up a DNS server and a web server using VirtualBox. The IP address of the DNS server is 192.168.56.101 and the web server 192.168.56.102. Here are my configuration files for the DNS server:

    named.conf:

        //
        // named.conf
        //
        // Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
        // server as a caching only nameserver (as a localhost DNS resolver only).
        //
        // See /usr/share/doc/bind*/sample/ for example named configuration files.
        //
        options {
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            //query-source address * port 53;
            //forward first;
            forwarders { 8.8.8.8; 8.8.4.4; };
            listen-on port 53 { 127.0.0.1; 192.168.56.0/24; };
            allow-query { localhost; 192.168.56.0/24; };
            recursion yes;
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity debug 10;
                print-category yes;
                print-time yes;
                print-severity yes;
            };
        };

        zone "quantran.com" in {
            type master;
            file "named.quantran.com";
        };

        zone "56.168.192.in-addr.arpa" in {
            type master;
            file "named.192.168.56";
            allow-update { none; };
        };

        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";

    named.quantran.com:

        $TTL 86400
        quantran.com. IN SOA dns1.quantran.com. root.quantran.com. (
                      100     ; serial
                      3600    ; refresh
                      600     ; retry
                      604800  ; expire
                      86400 )
                      IN NS dns1.quantran.com.
        dns1.quantran.com. IN A 192.168.56.101
        www.quantran.com.  IN A 192.168.56.102

    named.192.168.56:

        $TTL 86400
        $ORIGIN 56.168.192.in-addr.arpa.
        @ IN SOA dns1.quantran.com. root.quantran.com. (
                      100     ; serial
                      3600    ; refresh
                      600     ; retry
                      604800  ; expire
                      86400 ) ; minimum
                      IN NS dns1.quantran.com.
        101.56.168.192.in-addr.arpa. IN PTR dns1.quantran.com.
        102                          IN PTR www.quantran.com.

    When I try a normal lookup from the host (I configured the host so that the only nameserver it uses is the DNS server 192.168.56.101):

        quan@quantran:~$ host www.quantran.com
        www.quantran.com has address 192.168.56.102
        quan@quantran:~$ host dns1.quantran.com
        dns1.quantran.com has address 192.168.56.101

    But when I try a reverse lookup:

        quan@quantran:~$ host -v 192.168.56.101 192.168.56.101
        Trying "101.56.168.192.in-addr.arpa"
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:
        Host 101.56.168.192.in-addr.arpa not found: 2(SERVFAIL)
        Received 45 bytes from 192.168.56.101#53 in 0 ms

        quan@quantran:~$ host -v 192.168.56.102 192.168.56.101
        Trying "102.56.168.192.in-addr.arpa"
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:
        Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL)
        Received 45 bytes from 192.168.56.101#53 in 0 ms

    So why can't I perform a reverse lookup? Is anything wrong with the zone configuration files? Thanks in advance :)

    Here is the output in the log file /var/named/data/named.run when I perform the reverse lookup:

        quan@quantran:~$ host 192.168.56.102 192.168.56.101
        Using domain server:
        Name: 192.168.56.101
        Address: 192.168.56.101#53
        Aliases:
        Host 102.56.168.192.in-addr.arpa not found: 2(SERVFAIL)

        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: UDP request
        02-Jun-2014 15:18:11.950 client: debug 5: client 192.168.56.1#51786: using view '_default'
        02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: request is not signed
        02-Jun-2014 15:18:11.950 security: debug 3: client 192.168.56.1#51786: recursion available
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: query
        02-Jun-2014 15:18:11.950 client: debug 10: client 192.168.56.1#51786: ns_client_attach: ref = 1
        02-Jun-2014 15:18:11.950 query-errors: debug 1: client 192.168.56.1#51786: query failed (SERVFAIL) for 102.56.168.192.in-addr.arpa/IN/PTR at query.c:5428
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: error
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: send
        02-Jun-2014 15:18:11.950 client: debug 3: client 192.168.56.1#51786: sendto
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: senddone
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: next
        02-Jun-2014 15:18:11.951 client: debug 10: client 192.168.56.1#51786: ns_client_detach: ref = 0
        02-Jun-2014 15:18:11.951 client: debug 3: client 192.168.56.1#51786: endrequest
        02-Jun-2014 15:18:11.951 client: debug 3: client @0xb537e008: udprecv

    Also, I made some changes to the logging section in named.conf.
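
    A hedged first check for an authoritative SERVFAIL: make sure named actually loaded the reverse zone. named-checkzone (shipped with BIND) parses a zone file the same way named does; if the file has a syntax problem, named refuses the whole zone and answers SERVFAIL for everything in it, which matches these symptoms:

        named-checkzone 56.168.192.in-addr.arpa /var/named/named.192.168.56

    Zone-load errors also show up in the system log (e.g. /var/log/messages) when named starts.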

    Read the article

  • Cloning a Windows 7 installation from an MBR to a GPT drive and making it bootable

    - by Nelluk
    I've seen threads on similar topics - such as this one - but the answers never seem to solve how to make the drive bootable. I have Windows 7 64-bit installed on a PC on a 2 TB MBR volume. The motherboard is UEFI compatible. I just installed a secondary internal 3 TB drive which will be partitioned as GPT. Is there a relatively easy way to clone my installation over to the new drive and have that drive be bootable? I have used EaseUS Partition Master to clone the C volume to the D volume, but that would not boot, and I assume the issue is that one is MBR and one is GPT. Is there a process to do this?
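
    For what it's worth, a hedged outline of the usual route: Windows 7 x64 boots from a GPT disk only through UEFI, which needs an EFI System Partition plus boot files, so a straight partition clone of C: can never boot by itself. Booted from Windows 7 x64 setup media in UEFI mode, a rough sequence looks like this (the disk number and drive letters are illustrative, not taken from this machine):

        diskpart
        select disk 1
        create partition efi size=100
        format fs=fat32 quick
        assign letter=S
        exit

        rem copy boot files for the cloned Windows (here on D:) onto the new ESP
        bcdboot D:\Windows /s S: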

    Read the article

  • How to have supervisord follow the new unicorn process after a USR2 rolling restart?

    - by ybart
    I have configured supervisord to track my unicorn server process. When I send the unicorn master a USR2 signal, it performs a rolling restart. After this operation the old unicorn master has been replaced and the PID has changed. This causes supervisord to lose track of the unicorn process and consider it EXITED. How can I get supervisord to follow the new unicorn process after this operation? Unicorn keeps a PID file, but I have not found an option for this in the supervisord configuration. Another option would be to have supervisord send the USR2 signal itself, but I don't know how to do that, or whether it would prevent the problem from occurring.
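
    A hedged workaround rather than a supervisord feature: keep the master PID stable by not using USR2 at all. Unicorn's documented HUP handling reloads the config and gracefully restarts all workers, and with preload_app false it picks up new application code too, so the master never re-execs and supervisord never loses it (it will not, however, upgrade the unicorn binary itself the way USR2 does). A sketch; the program name and paths are assumptions:

        [program:unicorn]
        ; run unicorn in the foreground so supervisord owns the real master PID
        command=bundle exec unicorn -c /path/to/unicorn.rb -E production
        directory=/path/to/app
        autorestart=true

    A rolling restart then becomes kill -HUP on the master PID shown by supervisorctl status, instead of USR2.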

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information: test ID (a GUID), test name, test description, status (running, done, waiting to be run), progress (%), start time of test, end time of test, test result, and the latest screenshot of the running test (updated every 30 seconds). The number of tests isn't huge (say a few thousand) and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL tables to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and re-create the table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I store the screenshots in their own table to simplify the replication? Thanks!
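
    A sketch of one reasonable layout, given the modest row counts: one row per test with a column per attribute, and the frequently rewritten screenshot split into its own table so replication can skip it. The table and column names here are illustrative, not prescribed:

        CREATE TABLE test_run (
            test_id     uuid PRIMARY KEY,       -- the GUID
            name        text NOT NULL,
            description text,
            status      text CHECK (status IN ('waiting', 'running', 'done')),
            progress    integer,                -- percent complete
            started_at  timestamptz,
            ended_at    timestamptz,
            result      text
        );

        -- rewritten every 30 seconds; kept separate so it can be excluded
        -- from replication and so test_run rows stay small
        CREATE TABLE test_screenshot (
            test_id     uuid PRIMARY KEY REFERENCES test_run (test_id),
            captured_at timestamptz NOT NULL,
            image       bytea
        );

    Later attributes can usually be added with ALTER TABLE ... ADD COLUMN, which old readers that name their columns explicitly won't notice; a separate table is only needed when the new data is one-to-many.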

    Read the article

  • Why are my USB 2.0 devices hanging Windows XP?

    - by BenAlabaster
    Background on the machine I'm having a problem with: the machine was inherited and appears to be circa 2003 (there's a date stamp on the power supply which leads me to this conclusion). I've got it set up as a Skype terminal for my 2-year-old to keep in touch with her grandparents and other members of the family - which everyone loves. It has a DFI CM33-TL/G ATX motherboard (identified using SiSoft Sandra) hosting an Intel Celeron 1.3GHz CPU, 768Mb of PC133 SDRAM, a D-Link WDA-2320 54G Wi-Fi network card, and a generic USB 2.0 expansion board based on the NEC uPD720102 chipset containing 3 external and 1 internal USB sockets. It's also hosting a 1.44Mb floppy drive on FDD0, a new 80Gb Western Digital hard drive running as master on IDE0, and a Panasonic DVD+/-RW running as master on IDE1. All this is sitting in a slimline case running off a Macron Power MPT-135 135W Flex power supply. The motherboard is running Award BIOS version 05/24/2002-601T-686B-6A6LID4AC-00. Could this be updated? If so, from where? I've raked through the manufacturer's website but can't find any hint of downloads for either drivers or BIOS updates.

    The hard disk is freshly formatted and built with Windows XP Professional/Service Pack 3 and is up to date with all current patches. In addition to Windows XP, the only other software it's running is Skype 4.1 (4.2 hangs the whole machine as soon as it starts up, requiring a hard boot to recover). It's got a Daytek MV150 15" touch screen hooked up to the on-board VGA and COM1 sockets with the most current drivers from the Daytek website and the most current version of the ELO Touchsystems drivers for the touch component. The webcam is a Logitech Webcam C200 with the latest drivers from the Logitech website.

    The problem: if I hook any devices to the USB 2.0 sockets, the whole machine hangs and I have to hard boot it to get it back up. If I have any devices attached to the USB 2.0 sockets when I boot up, it hangs before Windows gets to the login prompt and I have to hard boot it to recover.

    Workarounds found: I can plug the same devices into the on-board USB 1.0 sockets and everything works fine, albeit at reduced performance. I've tried 3 different kinds of USB thumb drives, 3 different makes/models of webcams, and my iPhone, all with the same effect. They're recognized and don't hang the machine when I hook them to the USB 1.0 ports, but if I hook them to the USB 2.0 ports, the machine hangs within a couple of seconds of recognizing the devices were connected.

    Attempted solutions: I've seen suggestions that this could be a power problem - that the PSU just doesn't have the wattage to drive these ports. While I'm doubtful this is the problem (after all, the motherboard has the same standard connector regardless of the PSU wattage), I tried disabling all the on-board devices that I'm not using - on-board LAN, the second COM port, the AGP connector etc. - through the BIOS, in what I'm sure was a futile attempt to reduce the power consumption. I also modified the ACPI and power-management settings. It didn't have any noticeable effect, although it didn't do any harm either. Could the wattage of the PSU really cause this problem? If it can, is there anything I need to be aware of when replacing it, or do I just need to make sure it's got a higher wattage than the current one? My interpretation was that the wattage only affected the number of drives you could hook up to the power connectors - is that right?

    I've installed the USB card in another machine and it works without issue, so it's not a problem with the USB card itself, and Windows says the card is installed and working correctly... right up until I connect a device to it. The only thing I haven't done, which I only just thought of while writing this essay, is trying the USB 2.0 card in a different PCI slot, or re-ordering the Wi-Fi and USB cards in the slots - although I'm not sure this will make any difference. Does anyone have any experience that would suggest this might work?

    Other thoughts/questions: perhaps this is an incompatibility between the USB 2.0 card and the BIOS - would re-flashing the BIOS with a newer version help? Do I need to identify the manufacturer of the motherboard in order to find a BIOS edition specific to this motherboard, or will any version of Award BIOS function in its place?

    Question: does anyone have any ideas that could help me get my USB 2.0 devices hooked up to this machine?

    Read the article

  • How to stop basic Postfix after-queue script from BCC-ing sender?

    - by mjbraun
    I'm building a content filter for Postfix (2.9.3, package installed via apt on an Ubuntu 12.04 test VM) and I'm starting with a very basic Ruby (1.9.3) template and building up functionality. Strangely, when the script is enabled, messages sent are forwarded on as normal, but are also sent back to the sender, which is not normal. Disabling the script disables this behavior. Any suggestions about what I have to change to stop that from happening? Thanks for any advice!

    /etc/postfix/master.cf (only the lines changed from the default):

        smtp      inet  n       -       -       -       -       smtpd
            -o content_filter=dumper:dummy
        ...
        dumper    unix  -       n       n       -       10      pipe
            flags=RF user=mailuser argv=/home/mailuser/mailfilter/dumper.rb ${sender} ${recipient}

    /home/mailuser/mailfilter/dumper.rb:

        #!/usr/bin/env ruby
        require 'open3'

        dir = "/home/mailuser/emails"
        logfile = "maillog.log"
        message = $stdin.read
        cmd = "/usr/sbin/sendmail -G -i #{ARGV[0]} #{ARGV[1]}"
        stdin, stdouterr, wait_thr = Open3.popen2e(cmd)
        stdin.print(message)
        logfile = File.open("#{dir}/#{logfile}", 'a')
        logfile.write(stdouterr)
        stdin.close
        stdouterr.close
        exit(0)
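
    One hedged observation: Postfix's FILTER_README re-injects with the sender behind a -f flag and the recipients after a -- separator. As written above, sendmail receives ARGV[0] - the sender - as just another recipient, which would produce exactly this extra copy back to the sender. The candidate one-line fix:

        cmd = "/usr/sbin/sendmail -G -i -f #{ARGV[0]} -- #{ARGV[1]}"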

    Read the article

  • Simple, manageable DNS on EC2?

    - by dkulchenko
    I'm working on a large network of servers sitting on EC2, and I need a way for the servers to know about each other's locations in the cloud. I thought the simplest way would be to use DNS: if I replace an EC2 instance, I simply update the DNS record and the rest of the servers find out about it (with names like users.db.mysoft.com, routing.mysoft.com, cluster1.memcached.mysoft.com). I'm considering setting up a master DNS server on a micro/small instance to accommodate this. Ideally I need something as simple as a key-value store (hostname - IP) into which the platform can remotely add and remove entries. Can I do this with BIND? Or is there a better solution?
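
    BIND can behave roughly like that key-value store via RFC 2136 dynamic updates: mark the zone allow-update for a TSIG key, and have the platform add and remove records with nsupdate. A minimal sketch; the key path, server name and addresses are assumptions:

        nsupdate -k /etc/bind/platform.key <<EOF
        server ns1.mysoft.com
        zone mysoft.com
        update delete cluster1.memcached.mysoft.com. A
        update add cluster1.memcached.mysoft.com. 60 A 10.1.2.3
        send
        EOF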

    Read the article

  • How do I set up unison to sync a folder one way

    - by Rob
    I have a 1 TB NAS with a 1 TB USB external hard drive attached. I have prepared the file system on the USB disk and mounted it. I want to 100% sync my data from my NAS to the USB disk, but I want it to be incremental and to have only the NAS as the 'master' - e.g. if a file changes on the USB external hard drive, I want the sync to ignore that change, as it's not the live version (not that I think the files will change on the USB disk, but I'm paranoid the live copy could get overwritten). Also, if a file gets deleted on the live side, I want to retain the deleted file on the USB disk. Can unison sync one way and achieve the above for me? If so, would simply unison source/ target/ work? Thanks, Rob
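
    Unison is fundamentally a two-way synchronizer, so the semantics described (NAS always wins, deletions never propagate) are an awkward fit; a plain rsync without --delete matches them exactly, offered here as a swapped-in alternative (the mount points are assumptions):

        rsync -av /mnt/nas/ /mnt/usb/

    New and changed files on the NAS overwrite the USB copies, a stray edit on the USB side is reverted to the NAS version on the next run, and nothing is ever deleted from the USB disk.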

    Read the article

  • ActiveScaffold custom action_link that should respond like update

    - by doug316
    I have a custom action link which is :inline and :post. It's a quick link to update an attribute, and that part works just fine. After the action, my intent is to respond just as if it were an update, so the row is re-rendered in the index. After the record is updated in my controller action, I call respond_to_action(:create), just like the create method in ActiveScaffold does. The content of the javascript response seems to be returned correctly to the client; however, the problem is that the content type of the response header is "text/html" instead of "text/javascript", unlike with an actual update. So the JS is not executed and the row doesn't update. I can't figure out what the difference could possibly be here. I've traced extensively and have even replaced the respond_to_action with:

        respond_to do |format|
          format.js do
            render ...
          end
        end

    And it still won't set the content type like every other action in the app. Anybody have a clue here? Something in ActiveScaffold must be overriding the content type and I can't figure out what it might be. Surprising this isn't documented; this doesn't seem like an unusual use case, making an action a slave to the update re-render.
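
    If nothing in ActiveScaffold turns up, a blunt hedged workaround is to force the header yourself in the format.js block; the :content_type option to render is standard Rails, while 'update_row' is a placeholder template name, not something from this app:

        respond_to do |format|
          format.js do
            render :action => 'update_row', :content_type => 'text/javascript'
          end
        end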

    Read the article

  • post-receive hook permission denied "unable to create file" error

    - by ThomasReggi
    I just got gitolite installed on my web server and am trying to get a post-receive hook that can point the git work tree at Apache's document root. This is what my post-receive hook looks like (I got this script from "Using Git to manage a web site"):

        #!/bin/sh
        echo "post-receive example.com triggered"
        GIT_WORK_TREE=/srv/sites/example.com/public git checkout -f

    This is the error response I'm getting back from git push origin master on my local workstation. These are files from within my repository:

        remote: post-receive example.com triggered
        remote: error: unable to create file .htaccess (Permission denied)
        remote: error: unable to create file .tm_sync.config (Permission denied)
        remote: fatal: cannot create directory at 'application': Permission denied

    Permissions of public:

        drwxr-xr-x 5 root root 4096 Jun 26 17:23 public
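
    The last line looks like the culprit: public is owned by root and writable only by root, while the hook runs as the unprivileged user gitolite was installed under. A hedged fix, assuming that user is git:

        chown -R git:git /srv/sites/example.com/public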

    Read the article

  • Working with mongodb from Java

    - by demas
    I have launched the mongodb server:

        [[email protected]][~]% mongod --dbpath /home/demas/temp/
        Mon Apr 19 09:44:18 Mongo DB : starting : pid = 4538 port = 27017 dbpath = /home/demas/temp/ master = 0 slave = 0 32-bit
        ** NOTE: when using MongoDB 32 bit, you are limited to about 2 gigabytes of data
        ** see http://blog.mongodb.org/post/137788967/32-bit-limitations for more
        Mon Apr 19 09:44:18 db version v1.4.0, pdfile version 4.5
        Mon Apr 19 09:44:18 git version: nogitversion
        Mon Apr 19 09:44:18 sys info: Linux arch.local.net 2.6.33-ARCH #1 SMP PREEMPT Mon Apr 5 05:57:38 UTC 2010 i686 BOOST_LIB_VERSION=1_41
        Mon Apr 19 09:44:18 waiting for connections on port 27017
        Mon Apr 19 09:44:18 web admin interface listening on port 28017

    I have created documents with the console client:

        [[email protected]][~]% mongo
        MongoDB shell version: 1.4.0
        url: test
        connecting to: test
        type "help" for help
        > db.some.find();
        { "_id" : ObjectId("4bcbef3c3be43e9b7e04ef3d"), "name" : "mongo" }
        { "_id" : ObjectId("4bcbef423be43e9b7e04ef3e"), "x" : 3 }

    Now I am trying to work with MongoDB from Java:

        import com.mongodb.*;
        import java.net.UnknownHostException;

        public class test1 {
            public static void main(String[] args) {
                System.out.println("Start");
                try {
                    Mongo m = new Mongo("localhost", 27017);
                    DB db = m.getDB("test");
                    DBCollection coll = db.getCollection("some");
                    coll.insert(makeDocument(10, "James", "male"));
                    System.out.println("Finish");
                } catch (UnknownHostException ex) {
                    ex.printStackTrace();
                } catch (MongoException ex) {
                    ex.printStackTrace();
                }
            }

            public static BasicDBObject makeDocument(int id, String name, String gender) {
                BasicDBObject doc = new BasicDBObject();
                doc.put("id", id);
                doc.put("name", name);
                doc.put("gender", gender);
                return doc;
            }
        }

    But execution stops at the coll.insert() line:

        [[email protected]][~/dev/study/java/mongodb]% javac test1.java
        [[email protected]][~/dev/study/java/mongodb]% java test1
        Start

    There are no messages from the mongod server about an accepted connection. Why?
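
    A hedged way to narrow this down, given that the 1.4-era shell connects fine: force a round trip right after connecting, so a socket problem surfaces immediately instead of during the insert. This probe uses only calls that exist in that era of the Java driver; it is a diagnostic, not a fix:

        Mongo m = new Mongo("localhost", 27017);
        DB db = m.getDB("test");
        // getCollectionNames() must talk to the server, unlike insert(),
        // which the old driver may buffer without waiting for acknowledgement
        System.out.println(db.getCollectionNames());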

    Read the article

  • Real-time mirroring between two SQL Server databases

    - by Matt Thrower
    Hi, I'm a C# programmer, not a DBA, and I've had the (mis)fortune to be handed a database admin task. So please bear this in mind when answering this question. What I've been asked to do is create a real-time two-way mirror between two databases with a 10-megabit connection between them, so that when either changes, it updates the other. This is not a standard data mirroring/failover task where one DB is the master and the other is a backup - both are live and each needs to instantly reflect changes made to the other. In my head this sounds like a tall order, one which may even be impossible - after all, in a rapidly changing environment with lots of users this is going to be massively resource intensive and create locks and queues of jobs all over the place. Is it possible? If so, can anyone give me some basic instructions and/or point me at some places to start my reading and research? Cheers, Matt

    Read the article

  • Can I force NFS automounts to use NFSv3?

    - by Steve
    I have a Linux server that is exporting NFSv4 as well as NFSv3. I have a Fedora 14 client that defaults to NFSv4 when automounting NFS shares from the Linux server, and it seems to be causing some problems. All my other Linux clients on the network mount via NFSv3 without issue, so is there a way I can tell automount to mount the share via v3? I am pulling my automount maps via LDAP, with an entry in my /etc/auto.master file like so: +auto_master. So I assume it's a bit different than listing options in a regular automount map (i.e. /home --nfsvers=3 fileserver:/DATA)?
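
    On Fedora of that vintage, autofs has a global default-protocol knob that applies to every NFS automount, including LDAP-sourced maps, so no individual entries need editing. A sketch, assuming Fedora 14 still reads /etc/sysconfig/autofs:

        # /etc/sysconfig/autofs
        MOUNT_NFS_DEFAULT_PROTOCOL=3

    followed by service autofs restart. Per entry, the equivalent map option would be -fstype=nfs,vers=3.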

    Read the article

  • How to get the Three.js import/export scripts into Blender on Ubuntu?

    - by Bane
    I have been working with 3D primitives in Three.js, but now I want to import some models. I plan on using Blender, which I just installed with sudo apt-get install blender. However, I was instructed to put the import/export scripts in the .blender/2.62/scripts/addons folder, but it does not exist! .blender/2.62 does exist, but it only has a config folder. Next, I manually changed the script search path in Blender's preferences from // to a scripts folder in my home directory, which contained the required io_mesh_threejs folder (which, in turn, had the .py scripts inside). I saved the changes and restarted Blender, but still nothing: in the menu there is no mention of Three.js at all! What do I do? It would be great if I knew the installation path for Blender, because maybe I could put those scripts there manually. Where should it be installed? EDIT: these are the scripts I'm talking about, along with the instructions: https://github.com/mrdoob/three.js/tree/master/utils/exporters/blender.
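
    A hedged guess at the path mix-up: Linux builds of Blender 2.6x look for user add-ons under ~/.config/blender rather than the old ~/.blender. Creating the folder by hand is usually enough (the version segment must match the installed Blender):

        mkdir -p ~/.config/blender/2.62/scripts/addons
        cp -r io_mesh_threejs ~/.config/blender/2.62/scripts/addons/

    After a restart, the exporter still has to be ticked under File > User Preferences > Addons (search for "three") before it appears in the Export menu.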

    Read the article

  • mail server administration

    - by kibs
    My Postfix does not appear to be listening via the smtp daemon. I am getting the message below:

        The message WAS NOT relayed

        Reporting-MTA: dns; mail.mak.ac.ug
        Received-From-MTA: smtp; mail.mak.ac.ug ([127.0.0.1])
        Arrival-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)

        Original-Recipient: rfc822;[email protected]
        Final-Recipient: rfc822;[email protected]
        Action: failed
        Status: 5.4.0
        Remote-MTA: dns; 127.0.0.1
        Diagnostic-Code: smtp; 554 5.4.0 Error: too many hops
        Last-Attempt-Date: Wed, 19 May 2010 12:45:20 +0300 (EAT)
        Final-Log-ID: 23434-08/A38QHg8z+0r7
        undeliverable mail, MTA BLOCKED

    Output from the lsof -i tcp:25 command:

        master 3014 root 12u IPv4 9429 TCP *:smtp (LISTEN)

    (Postfix as a user is missing.)
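
    "Too many hops" means Postfix detected a mail loop, and the Final-Log-ID format (23434-08/...) looks like amavisd-new, so a hedged guess is that filtered mail is being re-injected into the same smtpd that filters it again. The conventional amavisd-new setup gives re-injected mail its own listener with the filter switched off; a sketch using the customary ports, not values taken from this server:

        # master.cf: amavis hands mail back here; no content_filter, so no loop
        127.0.0.1:10025 inet n - n - - smtpd
            -o content_filter=
            -o mynetworks=127.0.0.0/8
            -o smtpd_recipient_restrictions=permit_mynetworks,reject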

    Read the article

  • Clever recording using AVFoundation

    - by martin
    Hello, I am working on my master's thesis and I am programming an app for iOS using the AVFoundation framework. I can set up a session myself, attach devices to it, and record video with sound. The main problem is that I need continuous recording (3 hours or longer). After three hours the user will stop recording and choose a time, e.g. 15 minutes (max 30 minutes), and only the last 15 minutes will be stored in the iPhone's memory. Is it possible to 'cut' video while recording, or should I record it in e.g. 10-minute blocks, then delete the old video segments and join the last two segments into one bigger one? Would performing these operations (stop recording, start a new recording, then connect the two segments) cause lags in the final long video segment? Is there any way to perform this 'clever' recording? Thank you for any ideas.

    Read the article

  • ASP.NET with RegularExpression problem

    - by Eyla
    Greetings, I'm trying to validate a phone number entered in a textbox. I have an ASP.NET textbox and a checkbox. The default is to validate a US phone number; when I check the checkbox, the regular expression and error message should change to validate an international phone number using my own regular expression. I have no problem validating the international phone number, but when validating the US phone number I always get an error message saying it is an invalid phone number. I have tried different regular expressions, but it did not work. Please look at my code and advise me. Regards,

    ASP.NET markup:

        <%@ Page Language="C#" MasterPageFile="~/Master.Master" AutoEventWireup="true"
            CodeBehind="UpdateContact.aspx.cs" Inherits="IMAM_APPLICATION.UpdateContact"
            Title="Untitled Page" %>
        <%@ Register Assembly="AjaxControlToolkit" Namespace="AjaxControlToolkit" TagPrefix="cc1" %>
        <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" runat="server">
            <script src="js/jquery-1.4.1-vsdoc.js" type="text/javascript"></script>
            <script src="js/jquery.validate.js" type="text/javascript"></script>
            <script src="js/js.js" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function() {
                    ValidPhoneHome("#<%= chkIntphoneHome %>");
                    $("#aspnetForm").validate({
                        // debug: true,
                        rules: {
                            "<%= txtHomePhone.UniqueID %>": { phonehome: true }
                        },
                        errorElement: "mydiv",
                        wrapper: "mydiv", // a wrapper around the error message
                        errorPlacement: function(error, element) {
                            offset = element.offset();
                            error.insertBefore(element);
                            error.addClass('message'); // add a class to the wrapper
                            error.css('position', 'absolute');
                            error.css('left', offset.left + element.outerWidth());
                            error.css('top', offset.top - (element.height() / 2));
                        }
                    });
                })
            </script>
            <div id="mydiv">
                <asp:CheckBox ID="chkIntphoneHome" runat="server" Text="Internation Code"
                    Style="position: absolute; top: 620px; left: 700px;"
                    onclick="ValidPhoneHome(this)" />
                <asp:TextBox ID="txtHomePhone" runat="server"
                    Style="top: 650px; left: 700px; position: absolute; height: 22px; width: 128px"></asp:TextBox>
            </div>
        </asp:Content>

    js.js file:

        var RegularExpression;
        var USAPhone = /(^[a-z]([a-z_\.]*)@([a-z_\.]*)([.][a-z]{3})$)|(^[a-z]([a-z_\.]*)@([a-z_\.]*)(\.[a-z]{3})(\.[a-z]{2})*$)/i;
        var InterPhone = /^\d{9,12}$/;
        var errmsg;

        function ValidPhoneHome(sender) {
            if (sender.checked == true) {
                RegularExpression = InterPhone;
                errmsg = "Enter 9 to 12 numbers as international number";
            } else {
                RegularExpression = USAPhone;
                errmsg = "Enter a valid number";
            }
            jQuery.validator.addMethod("phonehome", function(value, element) {
                return this.optional(element) || RegularExpression.test(value);
            }, errmsg);
        }
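
    One hedged observation that may be the whole story: the USAPhone pattern above is an e-mail address pattern (note the @), so every US phone number will fail it. A plausible replacement accepting forms like 555-123-4567, (555) 123-4567 and 5551234567:

        var USAPhone = /^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$/;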

    Read the article

  • SVN Server not responding

    - by Rob Forrest
    I've been bashing my head against a wall with this one all day and I would greatly appreciate a few more eyes on the problem at hand. We have an in-house SVN server that contains all live and development code for our website. Our live server can connect to this and get updates from the repository. This was all working fine until we migrated the SVN server from a physical machine to a vSphere VM. Now, for some reason that continues to baffle me, we can no longer connect to the SVN server.

    The SVN server runs CentOS 6.2, Apache and SVN 1.7.2. SELinux is well and truly disabled, and the problem remains when iptables is stopped. Our production server does run an older version of CentOS and SVN, but the same setup worked previously, so I don't think that this is the issue. Of note: if I have iptables enabled, using service iptables status I can see a single packet coming in and being accepted, but the production server simply hangs on any svn command. If I give up waiting and hit Ctrl-C to break the process, I get a "could not connect to server". To me it appears that the SVN server is rejecting external connections, but I have no idea how this would happen. Any thoughts on what I can try from here? Thanks, Rob

    Edit - network topology: the production server sits externally to our in-house SVN server. Our IPCop firewall allows connections from it (and it alone) on port 80 and passes the connection to the SVN server. The hardware is all pretty decent and I don't doubt that it's doing its job correctly, especially as iptables is seeing the new connections.

    subversion.conf (in /etc/httpd/conf.d):

        LoadModule dav_svn_module modules/mod_dav_svn.so
        <Location /repos>
            DAV svn
            SVNPath /var/svn/repos
            <LimitExcept PROPFIND OPTIONS REPORT>
                AuthType Basic
                AuthName "SVN Server"
                AuthUserFile /var/svn/svn-auth
                Require valid-user
            </LimitExcept>
        </Location>

    ifconfig:

        eth0  Link encap:Ethernet  HWaddr 00:0C:29:5F:C8:3A
              inet addr:172.16.0.14  Bcast:172.16.0.255  Mask:255.255.255.0
              inet6 addr: fe80::20c:29ff:fe5f:c83a/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:32317 errors:0 dropped:0 overruns:0 frame:0
              TX packets:632 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:2544036 (2.4 MiB)  TX bytes:143207 (139.8 KiB)

    netstat -lntp:

        Active Internet connections (only servers)
        Proto Recv-Q Send-Q Local Address   Foreign Address  State   PID/Program name
        tcp   0      0      0.0.0.0:3306    0.0.0.0:*        LISTEN  1484/mysqld
        tcp   0      0      0.0.0.0:111     0.0.0.0:*        LISTEN  1135/rpcbind
        tcp   0      0      0.0.0.0:22      0.0.0.0:*        LISTEN  1351/sshd
        tcp   0      0      127.0.0.1:631   0.0.0.0:*        LISTEN  1230/cupsd
        tcp   0      0      127.0.0.1:25    0.0.0.0:*        LISTEN  1575/master
        tcp   0      0      0.0.0.0:58401   0.0.0.0:*        LISTEN  1153/rpc.statd
        tcp   0      0      0.0.0.0:5672    0.0.0.0:*        LISTEN  1626/qpidd
        tcp   0      0      :::139          :::*             LISTEN  1678/smbd
        tcp   0      0      :::111          :::*             LISTEN  1135/rpcbind
        tcp   0      0      :::80           :::*             LISTEN  1615/httpd
        tcp   0      0      :::22           :::*             LISTEN  1351/sshd
        tcp   0      0      ::1:631         :::*             LISTEN  1230/cupsd
        tcp   0      0      ::1:25          :::*             LISTEN  1575/master
        tcp   0      0      :::445          :::*             LISTEN  1678/smbd
        tcp   0      0      :::56799        :::*             LISTEN  1153/rpc.statd

    iptables --list -v -n (when iptables is stopped):

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination

    iptables --list -v -n (when iptables is running, after one attempted svn connection):

        Chain INPUT (policy ACCEPT 68 packets, 6561 bytes)
         pkts bytes target prot opt in  out source     destination
           19  1304 ACCEPT all  --  *   *   0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
            0     0 ACCEPT icmp --  *   *   0.0.0.0/0  0.0.0.0/0
            0     0 ACCEPT all  --  lo  *   0.0.0.0/0  0.0.0.0/0
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:22
            1    60 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:80
            0     0 ACCEPT tcp  --  *   *   0.0.0.0/0  0.0.0.0/0  state NEW tcp dpt:80
            0     0 ACCEPT udp  --  *   *   0.0.0.0/0  0.0.0.0/0  state NEW udp dpt:80
        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
         pkts bytes target prot opt in out source destination
        Chain OUTPUT (policy ACCEPT 17 packets, 1612 bytes)
         pkts bytes target prot opt in out source destination

    tcpdump:

        17:08:18.455114 IP 'production server'.43255 > 'svn server'.local.http: Flags [S], seq 3200354543, win 5840, options [mss 1380,sackOK,TS val 2011458346 ecr 0,nop,wscale 7], length 0
        17:08:18.455169 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 816478 ecr 2011449346,nop,wscale 7], length 0
        17:08:19.655317 IP 'svn server'.local.http > 'production server'.43255: Flags [S.], seq 629885453, ack 3200354544, win 14480, options [mss 1460,sackOK,TS val 817679 ecr 2011449346,nop,wscale 7], length 0
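
    A hedged reading of that capture: the SYN arrives and the server answers, but the same SYN-ACK is retransmitted about 1.2 seconds later and the client's final ACK never shows up, which points at the return path (IPCop NAT/firewall state, or the VM's default gateway/netmask after the vSphere move) rather than at Apache or SVN itself. One way to probe the return path from the SVN server, assuming the Linux traceroute with TCP support is installed (run as root; the target IP is a placeholder):

        traceroute -T -p 80 <production-server-ip>
        ip route    # confirm the default gateway survived the migration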

    Read the article

  • Get sessions' remote IP from Teamviewer log file

    - by etuardu
    I'd like to know who has logged in to my machine and when. I have two TeamViewer log files: Connections_incoming.txt and TeamViewer7_Logfile.log. The first one is quite plain and lists, as its name says, the incoming connections to the machine, reporting the local name of the remote host, login time, logout time, and some IDs, e.g.:

        173274362 MYLAPTOP 20-02-2012 17:32:16 20-02-2012 17:50:42 Master RemoteControl {C5AAE483-ED0B-54B8-9235-7AE597CAD342}

    This is almost all I need, but unfortunately no remote IP address is reported here, so I checked for IPs in TeamViewer7_Logfile.log, but it is really messy. It does contain some IP addresses, but I can't understand which one is bound to which item in the first log file. Is there a way to correlate the two logs to get what I need? Should I search the second file for some particular text? What do you suggest?
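
    A low-tech idea, offered as a guess rather than a known recipe: each row in Connections_incoming.txt starts with a session ID, and the same ID tends to show up in the verbose log, so grepping for it brings up the handshake lines for that session, near which the peer address sometimes appears:

        grep -n 173274362 TeamViewer7_Logfile.log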

    Read the article

  • Correct password for ssh key rejected when ssh'd into machine

    - by user20342
    When I am logged into my machine directly, I can do all git operations, and when prompted for a password, the password is accepted. When I ssh into the same box and run git operations on the same repos, the password is rejected. The relevant section of .ssh/config looks like this:

        # Generic settings
        Host *
            ServerAliveInterval 600
            ControlPath /tmp/ssh-%r@%h:%p
            ControlMaster auto
            KeepAlive yes
            IdentityFile ~/.ssh/id_rsa.pub

    The transaction looks like this when I ssh into my box:

        {12-12-03 9:41}hbrown-wks2:~/workspace/spt/project@master??? hbrown% git pull
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Enter passphrase for key '/home/hbrown/.ssh/id_rsa.pub':
        Permission denied (publickey).
        fatal: Could not read from remote repository.

        Please make sure you have the correct access rights and the repository exists.

    Using bash does not appear to make a difference (i.e. ssh-agent /bin/bash). This is a recent development, but I can't cite the change that caused it.
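
    One hedged observation from the config itself: IdentityFile points at the public half of the key pair (id_rsa.pub), and the passphrase prompts name that same file. A locally running ssh-agent or keychain can mask this on a direct login, which would explain the difference. The likely one-line fix:

        IdentityFile ~/.ssh/id_rsa

    Alternatively, connecting with agent forwarding (ssh -A) lets the remote session reuse the workstation's agent.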

    Read the article

  • How to configure bind9 to route to host's IP

    - by Greg
    I'm running Apache and BIND9 on the same server. I would like to set up a master zone that routes back to this very machine's IP address without explicitly specifying it. Is this possible? If I use 127.0.0.1 for the A record, then when another computer on the network does an nslookup for mydomain.local, BIND of course just returns the loopback IP (127.0.0.1), not the IP of the server. Is there a way to tell it to just return the network IP address of the server itself, as defined in /etc/network/interfaces?
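
    As far as I know, BIND has no record type meaning "my own interface address"; the usual approach is simply to put the server's LAN address straight into the master zone (the address below is a placeholder for whatever /etc/network/interfaces assigns):

        mydomain.local.    IN A    192.168.1.10

    If the address ever changes (e.g. under DHCP), a small script can rewrite the zone and run rndc reload, or push the change with nsupdate.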

    Read the article

  • SQL Server performance, virtual memory usage

    - by user45641
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory used greatly exceeds the amount of physical memory available. Currently, physical memory is 10GB (10238 MB) whereas the virtual memory returned is significantly more - 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        select cpu_count
             , hyperthread_ratio
             , physical_memory_in_bytes / 1048576 as 'mem_MB'
             , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB'
             , max_workers_count
             , os_error_mode
             , os_priority_class
        from sys.dm_os_sys_info
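
    For what it's worth, that virtual memory figure is almost certainly benign: sys.dm_os_sys_info reports the size of the process's user-mode virtual address space, and 8388607 MB is simply the 8 TB address space of an x64 process, not memory actually in use. The arithmetic:

        2^43 bytes = 8,796,093,022,208 bytes
        8,796,093,022,208 bytes / 1,048,576 = 8,388,608 MB  (about 8 TB)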

    Read the article

  • ${extension} empty after catch-all alias in Postfix

    - by Paul Wagener
    I want a setup where an e-mail address like [email protected] redirects mail to the folder foo. I've already got dovecot configured and tested. It is called by postfix with this line in master.cf:

        dovecot unix - n n - - pipe
            flags=DRhu user=vmail:vmail argv=/usr/lib/dovecot/deliver
            -f ${sender} -d ${user}@${nexthop} -n -m ${extension}

    I expect ${extension} to expand to 'foo', but it is always empty. I've added recipient_delimiter = + to my main.cf. How can I get it to work?

    Update: I've got a catch-all alias that redirects @domain.com to [email protected]. It seems that the extension is empty because of this. So the question becomes: can I have a catch-all so that [email protected] redirects to [email protected] without explicitly defining either the random or the ext part?

    Read the article

  • Puppet apache module causing 'Error 400 on SERVER: Invalid parameter identifier'

    - by Andy Shinn
    I am receiving the following error when trying to use the latest puppetlabs-apache module from GitHub (https://github.com/puppetlabs/puppetlabs-apache):

        Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Invalid parameter identifier at /etc/puppet/environments/apache_update/modules/apache/manifests/mod.pp:40 on node zordon.mydomain.com
        Warning: Not using cache on failed catalog
        Error: Could not retrieve catalog; skipping run

    My node config looks like:

        node 'zordon.mydomain.com' {
            include template::common
            include template::puppetagent
            include template::lamp
            User::Create
            sudo::conf { 'joe':
                priority => 60,
                content  => 'joe ALL=(ALL) NOPASSWD: ALL',
                require  => User::Create['joe'],
            }
        }

    The template::lamp class is what uses the apache module:

        class template::lamp {
            include myfirewall
            Firewall
            Firewall
            class { 'apache': }
            class { 'apache::mod::php': }
            class { 'apache::mod::ssl': }
            class { 'mysql::server': }
        }

    It looks like Server Fault markup is getting garbled on the Puppet realize statements; the User::Create and Firewall lines are just realizing a user and two firewall rules. I have verified that the /var/lib/puppet/lib/puppet/type/a2mod.rb type has the identifier parameter and it is the same MD5 as the server's copy. I am using Puppet 3.0.1 on both agent and master. Any idea what may cause this?
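
    A hedged avenue, since the agent-side MD5 already matches: "Invalid parameter identifier" usually means the master compiled the catalog with a stale copy of the a2mod type, perhaps from another environment or another directory on the modulepath, and multiple environments make that easy. Looking for shadowed copies on the master is cheap:

        find /etc/puppet -name a2mod.rb
        rm -rf /var/lib/puppet/lib/puppet && puppet agent -t   # force a clean plugin sync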

    Read the article

  • Using Excel data in Microsoft Publisher

    - by TK
    I have never worked in Microsoft Publisher. To build the presentation we're having to re-enter the same information from a Microsoft Excel master. For instance, my Excel file has these columns: Item Title, Item Description, Item Dimensions, Notes, Created Date. From there, I have to re-type the information underneath a picture of the item in PowerPoint (or Publisher) in order to present it to the client. So I'm re-typing the item name, description, dimensions, etc. I'm also reformatting slides each time I do this. I know there's a way to streamline this process - to build a PowerPoint and/or something in Publisher that will bring in the data needed based on a merge (or maybe a macro) - but I haven't been able to figure out how. Any suggestions?

    Read the article
