Search Results

Search found 19788 results on 792 pages for 'remote host'.

Page 388/792

  • OpenWRT LAN clients don't see each other

    - by Valentin Galea
    I'm a beginner at using OpenWRT but I'm loving it already. Everything is running fine (internet, wifi) on all clients except for one thing: the computers on my LAN don't see each other. Pings to them or between them give "destination host unreachable". This happens whether I use the machines' IPs or their assigned hostnames. I'm running Backfire 10.03.1 with default settings everywhere except, of course, for the ISP-specific settings on the WAN. What's going on?
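
    If the affected clients are wireless, one common culprit is client (AP) isolation in the wireless configuration; this is a hedged guess, since the question doesn't say which clients are wireless. A minimal check, assuming the stock /etc/config/wireless layout on Backfire:

      # /etc/config/wireless -- the LAN-facing access point section
      config wifi-iface
          option device   'radio0'
          option network  'lan'
          option mode     'ap'
          # if "option isolate '1'" is present, wireless clients cannot reach each other
          option isolate  '0'

    If the machines are wired, the next thing to check is whether they are all in the same subnet and firewall zone in /etc/config/network.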

    Read the article

  • ssh authentication with public-private key pair

    - by Rui Gonçalves
    Hi! I'm wondering if it's possible to authenticate the same user with different public-private key pairs on the same remote host. For all production servers, a public-private key pair has been generated for the same user and then exported to the backup server to allow ssh authentication without human intervention. However, I'm having problems on some production servers, since the password prompt is always displayed. Thanks in advance for the help, best regards!
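
    Multiple key pairs for one user are fine: sshd simply tries every key the client offers against each line of ~/.ssh/authorized_keys. A minimal sketch, assuming OpenSSH on both ends (file names and hostnames here are placeholders), including the permission check that most often explains a lingering password prompt:

      # generate a dedicated key pair for the backup job
      ssh-keygen -t rsa -f ~/.ssh/id_rsa_backup -N ""
      # install the public key for the same user on a production server
      ssh-copy-id -i ~/.ssh/id_rsa_backup.pub backup@production-server
      # select the key explicitly when connecting
      ssh -i ~/.ssh/id_rsa_backup backup@production-server
      # if the password prompt still appears, check permissions on the remote side:
      #   chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

    Running ssh -v also shows which keys are offered and why they are rejected.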

    Read the article

  • Virtual PC with Multi Monitors

    - by user41013
    Hi, I have recently installed Virtual PC and was wondering if it is possible to make use of both of my monitors with it. I have managed to get a pseudo multi-screen effect by connecting with a Remote Desktop connection, but this isn't really what I wanted. Is this possible? Thanks
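
    Virtual PC itself only exposes a single virtual display, but the Remote Desktop route can be pushed further with the client's multi-monitor switches. A hedged sketch, assuming the standard mstsc client and a VM that accepts RDP connections (the VM name is a placeholder):

      rem span one large desktop across both monitors (RDP 6.0+ client)
      mstsc /span /v:my-virtual-pc
      rem or, with the Windows 7 RDP client, use true multi-monitor support
      mstsc /multimon /v:my-virtual-pc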

    Read the article

  • Routing and ARP

    - by adolgarev
    If I have the route "ip ro 192.168.14.0/24 dev eth0", another host can obtain my MAC address. But if I flush the routing info with "ip ro flush table main", ARP resolution stops working: broadcast packets asking "Who has 192.168.14.149" reach eth0, but the OS (Linux) doesn't respond even though eth0 has the address 192.168.14.149. What connection exists between routing and ARP resolution?
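
    One plausible explanation (hedged; this depends on Linux's default source-validation behaviour): before answering an ARP request, the kernel runs the sender's address through the routing code, and once the main table is flushed there is no longer a route back to 192.168.14.0/24, so the request is silently ignored even though the target address is still present in the local table. Restoring the connected route should bring replies back:

      # re-add the connected route that the flush removed
      ip route add 192.168.14.0/24 dev eth0 src 192.168.14.149
      # the address itself never left the "local" table
      ip route show table local | grep 192.168.14.149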

    Read the article

  • Email hosting on a home Windows Server 2003 box

    - by klay
    Hi guys, I am new to server management. I have a static IP address and I recently bought a domain name, which I have configured to point at my IP address. I am running Windows Server 2003 Standard. What are the steps to host my email addresses? Do I need to buy anything else, or is what I have enough (static IP address, domain name, Windows Server 2003, Exchange Server 2003)? Thanks guys
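
    Broadly, the missing piece is DNS: the domain needs an MX record pointing at a host name that resolves to the static IP, Exchange has to accept mail for the domain (via a recipient policy in Exchange 2003), and port 25 must be reachable from the Internet. A hedged sketch of the zone records (the names and IP here are placeholders):

      ; example zone records -- substitute your own domain and static IP
      example.com.        IN  A      203.0.113.10
      mail.example.com.   IN  A      203.0.113.10
      example.com.        IN  MX 10  mail.example.com.

    Note that many ISPs block inbound port 25 on residential connections and many receiving servers check reverse DNS, so it's worth confirming both with your provider before going further.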

    Read the article

  • Where does HTTPS on Apache get its configuration?

    - by Matthew
    So originally my host (mediatemple dv) has two default directories for the roots: 1) httpdocs/ and 2) httpsdocs/. In the conf directory I changed vhosts.conf, httpd.include and others to switch from httpdocs to custom folders. Now I've installed a new SSL certificate, and https://example.com goes to the default page located in httpsdocs. I'm just wondering where Apache stores its configuration for SSL. Ideas?
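
    On a stock Apache build the SSL side is just another virtual host listening on port 443, usually pulled in from ssl.conf/httpd-ssl.conf or, on Plesk-based (dv) servers, from per-domain includes such as vhost_ssl.conf; if no 443 vhost names your site, requests fall back to the default httpsdocs root. A hedged sketch of what the 443 vhost ought to contain (paths and names are placeholders):

      <VirtualHost *:443>
          ServerName  example.com
          DocumentRoot /var/www/vhosts/example.com/my-custom-folder
          SSLEngine on
          SSLCertificateFile    /etc/ssl/certs/example.com.crt
          SSLCertificateKeyFile /etc/ssl/private/example.com.key
      </VirtualHost>

    Running httpd -S (or apachectl -S) lists every virtual host along with the file and line it came from, which is the quickest way to find where the 443 configuration lives.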

    Read the article

  • How to delete files owned by Apache?

    - by Revolter
    I've installed a CMS on a shared host running Apache. When I was deleting the root directory over FTP, some folders were left behind with a "Permission denied" error and I can't change their attributes. The best explanation I've got is that the CMS installer placed the files and assigned their ownership to the Apache server user instead of my user name (I didn't know that could happen). I haven't used the uninstaller because I've lost my admin password, so how do I delete those folders?

    Read the article

  • could not connect to server: Operation timed out

    - by JohnMerlino
    I am able to ssh into my Ubuntu server with a user name and password from the terminal. However, when I try to connect to the server using the same name and password via pgAdmin, I get the following error:

      could not connect to server: Operation timed out
      Is the server running on host "xx.xxx.xx.xxx" and accepting TCP/IP connections on port 5432?

    Why am I able to connect through the terminal but not pgAdmin?
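
    SSH and PostgreSQL are separate services on separate ports, so a working ssh login says nothing about port 5432; the timeout usually means PostgreSQL is listening only on localhost or the port is firewalled. A hedged sketch of the usual changes on a stock Ubuntu install (the client subnet is a placeholder):

      # /etc/postgresql/<version>/main/postgresql.conf
      listen_addresses = '*'        # the default is 'localhost'

      # /etc/postgresql/<version>/main/pg_hba.conf -- allow the client's network
      host    all    all    203.0.113.0/24    md5

      # open the port (if ufw is in use) and restart
      sudo ufw allow 5432/tcp
      sudo service postgresql restart

    Alternatively, you can keep 5432 closed, tunnel over the ssh connection that already works (ssh -L 5432:localhost:5432 user@server), and point pgAdmin at localhost.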

    Read the article

  • Recommend alternative to tripwire?

    - by CarpeNoctem
    I'm looking for a host-based IDS comparable to Tripwire, preferably one that allows centralized management. Right now I use Tripwire, and though it works, management and reporting through a central server would be ideal. I'm looking for recommendations that have actually been used, not just Google results. Thanks!

    Read the article

  • Multicast hostname lookups on OSX

    - by KARASZI István
    I have a problem with hostname lookups on my OS X computer. According to Apple's HK3473 document, it says for v10.6:

      Host names that contain only one label in addition to local, for example "My-Computer.local", are resolved using Multicast DNS (Bonjour) by default. Host names that contain two or more labels in addition to local, for example "server.domain.local", are resolved using a DNS server by default.

    This does not match my testing. If I try to open a connection from my local computer to a remote port:

      telnet example.domain.local 22

    then it looks up the IP address with multicast DNS alongside the A and AAAA lookups. This causes a two-second lookup timeout on every lookup, which is a lot! When I try with IPv4 only:

      telnet -4 example.domain.local 22

    it won't use multicast queries to fetch the remote address, just the plain A queries. When I try with IPv6 only:

      telnet -6 example.domain.local 22

    it looks up with multicast DNS and AAAA again, and the two-second timeout occurs again. I've tried to create resolver entries at /etc/resolver/domain.local and /etc/resolver/local.1, but neither of them worked. Is there any way to disable these multicast lookups for the "two or more labels in addition to local" domains, or simply to disable them for the selected subdomain (domain.local)? Thank you!

    Update #1: Thanks @mralexgray for the scutil --dns command; now I can see my domain in the list, but it's late in the order:

      DNS configuration
      resolver #1
        domain        : adverticum.lan
        nameserver[0] : 192.168.1.1
        order         : 200000
      resolver #2
        domain        : local
        options       : mdns
        timeout       : 2
        order         : 300000
      resolver #3
        domain        : 254.169.in-addr.arpa
        options       : mdns
        timeout       : 2
        order         : 300200
      resolver #4
        domain        : 8.e.f.ip6.arpa
        options       : mdns
        timeout       : 2
        order         : 300400
      resolver #5
        domain        : 9.e.f.ip6.arpa
        options       : mdns
        timeout       : 2
        order         : 300600
      resolver #6
        domain        : a.e.f.ip6.arpa
        options       : mdns
        timeout       : 2
        order         : 300800
      resolver #7
        domain        : b.e.f.ip6.arpa
        options       : mdns
        timeout       : 2
        order         : 301000
      resolver #8
        domain        : domain.local
        nameserver[0] : 192.168.1.1
        order         : 200001

    Maybe it would work if I could move resolver #8 up to position #2.

    Update #2: No, it probably won't work, because the local DNS server on 192.168.1.1 is answering the domain.local requests and it comes before the mDNS resolver (#2).

    Update #3: I could decrease the mDNS timeout in the /System/Library/SystemConfiguration/IPMonitor.bundle/Contents/Info.plist file, which speeds up the lookups a little, but this is not the solution.
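
    For what it's worth, the per-domain files under /etc/resolver accept more keys than just nameserver (see the resolver(5) man page), including search_order and timeout. A hedged sketch of an /etc/resolver/domain.local that pins the subdomain to the unicast server with a low search order, assuming 192.168.1.1 really is authoritative for it:

      # /etc/resolver/domain.local
      nameserver 192.168.1.1
      search_order 1
      timeout 1

    After editing, scutil --dns should show the entry again, now with the order value you set.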

    Read the article

  • Wiki/CMS with synchronization?

    - by Clinton Blackmore
    We're looking into putting up a wiki or CMS for internal use by our IT department. One of the big things we want to use it for is disaster recovery procedures. Given that a disaster, such as a power or network outage, might render the wiki inaccessible, it seems sensible to host the wiki in two places so that if one is inaccessible, we can fall back to the other. Are there any wikis or CMSes that synchronize (or an alternate way to achieve a similar end)?

    Read the article

  • Why do localhost and 127.0.0.1 resolve to different locations on Mac OS X 10.8?

    - by Greg Wiley
    I set up a local server without MAMP, for various reasons, using this tutorial: http://coolestguyplanettech.com/downtown/install-and-configure-apache-mysql-php-and-phpmyadmin-osx-108-mountain-lion I'm just wondering why the local IP and localhost resolve to two different locations. Right now the IP resolves to a virtual host I set up, and localhost resolves to the DocumentRoot established by httpd.conf.
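
    The likely mechanism (hedged, since the vhost definitions aren't shown) is that Apache matches name-based virtual hosts on the Host header: a request for http://127.0.0.1/ carries "127.0.0.1", a request for http://localhost/ carries "localhost", and whichever name isn't listed in any vhost falls back to the main server's DocumentRoot from httpd.conf. A minimal sketch that makes both names land on the same site (the path is a placeholder):

      <VirtualHost *:80>
          ServerName  127.0.0.1
          ServerAlias localhost
          DocumentRoot "/Users/greg/Sites/mysite"
      </VirtualHost>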

    Read the article

  • 550 relay not permitted

    - by Nick Swan
    We are using FogBugz on our server to handle customer support emails. Occasionally we get errors back when sending emails which say: 550 relay not permitted. This seems to happen at random, though; sometimes sending an email to a person works, and the next time an email to the same person bounces back. I've tried setting up reverse DNS with the server host and creating the SPF record in GoDaddy, but we still get some of these errors. Is there anything else I can do, and is there a way of testing that these are actually configured correctly?
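
    The SPF and reverse DNS records can at least be verified from the outside, which narrows things down. A hedged sketch of the usual checks (domain and IP are placeholders):

      # does the SPF TXT record exist and include the sending server?
      dig +short TXT example.com
      # does the sending IP have reverse DNS?
      dig +short -x 203.0.113.10
      # talk to the recipient's mail server directly to see the rejection first-hand
      telnet mx.recipient-domain.com 25

    A "550 relay not permitted" is issued by the receiving (or relaying) server, so the full bounce message, which names the server that rejected the mail, is the other thing worth collecting.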

    Read the article

  • Set up a VPN server with a static IP, NAT and port forwarding

    - by Lorenzo Pistone
    I have a Debian VPS with an assigned static IP. I want to set up a virtual interface (with an IP like 192.168..) on my laptop (Ubuntu, dynamic IP) linked to the remote VPS, so that if I bind sockets to it, the traffic is tunnelled to the VPS and sent to the Internet from there. Port forwarding must be available. This looks like a job for VPN software, but I'm clueless about the pieces of software needed. Any advice?
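
    OpenVPN is one common fit: the VPS runs the server, the laptop's tun interface gets a tunnel address, and iptables on the VPS handles both the outbound NAT and any inbound port forwards. A hedged sketch of the VPS-side rules, assuming a 10.8.0.0/24 tunnel subnet, a client at 10.8.0.6 and a public interface eth0 (all placeholders), with IP forwarding enabled:

      # NAT the tunnel subnet out to the Internet
      iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
      # forward an inbound port on the VPS's public address to the laptop's tunnel address
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 -j DNAT --to-destination 10.8.0.6:8080
      iptables -A FORWARD -p tcp -d 10.8.0.6 --dport 8080 -j ACCEPT
      # make sure forwarding is on
      echo 1 > /proc/sys/net/ipv4/ip_forward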

    Read the article

  • Has anyone achieved true differential sync with rsync in ESXi?

    - by Julius
    Berate me later on the fact that I'm using the service console to do anything in ESXi... I've got a working rsync binary (v3.0.4) that I can use in ESXi 4.1U1. I tend to use rsync over cp when copying VMs or backups from one local datastore to another local datastore. I've used rsync to copy data from one ESXi box to another, but that was just for small files. I'm now trying to do true differential syncs of backups taken via ghettoVCB between my primary ESXi machine and a secondary one. But even when I do this locally (one datastore to another datastore on the same ESXi machine), rsync appears to copy the files in their entirety. I've got two VMDKs totalling 80GB in size, and rsync still takes anywhere between 1 and 2 hours, but the VMDKs aren't growing that much daily. Below is the rsync command I'm executing. I am copying locally because ultimately these files will get copied onto a datastore created from a LUN on a remote system. It's not an rsync that'll be serviced by an rsync daemon on a remote system.

      rsync -avPSI VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/ --stats --itemize-changes --existing --modify-window=2 --no-whole-file

      sending incremental file list
      >f..t...... VM-flat.vmdk
         42949672960 100%   15.06MB/s    0:45:20 (xfer#1, to-check=5/6)
      >f..t...... VM.vmdk
                 556 100%    4.24kB/s    0:00:00 (xfer#2, to-check=4/6)
      >f..t...... VM.vmx
                3327 100%   25.19kB/s    0:00:00 (xfer#3, to-check=3/6)
      >f..t...... VM_1-flat.vmdk
         42949672960 100%   12.19MB/s    0:56:01 (xfer#4, to-check=2/6)
      >f..t...... VM_1.vmdk
                 558 100%    2.51kB/s    0:00:00 (xfer#5, to-check=1/6)
      >f..t...... STATUS.ok
                  30 100%    0.02kB/s    0:00:01 (xfer#6, to-check=0/6)

      Number of files: 6
      Number of files transferred: 6
      Total file size: 85899350391 bytes
      Total transferred file size: 85899350391 bytes
      Literal data: 2429682778 bytes
      Matched data: 83469667613 bytes
      File list size: 129
      File list generation time: 0.001 seconds
      File list transfer time: 0.000 seconds
      Total bytes sent: 2432530094
      Total bytes received: 5243054

      sent 2432530094 bytes  received 5243054 bytes  295648.92 bytes/sec
      total size is 85899350391  speedup is 35.24

    Is this because ESXi is itself making so many changes to the VMDKs that, as far as rsync is concerned, the entire file has to be retransmitted? Has anyone actually achieved true differential sync with ESXi?
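
    Reading the stats, the delta algorithm did engage: only about 2.4 GB of literal data was sent against an 85 GB total (speedup 35.24), so the hours mostly go into reading and checksumming both copies of each 40 GB flat file and into the receiver rebuilding each destination file from scratch. A hedged variant worth trying is --inplace, which patches the destination file instead of rewriting it; note that in rsync 3.0.x it cannot be combined with -S/--sparse, so the S is dropped here:

      rsync -avPI --inplace --no-whole-file --existing --modify-window=2 --stats --itemize-changes \
          VMBACKUP_2011-06-10_02-27-56/* VMBACKUP_2011-06-01_06-37-11/

    Reading both full files is unavoidable for rsync's checksumming, so run times will still scale with the VMDK size rather than with the amount of daily change.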

    Read the article

  • IP address doesn't get passed with Squid as a reverse proxy

    - by Arcath
    I'm using Squid as a reverse proxy to host multiple web servers on one internet IP. It works fine and has been doing so for the past few months. I have just noticed that every request sent to my servers is logged as coming from the Squid server's IP address. Is there any way to make Squid pass the originating IP to the web servers?
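
    Squid normally records the client address in an X-Forwarded-For request header (controlled by the forwarded_for directive); the backends then have to read that header instead of the TCP peer address. A hedged sketch for an Apache backend (the log format name is arbitrary):

      # squid.conf -- on by default, but make sure it hasn't been disabled
      forwarded_for on

      # Apache on the backend: log the forwarded client address instead of the proxy's
      LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
      CustomLog logs/access_log proxied

    If the application itself (not just the logs) needs the real address, modules such as mod_rpaf or mod_remoteip can rewrite the connection address from the header.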

    Read the article

  • FTP not listing files behind firewall (setsockopt (ignored): Permission denied)

    - by KennyDs
    We are developing a Magento application that has a module that works with FTP. Today we deployed this on the testing environment, which is set up in the following way: a gateway server which has the following iptables rules:

      # iptables -L -n -v
      Chain INPUT (policy ACCEPT 2 packets, 130 bytes)
       pkts bytes target     prot opt in     out     source               destination
          0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0
        165 13720 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED

      Chain FORWARD (policy ACCEPT 7 packets, 606 bytes)
       pkts bytes target     prot opt in     out     source               destination
          0     0 ACCEPT     all  --  eth1   eth0    0.0.0.0/0            0.0.0.0/0            state RELATED,ESTABLISHED
         15   965 ACCEPT     all  --  eth0   eth1    0.0.0.0/0            0.0.0.0/0
          0     0 REJECT     all  --  eth1   eth1    0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable

      Chain OUTPUT (policy ACCEPT 126 packets, 31690 bytes)
       pkts bytes target     prot opt in     out     source               destination

    These are set at runtime via the following bash script:

      #!/bin/sh
      PATH=/usr/sbin:/sbin:/bin:/usr/bin

      #
      # delete all existing rules.
      #
      iptables -F
      iptables -t nat -F
      iptables -t mangle -F
      iptables -X

      # Always accept loopback traffic
      iptables -A INPUT -i lo -j ACCEPT

      # Allow established connections, and those not coming from the outside
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT

      # Allow outgoing connections from the LAN side.
      iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

      # Masquerade.
      iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

      # Don't forward from the outside to the inside.
      iptables -A FORWARD -i eth1 -o eth1 -j REJECT

      # Enable routing.
      echo 1 > /proc/sys/net/ipv4/ip_forward

    The gateway server is connected to the WAN via eth1 and to the internal network via eth0. One of the servers behind it has the following problem when trying to list files over FTP:

      $ ftp -vd myftpserver.com
      Connected to myftpserver.com
      220 Welcome to MY FTP Server
      ftp: setsockopt: Bad file descriptor
      Name (myftpserver.com:magento): XXXXXXXX
      ---> USER XXXXXXXX
      331 User XXXXXXXX, password please
      Password:
      ---> PASS XXXX
      230 Password Ok, User logged in
      ---> SYST
      215 UNIX Type: L8
      Remote system type is UNIX.
      Using binary mode to transfer files.
      ftp> ls
      ftp: setsockopt (ignored): Permission denied
      ---> PORT 192,168,19,15,135,75
      421 Service not available, remote server has closed connection

    When I try listing the files in passive mode, I get the same result. When I run the same command on the gateway server, everything works fine, so I believe the issue is that the iptables rules are not forwarding properly. Does anyone have an idea which rule I need to add to make this work?
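
    Active-mode FTP needs the firewall to treat the server's incoming data connection as RELATED, and on Linux that only happens when the FTP connection-tracking helper is loaded; since the clients are also NATed, the helper's NAT companion is needed to rewrite the address in the PORT command. A hedged sketch of the usual fix on the gateway (module names vary slightly between kernel versions):

      # load the FTP conntrack and NAT helpers
      modprobe nf_conntrack_ftp    # older kernels: ip_conntrack_ftp
      modprobe nf_nat_ftp          # older kernels: ip_nat_ftp
      # make them persistent across reboots, e.g. on Debian:
      echo nf_conntrack_ftp >> /etc/modules
      echo nf_nat_ftp >> /etc/modules

    Once the modules are loaded, the RELATED,ESTABLISHED rules already in place should start matching the data connections.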

    Read the article

  • Git init - .git: Permission Denied

    - by Gcoop
    Hi all, I am trying to initialise a git repository on my remote server over ssh. When I run

      git init

    on the server, in a folder I have write permissions to, I get the following error:

      .git: Permission denied

    Do I need to assign any other permissions on that folder to be able to create the repository? Thanks
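
    Write permission on the directory is all git init needs, so it's worth double-checking who actually owns the folder as seen by the ssh user. A hedged sketch (the path is a placeholder):

      # confirm the effective user and the directory's ownership and mode
      whoami
      ls -ld /path/to/repo
      # if it belongs to another user (root, the web server, ...), take ownership
      sudo chown -R $(whoami) /path/to/repo
      cd /path/to/repo && git init

    Note that the directory also needs the execute (search) bit for your user, which ls -ld will show.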

    Read the article

  • Intranet IP - Access from Custom Domain

    - by Alexander Wigmore
    I have set up a local intranet in my office using IIS7 (on a Windows 7 machine). Currently it can be accessed through the PC's static IP, but I would like it to be reachable internally by an easier method, e.g. by typing http://intranet (or something similar). There are over 60 PCs in the office, so individually updating the hosts file on each PC is not really ideal. We don't need it to be accessible from the outside world (i.e. we don't care for / want it to be an extranet). Any tips?
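
    If the office already has an internal DNS server (on a Windows network, typically the domain controller), a single A record there covers all 60 machines at once. A hedged sketch using dnscmd (server, zone and IP are placeholders):

      rem add an "intranet" A record in the internal zone, pointing at the IIS box
      dnscmd DC-SERVER /RecordAdd office.local intranet A 192.168.1.50
      rem clients can then browse to http://intranet or http://intranet.office.local

    If the IIS site is bound to a host header, remember to add a binding for the new name as well.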

    Read the article
