Search Results

Search found 14745 results on 590 pages for 'setting'.


  • On setting up Apache and IIS to share the same IP

    - by miCRoSCoPiCeaRthLinG
    Hello, there are two different web apps running on two (physically) different servers on our network - one on IIS and the other on Apache. Both listen on port 80, since the two machines are reachable at different IPs on our internal network. Now I want to expose both of these services to the world. My idea is to route each incoming connection to the appropriate server based on the subdomain the user requests. Example:

        xxx.domain.com  maps to IIS    (internal IP: 1.2.3.4)
        yyy.domain.com  maps to Apache (internal IP: 5.6.7.8)

    To the world, both servers will share the same public IP. What kind of configuration am I looking at, and how do I go about trapping the subdomain requests and redirecting them to the appropriate server? Thanks, m^e
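
    One common answer (not from the original post) is to point the public IP at a single front-end Apache and let name-based virtual hosts reverse-proxy by Host header. A minimal sketch, assuming mod_proxy and mod_proxy_http are enabled and the internal IPs are as above:

        # front-end Apache, answering for both names on the public IP
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName xxx.domain.com
            ProxyPreserveHost On
            ProxyPass        / http://1.2.3.4/
            ProxyPassReverse / http://1.2.3.4/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName yyy.domain.com
            ProxyPreserveHost On
            ProxyPass        / http://5.6.7.8/
            ProxyPassReverse / http://5.6.7.8/
        </VirtualHost>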

  • setting up a proxy to mirror an SSH SOCKS connection

    - by aresnick
    I have two remote machines, remote1 and remote2. remote2 is only running sshd, and I can't run anything else on it. remote1 is a full-fledged server to which I have complete access. From remote1 I can open a SOCKS proxy to remote2 via

        ssh -f -N -D *:8080 me@remote2

    which exposes a SOCKS proxy on port 8080 of remote1. I'd like to authenticate this so that the proxy isn't sitting open. How can I do that? It seems like I should be able to use DeleGate, but I can't even get its HTTP proxy functionality working. When I run

        delegated -r -P8081 SERVER=http PERMIT="*:*:*" REMITTABLE="*"

    it doesn't respond on port 8081 at all. Anyway, I was hoping someone could point me in the right direction for authenticating access to the SOCKS proxy. That is, I want to point my browser's proxy at remote1 and browse the internet through the SSH SOCKS tunnel to remote2. Squid doesn't support a SOCKS parent =( Thanks!
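
    One workaround (an assumption, not from the thread): skip a separate proxy authenticator entirely and let SSH itself gate access, by binding the SOCKS port to loopback on remote1 and having each user tunnel in. A sketch:

        # on remote1: expose the SOCKS proxy on loopback only
        ssh -f -N -D 127.0.0.1:8080 me@remote2

        # on each client: reach it through an authenticated SSH session to remote1
        ssh -f -N -L 8080:127.0.0.1:8080 me@remote1
        # then point the browser's SOCKS proxy at localhost:8080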

  • Setting up a purely Node.js http server on port 80

    - by Luke Burns
    I'm using a fresh install of CentOS 5.5. I have Node installed and working (I'm just using Node - no Apache or nginx), but I cannot figure out how to make a simple server work on port 80. Node is running and listening on port 80, using the demo app:

        var http = require('http');
        http.createServer(function (req, res) {
          res.writeHead(200, {'Content-Type': 'text/plain'});
          res.end('Hello World\n');
        }).listen(80, "x.x.x.x");
        console.log('Server listening to port 80.');

    When I visit my IP, it does not work. I obtained the IP address using ifconfig, and I've tried different ports, so there must be something I'm missing. What do I need to configure on my server to make this work? I would like to do it without installing Apache or nginx. Luke

    Edit: I installed nginx and started it up, to see whether the problem is related to Node, and I don't see its welcome page either. So it definitely has something to do with the server. Am I retrieving the IP address correctly by running ifconfig and reading the inet addr under eth0?
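
    A likely culprit on a stock CentOS 5.5 box (an assumption - the post doesn't confirm it) is the default iptables firewall, which typically only accepts SSH. Something like the following would show and fix that:

        # is port 80 being dropped?
        sudo iptables -L -n

        # open it and persist the rule set
        sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        sudo service iptables save

    Passing a specific address to .listen() also restricts which interface Node binds to; calling .listen(80) with no host binds all interfaces, which is easier to test against.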

  • setting up multi-mon windows + mac

    - by psychotik
    OK, this is a stretch, but I know of some Windows-only software that does this, so I'm curious whether there is something that does it on the Mac too. I have 2 monitors and 2 development machines - a PC and a Mac Mini. I want to be able to view/control both concurrently (one on each monitor). The Mac only has DVI out; the PC and my monitors have both. Any suggestions on how I can do it? A solution involving a KVM or software (VNC/remoting) is OK... Thanks!

  • Setting up Windows SBS 2008 network on Xen

    - by samyboy
    I'm trying to install a Windows SBS 2008 server in a Xen environment. The OS boots fine; unfortunately, I can't figure out how to get the network settings working. Dom0 is a Debian Lenny box hosting around 10 virtual servers. Here are the settings I'm using in the hosted Windows SBS:

        IP address:   10.20.0.8
        Network mask: 255.255.0.0
        Gateway:      10.20.0.1

    Note that during the installation stage, Windows set the netmask to 255.255.255.0 without letting me choose. Gross. Windows SBS tells me I have a "limited connection". I can't ping the gateway or any other IP except localhost and its own IP (10.20.0.8). Here is the Xen config file:

        kernel = '/usr/lib/xen-3.2-1/boot/hvmloader'
        builder = 'hvm'
        memory = '4096'
        device_model = '/usr/lib/xen-3.2-1/bin/qemu-dm'
        acpi = 1
        apic = 1
        pae = 1
        vcpus = 1
        name = 'winexchange'

        # Disks
        disk = [ 'phy:/dev/wnghosts/exchange-disk,ioemu:hda,w',
                 'file:/mnt/freespace/ISO/DVD1_Installation.iso,ioemu:hdc:cdrom,r' ]

        # Networking
        vif = [ 'mac=00:16:3E:0A:D0:1B, type=ioemu, bridge=xenbr0' ]

        # Video
        stdvga = 0
        serial = 'pty'
        ne2000 = 0

        # Behaviour
        boot = 'c'
        sdl = 0

        # VNC
        vfb = [ 'type=vnc' ]
        vnc = 1
        vncdisplay = 1
        vncunused = 1
        usbdevice = 'tablet'

    This config works with other Windows XP domUs. I tried changing the ne2000 value between 0 and 1 with no effect. I am far from having good Windows administration skills, so I definitely need some help on this one. Thanks.
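
    A couple of dom0-side checks that often narrow this down (generic Xen debugging, not from the thread): confirm the guest's vif is actually attached to the bridge, and that the bridge itself carries the expected subnet:

        brctl show            # xenbr0 should list this domU's vif
        ip addr show xenbr0   # does the bridge sit on the 10.20.0.0/16 network?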

  • Setting Up Multiple Domains (plus wildcard subdomains) to Point to the Same Site/VirtualHost

    - by Derek Reynolds
    I have my primary domain set up already, with wildcard subdomains: username.maindomain.com and maindomain.com. I want to provide my users with additional domains that they can select: additional1.com, additional2.com, additional3.com... These additional domains also need to support wildcard subdomains (since the subdomain routes to a username). Does anyone know how to properly configure this in DNS and in the VirtualHost config? Currently I have the additional domains as A records pointing to the same IP as my main domain (with a wildcard-subdomain A record for each as well), and in my VirtualHost config I am placing the additional domain names in the ServerAlias directive. Let me know if any more detail is needed.
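
    For what it's worth, the setup described should work; a sketch of the two halves, with placeholder names and address (assumptions, not the poster's actual zone):

        ; DNS - each extra domain and its wildcard point at the same address
        additional1.com.     IN  A  203.0.113.10
        *.additional1.com.   IN  A  203.0.113.10

        # Apache - one vhost answers for every name
        <VirtualHost *:80>
            ServerName  maindomain.com
            ServerAlias *.maindomain.com
            ServerAlias additional1.com *.additional1.com
            ServerAlias additional2.com *.additional2.com
        </VirtualHost>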

  • Setting lusca and dansguardian iptables on Ubuntu 12.04 to prevent loop

    - by Heri YT
    I have a server running Ubuntu 12.04 that acts as a Lusca proxy cache and also runs DansGuardian as an internet content filter, in this chain: client browser - Lusca - DansGuardian - internet. All of this runs on one machine only. Here is part of my Lusca configuration:

        http_port 3128 transparent
        cache_peer 192.168.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default

    and the relevant DansGuardian settings, which are the defaults:

        filterip =          # left blank
        filterport = 8080
        proxyip = 192.168.0.1
        proxyport = 3128

    My questions: can this all work reliably on a single machine? What causes the "WARNING: Forwarding loop detected for:" messages found in /var/log/lusca/cache.log? Is it a problem if I just leave them? And how do I fix them? Thank you.
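
    A guess at the loop (an assumption based on the two snippets above): the two proxies point at each other - Lusca's cache_peer line hands every request to DansGuardian on 8080, and DansGuardian's proxyip/proxyport hand it straight back to Lusca on 3128. If the intended chain is client -> DansGuardian (8080) -> Lusca (3128) -> internet, Lusca should go direct:

        # in the Lusca config: drop the peer that points back at DansGuardian,
        # so Lusca fetches from the internet itself
        #cache_peer 192.168.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default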

  • Word doesn't want to open hyperlinks, and I can't find the policy setting

    - by michaelb958
    So I have a Surface RT (running Windows RT 8.1), and I have some Word documents with links in them. The thing is, they don't work. When I try to activate one, I get an error dialog about a policy restriction instead (the screenshot hasn't survived). It's kind of annoying. This is a personal device, so I am the organisation - and after much spelunking and web-searching, I still can't find the relevant policy, which means I can't change it. Is it talking about Group Policy or something else entirely? Is this a[nother] Windows RT limitation, some obscure switch I haven't found yet, or...?

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over HTTPS, and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Windows 2000 server running IIS 6. About half our services are located there; the other half take you to a Windows 2003 server, also running IIS 6. So, to achieve this, must each server have the proper certificate installed - ISA, IIS6_1, and IIS6_2? Is there a separate configuration that must be made on our ISA firewall? The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com, but the domain name on IIS6_2 is services.domainname.com. I believe this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123, as it's a good name and it's fast to get. Will I need to purchase two certificates (one for www, installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase three, the extra one being used on our firewall server? A side question is about SANs (Subject Alternative Names): is this basically adding extra host names to your cert, so I could purchase one cert with a SAN covering both www and services? Thanks a lot for your help! Please let me know if I can provide any further information.
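
    As an aside, once a certificate is installed, a generic check like this (not from the post) shows exactly which names the served certificate covers, SANs included:

        openssl s_client -connect www.domainname.com:443 < /dev/null \
          | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"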

  • Setting Up Apache as a Forward Proxy with Caching

    - by Karl
    I am trying to set up Apache as a forward proxy with caching, but it does not seem to be working correctly. Getting Apache working as a forward proxy was no problem, but no matter what I do it is not caching anything, to disk or memory. I already checked that nothing in the mods-enabled directory conflicts with mod_cache (I ended up commenting it all out), and I also tried moving all of the caching-related directives into the configuration file for mod_cache. In addition I set up logging for caching requests, but nothing is being written to those logs. Below is my Apache config; any help would be greatly appreciated!!

        <VirtualHost *:8080>
            ProxyRequests On
            ProxyVia On

            #ErrorLog "/var/log/apache2/proxy-error.log"
            #CustomLog "/var/log/apache2/proxy-access.log" common
            CustomLog "/var/log/apache2/cached-requests.log" common env=cache-hit
            CustomLog "/var/log/apache2/uncached-requests.log" common env=cache-miss
            CustomLog "/var/log/apache2/revalidated-requests.log" common env=cache-revalidate
            CustomLog "/var/log/apache2/invalidated-requests.log" common env=cache-invalidate
            LogFormat "%{cache-status}e ..."

            # This path must be the same as the one in /etc/default/apache2
            CacheRoot /var/cache/apache2/mod_disk_cache

            # This will also cache local documents. It usually makes more sense to
            # put this into the configuration for just one virtual host.
            CacheEnable disk /
            #CacheHeader on
            CacheDirLevels 3
            CacheDirLength 5

            ##<IfModule mod_mem_cache.c>
            #    CacheEnable mem /
            #    MCacheSize 4096
            #    MCacheMaxObjectCount 100
            #    MCacheMinObjectSize 1
            #    MCacheMaxObjectSize 2048
            #</IfModule>

            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from x.x.x.x
                # IP above hidden for this post

                <FilesMatch "\.(xml|txt|html|js|css)$">
                    ExpiresDefault A7200
                    Header append Cache-Control "proxy-revalidate"
                </FilesMatch>
            </Proxy>
        </VirtualHost>

    Thank you once again!
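
    One thing that stands out (an assumption based on mod_cache's documented forward-proxy behaviour, not a confirmed diagnosis): "CacheEnable disk /" matches local URL-space, while content fetched as a forward proxy is keyed by remote origin, so it may need an origin-style prefix instead, for example:

        # when acting as a forward proxy, CacheEnable can name remote sites
        # (example host; adjust or broaden as needed)
        CacheEnable disk http://www.example.org/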

  • Odd behavior of setting REMOTE_ADDR between Apache, Nginx, and AWS ELB

    - by Chris Drumgoole
    I have encountered a strange issue and am curious whether others have seen it too, and whether there is anything that can be done about it. We have a setup with multiple AWS EC2 Linux machines sitting behind an ELB. The EC2 machines are running Nginx. Let's refer to these as my production machines (because they are!). I also have a Rackspace cloud machine running Apache, completely separate. Let's call this the test server. Now, there's an ISP here in Singapore that seems to be funneling traffic through a transparent proxy or something, and when you do an IP check, the IP often changes. In fact, when I check on http://www.whatismyip.com, the IP is stable (doesn't change) across refreshes, but on http://www.whatismyipaddress.com the IP changes on each refresh! (So my ISP is doing weird stuff.) Back to my setup, I noticed a couple of things:

    1. Checking the REMOTE_ADDR variable from PHP when connecting directly to a single Nginx production machine (bypassing the load balancer), it is set to the stable IP (the one that doesn't change).
    2. Checking the REMOTE_ADDR variable from PHP when connecting to the test Apache server, it is set to the IP that changes on refreshes.
    3. Checking the headers when connecting to the Nginx production machines through the ELB, the ELB sets HTTP_X_FORWARDED_FOR to the stable IP.

    Has anyone experienced this odd behavior? Is there anything I can do? And which IP should I "trust" - the one Apache gives, or the one the ELB and Nginx give? Thanks! Chris
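
    Not an explanation of the ISP's proxying, but for completeness: behind an ELB the usual practice is to have Nginx rewrite the client address from the X-Forwarded-For header the ELB adds. A sketch using the realip module (addresses are placeholders; the module must be compiled in):

        # trust XFF only when the connection comes from the ELB's address range
        set_real_ip_from 10.0.0.0/8;
        real_ip_header   X-Forwarded-For;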

  • Setting up multiple wireless access points on same network

    - by SqlRyan
    I'd like to add wireless to my network, and I need multiple access points to cover the whole area. I'd like to set them up so that clients see only one "wireless network", and it switches them as seamlessly as possible between access points as they wander around (if that's not possible, then at least so that they don't need to set up the security by hand on each AP the first time). I've searched online, and there are quite a few sets of mixed instructions (same vs. different SSID, frequency, whether the security needs to match exactly, etc.). Can somebody who has experience doing this please let me know what they did? I imagine it's pretty simple, but there seems to be no clear-cut "yes, you can do this" online, even though I know you can. I have a mid-size LAN with about 20 workstations and two domain controllers on it. Also, I'll be doing this with consumer wireless components, if it makes a difference, not enterprise-level ones (i.e. Linksys rather than Cisco).
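
    For reference, the commonly repeated recipe for consumer gear looks like this (general practice, not from the thread; adjust to taste):

        SSID:       identical on every AP
        Security:   same type and passphrase everywhere (e.g. WPA2-PSK)
        Channels:   different, non-overlapping (1 / 6 / 11 on 2.4 GHz)
        DHCP:       enabled on the main router only, disabled on the other APs
        AP address: a unique static LAN IP per unit, for management

    Roaming is then handled by the client, which re-associates to the strongest signal; with identical SSID and security it keeps using the same saved profile.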

  • Need help setting up mail DNS records

    - by Dave
    Hi, we are hosting our web site on HostMonster, but want our email to continue to be hosted at the old site. Our domain points to the HostMonster DNS servers, but I can't figure out the right configuration for the remote email servers. We have one MX entry:

        priority: 0    domain: ourdomain.com

    and then we have these DNS entries:

        name: mail.ourdomain.com   ttl: 14400  class: IN  type: A  record: old.host.ip.address
        name: mail1.ourdomain.com  ttl: 14400  class: IN  type: A  record: old.host.secondip.address

    Can someone tell me what I need to add or edit to get mail to route correctly to our old host? Thanks, - Dave
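
    A guess at the intended zone (an assumption - it depends on which hostname the old server actually accepts mail for): the MX should point at the mail host's name rather than the bare domain, since the bare domain now resolves to HostMonster:

        ourdomain.com.        IN  MX  0   mail.ourdomain.com.
        ourdomain.com.        IN  MX  10  mail1.ourdomain.com.
        mail.ourdomain.com.   IN  A   old.host.ip.address
        mail1.ourdomain.com.  IN  A   old.host.secondip.address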

  • setting apache to read from my user's directory.

    - by Adnan
    Hello, I have CentOS 5 running Apache 2.2.3. The default document root is /var/www/html, and whatever I put in it shows when I browse to it from the web. Now I would like to create a folder www under my user bob and have all files served from that folder: /home/bob/www. When I change the document root in my httpd.conf I get a 403 error; I have even tried with virtual hosts, but the same error shows. Any ideas on what to do next?
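
    Two usual suspects for this 403 on CentOS (assumptions, since the post doesn't show the full config): the apache user can't traverse /home/bob, and SELinux labels the files as off-limits. A sketch of the checks/fixes:

        # let apache traverse the path
        chmod o+x /home/bob /home/bob/www

        # allow access in the Apache 2.2 config
        <Directory /home/bob/www>
            Order allow,deny
            Allow from all
        </Directory>

        # on CentOS, also label the tree for httpd (SELinux)
        chcon -R -t httpd_sys_content_t /home/bob/www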

  • setting visual bell to flash in iTerm

    - by blackwing
    Hi, I am using iTerm on OS X (Leopard) to ssh into a Linux machine, where I run screen to save my work between sessions. I am not a big fan of the audio bell, and I don't like screen's default "Wuff Wuff" visual bell (or any other little message shown at the bottom of the page). What I'd like instead is a flash (foreground and background colors swapped for a fraction of a second) as my visual bell. I used to use PuTTY, where this is as simple as ticking a checkbox, but I can't find such an option in iTerm. My question is: how can I set my visual bell to flash? The ideal answer would work with iTerm on the local computer, iTerm sshed to a Linux server, and iTerm sshed to a Linux server running screen.
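
    Half of the chain can at least be controlled from screen's side (a sketch; whether the terminal then flashes depends on iTerm's own bell preference):

        # ~/.screenrc on the Linux box
        vbell off        # pass the real BEL through to the terminal
        # or keep screen's visual bell but silence the message:
        # vbell on
        # vbell_msg ""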

  • Setting up my own VPN or SSH server

    - by confusedWorker
    http://lifehacker.com/#!237227/geek-to-live--encrypt-your-web-browsing-session-with-an-ssh-socks-proxy
    http://ca.lifehacker.com/5763170/how-to-secure-and-encrypt-your-web-browsing-on-public-networks-with-hamachi-and-privoxy

    If I set up my own VPN or a similar server on my always-on computer at home, these articles say I could access Gmail from my work computer. My question is: will the IT guys at work be able to notice something strange going on if I'm on Gchat at work through one of these things? (By "IT guys" I mean the two people in charge of our network at work - it's a small company.)
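
    For context, the first article's approach boils down to a single SSH SOCKS tunnel; a sketch, with placeholder host and port:

        # from the work machine, with the home box reachable over SSH
        ssh -N -D 1080 user@home.example.org
        # then set the browser's SOCKS5 proxy to localhost:1080

    Broadly speaking, what network admins can see is an encrypted SSH connection to your home IP and its traffic volume - not the contents, but the existence of the tunnel itself is visible.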

  • Need help setting up OpenLDAP on OSX Mountain Lion

    - by rjcarr
    I'm trying to get OpenLDAP manually configured on OS X Mountain Lion. I'd prefer to do it manually instead of installing OS X Server, but if that's the only option (i.e., OpenLDAP on OS X isn't meant to be used without Server) then I'll just install it. I've seen guides that mostly just say to change the password in slapd.conf and then start the server, and it should work. However, whenever I try to do anything with the client it tells me this:

        ldap_bind: Invalid credentials (49)

    I've tried encrypting the password as well as leaving it plain; it doesn't seem to matter. The version is 2.4.28, and I've read that as of 2.4 OpenLDAP uses slapd.d directories, but that doesn't seem to be the case on OS X. There was mention of an 'olcRootPW' I should use (instead of 'rootpw' in slapd.conf), but I only found that in a file named slapd.ldif. So ... I'm really confused. Has anyone gotten OpenLDAP working on OS X Mountain Lion without the server tools?
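
    Error 49 is frequently just a bind-DN mismatch rather than a server problem. A sketch of the classic slapd.conf setup (placeholder suffix; adjust to the actual directory):

        # generate a hashed password
        slappasswd
        # -> {SSHA}xxxxxxxxxxxxxxxxxxxx

        # in slapd.conf
        suffix  "dc=example,dc=com"
        rootdn  "cn=admin,dc=example,dc=com"
        rootpw  {SSHA}xxxxxxxxxxxxxxxxxxxx

        # bind with the *full* rootdn, not a bare username
        ldapsearch -x -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com"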

  • Setting Mercurial with Active Directory authentication and authorisation

    - by jbx
    I am evaluating the possibility of moving my organisation to Mercurial, but I am stumbling on two basic requirements for which I can't find proper pointers:

    1. How do I set up Mercurial's central repository to authenticate users against the central Active Directory, and only allow them to push or pull if they have the right credentials?
    2. How do I set up a Mercurial project repository to only allow users belonging to a specific group to push/pull source code? We need this for per-project authorisation.
    3. On which HTTP servers (IIS or Apache, etc.) are the above two requirements supported?

    Apologies if I am asking something obvious or missing something fundamental about how authentication and authorisation work. Thanks.
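
    One well-trodden route (an assumption, not the only option): serve the repositories through hgweb under Apache and let mod_authnz_ldap talk to AD; per-project groups then map to a Require ldap-group on each location. A sketch with placeholder names:

        <Location /hg/project1>
            AuthType Basic
            AuthName "Mercurial - project1"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.com/DC=example,DC=com?sAMAccountName" NONE
            AuthLDAPBindDN "CN=svc-hg,OU=Service Accounts,DC=example,DC=com"
            AuthLDAPBindPassword "secret"
            Require ldap-group CN=project1-devs,OU=Groups,DC=example,DC=com
        </Location>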

  • Setting up a DNS name server for a mass virtual host with Bind9

    - by Dez
    I am trying to set up a chrooted DNS name server on a local LAN, so that everyone connected to the LAN can reach the mass virtual hosts defined for a development environment without having to edit their local /etc/hosts files one by one. The mass virtual host is named example.user.dev (VirtualDocumentRoot /home/user/example) plus example.test (DocumentRoot /var/www/example). I set everything up and /var/log/syslog doesn't show any error, but when checking the DNS with

        host -v example.test

    it doesn't find the host. Using dig I get no answer either:

        dig -x example.test

        ; <<>> DiG 9.5.1-P3 <<>> -x imprimere
        ;; global options: printcmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 47844
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;imprimere.in-addr.arpa.    IN    PTR

        ;; AUTHORITY SECTION:
        in-addr.arpa.  600  IN  SOA  a.root-servers.net. dns-ops.arin.net. 2010042604 1800 900 691200 10800

        ;; Query time: 108 msec
        ;; SERVER: 80.58.0.33#53(80.58.0.33)
        ;; WHEN: Mon Apr 26 11:15:53 2010
        ;; MSG SIZE rcvd: 107

    My configuration is the following:

    /etc/bind/named.conf.local

        zone "example.test" {
            type master;
            allow-query { any; };
            file "/etc/bind/zones/master_example.test";
            notify yes;
        };

        zone "1.168.192.in-addr.arpa" {
            type master;
            allow-query { any; };
            file "/etc/bind/zones/master_1.168.192.in-addr.arpa";
            notify yes;
        };

    /etc/bind/named.conf.options (note: we have a static IP address, so I forward queries to the ISP's DNS server at that address)

        options {
            directory "/var/cache/bind";
            forwarders { 80.34.100.160; };
            auth-nxdomain no;
            listen-on-v6 { any; };
        };

    /etc/bind/zones/master_example.test

        $ORIGIN example.test.
        $TTL 86400
        @   IN  SOA  example.test. root.example.test. (
                201004227  ; serial
                28800      ; refresh
                14400      ; retry
                3600000    ; expire
                86400 )    ; min
        ; TXT "example.test, DNS service"
        @   IN  NS   example.test.
        localhost      A      127.0.0.1
        example.test.  A      192.168.1.52
        example        A      192.168.1.52
        www            CNAME  example.test.

    /etc/hosts

        127.0.0.1     localhost example
        192.168.1.52  localhost example example.test

    /etc/resolv.conf (for Bind I just added the last three lines)

        nameserver 80.58.0.33
        nameserver 80.58.61.250
        nameserver 80.58.61.254
        search example.test
        search example
        nameserver 192.168.1.52
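
    One detail worth flagging (an observation, not from the thread): dig -x performs a reverse (PTR) lookup, which is why the query above went to in-addr.arpa and was answered by the ISP resolver (80.58.0.33) rather than the new server. To test the zone itself, query the server directly:

        dig @192.168.1.52 example.test A
        host example.test 192.168.1.52

    Also, resolv.conf lists the ISP resolvers first; the local 192.168.1.52 entry would only be tried if the earlier ones fail to respond.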

  • Setting a Static IP Running FreeBSD8 in VirtualBox hosted on Windows 7

    - by gvkv
    I'm using VirtualBox on Windows 7 (host) to run a FreeBSD (guest) based web server. I've assigned a static IP of 192.168.80.1 to the (virtualized) NIC, which runs in bridged mode. The problem is that when I ping an external server (such as google.com) I get a "No route to host" error:

        dimetro# ping google.com
        PING google.com (66.249.90.104): 56 data bytes
        ping: sendto: No route to host
        ...

    I can ping the BSD server from both another virtualized machine and my host machine, and from the server I can ping everything on the local network. The router's IP is 192.168.1.1/16.

    ADDENDUM: I have the following lines in /etc/rc.conf on the BSD VM to configure networking:

        defaultrouter="192.168.1.1"
        ifconfig_em0="inet 192.168.80.1 netmask 255.255.0.0"
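
    A few guest-side checks that usually localise this (generic FreeBSD debugging, not from the thread):

        netstat -rn           # is the default route to 192.168.1.1 installed?
        ping 192.168.1.1      # is the gateway reachable at all?
        arp -a                # did the gateway answer ARP on the bridged NIC?

    If the default route is missing, running /etc/rc.d/routing restart (or rebooting) after editing rc.conf re-reads defaultrouter.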

  • Setting up a dualboot by installing cloned partitions using clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I saved using Clonezilla from different places and, to make matters worse, the Linux Mint one is formatted as LVM. I need both of these images specifically, as Windows is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't stop Clonezilla from messing up Windows' partition table when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the cloned partition is unusable and I'm not sure how to fix the partition table. Here's what I'm doing:

    1. Use GParted to make partitions: sda1 40GB ntfs (Windows), sda2 extended 70GB, sda5 lvm2 pv 69.99GB (Linux), sda3 500MB (GRUB).
    2. Clonezilla the Windows image into the sda1 partition (keeping the partition table).
    3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table).

    After all that I can boot into Windows using the default MBR. I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working or where I might be going wrong? If there is any additional detail you need please let me know and I'll edit, as this is a lot.
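
    If GRUB simply can't see the cloned LVM system, one sequence that often works from a live CD is the following sketch (volume-group and LV names are placeholders):

        sudo vgscan                     # detect the cloned volume group
        sudo vgchange -ay               # activate its logical volumes
        sudo mount /dev/vgmint/root /mnt
        sudo grub-install --root-directory=/mnt /dev/sda
        sudo chroot /mnt update-grub    # os-prober should add the Windows entry

    Note that update-grub inside the chroot may additionally need /dev, /proc, and /sys bind-mounted into /mnt first.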

  • Setting up CIFS ISO Repository for Xen

    - by user85610
    I recently started working with Xen, to try to make better use of an extra desktop box for development testing. I'd like to be able to do OS installs on it without having to burn discs, but I'm having trouble getting it to boot OS ISOs from a Windows share. My Windows box is running Windows 7 and is on a domain. I created a CIFS ISO SR in Xen, specifying the correct username and password. Xen is able to scan the share, I see the ISOs that are in the folder, and I can select them in the list in XenCenter. However, when I try to start the VM, I get:

        Error: Starting VM 'linxcentos' - INVALID_SOURCE - Unable to access a required file in the specified repository: file:///tmp/cdrom-repo-hIz-H7/isolinux/vmlinuz.

    I tried booting a different Linux ISO and got the same result. I know the ISOs are valid because I was able to install from them without issue when I tried VMware ESXi earlier. What am I missing here? It's Xen/XenCenter 6, and I'm trying to install the newest version of CentOS. I may end up burning a disc for now, but I'd like to get this working, if only on the principle of not letting mysterious behaviors go unsolved...

  • Setting up subdomain to respond on :443 with apache2

    - by compucuke
    I read through some guides on this, and I believe it is possible to have Apache serve a subdomain over SSL. I have domain.com responding on 80 and I do not need domain.com responding on 443. Rather, the only use I have for SSL is the subdomain sub.domain.com. So my site should be:

        http://domain.com
        http://www.domain.com
        https://sub.domain.com
        https://www.sub.domain.com

    My CNAME records are as follows:

        sub.domain.com    xxx.xx.xx.xxx
        *.sub.domain.com  xxx.xx.xx.xxx

    The A record exists but should not matter for the example. I set up a separate config file in sites-enabled for sub.domain.com:

        NameVirtualHost xxx.xx.xx.xxx:443

        <VirtualHost xxx.xx.xx.xxx:443>
            SSLEngine on
            SSLStrictSNIVHostCheck on
            SSLProtocol -ALL +SSLv3 +TLSv1
            SSLCipherSuite ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:-MEDIUM

            ServerAlias sub.domain.com
            DocumentRoot /usr/local/www/ssl/documents/
            SSLCertificateFile /root/sub.domain.com.crt
            SSLCertificateKeyFile /root/sub.domain.com.key

            Alias /robots.txt /usr/local/www/ssl/documents/robots.txt
            Alias /favicon.ico /usr/local/www/ssl/documents/favicon.ico
            Alias /js/libs /usr/local/www/ssl/documents/js/libs
            Alias /media/ /usr/local/www/documents/media/
            Alias /img/ /usr/local/www/ssl/documents/img/
            Alias /css/ /usr/local/www/ssl/documents/css/

            <Directory /usr/local/www/ssl/documents/>
                Order allow,deny
                Allow from all
            </Directory>

            WSGIDaemonProcess sub.domain.com processes=2 threads=7 display-name=%{GROUP}
            WSGIProcessGroup sub.domain.com
            WSGIScriptAlias / /usr/local/www/wsgi-scripts/script.wsgi

            <Directory /usr/local/www/wsgi-scripts>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

    Now, it is important to mention that https://domain.com responds with what I have running from script.wsgi above, instead of it being served on https://sub.domain.com, which does not respond. Checking https://sub.domain.com causes a 105 error. That is a DNS error, but I am convinced the DNS does not have a problem with the CNAME records; they just point to my IP. Am I trying to do something Apache cannot do?
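
    One gap that would explain the first symptom (a guess from the config above, not a confirmed diagnosis): the vhost has a ServerAlias but no ServerName, so it acts as the default vhost for every name arriving on :443, including domain.com:

        # inside the <VirtualHost xxx.xx.xx.xxx:443> block
        ServerName sub.domain.com

    The 105 error, on the other hand, is the browser failing to resolve the name at all, which points back at the public DNS records rather than at Apache.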

  • Help setting up a secondary authoritative DNS server.

    - by GLB03
    We have three authoritative DNS servers and three recursive/caching DNS servers on my campus.

    Authoritative servers:
        DNS1 - Windows 2003
        DNS2 - old Red Hat (being replaced with a newer version)
        DNS3 - Windows 2008 (I installed)

    Caching/recursive resolvers:
        Server1 - Windows 2003
        Server2 - CentOS 5.2 (I installed)
        Server3 - CentOS 5.3 (I installed)

    I am replacing DNS2 with a newer Red Hat version, but have no documentation on how it was implemented. I have set up caching and Windows authoritative servers before, but not a Linux secondary authoritative server. I have a Perl script from the original server that pulls data from our DNS1 server. We use djbdns and tinydns on our Linux servers. Our network engineer says the DNS2 server I am replacing is an authoritative server that doesn't need to do caching, but the only instructions I can find are for an authoritative server that does caching as well. Can someone point me in the right direction? I thought I was on the right track with the instructions I followed, but when I query my new DNS server I get "No response from server". I have temporarily disabled iptables to eliminate it as an issue.

        ps -aux | grep dns
        avahi     3493  0.0  0.2  2600 1272 ?      Ss   Apr24   0:05 avahi-daemon: running [newdns2.local]
        root      5254  0.0  0.1  3920  680 pts/0  R+   09:56   0:00 grep dns
        root      6451  0.0  0.0  1528  308 ?      S    Apr29   0:00 supervise tinydns
        dnslog    6454  0.0  0.0  1540  308 ?      S    Apr29   0:00 multilog t ./main
        tinydns   9269  0.0  0.0  1652  308 ?      S    Apr29   0:00 /usr/local/bin/tinydns
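
    tinydns only answers on the single address configured in its service directory's env/IP file, so a few checks along these lines usually find the silence (paths assume a stock daemontools layout; adjust if yours differs):

        cat /service/tinydns/env/IP        # must be the address being queried
        netstat -lnu | grep :53            # is anything actually bound to UDP 53?
        dig @192.0.2.10 yourzone.edu SOA   # ask the new server directly (placeholder IP and zone)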
