Search Results

Search found 4468 results on 179 pages for 'zone transfer'.


  • DNS configuration issues. Clients inside the network unable to resolve the DNS server's name

    - by hydroparadise
    I set up the DNS service on Ubuntu 12.04 64-bit and all appears to be well, except that my DHCP clients do not recognize my DNS server's hostname. When doing an nslookup on one of my Windows clients, I get:

        C:\Users\chad>nslookup
        Default Server:  UnKnown
        Address:  192.168.1.2

    I would expect the FQDN where UnKnown is shown. The DNS server knows itself well enough, but I think only because I have an entry in the /etc/hosts file to resolve it. There are so many places to look that I don't even know where to begin. Are there any logs I can look at? Anything would help. Places I've looked at and configured:

        /etc/bind/zones/domain.com.db
        /etc/bind/zones/rev.1.168.192.in-addr.arpa
        /etc/bind/named.conf.local

    EDIT: /etc/bind/zones/rev.1.168.192.in-addr.arpa

        @ IN SOA dns-serv1.mydomain.com [email protected]. (
              2006081401; 28800; 604800; 604800; 86400 )
          IN NS  dns-serv1.mydomain.com.
        2 IN PTR dns-serv1
        2 IN PTR mydomain.com

    EDIT 2: /etc/bind/named.conf.local

        zone "mydomain.com" {
            type master;
            file "/etc/bind/zones/mydomain.com.db";
        };
        zone "1.168.192.in-addr.arpa" {
            type master;
            file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
        };
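
    A reverse-zone detail worth knowing while debugging this: a PTR target without a trailing dot gets the zone origin appended, so "2 IN PTR dns-serv1" actually points at dns-serv1.1.168.192.in-addr.arpa. rather than the intended hostname. A minimal sketch of the usual form (the admin mailbox name is a placeholder); note also that named.conf.local above references rev.0.168.192.in-addr.arpa while the file edited is rev.1.168.192.in-addr.arpa:

        $TTL 604800
        @ IN SOA dns-serv1.mydomain.com. admin.mydomain.com. (
              2006081402 ; serial
              28800      ; refresh
              604800     ; retry
              604800     ; expire
              86400 )    ; minimum
          IN NS  dns-serv1.mydomain.com.
        2 IN PTR dns-serv1.mydomain.com.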


  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used. I'm using bind9, and here are my config files:

    /etc/bind/named.conf.local

        zone "company.int" {
            type master;
            file "/etc/bind/db.company.int";
        };

    /etc/bind/db.company.int

        ;
        $TTL 3600
        @ IN SOA company.int. company.localhost. (
              20120617 ; Serial
              604800   ; Refresh
              86400    ; Retry
              2419200  ; Expire
              604800 ) ; Negative Cache TTL
        ;
        @ IN NS company.int.
        @ IN A 127.0.0.1
        @ IN AAAA ::1
        ; CNAME
        mysql IN CNAME xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int. 3600 IN CNAME xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation times out:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!
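
    Since dig shows the full CNAME chain resolving, the timeout points at connectivity rather than DNS. A quick way to separate the two, sketched with the alias above (assumes nc is installed):

        # test raw TCP reachability on the MySQL port; if this also times
        # out, the RDS security group likely does not allow this instance
        nc -zv mysql.company.int 3306

    A common cause in this situation is an RDS security group that permits sources by security-group ID, so connections from an instance outside that group are silently dropped.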


  • Dynamically updating DNS records with NSUPDATE fails

    - by Thuy
    I've got my own nameserver, ns3.epnddns.com, and domain, epnddns.com. I wanted to try updating the records dynamically from home using nsupdate, but when I run

        nsupdate -k Kwww.epnddns.com.+157+17183.key

    I get the following errors:

        Kwww.epnddns.com.+157+17183.key:1: unknown option 'www.epnddns.com.'
        Kwww.epnddns.com.+157+17183.key:2: unexpected token near end of the file
        Kwww.epnddns.com.+157+17183.{private,key}: unexpected token

    Not sure why I get these errors, so I'll post my complete setup. I generated keys on my home PC using

        dnssec-keygen -a HMAC-MD5 -b 128 -n HOST www.epnddns.com.

    then created /var/named/, put the keys there, and chmodded them to 600. I transferred the keys to my nameserver ns3.epnddns.com, created /var/named/ there too, put the keys in it, and chmodded them to 600. I made dnskey.conf in /var/named and added:

        key www.epnddns.com. {
            algorithm hmac-md5;
            secret "my secret from the keys==";
        };

    chmodded it to 600, then in /etc/bind/named.conf.local:

        include "/var/named/dnskeys.conf";
        zone "epnddns.com" {
            type master;
            file "/etc/bind/zones/epnddns.com.zone";
            allow-transfer { myhomeip; }; // my home IP, not in the same network
            allow-update { key www.epnddns.com.; };
        };

    I restarted bind without any error messages, so it seems to be working on the nameserver at least. But on my home PC, when I try to run nsupdate, I get those error messages. Thanks in advance for any help or insightful advice.
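
    The "unknown option" error usually means the .key file no longer looks like what dnssec-keygen emits, for example if it was edited or mangled in transfer; nsupdate -k also expects the matching .private file alongside the .key file. (Note as well that the file created above is dnskey.conf while named.conf.local includes dnskeys.conf.) For reference, a sketch of the expected single-line .key format, plus an alternative that bypasses the key files entirely; the base64 secret here is hypothetical:

        www.epnddns.com. IN KEY 512 3 157 aGVsbG8gd29ybGQhIQ==

        # alternative: pass the name and secret directly
        nsupdate -y 'www.epnddns.com:aGVsbG8gd29ybGQhIQ=='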


  • Adding 2008 Server to 2008 Domain

    - by Phillip
    Hello, I'm trying to create a lab for testing before I deploy solutions. I'm not an experienced IT administrator, so I come here for help. I'm running two virtual servers on the same machine, on a local connection between the two. Their names are TSDATA1 and TSDATA2, where TSDATA1 is the domain controller. I am able to ping between the two with both "ping TSDATA1" and "ping 10.0.0.1", which is the IP address of TSDATA1; the IP address of TSDATA2 is 10.0.0.2. I'm trying to join the domain with TSDATA2 but I get this error:

        Note: This information is intended for a network administrator. If you are not your network's administrator, notify the administrator that you received this information, which has been recorded in the file C:\Windows\debug\dcdiag.txt.

        The following error occurred when DNS was queried for the service location (SRV) resource record used to locate an Active Directory Domain Controller for domain tsdata.local:

        The error was: "DNS name does not exist." (error code 0x0000232B RCODE_NAME_ERROR)

        The query was for the SRV record for _ldap._tcp.dc._msdcs.tsdata.local

        Common causes of this error include the following:

        - The DNS SRV records required to locate a AD DC for the domain are not registered in DNS. These records are registered with a DNS server automatically when a AD DC is added to a domain. They are updated by the AD DC at set intervals. This computer is configured to use DNS servers with the following IP addresses: 10.0.0.1

        - One or more of the following zones do not include delegation to its child zone: tsdata.local, local, . (the root zone)

        For information about correcting this problem, click Help.

    I've figured out it has something to do with DNS lookup, but I have no clue what to do. Can anyone help?
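
    A quick check of whether the SRV records exist at all, run from TSDATA2 against TSDATA1's DNS (a sketch, assuming the zone lives on 10.0.0.1):

        C:\> nslookup -type=SRV _ldap._tcp.dc._msdcs.tsdata.local 10.0.0.1

    If nothing comes back, the records were never registered; restarting the Netlogon service on the domain controller (net stop netlogon, then net start netlogon) normally re-registers them, provided the DC's own network adapter points at 10.0.0.1 for DNS.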


  • How to configure DNS server to forward queries about particular domain AND all of its subdomains

    - by user71061
    I have a DNS server (a Linux box with bind9) which is authoritative for some domains and forwards all other queries to my ISP's external DNS server. So far no problem. Now I want queries about certain specific domains forwarded to my internal DNS server, e.g.:

        zone "some_domain" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    So far still no problem; all works OK. But then I also want to forward some reverse DNS queries to my internal DNS, so I added:

        zone "16.172.in-addr.arpa" {
            type forward;
            forwarders { some_internal_dns_ip; };
        };

    And this doesn't work as I expect. Queries about "16.172.in-addr.arpa" (for example 1.16.172.in-addr.arpa) are resolved correctly, but reverse queries about a full address (for example 1.1.16.172.in-addr.arpa) are not. I understand that my server should use some recursive query here, but I could not configure it. I have already tried adding the following options:

        recursion yes;
        allow-recursion { 127.0.0.1; };
        allow-recursion-on { 127.0.0.1; };

    but with no success. (I have used the loopback address here because I need this functionality only for my DNS host, not for its clients.) Any suggestions?
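
    One variant worth trying, sketched with the same placeholder names: force the zone to use only the forwarder, so bind never falls back to iterative resolution (which for RFC 1918 space walks up to the public in-addr.arpa servers and fails):

        zone "16.172.in-addr.arpa" {
            type forward;
            forward only;
            forwarders { some_internal_dns_ip; };
        };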


  • Advanced DNS - Redirecting Emails to new webhost

    - by Martin
    I am not too sure if this question belongs here, but I will surely find out soon enough. I have two web hosts (not sure why it has been set up this way, but it has). I do not want the original web host to handle email, as our 500 MB of data with them is already full just hosting the website. The second web host has an unlimited data plan and was brought in so we could use it for the email accounts. Now the problem: I have reset the Advanced DNS Zone records on both accounts and I am not sure what they were before. (Silly me, I should have taken a backup of the setup beforehand, I know.) Emails were working before and going to the second host's server; now they are going to the first host, which has no email addresses set up, so all emails bounce saying the address does not exist.

        Host 1 IP: 192.185.96.110
        Host 2 IP: 27.54.88.66

    So far I have changed the Advanced DNS Zone record on Host 1 with the following:

        A Record: mail.australisinstitute.qld.edu.au - 27.54.88.66

    I have not made any changes on Host 2, and both hosts have the default MX records. If I need to provide any more information I can, but I just hope someone can decipher what I have said, haha. Cheers in advance!
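
    One thing to keep in mind: an A record alone does not steer mail, because delivery follows the MX records, so Host 1's default MX still wins. A sketch of what Host 1's zone would need so mail lands on Host 2, assuming mail.australisinstitute.qld.edu.au is the name pointed at 27.54.88.66 above:

        mail IN A     27.54.88.66
        @    IN MX 10 mail.australisinstitute.qld.edu.au.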


  • Setting up DNS using VirtualMin/WebMin

    - by Nyxynyx
    I am moving from a cPanel server to one where I've installed Virtualmin. The LAMP stack and the website files have been set up properly, and I can access the website by its IP address.

    Problem: Now it's time to point my domain mydomain.com to my new server. After reading many sites describing setting up BIND and master zones, I am pretty confused as to what to do, especially coming from a cPanel server where this is really simple to set up.

    Attempt: I tried to register my nameservers ns1.mydomain.com and ns2.mydomain.com at my domain registrar, but I am missing the IPs I need to point these nameservers to. Should I set ns1.mydomain.com to the IP address of my web server, and not register ns2.mydomain.com? When specifying the DNS for mydomain.com, I set the first one to ns1.apadment.com. On the manager/admin page of my web host provider, I am given the option to create a secondary slave DNS, which I assigned to the IP address of my server, though I am not sure how the slave DNS will copy the info from my web server. I have assigned this secondary DNS, ns.hostprovider.com, as the second DNS for mydomain.com. I tried creating a Virtual Server under Virtualmin, but it seems to mess up Apache's DocumentRoot for the site by creating and enabling a new vhost file that ends with .conf; I edited the .conf file to point DocumentRoot back to /var/www/mydomain, where it's supposed to be, instead of /user/mydomain.com. I believe the next step is to set up the zone. Virtualmin has already created a master zone with 8 different addresses (www.mydomain.com, ftp.mydomain.com, ...). Under Nameservers there are already 2 records: one is the hostname (a random name given by the host provider, ns12345.ip123-123.net), the other is the secondary slave DNS provided by the host provider. Does having BIND running on my web server make the server the master DNS? Thank you!
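
    For the registrar side, a sketch of glue records when one box does everything; both nameserver names may point at the same server IP (1.2.3.4 is a placeholder for the web server's public address), since many registrars require two nameserver names even if they resolve identically:

        ns1.mydomain.com. IN A 1.2.3.4
        ns2.mydomain.com. IN A 1.2.3.4

    The host's "secondary slave DNS" then copies the zone from your server by zone transfer (AXFR), which works once the secondary's IP is listed in the zone's allow-transfer setting. And yes: running BIND with the master zone on your web server makes that server the master DNS for the domain.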


  • High latency issue for web service call from amazon aws ec2 to local server

    - by SibzTer
    We have a legacy web application running in our data center on premises in Houston, and we have developed a new .NET 4 based web application to provide new features to customers. The new web application is hosted in the Amazon AWS EC2 environment (N. Virginia region, us-east-1b zone). To integrate seamlessly with the legacy application, the new web application makes web service calls to retrieve data. We are seeing unusually high latency, on the order of 5+ seconds, for these web service calls. The exact same web service call returns in less than a second on our local PCs (which makes sense given physical proximity to the actual server). The weird part is that we have developers in California who also see millisecond response times. We are testing the web service response using third-party tools such as SoapUI and Google Chrome extensions such as Advanced REST Client, Postman REST Client, etc. As if this wasn't weird enough, we have noticed the same low latency while testing from certain other EC2 instances in the same region and availability zone. If we experienced the high latency consistently from all the EC2 instances I could understand, but there is something else going on. Comparing the various stats and results between the low-latency and high-latency EC2 servers shows no significant differences: ping (constant 40ms), tracert, winmtr, etc. We have instances in the VPC as well, so I tried both the public and private IP addresses of the web service host server, and that didn't change the above results either. We need to resolve this latency issue, as it is causing the resulting web pages to load very slowly (almost 15+ seconds, which is simply unacceptable). The EC2 instances run Windows Server Datacenter 64-bit. Let me know if there is any other info I can provide to help diagnose this.
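
    Since ICMP looks identical from fast and slow instances, it can help to break one HTTP call into phases from both kinds of instance; a sketch with a placeholder URL (curl for Windows, or any machine on the same path):

        curl -o NUL -s -w "dns:%{time_namelookup} connect:%{time_connect} ttfb:%{time_starttransfer} total:%{time_total}\n" http://legacy.example.com/Service.svc

    A large gap between connect and ttfb points at the server or application path; a gap before connect points at DNS or TCP setup (for example MTU problems causing retransmits, which ping's small packets would never show).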


  • postgresql deleting old rows

    - by BB
    I have a PostgreSQL database which stores my RADIUS connection information. What I want is to store only a month's worth of logs. How would I craft a SQL statement, runnable from cron, that goes and deletes any rows older than a month? The date is taken from the acctstoptime column, in this format: 2010-01-27 16:02:17-05. The table in question:

        -- Table: radacct
        -- DROP TABLE radacct;
        CREATE TABLE radacct
        (
          radacctid bigserial NOT NULL,
          acctsessionid character varying(32) NOT NULL,
          acctuniqueid character varying(32) NOT NULL,
          username character varying(253),
          groupname character varying(253),
          realm character varying(64),
          nasipaddress inet NOT NULL,
          nasportid character varying(15),
          nasporttype character varying(32),
          acctstarttime timestamp with time zone,
          acctstoptime timestamp with time zone,
          acctsessiontime bigint,
          acctauthentic character varying(32),
          connectinfo_start character varying(50),
          connectinfo_stop character varying(50),
          acctinputoctets bigint,
          acctoutputoctets bigint,
          calledstationid character varying(50),
          callingstationid character varying(50),
          acctterminatecause character varying(32),
          servicetype character varying(32),
          xascendsessionsvrkey character varying(10),
          framedprotocol character varying(32),
          framedipaddress inet,
          acctstartdelay integer,
          acctstopdelay integer,
          freesidestatus character varying(32),
          CONSTRAINT radacct_pkey PRIMARY KEY (radacctid)
        )
        WITH (OIDS=FALSE);
        ALTER TABLE radacct OWNER TO radius;

        -- Index: freesidestatus
        -- DROP INDEX freesidestatus;
        CREATE INDEX freesidestatus ON radacct USING btree (freesidestatus);

        -- Index: radacct_active_user_idx
        -- DROP INDEX radacct_active_user_idx;
        CREATE INDEX radacct_active_user_idx ON radacct USING btree (username, nasipaddress, acctsessionid) WHERE acctstoptime IS NULL;

        -- Index: radacct_start_user_idx
        -- DROP INDEX radacct_start_user_idx;
        CREATE INDEX radacct_start_user_idx ON radacct USING btree (acctstarttime, username);
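
    A minimal sketch of the cleanup statement, keeping sessions that are still open (NULL acctstoptime):

        -- delete closed sessions that ended more than one month ago
        DELETE FROM radacct
        WHERE acctstoptime IS NOT NULL
          AND acctstoptime < now() - interval '1 month';

    From cron this can be run along the lines of psql -U radius -d radius -c "..." (the database name is assumed here).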


  • When pointing to new DNS servers is there any chance of E-mails being lost if the old E-mail hosting service is still up?

    - by LaserBeak
    I am changing web hosts and will be using the new host's mail servers instead of the old ones. I have created all the correctly named mailboxes on the new service, but have not yet cut ties with the old web host. I am expecting that even if the new DNS values (pointing to the new host's DNS servers and their SOA/zone file with the new MX values) have not yet propagated, and an email is directed at the old host's mail servers as per the MX records in the zone the old hosting provider holds, the email would still come through to the mailbox on the old provider's mail servers. So I am just trying to reaffirm that I have this right and that it's essentially impossible for me to lose an email, since it will hit either the old host's mail servers or the new ones. Also, is it possible to configure the same email account to check and collect mail from different mail servers by entering multiple POP3 addresses? And if I choose to keep the old web host's mail hosting as a backup, by specifying its MX records with a lower priority in the zone hosted by the new web host, is it possible to have incoming emails sent to both servers by the mail daemon so I have two copies? Or is my only option having the primary mail server somehow forward the email to the old mail server?
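
    For the backup-MX part, a sketch of how the preference values would look in the new host's zone; a lower number is tried first, and senders fall back to the higher number only when the primary is unreachable, so this gives a fallback rather than two copies (hostnames are placeholders):

        @ IN MX 10 mail.newhost.example.
        @ IN MX 20 mail.oldhost.example.

    Getting a true second copy would indeed require the primary server itself to forward or duplicate messages.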


  • Widespread misinterpretation of DNS rules in resolving wildcards

    - by Dominic Sayers
    [EDITED to add: This problem has gone away on its own. I believe Cloudflare's name resolution may have been to blame. See my own answer below.]

    Here is a snippet of my zone file:

        *.example.com.   300 IN CNAME proxy.herokuapp.com.
        foo.example.com. 300 IN A     111.111.111.111

    If I dig @8.8.8.8 foo.example.com I get the answer I expect:

        ;; ANSWER SECTION:
        foo.example.com. 30 IN A 111.111.111.111

    The same is true of all other public DNS servers I've tried. However, when I try to set up a check with Pingdom against a URL on foo.example.com, it instead sends the traffic to my Heroku app referenced by the *.example.com RR. The same is true of checks set up on New Relic and Errplane, and of traffic generated by the Heroku app itself. So on the one side, all public DNS servers interpret the zone file one way; yet four service providers all interpret it a different way, one that differs from the standard suggested by RFC 4592. My question is: are these reputable, mature service providers all wrong? Or is it little me?
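
    RFC 4592 is unambiguous on this point: a wildcard only synthesizes answers for names that do not otherwise exist, so the explicit foo.example.com A record must win. One way to take intermediate resolvers out of the picture is to query the authoritative server directly; a sketch with a placeholder nameserver name:

        dig @ns1.example.com foo.example.com A +norecurse

    If the authoritative answer is correct but the providers still hit the wildcard, the divergence lives in whatever resolver layer they share, which fits the Cloudflare suspicion in the note above.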


  • How can I set up a local nameserver and modify DNS zones on it?

    - by Joe Hopfgartner
    This is a follow-up to this question. I am having an issue with a router that doesn't support hairpinning properly; see the link above for details. Now I want to set up a local DNS server that hosts in our LAN can use to resolve public hostnames (usual web browsing). Additionally, I want to modify certain zones. In our LAN we have some servers serving resources that are not available in our public DNS zone, and we always have to configure our local LMHOSTS files accordingly. For example, we have a staging installation with a new feature running on a local web server, and we cannot access it with the IP directly because the website runs in a named virtual-host container, so we have to configure the LMHOSTS file to point some domain to the local IP address. And now we also have the hairpinning issue. So my question is: what software can I use? Will bind do the job? I just need to insert some A entries into the zone, as easily as possible. We have local Linux/Ubuntu servers.
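
    bind will do the job. A minimal sketch for Ubuntu's bind9, using a hypothetical staging name and LAN address: an authoritative override zone for the one name, with normal recursion for everything else. /etc/bind/named.conf.local:

        zone "staging.example.com" {
            type master;
            file "/etc/bind/db.staging.example.com";
        };

    /etc/bind/db.staging.example.com:

        $TTL 300
        @ IN SOA ns.example.lan. admin.example.lan. (
              1 604800 86400 2419200 604800 )
          IN NS ns.example.lan.
          IN A  192.168.0.50

    For "just some A entries", dnsmasq is the lighter alternative: it answers from entries in /etc/hosts and forwards everything else upstream, with no zone files to maintain.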


  • DNS Problems with .pt configuration

    - by Tony S.
    Hello everyone! I have a hosting service with aplus.net; however, I needed to register a .pt domain, and aplus doesn't offer this service, so I contacted a .pt registrar called hostingbug.net to do it. So now I'm the owner of a .pt domain, let's say example.pt. I gave hostingbug the aplus nameservers needed for propagation, and here the problems began. When hostingbug tried to configure it, the following error was displayed:

        <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> @64.29.151.221 click.pt. NS +norecurse
        (1 server found)
        global options: printcmd
        connection timed out
        no servers could be reached

    They told me that aplus.net needed to create a new DNS zone for the .pt domain. So I contacted aplus.net, and they didn't understand the issue, told me everything was fine with their servers, and sent me back to hostingbug. So I'm feeling like a ping-pong ball right now... How can I configure this "new DNS zone" for .pt domains? Does anyone have a clue how to do this, so I can tell them? Or should I cancel the aplus services? Thanks in advance.
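
    The dig output above is the heart of it: the registry's check simply timed out. The same check can be run from anywhere to see whether the aplus server answers authoritatively for the domain (a sketch, with the placeholder name):

        dig @64.29.151.221 example.pt. NS +norecurse

    If this also times out, the "new DNS zone" hostingbug asked for is nothing exotic: it just means a zone for example.pt must exist on the aplus nameservers before the .pt registry will delegate to them.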


  • Nginx and Gunicorn hanging on GET requests

    - by whatWhat
    I'm using Nginx + Gunicorn to serve my Django project. All GET requests hang for about a minute. The content seems to be available immediately, as I can see it in the browser inspector, but the browser itself looks like it's still waiting for more data. Here's my Nginx config:

        # allow for up to 3 connections per second
        limit_req_zone $binary_remote_addr zone=one:10m rate=3r/s;

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com/example/;

            # serve directly - analogous for static/staticfiles
            location /media/ {
                # this changes depending on your python version
                root /home/example/;
            }
            location /static/ {
                # if asset versioning is used
                if ($query_string) {
                    expires max;
                }
                root /var/www/example.com;
            }
            location / {
                # allow for a burst of 50
                limit_req zone=one burst=50 nodelay;
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_connect_timeout 10;
                proxy_read_timeout 10;
                proxy_pass http://localhost:8001/;
            }

            # what to serve if upstream is not available or crashes
            error_page 500 502 503 504 /media/50x.html;
        }

    My Gunicorn config:

        bind = "127.0.0.1:8001"
        workers = 3
        worker_class = "gevent"

    Is there anything obvious that would be causing the requests to stay open for so long?
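
    One way to narrow this down is to time Gunicorn directly and then go through Nginx; if the direct request is fast, the delay lives in the proxy or rate-limiting layer (a sketch):

        # direct to Gunicorn
        curl -o /dev/null -s -w "ttfb:%{time_starttransfer} total:%{time_total}\n" http://127.0.0.1:8001/
        # through Nginx
        curl -o /dev/null -s -w "ttfb:%{time_starttransfer} total:%{time_total}\n" http://example.com/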


  • Nginx : Proper use of limit_req_zone and limit_req

    - by xperator
    I have two websites running on a VPS. Their purpose is sharing music files and publishing news, and both of them use WordPress. What I am trying to do is prevent little hackers from flooding the web server and stressing it until it crashes. The problem is that after using limit_req_zone and limit_req, my websites became very slow; browsing the WordPress control panel takes a long, long time. I tried changing values, but it didn't improve much. I guess the problem is WordPress, because it's the only script I am using on both the front and back end. Here is the last setting, which seems more responsive than the others:

        limit_req_zone $binary_remote_addr zone=flood:5m rate=10r/m;
        location ~ \.php$ {
            limit_req zone=flood burst=100 nodelay;
        }

    What are the optimal values to use in my case (WordPress)? I want the website to behave normally while stopping lifeless people from flooding it. Another question: is it safe and sufficient to use limit_req only on PHP files?
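
    A detail that likely explains the slowness: rate=10r/m permits one request per IP every six seconds, and a single WordPress admin page fires several PHP requests in quick succession. A sketch of a less punishing baseline (the exact numbers are a judgment call, not canonical):

        limit_req_zone $binary_remote_addr zone=flood:10m rate=5r/s;
        location ~ \.php$ {
            limit_req zone=flood burst=20 nodelay;
        }

    Limiting only PHP is a reasonable choice, since static files are cheap to serve; it will not, by itself, stop someone from flooding static URLs, so a separate, looser zone can be applied globally if that becomes a problem.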


  • DNS request times out then succeeds on my local network. Why?

    - by Dan
    I have a W2K3 Server that is the Domain Controller and also the DNS server. I wanted to make another DNS zone on my network called "something.local" and then make 'A' records to point requests like 'admin.something.local' and 'www.something.local' to machines on my network. I keep getting DNS timeouts, but then after 2 tries it succeeds. Why would this happen? How can I troubleshoot? From my desktop I run:

        nslookup admin.something.local

    and get:

        Server:  server.domain.com.au.local
        Address:  192.168.0.10

        DNS request timed out.
            timeout was 2 seconds.
        DNS request timed out.
            timeout was 2 seconds.
        Name:    admin.something.local
        Address:  192.168.0.191

    If I go back the other way:

        nslookup 192.168.0.191

    I get:

        Server:  server.domain.com.au.local
        Address:  192.168.0.10

        Name:    admin.something.local
        Address:  192.168.0.191

    My DNS server address is 192.168.0.10. The new DNS zone is not hooked up to Active Directory. I do not have much experience with DNS. Yesterday it was working fine. I have tried doing an 'ipconfig /flushdns' on both my desktop and the DNS server.
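
    The pattern of two timeouts followed by a correct answer often comes from the resolver trying other names or query types first, for example the name with the client's DNS suffix appended (admin.something.local.domain.com.au.local), before the bare name succeeds. Running nslookup in debug mode shows exactly which names and types are tried; a sketch:

        nslookup -debug admin.something.local 192.168.0.10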


  • Why do HTTP loopback connections not work on my subdomains?

    - by memeLab
    I have a shared hosting account at Jumba running Linux kernel 2.6.9-103.ELsmp (don't know if that helps) with cpanel 1.0 (RC1). I am using the WordPress plugin Backup Buddy, which requires HTTP loopback connections to monitor/complete backups. This works fine on memelab.com.au, but doesn't work on any subdomain (e.g. staging.memelab.com.au). Is it possible to set up an A record or some such to remedy this? I'm aware of a workaround (setting WP_ALTERNATE_CRON), but I find it unsatisfactory due to the messy URLs. See BackupBuddy:_Frequent_Support_Issues#HTTP_Loopback_Connections_Disabled. Here is the reply from my host:

        ...as the main domain has its own separate DNS entry, it has a localhost entry which helps for loopback connections, whereas subdomains don't have a separate DNS zone, so it is not possible to create loopback connections for them.

    I have cPanel access to the 'advanced zone editor' - is there anything tricky I can do there? Maybe 127.0.0.2? (I remember reading that there were at least 8 local IPs available on (some) Linuxes.) All the A records point to the server IP, with the exception of localhost.memelab.com.au, which points to 127.0.0.1. I've just tried entering a new A record: localhost.itours.memelab.com.au pointing to 127.0.0.2. I still get the warning in Backup Buddy that loopback is not active, and cPanel won't let me enter 127.0.0.1 (guess it doesn't work like that!).

        nslookup itours.memelab.com.au
        Server:  203.88.112.33
        Address: 203.88.112.33#53

        Non-authoritative answer:
        Name:    itours.memelab.com.au
        Address: 117.55.224.177
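
    Two details may help here. First, the whole 127.0.0.0/8 block is loopback on Linux, so 127.0.0.2 is no more "available" than 127.0.0.1. Second, a loopback connection in Backup Buddy's sense is the web server requesting its own public URL, so what matters is whether the server can reach itself under the subdomain's name. On newer curl builds this can be tested from a shell, forcing the name onto the loopback address (a sketch):

        curl -v --resolve staging.memelab.com.au:80:127.0.0.1 http://staging.memelab.com.au/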


  • PTR and A record must match?

    - by somecallmemike
    RFC 1912 Section 2.1 states the following:

        Make sure your PTR and A records match. For every IP address, there should be a matching PTR record in the in-addr.arpa domain. If a host is multi-homed, (more than one IP address) make sure that all IP addresses have a corresponding PTR record (not just the first one). Failure to have matching PTR and A records can cause loss of Internet services similar to not being registered in the DNS at all. Also, PTR records must point back to a valid A record, not a alias defined by a CNAME. It is highly recommended that you use some software which automates this checking, or generate your DNS data from a database which automatically creates consistent data.

    This does not make any sense to me. Should an ISP keep matching A records for every PTR record? It seems to me that it's only important if the IP address that the PTR record describes is hosting a service that is sensitive to DNS being mismatched (such as email hosting). In that case the forward zone would be configured under a domain name (examples follow the format 'zone - record'):

        domain.tld -> mail IN A 1.2.3.4

    And the PTR record would be configured to match:

        3.2.1.in-addr.arpa -> 4 IN PTR mail.domain.tld.

    Would there be any reason for the ISP to host a forward lookup for an IP address on their network like this?

        ispdomain.tld -> broadband-ip-1 IN A 1.2.3.4
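
    The requirement is that the round trip closes: resolving the PTR target's A record should give back the original IP. A sketch of the check, using the example address above:

        # reverse: 1.2.3.4 -> name
        dig -x 1.2.3.4 +short
        # forward: the name should map back to 1.2.3.4
        dig mail.domain.tld A +short

    This is why ISPs typically do keep a generic forward name (like the broadband-ip-1 example) for every address they publish a PTR for: it lets the loop close even for addresses that host nothing special.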


  • bind9 - forwarders are not working

    - by Sarp Kaya
    I am experiencing an issue with bind. If I want to resolve any domain name that is in the zone file, it works fine. However, when I try to resolve anything that does not belong to the zone file, it fails. I know that the actual DNS servers being forwarded to are working fine, but somehow bind9 fails to use them. The content of /etc/bind/named.conf.options is:

        options {
            directory "/var/cache/bind";

            forwarders {
                131.181.127.32;
                131.181.59.48;
            };

            dnssec-validation auto;

            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

    I have also tried using only one IP address, and it still did not work. The content of /etc/bind/named.conf is:

        include "/etc/bind/named.conf.options";
        include "/etc/bind/named.conf.local";
        include "/etc/bind/named.conf.default-zones";

    So there is no problem with including the options file. Any recommendations for fixing this problem?
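
    Two things commonly bite in exactly this configuration: bind silently falling back to iterative resolution instead of using the forwarders, and dnssec-validation auto rejecting answers from forwarders that do not pass DNSSEC validation. A sketch of the options block with both addressed; change one at a time to see which one matters:

        options {
            directory "/var/cache/bind";

            forwarders {
                131.181.127.32;
                131.181.59.48;
            };
            forward only;           // never fall back to iteration
            dnssec-validation no;   // rule out validation failures

            auth-nxdomain no;       // conform to RFC1035
            listen-on-v6 { any; };
        };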


  • Exchange 2010 update timezone of all calendar items

    - by Andrew
    We are currently operating an Exchange 2010 server with Outlook 2010 clients on a ship, and today we changed time zones for the first time in quite a while. Is there any way to rebase all the calendars and/or update all the calendar items to the new time zone at the same time? I have already looked at the following tools:

        Microsoft Exchange Calendar Update Configuration Tool
        http://www.microsoft.com/en-us/download/details.aspx?id=6266
        (doesn't support Exchange 2010)

        Time Zone Data Update Tool for Microsoft Office Outlook
        http://www.microsoft.com/en-us/download/details.aspx?id=17291

    The Time Zone Data Update Tool for Microsoft Office Outlook does work for individual users, but has some serious downsides: each user needs to run it (approx. 400 users), and it only seems to work on the default account in Outlook 2010. A lot of our users have role accounts as well that we would need to run the tool on, and the only way I can find to get the tool to run on a role account is to make the role account the default account in Outlook, which is itself quite an involved process, especially with 2 or 3 role accounts. So is there a way to just change all calendar items on our Exchange server to a different time zone in one go? We are a little unique in that the whole organisation can change time zones overnight, meeting rooms and all, but surely a product as advanced as Exchange 2010 allows us to do what we need.


  • Google Apps: MX records for zonefile

    - by 23tux
    Hi everybody, I have a question about using Google Apps for handling email. I don't want to set up an entire mail system on my server, so I decided to use Google Apps. The ownership of my domain is approved, and now I'm trying to change the MX records in the zone file of my domain, but I think I'm doing it wrong; it doesn't work. I want to use mail.mydomain.com as the address of the mail server for POP, SMTP and IMAP. My zone file looks like this:

        $TTL 86400
        @   IN SOA ns1.first-ns.de. postmaster.robot.first-ns.de. (
                2011011700 ; serial
                14400      ; refresh
                1800       ; retry
                604800     ; expire
                86400 )    ; minimum
        @         IN NS    robotns3.second-ns.com.
        @         IN NS    robotns2.second-ns.de.
        @         IN NS    ns1.first-ns.de.
        @         IN A     111.111.111.111
        localhost IN A     127.0.0.1
        www       IN A     111.111.111.111
        ftp       IN CNAME www
        loopback  IN CNAME localhost
        mail      IN CNAME @
        relay     IN CNAME www
        @         IN MX 10 ALT1.ASPMX.L.GOOGLE.COM.
        @         IN MX 10 ASPMX3.GOOGLEMAIL.COM.
        @         IN MX 10 ASPMX2.GOOGLEMAIL.COM.
        @         IN MX 10 ASPMX.L.GOOGLE.COM.
        @         IN MX 10 ALT2.ASPMX.L.GOOGLE.COM.

    I hope someone can figure out what's wrong with this configuration. When I start a ping on mail.mydomain.org, I get an answer from 111.111.111.111 and not from the Google server ALT1.ASPMX.L.GOOGLE.COM.
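
    The ping result is explained by one line: mail IN CNAME @ aliases mail.mydomain.com to the zone apex, i.e. the A record 111.111.111.111. The MX records only control where inbound mail for the domain is delivered; they do not change what mail.mydomain.com resolves to. Clients on Google Apps normally talk POP/IMAP/SMTP to Google's own hostnames (pop.gmail.com, imap.gmail.com, smtp.gmail.com) rather than to a mail alias on the domain; if a custom mail hostname is wanted anyway, the Google Apps era convention was a CNAME to Google's hosting target, sketched here:

        mail IN CNAME ghs.google.com.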


  • nginx with fail2ban and mod_security

    - by Mahesh
    I forgot to update my fail2ban config for nginx; I just moved to nginx from Apache. Today I got a lot of calls from a single IP:

    - the IP tried to access login pages with POST and GET methods
    - the IP tried to use nginx as a proxy (GET http://...)
    - the IP searched the images, js, and css folders
    - the IP tried to inject -d url_allow_fopen =1 and something similar

    Most of the calls ended with 404. I got approximately 50 requests from that IP per second, so I updated my nginx config like this:

        http {
            limit_req_zone $binary_remote_addr zone=app:10m rate=5r/s;
            ...
            server {
                ...
                location / {
                    limit_req zone=app burst=50;
                }

    Will it avoid too many connections per second now? I have updated my fail2ban jail.local to support nginx, but I am confused by nginx-noscript.conf:

        [Definition]
        failregex = ^<HOST> -.*GET.*(\.php|\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    I am serving PHP with nginx. I checked Apache's noscript.conf, which has the .php extension in it too. I tested the above settings before restarting fail2ban and got thousands of IPs matched; I removed php and nothing matched. Do I need .php| in nginx-noscript.conf? Does using mod_security and fail2ban together bring any problems? When I was searching today, I came to know mod_security is available for nginx too, so I am planning to use it as well.
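
    On the noscript question: that jail exists to ban clients probing for script types the server does not serve, so on a host that legitimately serves PHP, keeping \.php in the pattern will match normal visitors, which is exactly the thousands of matches seen in the test. A sketch of the filter with PHP removed:

        # nginx-noscript.conf - .php deliberately omitted because this
        # server legitimately serves PHP
        [Definition]
        failregex = ^<HOST> -.*GET.*(\.asp|\.exe|\.pl|\.cgi|\scgi)
        ignoreregex =

    Running mod_security and fail2ban together is a common pairing: mod_security rejects the request inline, while fail2ban bans repeat offenders at the firewall, so they complement rather than conflict.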


  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process". I'm slightly confused, since logs show that at the time the system has lots of free memory (around 26 GB in one case) and is not particularly stressed in any other way. After noting a JVM crash with a similar error plus the added query "Out of swap space?", I dug a little deeper. It turns out that someone configured our zone with a 2 GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128 GB of RAM as it needs; our SAs are planning to cap this at 32 GB when they get the chance. My current thinking is that while there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it reserves the swap space). Is this thinking right, or is there some other reason I get memory allocation errors with this large amount of memory free and seemingly undersized swap space?
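
    That reading matches how Solaris virtual memory works: anonymous memory (malloc, JVM heap) must be backed by reservable swap at allocation time, so allocations can fail while physical RAM is plentiful. A sketch of inspecting and growing swap on Solaris 10, assuming a ZFS root pool named rpool:

        swap -s     # summary: reserved vs. available swap
        swap -l     # list configured swap devices
        # add a second 16 GB swap volume (hypothetical name):
        zfs create -V 16G rpool/swap2
        swap -a /dev/zvol/dsk/rpool/swap2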


  • Disk fragmentation when dealing with many small files

    - by Zorlack
    On a daily basis we generate about 3.4 million small JPEG files, and we also delete about 3.4 million 90-day-old images. To date, we've dealt with this content by storing the images in a hierarchical manner:

        /Year/Month/Day/Source/

    This hierarchy allows us to effectively delete a day's worth of content across all sources. The files are stored on a Windows 2003 server connected to a 14-disk SATA RAID6. We've started having significant performance issues when writing to and reading from the disks. This may be due to the performance of the hardware, but I suspect that disk fragmentation may be a culprit as well. Some people have recommended storing the data in a database, but I've been hesitant to do that. Another thought was to use some sort of container file, like a VHD or something. Does anyone have any advice for mitigating this kind of fragmentation?

    Additional info: the average file size is 8-14 KB. Format information from fsutil:

        NTFS Volume Serial Number :       0x2ae2ea00e2e9d05d
        Version :                         3.1
        Number Sectors :                  0x00000001e847ffff
        Total Clusters :                  0x000000003d08ffff
        Free Clusters :                   0x000000001c1a4df0
        Total Reserved :                  0x0000000000000000
        Bytes Per Sector :                512
        Bytes Per Cluster :               4096
        Bytes Per FileRecord Segment :    1024
        Clusters Per FileRecord Segment : 0
        Mft Valid Data Length :           0x000000208f020000
        Mft Start Lcn :                   0x00000000000c0000
        Mft2 Start Lcn :                  0x000000001e847fff
        Mft Zone Start :                  0x0000000002163b20
        Mft Zone End :                    0x0000000007ad2000
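
    With files of 8-14 KB against 4 KB clusters, each file spans only 2-4 clusters, so free-space and MFT fragmentation tend to matter more than per-file fragmentation. Sysinternals Contig can report on and defragment a directory tree without a full-volume pass; a sketch with hypothetical paths:

        :: analyze fragmentation of one day's tree
        contig -a -s D:\Images\2010\01\27
        :: defragment the same tree in place
        contig -s D:\Images\2010\01\27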

