Search Results

Search found 17651 results on 707 pages for 'unix domain sockets'.


  • DNS Problems with .pt configuration

    - by Tony S.
    Hello everyone! I have a hosting service with aplus.net, but I needed to register a .pt domain and aplus doesn't offer that service, so I contacted a .pt registrar called hostingbug.net to do it. So now I own a .pt domain, let's say example.pt. I gave hostingbug the aplus nameservers needed for delegation, and here the problems began. When hostingbug tried to configure the domain, the following error was displayed:

        <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5_4.2 <<>> @64.29.151.221 click.pt. NS +norecurse
        (1 server found)
        global options: printcmd
        connection timed out; no servers could be reached

    They told me that aplus.net needed to create a new DNS zone for .pt domains. So I contacted aplus.net, who didn't understand the issue, told me everything was fine with their servers, and sent me back to hostingbug. I'm feeling like a ping-pong ball right now... How can I configure this "new DNS zone" for .pt domains? Does anyone have a clue how to do this, so I can tell them? Or should I cancel my aplus services? Thanks in advance.
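
    Before bouncing between providers again, it is worth reproducing the registrar's test yourself. A minimal sketch (Python, shelling out to dig, which must be installed; the domain and nameserver IP below are the ones quoted in the post and stand in for your real values):

        import subprocess

        DOMAIN = "example.pt"          # substitute your actual domain
        NAMESERVER = "64.29.151.221"   # the hosting provider's nameserver

        # Ask the nameserver directly, without recursion, for the zone's NS
        # records. An answer means the zone exists on that server; a timeout
        # or REFUSED means the provider has not created the zone yet.
        result = subprocess.run(
            ["dig", "@" + NAMESERVER, DOMAIN, "NS", "+norecurse", "+time=5"],
            capture_output=True, text=True,
        )
        print(result.stdout or result.stderr)

    If this times out against every nameserver the provider gave you, the "new DNS zone" really does have to be created on their side; no registrar setting can fix it.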

    Read the article

  • HAProxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

        global
            maxconn 2000
            daemon

        defaults
            mode http
            clitimeout 60000
            srvtimeout 30000
            contimeout 4000
            option httpclose

        listen http_proxy
            bind [myip]:80
            mode http
            stats enable
            stats auth user:passwd
            stats uri /stats
            balance source
            option httpchk
            option forwardfor
            server host01 app1.aws.af.cm:80 maxconn 300 check
            server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs behind app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open those IPs in a web browser. The problem is that AppFog doesn't give an application a public IP of its own (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers? Example: http://freechat.eu01.aws.af.cm is a real app. HAProxy only resolves the IP for this domain, which is 46.51.204.8:80, and of course that IP will not show my application, only an error page. Sorry for my poor English.
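
    What the error page demonstrates is name-based virtual hosting: the AppFog front end routes each request by its HTTP Host header, so a request that arrives at the shared IP without the right hostname cannot be matched to an application. A small sketch that makes the difference visible (Python standard library only; the hostname is the one from the post):

        import http.client
        import socket

        HOSTNAME = "app1.aws.af.cm"   # hostname from the post
        ip = socket.gethostbyname(HOSTNAME)

        # Same IP both times; only the Host header differs.
        for host_header in (HOSTNAME, ip):
            conn = http.client.HTTPConnection(ip, 80, timeout=5)
            conn.request("GET", "/", headers={"Host": host_header})
            print("Host: %-16s -> HTTP %s" % (host_header, conn.getresponse().status))
            conn.close()

    The fix therefore has to happen in the proxy: each backend request must carry that backend's own hostname in its Host header, rather than whatever hostname the client sent.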

    Read the article

  • few basic questions on webhosting (nameservers & DNS records)

    - by claws
    I bought a domain name on name.com and I want to use free webhosting on 110mb.com. By default name.com integrates Google Apps services, and the default nameserver entries are ns1.name.com, ns2.name.com, ns3.name.com, and ns4.name.com. When I registered on 110mb.com it gave me two addresses: ns1.110mb.com and ns2.110mb.com. This is where I'm lost. The concept is that "the domain name should point to the address of the server where the website is hosted", right? Then why are there these four entries by default, and how exactly does that work? Should I remove these four and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones? I would like to keep using Google Apps; if I change these nameserver addresses, would that remove Google Apps? I especially want to use Google's email service. And I really don't understand what CNAME, MX, and the other record types are. I want to learn how all this actually works, but when I search for webhosting tutorials I can't find any fruitful results.
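
    It can help to look at the records in question directly. A minimal sketch (Python with the third-party dnspython 2.x package; example.com is a placeholder for your domain):

        import dns.resolver  # third-party: pip install dnspython

        DOMAIN = "example.com"  # placeholder; substitute your own domain

        # NS records name the servers that answer for the whole domain;
        # MX records say where the domain's mail should be delivered.
        for rtype in ("NS", "MX"):
            try:
                for rr in dns.resolver.resolve(DOMAIN, rtype):
                    print(rtype, rr.to_text())
            except dns.resolver.NoAnswer:
                print(rtype, "(no records)")

    The key point this illustrates: whichever nameservers you delegate to serve all of the domain's records, so moving the NS entries to 110mb.com means Google's MX records would have to be re-created in 110mb.com's DNS panel for mail to keep working.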

    Read the article

  • Can many addon domains slow down cPanel or create problems?

    - by Marco Demaio
    I resell hosting plans using one single cPanel account with many addon domains under it. Basically, for each new user I don't create a new cPanel account; I simply create a new addon domain and give him the necessary space. I know that this way the final user won't be able to manage his email and won't have access to all the cPanel features, but that's OK. My only question is: is there a limit on the number of addon domains that can be added to one cPanel account? My account allows unlimited addon domains, but is there anything that could slow cPanel down or create problems? In other words, is there any advice you could give me about this way of using cPanel, which is probably not the usual one? (For instance: "be aware that AWStats could run very slowly or crash", or "be aware that mail from different addon domains under the same cPanel account can cause problems", etc.) Many thanks!

    Read the article

  • Problems forwarding a zone to another DNS server

    - by sebastian nielsen
    I have an authoritative DNS server at 83.248.21.18, which is authoritative for the domain "finahemgoteborg.se". My registrar requires two DNS servers for the domain, so I want the machine 85.228.103.141 to simply forward all incoming queries for "finahemgoteborg.se" to the 83.248.21.18 server. In the BIND configuration on 85.228.103.141 I have:

        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };

    The problem is that 85.228.103.141 still responds with REFUSED when I query it, for example for the www.finahemgoteborg.se A record. How can I fix this? I do NOT want to set up a master/slave configuration, just one nameserver that forwards to another.

    Edit: the rest of named.conf:

        options {
            directory "/var/cache/bind";
            version "none";
            allow-recursion { "none"; };
            minimal-responses no;
        };
        zone "sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns1sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "ns2sebn.us.to" in {
            type master;
            file "/etc/bind/sebn.us.to";
        };
        zone "finahemgoteborg.se" in {
            type forward;
            forwarders { 83.248.21.18; };
        };
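
    When debugging a setup like this, query each server directly and compare the response codes. A minimal sketch (Python with the third-party dnspython package; the addresses and name are taken from the post):

        import dns.exception
        import dns.message
        import dns.query
        import dns.rcode  # third-party: pip install dnspython

        NAME = "www.finahemgoteborg.se"
        SERVERS = ["83.248.21.18", "85.228.103.141"]  # authoritative, forwarder

        query = dns.message.make_query(NAME, "A")
        for server in SERVERS:
            try:
                response = dns.query.udp(query, server, timeout=5)
                # rcode 0 is NOERROR; the failing server shows rcode 5, REFUSED
                print(server, dns.rcode.to_text(response.rcode()))
            except dns.exception.Timeout:
                print(server, "timed out")

    One thing worth checking in this particular config: BIND answers a forward zone through its recursive lookup path, so an allow-recursion setting that excludes the querying clients can itself produce exactly this REFUSED.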

    Read the article

  • Parking domains and avoiding so-called "search engine penalties"

    - by senthilkumar-c
    I have purchased two domains from one particular registrar and hosting from GoDaddy. Assume they are domain1.com and domain2.com, and assume my hosting IP address is 111.111.111.111. I added both domain1.com and domain2.com in my domain management control panel and gave both domains the same two nameservers at my registrar's control panel, so now both domains should show the same website. When I ping "domain1.com" or "domain2.com" the results say:

        Pinging domain1.com [111.111.111.111] with 32 bytes of data:
        Pinging domain2.com [111.111.111.111] with 32 bytes of data:

    So they both point to the same hosting IP. BUT internally I have configured IIS to point them to different folders, so different websites are shown. (My hosting plan is expensive and I intend to use the space and bandwidth for many websites.) Still, technically, all the domains point to the same IP address. Is this a bad thing? Is this what is called "domain parking"? I have read some search engine forum posts saying that two domains pointing to the same IP/website will be penalised by search engines, and also that simply "parking" domains won't attract a penalty. I don't know whether what I have done is parking or the so-called "wrong" thing. Can someone shed light on what I have done and what I should do? I don't want to be blacklisted by any search engine. P.S. I know this is not a search engine forum, but I am new to website hosting and domains and I am very weak in nearly all the technical terms and concepts relating to them. I thought this would be a good place to understand these things.
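
    The ping test above can be scripted for as many domains as you host; identical addresses are exactly what shared, name-based hosting looks like. A minimal sketch using the placeholder names from the post:

        import socket

        # Placeholder names; substitute the real domains.
        for name in ("domain1.com", "domain2.com"):
            print(name, "->", socket.gethostbyname(name))

        # Identical IPs are normal here: IIS selects the site folder from
        # the HTTP Host header of each request, not from the IP address.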

    Read the article

  • Did my registrar screw up or is this how name server propagation works?

    - by Brad
    So my company has a number of domains with a large registrar that shall go unnamed. We are making some changes to our DNS infrastructure, and the first of those is moving our secondary DNS from one server on site to four servers offsite. So we updated the nameservers for each domain at the registrar, removing the entry for the old secondary nameserver and adding the four new ones. I monitored the old secondary server for requests, and when I saw no new requests had been made for 24 hours I shut it down. That was this morning. I assumed at this point everything was good. Unfortunately, that was my mistake: I should have verified that nameservers at large were returning the correct NS records. This afternoon we were performing maintenance on our primary DNS server and shut it down, and that's when I started getting alerts from our external monitoring. I checked, and sure enough, the DNS server used there reported that the only NS record for our primary domain was the primary nameserver; the new secondary servers were not listed, and neither was the old secondary. Is it unreasonable of me to have assumed that because the update went from ns1.mydomain.com and ns2.mydomain.com to ns1.mydomain.com, ns1.backupdns.com, ns2.backupdns.com, ns3.backupdns.com, and ns4.backupdns.com in one step at the registrar, there should be no intermediate state where the only NS record was ns1.mydomain.com? Going forward, to be safe, I will obviously leave the old nameservers alone until I'm 100% sure the new ones have propagated, and only then remove them at the registrar. However, I'd still like to know if my registrar screwed up or if my expectation was unreasonable.

    Read the article

  • Windows 2008 R2 DNS can't resolve its own SOA

    - by user46742
    We have two domain controllers for our network. They both run DHCP, DNS, and AD DS, and both are VMs sitting on MS Hyper-V Server 2008 on separate physical hosts. Our primary DC went down a week ago; I promoted an already existing VM to primary DC and built a new VM for the secondary. Both DNS servers are running and the SOA is configured correctly for primary DC 1. However, when I run the Best Practices Analyzer it states that the server cannot resolve its own SOA and tells me to check the adapter configuration. I checked, and the adapters are configured properly. I also went through the DNS entries thoroughly and made sure there were no records left over from the DC that went down. NSLOOKUP resolves the domain and the primary DC fine. I also checked the firewalls on the machines, and our physical firewall, for any denied packets. Any suggestions? I appreciate any help!

    Read the article

  • Windows 7 : access denied to ONE server from ONE computer

    - by Gregory MOUSSAT
    We have a domain served by some Windows 2003 servers and several Windows 7 Pro clients. ONE client computer can't access ONE member Windows 2003 server: the other computers can access every server, and the same computer can access all the other servers. With Explorer, the message says the account is not activated. With the command line, the message says the account is locked:

        net use X: \\server\share                                     --> several seconds' delay, then an error saying the account is locked
        net use X: \\server\share /USER:current_username              --> OK
        net use X: \\server\share /USER:domain_name\current_username  --> OK

    From the same computer, the user can access the other servers. From another computer, the same user can access any server, including the one denied from the original computer. Already done: unjoined and rejoined the client to the domain; checked the logs on the server, which show nothing about the failed attempts (?!). Is there any user mapping I'm not aware of?

    Read the article

  • Some websites hosted on my server can't be reached from some places

    - by valter
    Hello. I have a problem that is causing me headaches. I have a webserver at 100tb.com running CentOS, and I have these nameservers set up:

        67.213.220.170 ns1.maisturismo.net
        67.213.220.171 ns2.maisturismo.net

    My domain is at GoDaddy. I added two host summary entries pointing to the nameserver IPs (NS1 to the first IP, NS2 to the second), then changed the nameservers of maisturismo.net to ns1.maisturismo.net and ns2.maisturismo.net. Screenshots: http://img20.imageshack.us/i/dnswm.jpg/ and, showing my DNS records for maisturismo.net, http://img137.imageshack.us/i/nameservers.jpg/. It's strange: everything looks fine, but the website is not reachable from the zend2.com proxy, or from some other places, like a friend's house that doesn't use the same ISP I do. I have another nameserver set up on the same server with the same problem: none of the websites that use it can be reached from zend2.com or from my friend's house, except one with a ".com.br" (Brazilian) domain. Do you have any idea what is causing this? I really can't imagine what the problem is... Thanks.

    Read the article

  • C: Running a Unix configure file on Windows

    - by Shiftbit
    I would like to port a few applications that I use on Linux to Windows. In particular I have been working on wdiff, a program that compares the differences between two files word by word. So far I have been able to compile the program on Windows through Cygwin, but I would like to run it natively on Windows, similar to the UnixUtils project. How would I go about porting Unix utilities to a Windows environment? My guess is to manually create the ./configure file so that I can create a proper makefile. Am I on the right track? Has anyone had experience porting GNU software to Windows?

    Update: I've compiled it in Code::Blocks and I get these errors:

        wdiff.c|226|error: `SIGPIPE' undeclared (first use in this function)
        readpipe.c:71: undefined reference to `_pipe'
        readpipe.c:74: undefined reference to `_fork'

    SIGPIPE is a Unix signal that is not supported by Windows... is there an equivalent?

        wdiff.c|1198|error: `PRODUCT' undeclared (first use in this function)

    This one comes from the configure.in file; hardcoding it would probably be the fastest solution.

    Outcome: MSYS took care of the configure problems, but MinGW couldn't solve the POSIX issues. I attempted to use pthreads as recommended by mrjoltcola, but after several hours I couldn't get it to compile or link using the provided libraries. I think if this had worked it would have been the solution I was after. Special mention to Michael Madsen for MSYS.

    Read the article

  • File format DOS/Unix/Mac code sample

    - by mac
    I have written the following method to determine whether the file in question is formatted with DOS, Mac, or Unix line endings. I see at least one obvious issue: I am hoping to find an EOL in the first pass, say within the first 1000 bytes, and this may or may not happen. I ask you to review this and suggest improvements that will harden the code and make it more generic. THANK YOU.

        new FileFormat().discover(fileName, 0, 1000);

    and then:

        public void discover(String fileName, int offset, int depth) throws IOException {
            byte[] bytes = new byte[depth];
            BufferedInputStream in = new BufferedInputStream(new FileInputStream(fileName));
            in.skip(offset);               // read(bytes, 0, depth) offsets into the array, not the file
            int count = in.read(bytes, 0, depth);
            in.close();

            boolean isDos = false;
            boolean isUnix = false;
            boolean isMac = false;
            for (int i = 0; i < count - 1; i++) {
                int thisByte = bytes[i];
                int nextByte = bytes[i + 1];
                if (thisByte == 13 && nextByte == 10) {   // CR LF -> DOS
                    isDos = true;
                    break;
                } else if (thisByte == 10) {              // lone LF -> Unix
                    isUnix = true;
                    break;
                } else if (thisByte == 13) {              // lone CR -> Mac
                    isMac = true;
                    break;
                }
            }
            if (!(isDos || isMac || isUnix) && count == depth) {
                // no line ending found yet: scan the next window of the file
                discover(fileName, offset + depth, depth);
            } else {
                // do something clever
            }
        }

    Read the article

  • segmentation fault on Unix - possible stack corruption

    - by bob
    Hello, I'm looking at a core from a process running on Unix. Usually I can work my way around a backtrace and root through it to identify a memory issue. In this case I'm not sure how to proceed. Firstly, the backtrace gives only 3 frames where I would expect a lot more, and for those frames all of the function parameters presented appear to be completely invalid; they are not what I would expect. Some pointer parameters have "Cannot access memory at address" associated with them. Would this suggest some kind of complete stack corruption? I ran the process with libumem and all the buffers were reported as clean; umem_status reported nothing either. So basically I'm stumped. What are the likely causes? What should I look for in the code, since libumem appears to have reported no errors? Any suggestions on how I can debug further? Any extra features in mdb I should consider? Thank you.

    Read the article

  • Set-Cookie error appearing in logs when deployed to Google App Engine

    - by Jesse
    I have been working on converting one of our applications to be threadsafe. When testing on the local dev app server everything works as expected; however, once the application is deployed it seems that cookies are not being written correctly. Within the logs there is an error with no stack trace:

        2012-11-27 16:14:16.879 Set-Cookie: idd_SRP=Uyd7InRpbnlJZCI6ICJXNFdYQ1ZITSJ9JwpwMAou.Q6vNs9vGR-rmg0FkAa_P1PGBD94; expires=Wed, 28-Nov-2012 23:59:59 GMT; Path=/

    Here is the block of code in question:

        # area of the code that emits the cookie
        cookie = Cookie.SimpleCookie()
        if not domain:
            domain = self.__domain
        self.__updateCookie(cookie, expires=expires, domain=domain)
        self.__updateSessionCookie(cookie, domain=domain)
        print cookie.output()

    Cookie helper methods:

        def __updateCookie(self, cookie, expires=None, domain=None):
            """
            Takes a Cookie.SimpleCookie instance and updates it with all of the
            private persistent cookie data, expiry and domain.

            @param cookie: a Cookie.SimpleCookie instance
            @param expires: a datetime.datetime instance to use for expiry
            @param domain: a string to use for the cookie domain
            """
            cookieValue = AccountCookieManager.CookieHelper.toString(self.cookie)
            cookieName = str(AccountCookieManager.COOKIE_KEY % self.partner.pid)
            cookie[cookieName] = cookieValue
            cookie[cookieName]['path'] = '/'
            cookie[cookieName]['domain'] = domain
            if not expires:
                # set the expiry date to 1 day from now
                expires = datetime.date.today() + datetime.timedelta(days=1)
            expiryDate = expires.strftime("%a, %d-%b-%Y 23:59:59 GMT")
            cookie[cookieName]['expires'] = expiryDate

        def __updateSessionCookie(self, cookie, domain=None):
            """
            Takes a Cookie.SimpleCookie instance and updates it with all of the
            private session cookie data and domain.

            @param cookie: a Cookie.SimpleCookie instance
            @param domain: a string to use for the cookie domain
            """
            cookieValue = AccountCookieManager.CookieHelper.toString(self.sessionCookie)
            cookieName = str(AccountCookieManager.SESSION_COOKIE_KEY % self.partner.pid)
            cookie[cookieName] = cookieValue
            cookie[cookieName]['path'] = '/'
            cookie[cookieName]['domain'] = domain

    Again, the libraries in use are Python 2.7 and Django 1.2. Any suggestion on what I can try?

    Read the article

  • Random-access archive for Unix use

    - by tylerl
    I'm looking for a good format for archiving entire file systems of old Linux computers.

    tar.gz: The tar.gz format is great for archiving files with Unix-style attributes, but since the compression is applied across the entire archive, the design precludes random access. If you want to access a file at the end of the archive, you have to start at the beginning and decompress the whole file (which could be several hundred GB) up to the point where you find the entry you're looking for.

    ZIP: Conversely, one selling point of the ZIP format is that it stores an index of the archive: filenames are stored separately, with pointers to the location within the archive where the data is found. If I want to extract a file at the end, I look up the position of that file by name, seek to the location, and extract the data. However, ZIP doesn't store file attributes such as ownership, permissions, symbolic links, etc.

    Other options? I've tried using squashfs, but it's not really designed for this purpose: the file format is not consistent between versions, and building the archive takes a lot of time and space. What other options might suit this purpose better?
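
    The random-access difference is easy to see from Python's standard library: a ZIP member can be opened straight from the central directory, while a compressed tar has to be streamed from the start. A minimal sketch with hypothetical archive and member names:

        import tarfile
        import zipfile

        # Hypothetical archive/member names, for illustration only.
        MEMBER = "home/user/notes.txt"

        # ZIP: the central directory is read once, then open() seeks straight
        # to the member's offset -- cheap even in a huge archive.
        with zipfile.ZipFile("backup.zip") as zf:
            data = zf.open(MEMBER).read()

        # tar.gz: there is no index; extractfile() must decompress and walk
        # every preceding entry before it reaches the one you asked for.
        with tarfile.open("backup.tar.gz", "r:gz") as tf:
            data = tf.extractfile(MEMBER).read()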

    Read the article

  • Deploying a Play! 2.0 application on an Apache server with a reverse proxy

    - by locrizak
    I'm trying to deploy my Play! 2.0 application on an Ubuntu 11.10 server, and I have been running into error after error; I hope someone can help me. I am trying to deploy the application behind a reverse proxy on Apache 2. I have enabled the Apache proxy modules and configured the proxy.conf file in mods-enabled. The vhost for my domain looks like this:

        <Directory /var/www/stage.domain.com>
            AllowOverride None
            Order Deny,Allow
            Deny from all
        </Directory>

        <VirtualHost *:80>
            DocumentRoot /var/www/stage.domain.com/web
            ServerName stage.domain.com
            ServerAdmin webmaster@stage.domain.com

            # ProxyRequests Off
            # ProxyPreserveHost On
            <Proxy *>
                Order allow,deny
                Allow from all
            </Proxy>
            # ProxyVia On
            # ProxyPass /play/ http://localhost:9000/
            # ProxyPassReverse /play/ http://localhost:9000/

            ErrorLog /var/log/ispconfig/httpd/stage.domain.com/error.log
            ErrorDocument 400 /error/400.html
            ErrorDocument 401 /error/401.html
            ErrorDocument 403 /error/403.html
            ErrorDocument 404 /error/404.html
            ErrorDocument 405 /error/405.html
            ErrorDocument 500 /error/500.html
            ErrorDocument 502 /error/502.html
            ErrorDocument 503 /error/503.html

            <IfModule mod_ssl.c>
            </IfModule>

            <Directory /var/www/stage.domain.com/web>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
            <Directory /var/www/clients/client2/web7/web>
                Options FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            # Clear PHP settings of this website
            <FilesMatch "\.ph(p3?|tml)$">
                SetHandler None
            </FilesMatch>
            # mod_php enabled
            AddType application/x-httpd-php .php .php3 .php4 .php5
            php_admin_value sendmail_path "/usr/sbin/sendmail -t -i -fwebmaster@stage.domain.com"
            php_admin_value upload_tmp_dir /var/www/clients/client2/web7/tmp
            php_admin_value session.save_path /var/www/clients/client2/web7/tmp
            # PHPIniDir /var/www/conf/web7
            php_admin_value open_basedir /var/www/clients/client2/web7/:/var/www/clients/client2/web7/web:/va$

            # add support for apache mpm_itk
            <IfModule mpm_itk_module>
                AssignUserId web7 client2
            </IfModule>

            <IfModule mod_dav_fs.c>
                # Do not execute PHP files in webdav directory
                <Directory /var/www/clients/client2/web7/webdav>
                    <FilesMatch "\.ph(p3?|tml)$">
                        SetHandler None
                    </FilesMatch>
                </Directory>
                # DO NOT REMOVE THE COMMENTS!
                # IF YOU REMOVE THEM, WEBDAV WILL NOT WORK ANYMORE!
                # WEBDAV BEGIN
                # WEBDAV END
            </IfModule>

            # <Location /play/>
            #     ProxyPass http://localhost:9000/
            #     SetEnv force-proxy-request-1.0 1
            #     SetEnv proxy-nokeepalive 1
            # </Location>

            ProxyRequests Off
            ProxyPass /play/ http://localhost:9000/
            ProxyPassReverse /play/ localhost:9000/
            ProxyPass /play http://localhost:9000/
            ProxyPassReverse /play http://localhost:9000/
            # SetEnv force-proxy-request-1.0 1
            # SetEnv proxy-nokeepalive 1
        </VirtualHost>

    This vhost file was generated by ISPConfig; I have not touched anything that was there before, only added to it. As you can see from the commented-out parts, I have tried a lot of different things based on various tutorials, but all of them have ended in an Internal Server Error, a 503, or, most often, a 502 Bad Gateway. Play starts and connects successfully to my database. When there is an error I do get a page (the Play stack-trace error pages come up), but when everything is fine I get one of the errors above. My application.conf file looks like this:

        db info .......
        application.mode=PROD
        logger.root=ERROR
        # Logger used by the framework:
        logger.play=INFO
        # Logger provided to your application:
        logger.application=DEBUG
        http.path="/play/"
        XForwardedSupport="127.0.0.1"

    And my hosts file looks like this (I have never changed or added anything to it):

        127.0.0.1 localhost
        127.0.1.1 matrix

        # The following lines are desirable for IPv6 capable hosts
        ::1     ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters

    Any insights into what I might be doing wrong, or anything I can try, please let me know! Thanks!

    Edit: Again, the reverse proxy itself works (I checked by pointing it at google.com); the failure happens when it connects to Netty. It's like Netty refuses the connection to the page.

    Edit 2: output from apachectl -S:

        _default_:8081   127.0.0.1 (/etc/apache2/sites-enabled/000-apps.vhost:10)
        *:8090 is a NameVirtualHost
            default server 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
            port 8090 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-ispconfig.vhost:10)
        *:80 is a NameVirtualHost
            default server 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
            port 80 namevhost 127.0.0.1 (/etc/apache2/sites-enabled/000-default:1)
            port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
            port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
            port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
            port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
            port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)
            port 80 namevhost stage.domain.com (/etc/apache2/sites-enabled/100-stage.domain.com.vhost:7)
            port 80 namevhost domain.com (/etc/apache2/sites-enabled/100-domain.com.vhost:7)

    Read the article

  • Too many BIND "query (cache) denied" messages: DNS attack?

    - by Jake
    BIND crashed once, and when I ran tail -f /var/log/messages I saw a massive number of log entries every second. Is this a DNS attack, or is there something wrong? Sometimes a domain appears in the logs spelled like this: dOmAin.com (mixed upper and lower case). As you can see, there is only one single domain in the logs, queried from many different IPs:

        Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#38921: query (cache) 'ns1.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.144.171#38833: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.17#42428: query (cache) 'ns2.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.146.27#37899: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#39263: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 8.0.16.170#59723: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#32903: query (cache) 'dOmAin.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 134.58.60.1#47558: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.146.34#47387: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 8.0.16.8#59392: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.19#64395: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 217.72.163.3#42190: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 83.146.21.252#22020: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 192.221.146.116#57342: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 193.203.82.66#52020: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 8.0.16.72#64317: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 80.169.197.66#31989: query (cache) 'dOmAin.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.18#47436: query (cache) 'ns2.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 74.125.189.16#44005: query (cache) 'ns1.domain2.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#50379: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 94.241.128.3#60106: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 85.132.31.10#59118: query (cache) 'domain.com/A/IN' denied
        Oct 10 02:21:26 mail named[20831]: client 212.95.135.78#27811: query (cache) 'domain.com/A/IN' denied

    /etc/resolv.conf:

        ; generated by /sbin/dhclient-script
        nameserver 4.2.2.4
        nameserver 8.8.4.4

    BIND config:

        // generated by named-bootconf.pl
        options {
            directory "/var/named";
            /*
             * If there is a firewall between you and nameservers you want
             * to talk to, you might need to uncomment the query-source
             * directive below. Previous versions of BIND always asked
             * questions using port 53, but BIND 8.1 uses an unprivileged
             * port by default.
             */
            // query-source address * port 53;
            allow-transfer { none; };
            allow-recursion { localnets; };
            //listen-on-v6 { any; };
            notify no;
        };
        //
        // a caching only nameserver config
        //
        controls {
            inet 127.0.0.1 allow { localhost; } keys { rndckey; };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        zone "localhost" IN {
            type master;
            file "localhost.zone";
            allow-update { none; };
        };
        zone "0.0.127.in-addr.arpa" IN {
            type master;
            file "named.local";
            allow-update { none; };
        };
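
    Before deciding it's an attack, it helps to summarize who is sending these queries. A minimal sketch that tallies client IPs from log lines in the format shown above:

        import re
        from collections import Counter

        # Matches the client address in lines like:
        #   ... named[20831]: client 74.125.189.18#38921: query (cache) '...' denied
        pattern = re.compile(r"client ([0-9.]+)#\d+: query \(cache\) '([^']+)' denied")

        hits = Counter()
        with open("/var/log/messages") as log:
            for line in log:
                m = pattern.search(line)
                if m:
                    hits[m.group(1)] += 1

        # A handful of IPs hammering one name at high volume suggests a
        # spoofed-source amplification attempt; a broad spread of sources
        # looks more like a stale or misconfigured delegation.
        for ip, count in hits.most_common(20):
            print(count, ip)

    Incidentally, the mixed-case dOmAin.com is consistent with resolvers that randomize query case ("0x20 encoding") for spoofing resistance, rather than anything malicious in itself.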

    Read the article

  • More poll() questions

    - by ultifinitus
    Back again! I've been doing some async socket programming with select() on Windows, and it's been working quite well. However, it only scales up to 1024 clients. poll() is the way around that limitation, and I know it works on both Linux and Unix, but it doesn't work on a Windows system, correct? I read about WSAPoll(): does it have the same functionality, and what libraries would I have to link against to use it? Can I safely raise the socket limit on Windows by increasing FD_SETSIZE? My final program will run on a Linux server, but I am testing on a Windows system right now. Should I just swap my test machine over to a Linux box? (I'm probably going to anyway.) Otherwise, what would you recommend for Windows? (Sorry for all of the questions; I am doing research on my own, I promise =D)
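
    For the Linux side, here is a minimal sketch of a poll()-based accept/echo loop. Python's select.poll wraps the same system call and is not bound by FD_SETSIZE; note it is unavailable on Windows, mirroring the portability problem in the question:

        import select
        import socket

        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 8000))
        server.listen(128)

        poller = select.poll()
        poller.register(server.fileno(), select.POLLIN)
        conns = {}  # fd -> socket

        while True:
            for fd, events in poller.poll():  # blocks until something is ready
                if fd == server.fileno():
                    conn, _ = server.accept()
                    conns[conn.fileno()] = conn
                    poller.register(conn.fileno(), select.POLLIN)
                elif events & (select.POLLIN | select.POLLHUP):
                    conn = conns[fd]
                    data = conn.recv(4096)
                    if data:
                        conn.sendall(data)  # echo back
                    else:                   # peer closed the connection
                        poller.unregister(fd)
                        conns.pop(fd).close()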

    Read the article

  • Transfer websites and domains to new server

    - by Albert
    We currently have around 40 websites and 80+ domains/subdomains in a shared 1&1 hosting package, and we just acquired a managed dedicated server, also with 1&1. Now it's time to start transferring everything over to the new server. Transferring just the websites and databases wouldn't be a problem; it would take time, but it's pretty straightforward. The problem comes with the domains. Many of the websites are accessible via subdomains of a parent domain. Ideally we would like to transfer the sites one by one, checking for each one that everything works on the new server. However, since we also need to transfer the domain so it's managed on the new server, all the websites using that domain would need to be on the new server before the domain moves, which rules out the one-by-one approach. Another issue is the downtime when transferring a domain, between the moment it stops working in the hosting package and the moment it becomes active on the new server; I believe there's nothing we can do about that. So my question is whether there's any way to do the one-by-one transfer of the websites (and their corresponding subdomains) under these circumstances. One idea I had:

        1. Say we have website A, accessible via subdomain.mydomain.com (and there are many other websites on other subdomains of mydomain.com).
        2. Transfer the files of website A to the new server.
        3. Point a test domain on the new server to website A's folder (the new server comes with a "test" domain).
        4. Test whether website A works with that test domain.
        5. In the old hosting, somehow point the real subdomain (subdomain.mydomain.com) to the new location of website A, so that users always see the same URL as before.
        6. Repeat 2-5 for every website belonging to the same domain.
        7. Once they are all working on the new server, do the actual transfer of the domain and re-create all the subdomains there.

    That way users wouldn't notice the change (except for a small downtime when the domain itself is transferred). The part I'm not sure about is step 5. Is there any way to do that so users see the original domain all the time in their browser, even for internal pages (not only the home page at subdomain.mydomain.com, but also, for example, the contact page at subdomain.mydomain.com/contact.php)? Or are we SOL and going to have to transfer everything at the same time?

    Read the article

  • .htaccess rewrite www to non-www and remove .html

    - by lester8891
    I need rewrite rules to redirect http://www. to http:// and /file.html to /file. I've tried combinations of the following, but each time one of the situations ends in a redirect loop:

        RewriteBase /
        RewriteRule ^(.*)\.html$ $1 [NC]
        RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC, L]
        RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301]

    I figure it's probably something to do with the flags, but I don't know how to fix it. Just to be clear, it needs to handle all of these situations:

        http://www.domain.co.uk            to  http://domain.co.uk
        http://www.domain.co.uk/file.html  to  http://domain.co.uk/file
        http://domain.co.uk                to  http://domain.co.uk
        http://domain.co.uk/file.html      to  http://domain.co.uk/file

    Thanks!
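
    While iterating on rules like these, a small client that follows redirects by hand makes loops obvious instead of just failing in the browser. A sketch using only Python's standard library (domain.co.uk stands in for the real site; it assumes absolute http:// Location headers, which is what mod_rewrite's [R=301] emits):

        import http.client
        from urllib.parse import urlsplit

        def trace(url, limit=10):
            # Follow Location headers manually and report a loop or the end.
            seen = []
            while len(seen) < limit:
                if url in seen:
                    print("redirect loop:", " -> ".join(seen + [url]))
                    return
                seen.append(url)
                parts = urlsplit(url)
                conn = http.client.HTTPConnection(parts.netloc, timeout=5)
                conn.request("HEAD", parts.path or "/")
                resp = conn.getresponse()
                conn.close()
                if resp.status in (301, 302, 303, 307, 308):
                    url = resp.getheader("Location")
                else:
                    print(resp.status, url)
                    return
            print("gave up after", limit, "hops")

        trace("http://www.domain.co.uk/file.html")  # placeholder domain from the post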

    Read the article

  • Exchange 2003 - how to route ALL mail (including internal) via an external SMTP gateway? (Or, domain

    - by Scandalon
    Short version: is there a way to have Exchange route all email, including mail between internal AD users that would normally be delivered directly, through an external gateway (SMTP, probably a "smart host" in Exchange nomenclature)? Longer version: I'm not an email expert/admin, or even competent. I inherited an Exchange 2003 server, and we are migrating to a web-based SaaS provider. To add to the fun, we're also (forced by deadlines) transitioning domains. What we (my boss) want is for any email sent to the new domain to have a copy sent to both domains. Getting mail sent to the new domain/provider copied/forwarded to our old domain/Exchange is easy, but we also want mail sent from the old domain to the old domain to reach the new domain as well. However, if we route all outgoing Exchange mail through the new provider's gateway, with the new domain forwarding to the old, we'd get a mail loop. The "solution" desired is for an Exchange user who sends to another Exchange user to still go via the external gateway, which would in turn deliver to the new domain, which would copy/forward back to the old domain. Is it possible? A bit of a strange request, I'm sure, and I expect that what we're attempting is DoingItWrong(tm). Any better ideas?

    Read the article

  • How can I limit the number of open AF_INET sockets?

    - by Stefano Palazzo
    Is there a way to limit the number of concurrently open AF_INET sockets (only)? If so, how do I do it, and how will the networking behave when I'm above the limit? For background: my cheap commodity router is a bit eager to detect "SYN flooding", and when it does, it crashes (and doesn't restart itself). I'm thinking that limiting concurrent connections to around 1000 should keep it from bickering.
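
    Unix has no per-address-family limit, so the usual blunt approximation is the per-process file-descriptor limit, which caps sockets along with every other descriptor the process holds. A minimal sketch (Python on Linux; the figure of 1000 comes from the post):

        import resource
        import socket

        # Cap this process at roughly 1000 file descriptors (the soft limit).
        soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
        target = 1000 if hard == resource.RLIM_INFINITY else min(1000, hard)
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

        # Past the limit, socket() fails immediately with EMFILE ("Too many
        # open files") instead of queueing, so callers must expect the error.
        sockets = []
        try:
            while True:
                sockets.append(socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        except OSError as exc:
            print("hit the limit after", len(sockets), "sockets:", exc)
        finally:
            for s in sockets:
                s.close()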

    Read the article

  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model, if you are so inclined). The standard controls are bound to the business objects in some way: for example, a text box is mapped to an object's field (the field itself is an object, so it's easy to do), and a button is mapped to a method. This is all done with a very simple and natural API; we don't have to think about binding objects, etc. It just works. Yet in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you write the code to instantiate the controls on the form yourself. For some projects there may never even be a domain model, just event handlers. I've been in and around this model for most of my career. Each approach forces you to think differently. With the Smalltalk approach your domain is smart while your GUI is dumb; with the default Visual Studio approach your GUI is smart while your domain model (if it exists) is rather anemic. Many developers I work with see value in the Smalltalk approach and try to shoehorn it into the Visual Studio environment. WPF has some dynamic binding features that make this possible, but there are limitations, and inevitably some code that belongs in the domain model ends up in the GUI classes. So, which way do you design and develop your code, and why? GUI first: user interaction is paramount. Or domain first: I need to make sure the system is correct before we put a UI on it. There are pros and cons to either approach: the domain model fits in there with crystal cathedrals and pie in the sky, while the GUI fits in there with quick and dirty (sometimes really dirty). And for an added bonus: how do you make sure the code is maintainable?

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General problem: whether I'm diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs; in other words, files where each line has next to no context.

    XML analogues: I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this:

        <foo user="me">
            <baz key="zoidberg" value="squid" />
            <baz key="leela" value="cyclops" />
            <baz key="fry" value="rube" />
        </foo>

    And let's say I want to produce a mapping from user to the average number of <baz>s per <foo>. Processing line by line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable. Fortunately, in XML-land we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future.

    JSON solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT such a task will be a lot harder. If I had a bunch of JSON that looked like this:

        {
            "firstName": "Bender",
            "lastName": "Robot",
            "age": 200,
            "address": {
                "streetAddress": "123",
                "city": "New York",
                "state": "NY",
                "postalCode": "1729"
            },
            "phoneNumber": [
                { "type": "home", "number": "666 555-1234" },
                { "type": "fax", "number": "666 555-4567" }
            ]
        }

    and wanted to find the average number of phone numbers each person had, I could do something like this with XPath:

        fn:avg(/fn:count(phoneNumber))

    Questions: Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like and dislike about them? Are there any gotchas? I'm noticing more and more data serialization being done with JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong, and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed.

    Related questions: "Grep and Sed Equivalent for XML Command Line Processing"; "Is there a query language for JSON?"; "JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON".
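
    Absent a dedicated query tool, the stop-gap is a few lines of Python, which at least stays pipe-friendly. A sketch that answers the post's own example query (average phone numbers per person) over a stream of records like the one above (people.json is a hypothetical name; one JSON object per line is assumed):

        import json
        import sys

        # Read one JSON object per line from stdin, Unix-filter style:
        #   python avg_phones.py < people.json
        total = count = 0
        for line in sys.stdin:
            person = json.loads(line)
            total += len(person.get("phoneNumber", []))
            count += 1

        print(total / count if count else 0)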

    Read the article
