Search Results

Search found 21310 results on 853 pages for 'multiple domains'.

  • Setting up an externally facing server on Windows. How do I set up DNS/nameservers?

    - by Jason Miesionczek
    So I have a domain name that I would like to host from my static-IP internet connection. I have Windows Server 2008 R2 installed and DNS set up. The DNS server is currently behind a firewall, and I have the appropriate rules to allow traffic to reach it. My question is: what entries do I need to create in the DNS so that I have nameservers to use at my domain registrar, so that the domain correctly points to the server? I know that most domains have nameservers like ns1.domain.com, ns2.domain.com, etc. What would I point those to in my DNS?
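
    The records themselves are ordinary host records; a minimal sketch, assuming a hypothetical domain example.com and static IP 203.0.113.10, using the dnscmd tool that ships with Windows Server 2008 R2:

        :: zone plus host records for the nameserver names and the site itself
        dnscmd /ZoneAdd example.com /Primary /file example.com.dns
        dnscmd /RecordAdd example.com ns1 A 203.0.113.10
        dnscmd /RecordAdd example.com ns2 A 203.0.113.10
        dnscmd /RecordAdd example.com @   A 203.0.113.10
        dnscmd /RecordAdd example.com www A 203.0.113.10

    The matching step at the registrar is to register ns1.example.com and ns2.example.com as host (glue) records pointing at the same static IP, then set them as the domain's nameservers.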

  • How to make qmail feed email into a shell command?

    - by Nopik
    Hi, I have a server running qmail, with many users/domains configured via Plesk. I would like emails sent to a specific address and delivered to one of my users to also be fed to a shell command, in addition to normal delivery. So, basically, I have a shell script, and I'd like qmail to invoke my script with the email, then continue with delivery as usual. If necessary, I can do recipient/sender filtering on the script side, though it would be more efficient and cleaner if qmail fed only the matching emails to my script. Anyway, how do I accomplish that?
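
    In stock qmail, delivery instructions live in .qmail files, and one file can both deliver normally and pipe to a program; a minimal sketch, with a hypothetical script path (note that Plesk generates and may overwrite these files):

        # .qmail for the target address: one delivery instruction per line
        ./Maildir/
        |/usr/local/bin/process-incoming.sh

    The message arrives on the script's standard input. Per qmail-command(8), exit 0 means success, 99 stops further delivery lines, 100 bounces, and 111 defers - so exiting 0 keeps the normal Maildir delivery above intact.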

  • Virtual hosting in Varnish with individual vcl files for configuration

    - by Michael Sørensen
    I wish to use Varnish in front of an Apache and a Tomcat on the same server. Depending on the IP requested, it goes to a different backend; this works. Now, for most of the sites the default Varnish logic will work just fine, but for some specific sites I wish to use custom VCL code. I can test for host name and include config files for the specific domains, but this only works inside the individual methods (recv etc.). Is there a way to include a complete set of instructions, in one file, per domain, without having to manage separate files for subdomain_recv, subdomain_fetch etc.? And preferably without running separate instances of Varnish. When I try to include a file at the "root level" of default.vcl, I get a compilation error. Best regards, Michael
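
    One approach that keeps a single file per domain relies on Varnish concatenating repeated definitions of its built-in subroutines; a sketch, assuming a hypothetical site1.example.com - the include sits at the top level of default.vcl, and the included file contains complete subs guarded by host checks:

        # default.vcl, top level (not inside a sub)
        include "/etc/varnish/sites/site1.vcl";

        # /etc/varnish/sites/site1.vcl
        sub vcl_recv {
            if (req.http.host ~ "(^|\.)site1\.example\.com$") {
                # site1-specific request handling
            }
        }
        sub vcl_fetch {
            if (req.http.host ~ "(^|\.)site1\.example\.com$") {
                # site1-specific backend-response handling
            }
        }

    The root-level compilation error usually means the included file contains bare statements; wrapped in complete sub definitions as above, Varnish appends them to the existing vcl_recv/vcl_fetch.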

  • Authentication issues setting up iRedMail on Debian

    - by Sergio Rinaudo
    I'm setting up a mail exchange server using iRedMail. Following the official iRedMail installation guide (http://www.iredmail.org/install_iredmail_on_debian.html) and the Digital Ocean guide (https://www.digitalocean.com/community/articles/how-to-install-iredmail-on-ubuntu-12-04-x64) I was able to install iRedMail without any problems, so I have all the services up and running. I can configure domains and emails using iRedAdmin, BUT I have problems both sending and receiving email: what I get from Roundcube is 'Authentication error' when trying to send an email, and I can't receive anything either. I also tried to connect to the MX server using telnet; it connects, but after the STARTTLS command, when I start to write "MAIL FROM:", the connection is lost. Something in the configuration is not working (at the moment I have the configuration written by the iRedMail installation), but I do not know where. I hope someone can enlighten me! Thank you
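
    Plain telnet cannot continue past STARTTLS - everything after it is encrypted, so the session appearing to drop at "MAIL FROM:" is expected rather than a clue; a sketch of a test that does negotiate TLS (hostname is a placeholder):

        openssl s_client -connect mx.example.com:25 -starttls smtp
        # then type SMTP by hand:
        #   EHLO test.example.com
        #   MAIL FROM:<someone@example.com>

    For the Roundcube 'Authentication error', the usual places to look are Roundcube's configured SMTP host/port and /var/log/mail.log for the corresponding Postfix/Dovecot SASL failures.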

  • Two SSL certs for a domain in DirectAdmin

    - by Bart van Heukelom
    If I were to get 2 SSL certificates, one for example.com and one for www.example.com, is there a way to install them both on the site example.com in DirectAdmin? The default interface only allows installing one for both versions. If not, can I separate the 2 domains into 2 sites? One of them would only be a redirection, so there wouldn't be any duplication of site files. (Please don't answer with "one certificate should work for both". It doesn't always. This is a DirectAdmin question)
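
    For reference, what this amounts to at the Apache layer is two name-based SSL virtual hosts selected by SNI; a sketch with hypothetical paths (whether DirectAdmin will write this itself, or it has to go into a custom httpd template, may depend on the DirectAdmin version):

        <VirtualHost *:443>
            ServerName example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/example.com.key
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/www.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/www.example.com.key
        </VirtualHost>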

  • How can I add config options for a specific hostname outside <VirtualHost>?

    - by Boldewyn
    I'm using Apache 2.2 and let it serve the domains foo.example.com and bar.example.com with <VirtualHost> statements:

        <VirtualHost 127.0.0.1:80>
            ServerName foo.example.com
        </VirtualHost>

        <VirtualHost 127.0.0.1:80>
            ServerName bar.example.com
        </VirtualHost>

    My problem is that I need to add configuration options targeted only at foo.example.com, in a separate file (let's say /etc/apache/sites-enabled/foo.conf). This file is included before the VirtualHost statement is issued, and it can't be embedded inside it. Can I (and if yes, how) target configuration settings at foo.example.com requests only, outside the VirtualHost container?
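
    Apache 2.2 has no host-conditional container outside <VirtualHost> (2.4's <If> directive would provide one), so what can go into foo.conf at server scope is limited to things expressible per-request, e.g. mod_rewrite keyed on the Host header; a sketch with hypothetical paths:

        # foo.conf, server scope
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^foo\.example\.com$ [NC]
        RewriteRule ^/special(.*)$ /other$1 [L]

    Note that virtual hosts ignore server-scope rewrite rules unless each one sets "RewriteOptions Inherit", and directives that are not per-request (handlers, Directory options, etc.) still have to live inside the matching <VirtualHost>.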

  • How can I set up nginx to serve virtual hosts with Rails (Unicorn/Passenger) and PHP-FPM?

    - by NewAlexandria
    I would like to serve multiple sites on one instance. I installed nginx, php-fpm, and a Rails app, using sites like this to guide me. I configured php-fpm to listen on a local socket:

        listen = /var/run/php-fpm/php-fpm.sock

    I configured nginx with multiple hosts:

        include /etc/nginx/conf.d/*.conf;

    I have several PHP site conf files like /etc/nginx/conf.d/site1.conf:

        server {
            listen 80;
            server_name site1.com www.site1.com;
            root /var/www/site1;

            location / {
                index index.html index.php;
            }

            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param PATH_INFO $fastcgi_script_name;
                fastcgi_param SCRIPT_FILENAME $document_root/$fastcgi_script_name;
            }
        }

    and Rails site conf files like:

        upstream rails {
            server 127.0.0.1:3000;
        }

        server {
            listen 80;
            server_name site2.com www.site2.com;
            root /var/www/site2;

            location / {
                proxy_pass http://rails;
                proxy_set_header X-Forwarded-For $remote_addr;
                proxy_set_header Host $host;
                proxy_set_header X-Url-Scheme $scheme;
            }
        }

    I have a Unicorn Rails server running via rails s -p 3000. Yet no sites come up for either site1.com or site2.com. I can get to the Rails site at www.site2.com:3000. What is wrong? I've spent two days (nearly 30 hours) trying many different blogs, SO/SF questions, etc. Please share your insight or answer.

    Edit 1: No log entries are created when I try to visit either site. It's like the requests never come in.
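
    Since nothing shows up in the logs, a quick way to split "nginx misconfigured" from "requests never arriving" (the commands are a sketch):

        # confirm the loaded config parses and includes the new vhosts
        nginx -t

        # bypass DNS: ask the local nginx for each name via the Host header
        curl -v -H "Host: site1.com" http://127.0.0.1/
        curl -v -H "Host: site2.com" http://127.0.0.1/

        # then check what the names resolve to publicly
        dig +short site1.com www.site1.com site2.com www.site2.com

    If the curl calls return the sites, the server blocks are fine and the A records are the problem; if site2 hangs, check that Unicorn is really listening on 127.0.0.1:3000.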

  • Passenger_wsgi.py given precedence over DirectoryIndex?

    - by Walkerneo
    I was having an issue with my site today where Apache wasn't serving index.php by default. I had moved passenger_wsgi.py to the directory above the document root so that I could serve Python files without having to use PassengerAppRoot in the .htaccess file. I wanted to do this because I set up a development subdomain on the site, and I wanted to use a different passenger_wsgi for the two domains, but that meant having different .htaccess files for the different PassengerAppRoots. Is there a way to keep passenger_wsgi.py where it was and still let Apache serve the index.php files?

    Edit: I'm sorry, I'm tired. I just realized that the way this probably works is that passenger_wsgi.py is handling the routing instead of Apache.
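
    If the aim is to keep passenger_wsgi.py where it is, Passenger can be switched off for the directories Apache should serve itself; a sketch, assuming an .htaccess in the PHP/static docroot:

        # .htaccess in the directory Apache should handle directly
        PassengerEnabled off
        DirectoryIndex index.php index.html

    With Passenger enabled for a parent directory, all routing goes to the WSGI app (as the edit suspects), so DirectoryIndex never gets a chance unless Passenger is disabled for that subtree.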

  • Eliminate default SSL certificate

    - by microchasm
    I'm setting up a server for local access. I created a CA and have SSL certs signed and working on other domains. The problem is that I'm trying to create a cert for a domain name that is the same as the host name. I copied the steps used to make the certs for the other domains, but when I create and sign this cert and modify httpd.conf with the path to the cert and key, the localhost.localdomain cert seems to take precedence. In other words, when I view the cert in Firefox, it is the localhost.localdomain cert instead of the one I just created. I looked at ssl.conf and tried to change the default path to the one issued, and I tried to comment out the VirtualHost, but neither worked. How can I override the server's default certificate with the one I issued and signed? Thanks.
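
    On a stock layout, the certificate that "wins" is typically the one in ssl.conf's <VirtualHost _default_:443> block; a sketch of overriding it with a name-based vhost (paths hypothetical - and note that without SNI, all name-based SSL vhosts sharing one IP present whichever vhost matches first):

        # ssl.conf or a site config
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName myhost.example.com
            SSLEngine on
            SSLCertificateFile    /etc/pki/tls/certs/myhost.example.com.crt
            SSLCertificateKeyFile /etc/pki/tls/private/myhost.example.com.key
        </VirtualHost>

    Either make this the first *:443 vhost or change the paths inside the existing _default_ block itself, then restart httpd and verify with openssl s_client -connect myhost.example.com:443.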

  • How to defend against botnet HTTP requests

    - by Killercode
    I have a server with WHM + cPanel, and 5 of my customers got infected with Zbot. This means that the domains they have are constantly receiving requests to certain destinations. I tried to use mod_security, but it seems that it can't filter every request - I don't really know why. I still see the connections coming in in the access log, and they are consuming a LOT of bandwidth and server load. Those accounts have already been cleaned, so all of those requests now go to error 404 (for the ones caught by mod_security I am dropping the connection). Are there any more ways to defend against these requests?
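
    mod_security only sees requests Apache has already accepted, so much of the bandwidth and load is spent before it can act; pushing the rejection down to the kernel is a common complement - a sketch with iptables (the thresholds are guesses to tune, and shared NATs or proxies can trip them):

        # drop sources opening more than 20 new port-80 connections per 10 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP \
            --update --seconds 10 --hitcount 20 -j DROP
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --name HTTP --set

    Another common option is fail2ban watching the access log for the 404-hammering source IPs and banning them at the firewall.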

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system. It will be used by multiple people in a startup. Our requirements:

    - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
    - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via shared folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user; so the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
    - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
    - Security: it must provide robust security.
    - Backup: the cloud service must reliably back up the files.
    - Versioning: change version history is a big plus, but not required.
    - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

    - Dropbox: has all except 1) access control, which is provided via shared folders but lacks authentication of the sharing user, as noted above, and 2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
    - Windows Live Mesh: has all except 1) clients, only supporting Windows 7 and OS X.
    - SpiderOak: has all except 1) transparent file system access, which is only available for 1 user.
    - Amazon Cloud: doesn't offer 1) transparent file system access.
    - Rackspace Cloud Drive: has all except 1) access control and 2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

  • Two users using the same user profile while not in a domain

    - by Scott Chamberlain
    I have a Windows Server 2003 machine acting as a terminal server; this computer is not a member of any domain. We demo our product on the server by creating a user account. The person logs in, uses the demo for a few weeks, and when they are done we delete the user account. However, every time we do this it creates a new folder in C:\Documents and Settings\. I know that with domains you can have many users point at one profile and make it read-only so all changes are discarded afterwards, but is there a way to do that when the machine is not in a domain? I would really like it if I didn't have to remote in and clean up the folders every time.
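
    Mandatory profiles don't require a domain - the mechanism is just a renamed registry hive plus a shared profile path, which local accounts support too; a sketch (the folder name is hypothetical):

        :: make the shared profile read-only; per-session changes are discarded at logoff
        ren "C:\Documents and Settings\DemoProfile\NTUSER.DAT" NTUSER.MAN

    Then point each demo account's "Profile path" (Local Users and Groups -> user Properties -> Profile tab) at C:\Documents and Settings\DemoProfile. New demo logons load that profile instead of spawning fresh folders.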

  • I bought a domain name at GoDaddy and hosting at DreamHost, but the former doesn't work!

    - by janooChen
    I added DreamHost's nameservers about 12 hours ago. In the Nameservers -> Set Nameservers panel (I have specific nameservers for my domains) I added DreamHost's nameservers like this:

        Nameserver 1: NS1.DREAMHOST.COM
        Nameserver 2: NS2.DREAMHOST.COM
        Nameserver 3: NS3.DREAMHOST.COM

    So now in the admin panel I see this:

        Nameservers (Last Update 2/10/2011)
        NS1.DREAMHOST.COM
        NS2.DREAMHOST.COM
        NS3.DREAMHOST.COM

    But I get this when I run the analysis tools:

        Attention Required! There are critical issues: Accessing Your Web Site.
        Properly configuring your domain name and hosting account ensures that visitors can access your site.

    Did I do something wrong, or do I just have to wait 24 to 48 hours? (DreamHost does display my page, because I can access the other domain name I bought together with the hosting.) By the way, if everyone uses the same nameservers, how will GoDaddy know which hosting space I purchased among all the others? Thanks in advance.
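
    A 12-hour wait is within the normal 24-48-hour delegation window; whether the change has actually taken can be checked directly (the domain is a placeholder):

        # what the parent servers currently delegate to
        dig NS example.com +short

        # follow the chain from the root servers
        dig +trace example.com

    As for the shared-nameserver worry: NS1-3.DREAMHOST.COM answer according to the domain in each query, matched against the zones configured in the hosting panel, so the same three names serve every customer.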

  • Transferring NS records to a new server

    - by lanemiller
    I feel like that was NOT worded well, but here is my current predicament. I recently had a GoDaddy dedicated server and decided, after their customer support failed to do anything but disappoint, to switch to Rackspace. We have 2 NS records that point to our GoDaddy server, and we have a few sites left on the server that rely on it for their DNS zones, and the owners of those domains fail to respond to us. So, the question is: if I need to transfer the sites off of the OLD GoDaddy NS, can I point the A records for my ns1.domain.com and ns2.domain.com at the IP addresses of the Rackspace nameservers? OR do I CNAME my NS records to match the Rackspace ones? I DO know that neither method is advised, but I need to get these sites moved before GoDaddy tries charging another $2k for the server.
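
    DNS servers answer by the name in the query, not by the hostname the resolver used to reach them, so A records for ns1/ns2.domain.com pointed at the new servers' IPs do work - provided those servers are loaded with zones for the stranded domains. CNAMEs are a poor substitute, since the targets of NS records are expected to resolve directly to addresses. A sketch of the glue-side change (IPs hypothetical):

        ; in the domain.com zone (and as updated host/glue records at the registrar)
        ns1.domain.com.    IN  A  203.0.113.53
        ns2.domain.com.    IN  A  203.0.113.54

    The servers at those IPs must then answer authoritatively for each site still delegated to ns1/ns2.domain.com.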

  • FTP Logs in IIS 7.5

    - by Jacob84
    I know this is weird, but the truth is that I can't find the FTP logs on one IIS 7.5 server. In the IIS Management Console, I've gone to the server and clicked on FTP Logging, which appears inside the FTP group (with other options like FTP Messages and FTP Request Filtering). The configured folder for logs seems to be:

        C:\InetPubFolder\logs\LogFiles

    If I go there, I find a lot of folders with the structure W3SVC#, where # is a number. They all contain logs, but they are HTTP logs, full of GET and POST verbs. Am I missing something? The server hosts a lot of domains, and it's hard to find anything.
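
    In IIS 7.5 the FTP service writes to FTPSVC<siteId> folders - the W3SVC<id> ones are the HTTP sites - so if only W3SVC folders exist, either no FTP traffic has been logged yet or FTP logging points elsewhere. A sketch for matching sites to IDs:

        %windir%\system32\inetsrv\appcmd list sites
        :: each line shows SITE "name" (id:N,...); then look for the matching folder:
        dir C:\InetPubFolder\logs\LogFiles\FTPSVC*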

  • Server performance metrics report and practicality

    - by Anjesh
    I need to prepare a web server (Apache + PHP) performance report containing important metrics like CPU usage, disk I/O, and memory usage on a per-user basis. A couple of domains are hosted on the same server, and they run as separate users using FastCGI. The reason: sometimes some hosted applications use a lot of CPU, making the server slow for the other applications (running as separate users). I am planning to develop scripts for this, as I can't seem to find any simple utilities for the purpose. The script will take snapshots of the per-user metrics at defined periods, say every 15 minutes, and record them. Any abnormalities will be reported via email. How practical is that? It would also be interesting to know what else should be recorded.
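
    The per-user CPU/memory half needs nothing beyond ps and awk; a sketch of the 15-minute collector described above (the log path is a placeholder):

        #!/bin/bash
        # snapshot of %CPU and %MEM summed per user, appended with a timestamp
        LOG=/var/log/user-metrics.log
        date '+%F %T' >> "$LOG"
        ps -eo user:20,pcpu,pmem --no-headers \
          | awk '{ cpu[$1] += $2; mem[$1] += $3 }
                 END { for (u in cpu) printf "  %-20s cpu=%5.1f%% mem=%5.1f%%\n", u, cpu[u], mem[u] }' >> "$LOG"

    Run it from cron with */15 * * * *. Per-user disk I/O is the hard part without cgroups; iotop -bao or the per-process /proc/<pid>/io counters are the usual starting points.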

  • Can spell checking be disabled by default on OS X?

    - by Lri
    Is there some way I could disable continuous spell checking or other settings in the substitutions menu by default? System Preferences only has an option to disable autocorrect. defaults write -g CheckSpellingWhileTyping -bool false would be overridden by keys on the property lists of applications. This would only apply to applications that have been used before:

        #!/bin/bash
        for d in $(defaults domains | tr -d ,); do
            osascript -e "app id \"$d\"" > /dev/null 2>&1
            [ $? == 1 ] && continue
            echo $d
            defaults write $d CheckSpellingWhileTyping -bool false
            defaults write $d SmartDashes -bool false
            defaults write $d SmartLinks -bool false
            defaults write $d SmartQuotes -bool false
            defaults write $d SmartCopyPaste -bool false
            defaults write $d TextReplacement -bool false
        done

  • Setting up IIS7 to mimic a GoDaddy shared hosting plan

    - by NerdFury
    I host multiple domains on a GoDaddy shared hosting account. I would like to set up a website locally in IIS 7 that mimics the setup of my hosted account so that I can test and debug applications locally before deploying, since debugging after deploying, or discovering issues after deploying, is frustrating. I have created a folder WebRoot and put my main application in that folder. I created a website in IIS 7 and pointed it at that folder. I set up bindings with a fake domain and created a matching entry in my hosts file to make the fake domain point at 127.0.0.1. I then created a folder www.otherdomain.com under WebRoot, created an application underneath my website, and pointed it at this folder. I can't find how to add bindings to the web application so it can be referenced as a different fake domain rather than as a subdirectory under my root domain. What would be the proper way to set up IIS to best simulate the environment on the GoDaddy servers?
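
    Bindings exist only at the site level in IIS - an application under a site cannot carry its own host name - so the closer mimic of a shared-hosting addon domain is a second site whose physical path is the subfolder; a sketch with appcmd (names and paths hypothetical):

        %windir%\system32\inetsrv\appcmd add site /name:"otherdomain" ^
            /bindings:http/*:80:www.otherdomain.com ^
            /physicalPath:"C:\WebRoot\www.otherdomain.com"

    Plus a matching hosts-file line pointing www.otherdomain.com at 127.0.0.1, as was done for the main fake domain.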

  • Catch-all DNS record

    - by Christian Sciberras
    Intro: our users can buy a domain (e.g. user1.com) and make it point to our website (e.g. example.com) simply by pointing user1.com to ns1/ns2.example.com.

    Issue: so far everything's good; however, example.com does not like this - we need to set up WHM/cPanel to make the server accept user1.com. Problem is, we'd rather make this automatic, possibly without having to use the WHM API.

    The question: we need some sort of "catch-all" wildcard entry so that we capture all of our users' possible domains.
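
    On the web side, the catch-all is a vhost that answers for any Host header; a sketch in Apache syntax (since WHM/cPanel sits on Apache - names and paths hypothetical):

        <VirtualHost *:80>
            ServerName catchall.example.com
            ServerAlias *
            DocumentRoot /home/app/public_html
        </VirtualHost>

    In practice Apache already routes unmatched Host headers to the first listed vhost, which can serve as the catch-all even without ServerAlias. The DNS side has no cross-zone wildcard, though: if customers delegate NS to ns1/ns2.example.com, the server must carry an authoritative zone per customer domain; having customers point an A record at your IP instead avoids per-domain zone management entirely.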

  • How to redirect a domain to a new server?

    - by hfidgen
    I've got a domain registered with a hosting company which I no longer wish to use. I'm happy for them to keep managing my domain, but I want it to point to the new (better) server I've just bought and set up. I know my new server's IP address and nameservers. What do I need to do in my domain management control panel to make it point to my new server? Change the "A" record to the new IP? Change the nameservers to my new host's nameservers? Is that it? Are there no other records on either server which need changing? I always get confused about who needs to do what when it comes to domains... Thanks, Hugh
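
    If the current company keeps managing the zone, the A records are the only required change; a sketch (IP hypothetical):

        example.com.       IN  A  203.0.113.20
        www.example.com.   IN  A  203.0.113.20

    Switching the nameservers instead moves the whole zone to the new host, which then has to carry every record. Either way, check the MX (and any subdomain) records: changing only the A records leaves mail where it is, while a nameserver change means the new zone must reproduce the MX records too.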

  • postfix: force server to send mail outside of localhost

    - by LoneWolfPR
    I have a PHP file that sends mail using the mail() function. The problem is that one of the forms sends to a domain that is registered on my server while having its mail handled on a different server. Postfix looks locally only; when it doesn't find the email address, it rejects the message. How can I configure Postfix to send mail to all domains through the internet and not locally?

    Update: OK, so it wasn't a Postfix issue at all. I simply needed to turn off mail for that domain from the command line. For anyone that needs that command, it is (at least on my system):

        /usr/local/psa/bin/domain --update example.com -mail_service false
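
    The Plesk command flips what, on a bare Postfix, is the local-domain listing in main.cf: a domain listed there is always delivered locally and never looked up outside. A sketch of the native equivalent (domain hypothetical):

        # /etc/postfix/main.cf: make sure the domain is NOT in mydestination
        mydestination = $myhostname, localhost.$mydomain, localhost
        # ...nor in virtual_mailbox_domains / virtual_alias_domains

    followed by "postfix reload". Postfix then routes mail for that domain via MX lookup like any external destination.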

  • Domain migration - 301 redirect of all contents of a directory

    - by Trufa
    Hi, I would like to know if the following is possible, considering that I would like to migrate domains. I have, let's say:

        one.com/files/one.html
        one.com/files/two.php
        one.com/other/three.html
        one.com/other/four.doc
        one.com/other/subdirectory/five.doc

    I am migrating to two.com, so I would like to make RESPECTIVE 301 redirects to the following:

        two.com/old/files/one.html
        two.com/old/files/two.php
        two.com/old/other/three.html
        two.com/old/other/four.doc
        two.com/old/other/subdirectory/five.doc

    I've tried with cPanel, and although I come "close" with the redirects option, I can't seem to make it happen. There aren't many folders (10-12), but there are a lot of files, so doing it manually is obviously impossible. How would you proceed? Can/should this be done with a regex in the .htaccess? Can you redirect all the contents of a subdirectory in the manner expressed above? I hope the question is clear enough; if not, please ask for any clarification needed! Thanks in advance!!
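
    A single mod_rewrite rule on one.com covers every file and folder at once; a sketch for the old site's root .htaccess:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?one\.com$ [NC]
        RewriteRule ^(.*)$ http://two.com/old/$1 [R=301,L]

    The captured path is everything after the docroot, so one.com/other/subdirectory/five.doc lands on two.com/old/other/subdirectory/five.doc, matching the mapping above.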

  • Why do most routers not include local DNS?

    - by user785194
    I need to change my firewall/router, and I'd prefer something with built-in DNS to resolve queries on the local subnets. I've got a mixed Linux/Windows system, often with only one computer turned on, and I frequently have problems resolving local names. I don't want to keep a Linux box permanently on just for DNS, and I'd prefer to have DNS in my router appliance, which is always on. I search Google for this occasionally but never find anything; you always get the obvious answers - it's not possible, put everything in /etc/hosts, NetBIOS, dedicated box, etc. So what am I missing? Why don't "cheap" routers let you do this? I'm pretty sure that Cisco kit does. Almost all cheap routers will let you do MAC address reservation, assigning static IP addresses via DHCP. So why can't they simply do DNS as well for everything on the local subnets, just passing remote domains through to the ISP?
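
    For what it's worth, the feature described here is exactly what dnsmasq provides, and routers that ship it (or run third-party firmware like OpenWrt, DD-WRT, or Tomato) handle local names out of the box; a sketch of the relevant configuration:

        # /etc/dnsmasq.conf
        domain=lan          # local suffix for the subnet
        expand-hosts        # append the suffix to bare names
        local=/lan/         # never forward *.lan queries upstream
        dhcp-range=192.168.1.100,192.168.1.200,12h
        # hostnames sent in DHCP requests become resolvable automatically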

  • Nginx server block not working - other vhosts already running, just this one fails

    - by daveaspinall
    I'm running a Debian 6 LEMP server with multiple virtual hosts, and everything has been fine for five or so sites. But I've just tried adding another and for some reason it's just not working. By "not working" I mean that in Chrome I get the "Oops! Google Chrome could not connect to subdomain.domain.net" error. I've changed the domain to subdomain.example.com for security, and the IP is masked.

    Hosts file (I have multiple subdomains):

        xxx.xxx.xx.xxx *.example.com *.example

    Server block:

        server {
            listen 80;
            server_name subdomain.example.com;
            access_log /srv/www/subdomain.example.com/logs/access.log;
            error_log /srv/www/subdomain.example.com/logs/error.log;
            root /srv/www/subdomain.example.com/public_html;

            location / {
                index index.html index.htm index.php;
            }

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
        }

    I've created the symlink to the file in the /etc/nginx/sites-enabled/ directory and restarted/reloaded nginx. DNS seems fine:

        # ping -c 2 subdomain
        PING subdomain.example.com (xxx.xxx.xx.xxx) 56(84) bytes of data.
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=1 ttl=64 time=0.035 ms
        64 bytes from www.example.com (xxx.xxx.xx.xxx): icmp_req=2 ttl=64 time=0.048 ms

    Checking the file with cURL works:

        # curl http://subdomain.example.com
        HTML - OK

    I've emptied the browser cache, but still no dice. Anything I'm missing? Like I mentioned, I have a few sites running fine on the server currently, so php-fpm etc. are working. Any help would be much appreciated! Cheers, Dave
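
    One detail that matches the symptom: hosts files do not expand wildcards, so the "*.example.com" line resolves nothing - and a hosts file only affects the machine it lives on anyway. The ping and curl above ran on the server; Chrome resolves from the workstation. A sketch of the fix and the check:

        # hosts entries must be explicit, one name (or several) per IP line
        xxx.xxx.xx.xxx   subdomain.example.com

        # from the workstation, confirm what Chrome would see
        nslookup subdomain.example.com

    If public DNS lacks an A record for the new subdomain, that is the whole story; the nginx block itself looks fine.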

  • Lighttpd referer issue

    - by Chris
    I have a problem blocking files from being accessed from domains other than my own. I have added the following to my lighttpd config in the "virtual host":

        $HTTP["referer"] !~ "^($|http://www\.my-site\.net)" {
            url.access-deny = ( "" )
        }

    But the site www.example.com can still access http://player.my-site.net/player.swf, and it can also be accessed directly without a referer. Any idea?

    Edit: here is my old Apache .htaccess with a rewrite rule that works perfectly, but I don't know how to convert it for lighty:

        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_REFERER} !^http://my-site\.net/ [NC]
        RewriteCond %{HTTP_REFERER} !^http://www\.my-site\.net/ [NC]
        RewriteCond %{HTTP_REFERER} !^http://player\.my-site\.net/ [NC]
        RewriteCond %{HTTP_REFERER} !^http://stream\.my-site\.net/ [NC]
        RewriteRule .* - [L,R=404]
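
    A closer lighttpd translation of that .htaccess would be: the deny block sits inside a host match for the media subdomains, the referer regex allows all four variants, and - matching the rewrite rules, which also fire on an empty referer - the "^($|" escape hatch for blank referers goes away; a sketch:

        $HTTP["host"] =~ "^(player|stream)\.my-site\.net$" {
            # no "$" alternative: requests without a referer are denied too
            $HTTP["referer"] !~ "^http://((www|player|stream)\.)?my-site\.net/" {
                url.access-deny = ( "" )
            }
        }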
