Search Results

Search found 2291 results on 92 pages for 'webserver'.

Page 25 of 92

  • Problem installing Exchange Server

    - by Carlos
    I can't connect to the instance of Exchange Server 2010 through the EMC on the local machine, which runs Windows Server 2008 R2. I've checked all the default website bindings, the Kerberos auth and WSMan are set to native type in PowerShell, and I still get this error message: "Connecting to remote server failed with the following error message: The WS-Management service does not support the request." It was running the command 'Discover-ExchangeServer -UseWIA $true -suppresserror $true'.

    Read the article

  • How to process requests twice in Apache

    - by Pieter
    In order to perform realistic tests for a new backend server, I'd like to process all Apache requests twice: keep handling the live requests with the old server, exactly as it is done right now, but also "duplicate" each request to a different virtual host, where the new backend is deployed, which will process the request and log the response. What is the best / simplest way to achieve this in Apache? (The backend is a FastCGI process.)
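
    One fallback I can think of (just a sketch, with the log path, hostname and combined log format as assumptions) is to skip Apache itself and replay the live access log against the vhost that fronts the new backend:

        # Tail the live access log and re-issue every GET against the new vhost.
        # Assumes the default "combined" format, where field 7 is the request path.
        tail -F /var/log/apache2/access.log | \
          awk '$6 == "\"GET" { print $7 }' | \
          while read -r path; do
            curl -s -o /dev/null "http://new-backend.example.com${path}"
          done

    That only covers GETs and loses POST bodies, though, so I'd still prefer something done inside Apache if it exists.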

    Read the article

  • Multiple websites each with an SSL certificate of its own

    - by ServerDown
    Hi, we run CentOS and Plesk with Apache, PHP and MySQL. There are around 25 sites, and each of them now needs an SSL certificate. The host cannot have more than 16 IPs on the same server. Is it possible to have all these sites use just one IP address, with an SSL certificate set up for each site? If yes, please let me know how I can set this up. Thanks
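
    From what I've read, SNI (Server Name Indication) should allow this with Apache 2.2.12 or later built against OpenSSL 0.9.8j or later, at the cost of breaking very old clients such as IE on Windows XP. Is a setup roughly like the sketch below (domain names and certificate paths are placeholders) the right direction, and can Plesk be made to generate it?

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName site1.example.com
            DocumentRoot /var/www/vhosts/site1.example.com/httpdocs
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/site1.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/site1.example.com.key
        </VirtualHost>

        <VirtualHost *:443>
            ServerName site2.example.com
            DocumentRoot /var/www/vhosts/site2.example.com/httpdocs
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/site2.example.com.crt
            SSLCertificateKeyFile /etc/ssl/private/site2.example.com.key
        </VirtualHost>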

    Read the article

  • Identify Long Running or Slow PHP Scripts

    - by Kirk
    I have a web server that is getting around 25K visits a day, up at yougetsignal.com. Sometimes the site feels a bit sluggish. I am hosting it on nginx with php5-fpm. Is there a way for me to see a list of all of the long-running requests that are coming to the site? I'd love to have a real-time list of all of the active requests that PHP is handling and how long they have been running; kind of like top, but just for the web server. This would let me know how long requests are taking and which script is the culprit. Anyone have any ideas on how I can do this?
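
    I'm wondering whether php-fpm's slow log gets me most of the way there; a sketch of the pool directives I have in mind (the pool file path is a guess for my distribution):

        ; /etc/php5/fpm/pool.d/www.conf (path varies by distribution)
        ; log a stack trace for any request running longer than 5 seconds
        request_slowlog_timeout = 5s
        slowlog = /var/log/php5-fpm.slow.log

        ; optionally expose a live status page that nginx can proxy internally
        pm.status_path = /fpm-status

    That gives me the culprit script after the fact, but I'd still love a live, top-like view if one exists.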

    Read the article

  • IIS 6 Compression for a folder

    - by Brian
    Hello, I want to enable IIS 6 compression for all of the folders in my application except one; in that one folder, I do some things that fail when IIS 6 compression is enabled. Any advice? Thanks.
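
    Would setting the per-directory metabase properties work here? My rough idea (a sketch only; the site ID 1 and folder name are placeholders) is to leave compression on for the site and switch it off for just the one folder:

        REM Disable compression for a single folder of web site ID 1
        cscript.exe C:\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/problemfolder/DoStaticCompression FALSE
        cscript.exe C:\Inetpub\AdminScripts\adsutil.vbs SET W3SVC/1/ROOT/problemfolder/DoDynamicCompression FALSE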

    Read the article

  • How can I transfer a logged in user's login data from one server to another?

    - by Martin
    I have one server, "A", where users can log in. Logins are verified by an LDAP server, "L". I have a different server, "B", where users can log in too; logins are verified by the same LDAP server as before. Both servers are standard web servers running PHP. My goal is: if a user is logged in to server "A" and clicks a link to log in to server "B", the user should automatically be logged in without re-entering username and password. What is a good and secure way to achieve this? I can't submit the username and encrypted password to server "B". I can't use the PHP session of server "A", because it does not exist on "B". Cookies won't work either. I think that there is a way, but I just can't see it. Any help is very much appreciated.
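
    The only idea I have so far (a rough sketch that assumes a shared secret configured on both servers out of band; names like sso.php and server-b.example.com are placeholders) is a short-lived signed token passed in the redirect:

        <?php
        // Server "A": the user is already authenticated here. Build a token and redirect.
        $secret = 'shared-secret-known-to-A-and-B';   // assumption: deployed to both servers
        $user   = $_SESSION['username'];
        $expiry = time() + 60;                        // token valid for 60 seconds
        $sig    = hash_hmac('sha256', $user . '|' . $expiry, $secret);
        $token  = base64_encode($user . '|' . $expiry . '|' . $sig);
        header('Location: https://server-b.example.com/sso.php?token=' . urlencode($token));
        exit;

        <?php
        // Server "B" (sso.php): verify the token, then start a local session.
        $secret = 'shared-secret-known-to-A-and-B';
        list($user, $expiry, $sig) = explode('|', base64_decode($_GET['token']), 3);
        if (hash_hmac('sha256', $user . '|' . $expiry, $secret) === $sig && time() < (int) $expiry) {
            session_start();
            $_SESSION['username'] = $user;            // logged in without re-entering credentials
        } else {
            header('HTTP/1.1 403 Forbidden');
        }

    Since both servers already trust the same LDAP directory, is this reasonable, or is there a cleaner standard approach?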

    Read the article

  • Nginx + Ubuntu 9.10, gzip not functioning

    - by Matt
    Hey there, so I installed and configured nginx 0.7.62 on a new Slicehost Ubuntu 9.10 slice. All seems to work fine with the server, except that gzip isn't working for one reason or another. I made sure that its settings are correct in /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 3;

        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
            # multi_accept on;
        }

        http {
            include /etc/nginx/mime.types;

            access_log /var/log/nginx/access.log;

            sendfile on;
            #tcp_nopush on;

            keepalive_timeout 2;
            tcp_nodelay on;

            gzip on;
            gzip_comp_level 2;
            gzip_proxied any;
            gzip_types text/plain text/css application/x-javascript;
            gzip_disable "MSIE [1-6]\.";

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    This normally wouldn't be a big deal, but gzip support could save considerable bandwidth for my site. Does anyone have any ideas of what to check, or has anyone else run into this problem?
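
    A quick sanity check I plan to run (the hostname is a placeholder) is to request a text asset while advertising gzip support and see whether the Content-Encoding header comes back:

        # If gzip is working, this should print "Content-Encoding: gzip"
        curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://example.com/css/style.css | grep -i content-encoding

    Could it be something like gzip_http_version (which defaults to 1.1, so HTTP/1.0 requests are never compressed) or gzip_min_length skipping small responses?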

    Read the article

  • Web Server Users - Best Practice

    - by Toby
    I was wondering what is considered best practice when several developers/administrators require access to the same web server. Should there be one non-root user, with a secure username and password unique to the web server, that everyone logs in as, or should there be a username for each person? I am leaning towards a username for each person to aid in logging etc., but then does the same user keep the same credentials over several servers, or should at least their password change depending on the server they are on? Should any non-root user of the system be added to the sudoers file, or is it best practice to leave everyone off it and only let root perform certain tasks? Any help would be greatly appreciated.
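
    My current thinking (sketched below; the group name, usernames and command paths are assumptions for a Debian-like system) is individual accounts collected in one admin group, with sudo limited to the commands that group actually needs rather than blanket root:

        # one account per person, all members of a 'webadmin' group
        groupadd webadmin
        useradd -m -G webadmin alice
        useradd -m -G webadmin bob

        # /etc/sudoers.d/webadmin (edit with visudo -f): allow web-server management only
        %webadmin ALL=(root) /usr/sbin/service apache2 *, /usr/bin/tail /var/log/apache2/*

    Does that sound sane, or is blanket sudo for trusted admins the more common practice?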

    Read the article

  • how to have a vm image act as a server host

    - by tipu
    I have VMware Player (VMware-player-3.0.1-227600.exe), and I downloaded a CentOS image; its version is CentOS release 5.4 (Final). I have Apache installed and listening on port 8080. However, when I visit my IP address, x.x.x.x:8080/, I don't get the default Apache page as I would by going to localhost:8080/. What do I have to do in my image or in VMware to get it to serve?
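
    Two things I want to rule out inside the guest (a sketch; assumes the stock CentOS iptables service) are whether Apache is bound to all interfaces and whether the guest firewall allows 8080, before I start poking at the VMware side (NAT vs. bridged networking):

        # confirm httpd listens on 0.0.0.0:8080 rather than 127.0.0.1
        netstat -tlnp | grep :8080

        # temporarily open the port in the guest firewall for testing
        iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
        service iptables save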

    Read the article

  • Apache httpd: Send error logs to syslog and local disk? Without touching /etc/syslog.conf?

    - by Stefan Lasiewski
    I have an Apache httpd 2.2 server. I want to log all messages using syslog, so that the requests are sent to our central syslog server. I also want to ensure that all log messages are sent to local disk, so that a sysadmin can have easy access to the log files on the local system. It is easy to send HTTP access logs to both the local disk and to syslog. One common method is:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog logs/access_log combined
        CustomLog "|/usr/bin/logger -t httpd -i -p local4.info" combined

    But it is not easy to do this for error logs. The following configuration doesn't work, because the error logs only use the last ErrorLog stanza; the first ErrorLog stanza is ignored.

        ErrorLog logs/error_log
        ErrorLog syslog:local4.error

    How can I ensure that Apache error logs are written to the local disk and are sent to syslog? Is it possible to do this without touching /etc/syslog.conf? I am fine if my users want to manage their own Apache configuration files, but I do not want them touching system files such as /etc/syslog.conf.
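
    One idea I have not fully tested (a sketch; the log path and syslog facility are assumptions) is to exploit the fact that ErrorLog accepts a piped program, and tee the stream to both destinations:

        # write errors to the local file and forward a copy to syslog in one pipe
        ErrorLog "|/usr/bin/tee -a /var/log/httpd/error_log | /usr/bin/logger -t httpd -i -p local4.err"

    Is that considered reliable, or is there a cleaner way that still leaves /etc/syslog.conf untouched?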

    Read the article

  • What file or directory is unique to Plesk and unlikely to change in the future?

    - by Tim Post
    I am writing a program that needs to be able to detect the presence of various web hosting control panels. I don't use Plesk, or have access to a server running Plesk, so I am unable to ascertain what file or directory I could test for to reliably conclude that the system is running Plesk. What file or directory indicates "Yes, this has Plesk installed" that is unlikely to ever change in the future, but would be present on all versions of Plesk out in the wild? Thanks in advance :)
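
    For what it's worth, my current guess (an assumption I would like confirmed) is the product root itself, since Linux installs of Plesk keep a version file there:

        # sketch of the detection I have in mind
        if [ -f /usr/local/psa/version ]; then
            echo "Plesk detected: $(cat /usr/local/psa/version)"
        fi

    Is /usr/local/psa/version present and stable across all Plesk versions in the wild, including Debian-based installs?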

    Read the article

  • What is a good solution for an adaptive iptables daemon?

    - by Matt
    I am running a series of web servers and already have a pretty good set of firewall rules set up, but I'm looking for something that will monitor the traffic and add rules as needed. I have denyhosts monitoring for bad SSH logins, and that's great, but I'd love something I could apply to the whole machine that would help prevent brute-force attacks against my web applications as well, and add rules to block IPs that show evidence of common attacks. I've seen APF, but it looks as though it hasn't been updated in several years. Is it still in use, and would it be good for this? Also, what other solutions are out there that would manipulate iptables to behave in some adaptive fashion? I'm running Ubuntu Linux, if that helps.
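
    fail2ban keeps coming up in my searches as an actively maintained option that tails log files and inserts temporary iptables rules; here is a sketch of the jail.local I would start from (the filter names ship with fail2ban, but the log paths are my assumptions for Ubuntu):

        # /etc/fail2ban/jail.local -- local overrides
        [DEFAULT]
        # block an offending IP for an hour after too many matches
        bantime  = 3600
        maxretry = 6

        [apache-auth]
        enabled = true
        port    = http,https
        filter  = apache-auth
        logpath = /var/log/apache2/error*.log

        [apache-badbots]
        enabled = true
        port    = http,https
        filter  = apache-badbots
        logpath = /var/log/apache2/access*.log

    Is fail2ban (or something similar) a better bet than APF at this point?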

    Read the article

  • How to know if my nginx is in good health?

    - by Howard
    I am running nginx on EC2 (m1.small) for SSL termination. I am using 2 workers on Ubuntu with the latest nginx (stable); network throughput is around 2 Mbps and the system load average is around 2 to 3. I am wondering whether this system is in good health for now, e.g. what the queue length is (I know nginx can handle a lot of concurrent requests, but I mean how many requests have to wait before being served) and what the average queue time is for a given request. I want to know because if my nginx is CPU-bound (e.g. due to SSL), I will need to upgrade to a faster instance. My current nginx status:

        Active connections: 4076
        server accepts handled requests
         90664283 90664283 104117012
        Reading: 525 Writing: 81 Waiting: 3470
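
    In case it matters, the counters come from stub_status, exposed with something like this (the URL and allowed address are placeholders):

        # inside the server {} block that terminates SSL
        location /nginx_status {
            stub_status on;
            access_log  off;
            allow 127.0.0.1;    # keep the counters private
            deny  all;
        }

    My understanding is that Waiting counts idle keep-alive connections rather than a backlog, and that nginx does not report a request queue directly, so a sustained load average of 2 to 3 on a single-core m1.small already suggests the CPU (probably the SSL work) is the bottleneck. Is that the right way to read these numbers?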

    Read the article

  • root directory - www or public_html

    - by Phil Jackson
    Is the root directory where all files are kept (as seen when accessing via FTP) always "www" or "public_html", depending on the OS? Or is it possible to rename this folder? And if so, what would be unique about this folder that would let me identify it? I.e. currently I have just written this:

        my $root;
        my $ftp = Net::FTP->new($DB_ftpserver, Debug => 0)
            or die "Cannot connect to some.host.name: $@";
        $ftp->login($DB_ftpuser, $DB_ftppass)
            or die "Cannot login ", $ftp->message;
        my @list = $ftp->dir;
        if( scalar @list != 0 ) {
            foreach( @list ){
                if( $_ =~ m/www$/g ){
                    $root = "www";
                    last;
                }elsif( $_ =~ m/public_html$/g ){
                    $root = "public_html";
                    last;
                }
            }
        }

    but this would not work if the folder has a different name. Any help much appreciated.

    Read the article

  • Why is it good to have website content files on a separate drive other than system (OS) drive?

    - by Jeffrey
    I am wondering what benefits I would gain from moving all website content files out of the default inetpub directory on C: to something like D:\wwwroot. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IUSRS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what benefits I will gain. Some of the environment settings are as below:

        VMware
        Windows 2008 R2 64-bit
        IIS 7.5
        C:\inetpub\site1
        C:\inetpub\site2

    Also, as this article (moving the iis7 inetpub directory to a different drive) points out, I'm not sure it's worth the trouble to migrate files to a different drive:

        PLEASE BE AWARE OF THE FOLLOWING: WINDOWS SERVICING EVENTS (I.E. HOTFIXES AND SERVICE PACKS) WOULD STILL REPLACE FILES IN THE ORIGINAL DIRECTORIES. THE LIKELIHOOD THAT FILES IN THE INETPUB DIRECTORIES HAVE TO BE REPLACED BY SERVICING IS LOW BUT FOR THIS REASON DELETING THE ORIGINAL DIRECTORIES IS NOT POSSIBLE.

    Read the article

  • Is CSF overkill?

    - by A4J
    My server runs just my own sites (vBulletin forums, which are always patched with security fixes, and Rails sites using the latest version), so do I really need CSF (http://configserver.com/cp/csf.html)? Or is it unnecessary for this kind of server set-up? I have already done the usual (disabled SSH login, pub-key auth, very strong passwords everywhere else, etc.). It was often recommended by users over at the cPanel forums, but I guess most of them are hosts there.

    Read the article

  • FTP server (vsftpd) with webgui

    - by manutenfruits
    I want to build a file server so that users can upload and download mostly multimedia, but also common files. Right now I have an Arch installation with vsftpd, and I'm about to install miniDLNA for multimedia sharing. The only problem is that FTP doesn't seem to fit my needs, because it almost always forces users onto a client such as FileZilla to make the server friendly. I have been looking for a web front end for vsftpd, but apart from management interfaces there's nothing. I need a front end, accessible from a browser, through which users can navigate through the folders in an easier and more elegant way than the plain FTP listing that browsers render by default. It should let users upload files and, as an awesome extra, let them play the multimedia directly in the browser. For this, I am willing to dump FTP if needed; I've heard about HTTP file servers but don't know too much about them. I could code everything myself, but there has got to be something out there already.

    Read the article

  • How to figure out which directory is web server root?

    - by matt
    I want to view websites hosted on my Mac when running Windows under VMware Fusion. I have an entry in the Windows hosts file to enable the routing:

        # ip of my mac      domain i use on the VM to access it
        192.168.1.70        mymac

    However, it resolves to an empty directory, as a 404 is generated. I can see from the access log on my Mac that everything is OK access-wise. Firefox on VMware reports the following response header:

        Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1

    Any ideas how I can figure out what directory is being served? I am lost in a maze of twisty httpd.conf passages. localhost on my Mac resolves to my ~/Sites directory; 192.168.1.70 resolves to the same empty directory/404. Thanks.
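
    Is there a way to ask Apache directly instead of spelunking through the config by hand? Something like this is what I have in mind (the config path is a guess for Mac OS X):

        # dump the parsed virtual-host layout, including which vhost catches 192.168.1.70
        apachectl -S

        # list every DocumentRoot the active configuration defines
        grep -R "DocumentRoot" /etc/apache2/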

    Read the article

  • lighttpd: why using port >= 9000 does not work properly

    - by yejinxin
    I have a lighttpd server which works normally; I can access the website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test one (using another port) on the same machine, and I do not want to use a virtual host. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders, and made the required changes, including editing lighttpd.conf. Now what confuses me is this: if I change the port to a number that is less than 9000, it works perfectly. But if the port is changed to a number that is equal to or greater than 9000, lighttpd can start, but I cannot access the new website from outside, while I CAN access it from INSIDE (I mean in the same LAN or on localhost). The access log from INSIDE looks like this:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )

        var.rundir           = "/home/work/lighttpd_9876"

        server.port          = 9876
        server.bind          = "0.0.0.0"
        server.pid-file      = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"

        var.cronolog_path    = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog      = ...
        accesslog.filename   = ...
        ...

    So why is this happening? I've tried several different ports, still the same. Aren't all ports between 8000 and 65535 treated the same?
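
    As far as I know lighttpd itself does not treat high ports differently, so my next guess is a host or network firewall that only opens specific ranges; this is the check I plan to run on the server (a sketch):

        # confirm lighttpd is really listening on the new port on all interfaces
        netstat -tlnp | grep 9876

        # look for iptables rules that only ACCEPT particular destination ports
        iptables -L INPUT -n -v | grep -E 'dpt|dpts'

    Does that sound like the right direction, or is there something port-specific in lighttpd that I'm missing?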

    Read the article

  • Set up Linux box for hosting a-z [apache mysql php ssl]

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall, but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded).

    The requirements: SECURITY!! Secure file transfer, secure client access (SSL certs and CA), secure data storage, virtual hosts/multiple subdomains. Local email would be nice, but not critical.

    The steps:

    Download latest CentOS DVD iso (torrent worked great for me).

    Install CentOS: while going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to try to go my own way, this probably wasn't the best idea.

    Basic config: set up users, networking/IP address etc. Yum update/upgrade.

    Upgrade PHP: to upgrade PHP to the latest version, I had to look to another repo outside CentOS. IUS looks great and I'm happy I found it!

        cd /tmp
        #wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        #rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        #wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        #rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm

        yum list | grep -w \.ius\.    [will list all packages available in the IUS repo]
        rpm -qa | grep php            [will list installed packages that need to be removed; the installed packages
                                       need to be removed before you can install the IUS packages, otherwise there
                                       will be conflicts]

        #yum shell
        >remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap
        Setting up Remove Process
        >install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel
        >transaction solve
        >transaction run
        Leaving Shell

        #php -v
        PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45)

    This process removes the old version of PHP and installs the latest.

    To upgrade mysql: pretty much the same process as above with PHP.

        #/etc/init.d/mysqld stop    [OK]
        rpm -qa | grep mysql        [installed mysql packages]

        #yum shell
        >remove mysql mysql-server
        Setting up Remove Process
        >install mysql51 mysql51-server mysql51-devel
        >transaction solve
        >transaction run
        Leaving Shell

        #service mysqld start       [OK]
        #mysql -v
        Server version: 5.1.42-ius Distributed by The IUS Community Project

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual hosts for SSL, setting up a CA, setting up SFTP with OpenSSH, or anything else would be appreciated.
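
    For the "setting up a CA" item, the rough plan I have so far (a sketch with throw-away file names; key sizes and lifetimes are arbitrary) is a small private CA created with openssl, which can then sign per-vhost server certificates and client certificates for the VPN users:

        # create a private CA
        openssl genrsa -out ca.key 2048
        openssl req -new -x509 -days 3650 -key ca.key -out ca.crt

        # create and sign a server certificate for one SSL virtual host
        openssl genrsa -out server.key 2048
        openssl req -new -key server.key -out server.csr
        openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt

    Corrections to that are welcome as well.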

    Read the article

  • What free OS should I use on my VPS?

    - by earlz
    Hello, I looked a bit but didn't see any duplicate of this, so my question is: which free (open source) OS do you use on servers, and why do you use that OS? Background: I have a VPS at Linode. There is a broad range of options for which OS I can put on it, including both 32- and 64-bit OSes. I just use it to run my small blog and for hosting random files; it's very low traffic. I have been using 64-bit Arch Linux on my VPS, and though I love the OS for general usage, for a server the constant breakage is troublesome. So I'm considering trying something new and am looking for suggestions.

    Read the article

  • Best practice, or generally best way to set up web-hosting server, permissions, etc.

    - by Jagot
    Hi, I'm about to set up a server upon which a friend and I will be hosting web sites, and I'll be using Debian. I've set up a LAMP solution many times just for local testing purposes, but never for actual production use. I was wondering what the best practices are in terms of setting the server up, specifically with regard to access to the web root directory. A couple of the options I have seen:

    1. Set up a single user account on the server for us both to use, and use a virtual host pointing to somewhere in the home directory, e.g. /home/webdev/www.
    2. Set each of us up with a user account, and grant permissions in some way to /var/www (what would be the best way? Set up a new group?).

    I want to get this right when I first set it up, as there won't be any going back for a while once our first site is up and running. Appreciate any guidance in advance.
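
    For the second option, the pattern I keep seeing (a sketch; the group name and usernames are placeholders) is a shared group that owns /var/www, with the setgid bit on directories so new files keep the group:

        # shared group owns the web root
        groupadd webdev
        usermod -aG webdev jagot
        usermod -aG webdev friend

        chown -R root:webdev /var/www
        find /var/www -type d -exec chmod 2775 {} \;   # group-writable, setgid directories
        find /var/www -type f -exec chmod 0664 {} \;   # group-writable files

    Is that the usual approach, or is the single shared account actually more common in practice?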

    Read the article

  • Fail-over caching reverse proxy

    - by sybreon
    Is there a way to configure Varnish, or any other caching reverse proxy, to serve pages from its cache when the back end fails? At the moment, if the back end goes down, a 503 Service Unavailable error is returned to the browser. I would prefer visitors to see a cached version rather than an error page while the back end is being fixed. My setup:

        [varnish (public ip)] <=== [router] <=== [web server (private ip)]

    PS: I have only one back-end web server.
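
    I have been reading about Varnish's grace mode combined with a backend health probe, which sounds like exactly this; is the sketch below (Varnish 2.x syntax; the host, probe URL and grace windows are placeholders) the intended way to use it?

        backend default {
            .host = "10.0.0.2";            # private IP of the single web server
            .port = "80";
            .probe = {
                .url       = "/";
                .interval  = 5s;
                .timeout   = 1s;
                .window    = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            # serve slightly stale objects normally, much staler ones during an outage
            if (req.backend.healthy) {
                set req.grace = 30s;
            } else {
                set req.grace = 24h;
            }
        }

        sub vcl_fetch {
            # keep objects around long enough to cover a backend outage
            set beresp.grace = 24h;
        }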

    Read the article
