Search Results

Search found 2349 results on 94 pages for 'webdev webserver'.

  • Fail-over caching reverse proxy

    - by sybreon
    Is there a way to configure Varnish, or any other caching reverse proxy, to serve pages from its cache when the back-end fails? At the moment, if the back-end goes down, a 503 Service Unavailable error is returned to the browser. I would prefer visitors to see a cached version rather than an error page while the back-end is being fixed. My setup: [varnish (public IP)] <=== [router] <=== [web server (private IP)]. PS: I have only one back-end web server.
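
    A minimal sketch of how this is often handled in Varnish with "grace mode", assuming Varnish 2.1/3.x VCL syntax and a hypothetical back-end address; once the health probe marks the back-end sick, cached objects can keep being served for up to their grace period:

        backend default {
            .host = "192.168.1.10";    # hypothetical private IP of the web server
            .port = "80";
            .probe = {
                .url = "/";
                .interval = 5s;
                .timeout = 1s;
                .window = 5;
                .threshold = 3;
            }
        }

        sub vcl_recv {
            if (req.backend.healthy) {
                set req.grace = 30s;   # small allowance while healthy
            } else {
                set req.grace = 6h;    # serve stale content while the back-end is down
            }
        }

        sub vcl_fetch {
            set beresp.grace = 6h;     # keep objects long enough to ride out an outage
        }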

  • How to figure out which directory is web server root?

    - by matt
    I want to view websites hosted on my Mac when running Windows in VMware Fusion. I have an entry in the Windows hosts file to enable the routing:

        # IP of my Mac / domain I use on the VM to access it
        192.168.1.70 mymac

    However, it resolves to an empty directory, as a 404 is generated. I can see in the access log on my Mac that everything is OK access-wise. Firefox in VMware reports the following response header: Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1. Any ideas how I can figure out which directory is being served? I am lost in a maze of twisty httpd.conf passages. localhost on my Mac resolves to my ~/Sites directory; 192.168.1.70 resolves to the same empty directory/404. Thanks.
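
    A couple of quick ways to ask Apache itself which directory it is serving, assuming the stock Apache install on OS X:

        # dump the parsed virtual host layout (which vhost answers which name/IP)
        apachectl -S
        # list every DocumentRoot mentioned in the active configuration
        grep -Ri documentroot /etc/apache2/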

  • lighttpd: why does using a port >= 9000 not work properly?

    - by yejinxin
    I have a lighttpd server that works normally; I can access the website from outside (non-localhost) via http://vm.aaa.com:8080. Assume it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test instance on another port on the same machine, and I do not want to use a virtual host. So I copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders, and made the required changes, including editing lighttpd.conf. What confuses me: if I change the port to a number below 9000, it works perfectly, but if the port is equal to or greater than 9000, lighttpd starts, yet I cannot access the new website from OUTSIDE, while I CAN access it from INSIDE (the same LAN or localhost). The access log from inside looks like:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = ( "mod_access", "mod_accesslog", )
        var.rundir = "/home/work/lighttpd_9876"
        server.port = 9876
        server.bind = "0.0.0.0"
        server.pid-file = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog = ...
        accesslog.filename = ...
        ...

    So why is this happening? I've tried several different ports, always with the same result. Aren't all ports between 8000 and 65535 supposed to behave the same?
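
    Since the site answers from inside the LAN but not from outside, a plausible first suspect is a firewall between the two that only permits certain ports; a hedged way to check, assuming nmap and iptables are available:

        # from an outside host: is the port reachable at all, or filtered?
        nmap -p 8080,9876 vm.aaa.com
        # on the server (or its gateway): look for rules that whitelist specific ports
        iptables -L -n | grep -E '8080|9876'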

  • Set up Linux box for hosting A-Z [Apache MySQL PHP SSL]

    - by microchasm
    I am in the process of reinstalling the OS on a machine that will be used to host a couple of apps for our business. The apps will be local only; access from external clients will be via VPN only. The prior setup used a hosting control panel (Plesk) for most of the admin, and I was looking at using another similar piece of software for the reinstall - but I figured I should finally learn how it all works. I can do most of the things the software would do for me, but am unclear on the symbiosis of it all. This is all an attempt to further distance myself from the land of Configuration Programmer/Programmer, if at all possible. I can't find a full walkthrough anywhere for what I'm looking for, so I thought I'd put up this question, and if people can help me on the way I will edit this with the answers and document my progress/pitfalls. Hopefully someday this will help someone down the line.

    The details: CentOS 5.5 x86_64; httpd: Apache/2.2.3; mysql: 5.0.77 (to be upgraded); php: 5.1 (to be upgraded).

    The requirements: SECURITY!! Secure file transfer, secure client access (SSL certs and CA), secure data storage, virtual hosts/multiple subdomains; local email would be nice, but not critical.

    The steps:

    1. Download the latest CentOS DVD ISO (a torrent worked great for me).

    2. Install CentOS. While going through the install, I checked the Server Components option, thinking I was going to be using another Plesk-like admin. In hindsight, considering I've decided to go my own way, this probably wasn't the best idea.

    3. Basic config: set up users, networking/IP address, etc.

    4. yum update/upgrade.

    5. Upgrade PHP. To upgrade PHP to the latest version, I had to look to a repo outside CentOS. IUS looks great and I'm happy I found it!

        cd /tmp
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh epel-release-1-1.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1-4.ius.el5.noarch.rpm
        rpm -Uvh ius-release-1-4.ius.el5.noarch.rpm
        yum list | grep -w \.ius\.   # lists all packages available in the IUS repo
        rpm -qa | grep php           # lists the installed packages that need to be removed
                                     # (they conflict with the IUS packages otherwise)
        yum shell
        > remove php-gd php-cli php-odbc php-mbstring php-pdo php php-xml php-common php-ldap php-mysql php-imap
        > install php53 php53-mcrypt php53-mysql php53-cli php53-common php53-ldap php53-imap php53-devel
        > transaction solve
        > transaction run

       Leaving the shell, php -v now reports: PHP 5.3.2 (cli) (built: Apr 6 2010 18:13:45). This process removes the old version of PHP and installs the latest.

    6. Upgrade MySQL - pretty much the same process as with PHP:

        /etc/init.d/mysqld stop
        rpm -qa | grep mysql         # lists the installed mysql packages to remove
        yum shell
        > remove mysql mysql-server
        > install mysql51 mysql51-server mysql51-devel
        > transaction solve
        > transaction run
        service mysqld start

       mysql -v now reports: Server version: 5.1.42-ius Distributed by The IUS Community Project.

    And this is where I'm at. I will keep editing this as I make progress. Any tips on how to configure virtual hosts for SSL, set up a CA, set up SFTP with OpenSSH, or anything else would be appreciated.
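
    On the virtual-hosts-for-SSL question, a minimal sketch of an Apache 2.2 mod_ssl vhost, with hypothetical names and paths. Note that name-based SSL vhosts need SNI, which the stock Apache 2.2.3 on CentOS 5 predates, so plan on one IP (or one port) per certificate unless a single wildcard cert covers everything:

        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName app1.example.internal        # hypothetical internal hostname
            DocumentRoot /var/www/app1
            SSLEngine on
            SSLCertificateFile /etc/pki/tls/certs/app1.crt
            SSLCertificateKeyFile /etc/pki/tls/private/app1.key
            SSLCACertificateFile /etc/pki/tls/certs/internal-ca.crt   # the in-house CA
            SSLVerifyClient require                  # only if clients must present CA-signed certs
        </VirtualHost>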

  • What free OS should I use on my VPS?

    - by earlz
    Hello, I looked a bit but didn't see any duplicate of this, so my question is: which free (open source) OS do you use on servers, and why do you use that OS? Background: I have a VPS at Linode. There is a broad range of options for which OS I can put on it, including both 32- and 64-bit OSs. I just use it to run my small blog and for hosting random files; it's very low traffic. I have been using 64-bit Arch Linux on my VPS, and though I love the OS for general usage, for a server the constant breakage is troublesome. So I'm considering trying something new and am looking for suggestions.

  • Why is it good to have website content files on a separate drive other than system (OS) drive?

    - by Jeffrey
    I am wondering what benefits moving all website content files from the default inetpub directory (on C:) to something like D:\wwwroot will give me. By default IIS creates a separate application pool for each website, and I am using the built-in user and group (IUSR/IIS_IUSRS) as the authentication method. I've made sure each site directory has the appropriate permission settings, so I am not sure what benefits I would gain. Some of the environment settings are as below: VMware, Windows 2008 R2 x64, IIS 7.5, C:\inetpub\site1, C:\inetpub\site2. Also, as this article (moving the IIS7 inetpub directory to a different drive) points out, I'm not sure it's worth the trouble to migrate files to a different drive: "PLEASE BE AWARE OF THE FOLLOWING: WINDOWS SERVICING EVENTS (I.E. HOTFIXES AND SERVICE PACKS) WOULD STILL REPLACE FILES IN THE ORIGINAL DIRECTORIES. THE LIKELIHOOD THAT FILES IN THE INETPUB DIRECTORIES HAVE TO BE REPLACED BY SERVICING IS LOW BUT FOR THIS REASON DELETING THE ORIGINAL DIRECTORIES IS NOT POSSIBLE."
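
    If the content does get moved, repointing an existing site at the new folder is one command per site with appcmd; a sketch using the hypothetical site names above:

        %windir%\system32\inetsrv\appcmd set vdir "site1/" -physicalPath:"D:\wwwroot\site1"
        %windir%\system32\inetsrv\appcmd set vdir "site2/" -physicalPath:"D:\wwwroot\site2"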

  • Hide/Replace Nginx Location Header?

    - by Steven Ou
    I am trying to pass a PCI compliance test, and I'm getting a single "high risk vulnerability". The problem is described as: "Information on the machine on which a web server is located is sometimes included in the header of a web page. Under certain circumstances that information may include local information from behind a firewall or proxy server, such as the local IP address." It looks like Nginx is responding with:

        Service: https
        Received: HTTP/1.1 302 Found
        Cache-Control: no-cache
        Content-Type: text/html; charset=utf-8
        Location: http://ip-10-194-73-254/
        Server: nginx/1.0.4 + Phusion Passenger 3.0.7 (mod_rails/mod_rack)
        Status: 302
        X-Powered-By: Phusion Passenger (mod_rails/mod_rack) 3.0.7
        X-Runtime: 0
        Content-Length: 90
        Connection: Close

        <html><body>You are being <a href="http://ip-10-194-73-254/">redirected</a>.</body></html>

    I'm no expert, so please correct me if I'm wrong, but from what I gathered, I think the problem is that the Location header is returning http://ip-10-194-73-254/, which is a private address, when it should be returning our domain name (ravn.com). So I'm guessing I need to either hide or replace the Location header somehow? I'm a programmer and not a server admin, so I have no idea what to do... Any help would be greatly appreciated! Also, might I add that we're running more than one server, so the configuration would need to be transferable to any server with any private address.
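
    A sketch of two nginx directives that are often relevant here. The proxy_redirect line only applies when the app is fronted with proxy_pass; with the in-process Passenger module the redirect is generated by the app itself, so the fix may instead belong in the app's URL/host settings. ravn.com stands in for the real public name and the upstream address is hypothetical:

        server {
            listen 443 ssl;
            server_name ravn.com;
            server_tokens off;                      # stop advertising the nginx version in Server:

            location / {
                proxy_pass http://127.0.0.1:8080;   # hypothetical upstream
                proxy_set_header Host $host;        # so the app builds URLs from the public name
                # rewrite a private-address Location: header back to the public name
                proxy_redirect http://ip-10-194-73-254/ https://ravn.com/;
            }
        }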

  • Is It Possible To Self-Teach PHP, Wordpress, CentOS (Linux), Apache, Nginx etc?

    - by Aahan
    Consider me a total noob who uses a Windows PC and has never touched Linux. But I want to administer, manage and take responsibility for my server, at least at some point, if not now. Since I am a full-time blogger, though, I am unable to find time to study at an institute. So, here is my question: is it possible to self-teach HTML, CSS, PHP, JavaScript, Wordpress, CentOS (or for that matter any Linux distro), Apache, Nginx, and Varnish? Yes, beginning with HTML - absolutely all of them. I might seem overly ambitious and foolish, but I just want to do it. Aren't there any self-taught server admins? (1) Please help me out with the names of good books, links and whatever you can. (2) How long would it take me to get there (approximately)? 3 years? 5 years? (I have a good grasp of HTML & Wordpress.) This is a great community; I hope at least some of you will shoot some suggestions at me.

  • Strange mysql problem moving website from Ubuntu server to Mac server

    - by evan
    I'm moving a website (PHP/MySQL) from an Ubuntu server to an OS X 10.6 server. I've set up Apache to run PHP scripts and installed the newest version of MySQL on the Mac. I copied all of the PHP files and dumped/imported all of the MySQL databases (including the mysql users database). When I visit the page served from the Mac, the page is able to connect to the database, but not to query it. Specifically, mysql_error() returns: NO SUCH FILE OR DIRECTORY. The reason it's strange is that I'm able to change the PHP connection strings on the Ubuntu server so that they point to the Mac server, and the page works correctly (so MySQL seems correctly set up on the Mac and definitely contains all of the users and tables it should). Thinking it was something to do with file permissions on the Mac, I changed all of the files to 755, but it hasn't helped. Any ideas? Thanks!! UPDATE: I've found this error, which I'm relatively certain is related, in /var/log/apache2/error_log: PHP Warning: mysql_query(): A link to the server could not be established.
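
    That particular message from the MySQL client usually means PHP connected to "localhost", which makes it look for a Unix socket at a path where this MySQL build did not put one; the remote test works because connections from the Ubuntu box go over TCP instead. Two hedged ways to line things up, assuming the socket is at /tmp/mysql.sock (verify with: mysqladmin variables | grep socket):

        ; php.ini - point PHP at the socket MySQL actually creates
        mysql.default_socket = /tmp/mysql.sock

        <?php
        // or sidestep the socket entirely by connecting over TCP instead of "localhost"
        $link = mysql_connect('127.0.0.1', 'user', 'password');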

  • What are the risks in putting website files in the "root" folder of a shared web hosting server?

    - by Obay Ouano
    A site I've been asked to manage is hosted (shared) on GoDaddy, with this folder structure: /, public_html, public_ftp, mail, stats, logs, etc. However, the website files are stored in the / folder, and NOT in public_html. I'm not sure if this is how GoDaddy sets up their customers' accounts, or if the old web developer accidentally changed it from public_html to the root. But when we call GoDaddy to ask them to correct this (move the files to public_html), they won't change it, and they insist that there is no security risk unless someone gets hold of the FTP password. Is this true? (I have always read that website files should be inside public_html.) If not, where could this setting be changed? The .htaccess is empty.

  • Adding multiple websites with different SSL certificates in IIS 7

    - by Timka
    I'm having trouble using SSL for two different websites on my IIS 7 server. Please see my setup below. website1: my.corporate.portal.com; SSL certificate for website1: *.corporate.portal.com; https/443 bound to my.corporate.portal.com. website2: client.portal.com; SSL certificate issued for: client.portal.com. When I try to bind https in IIS 7 with the client's certificate, I don't have the option to enter a host name (it's grayed out), and as soon as I select the 'client.portal.com' cert, I get the following error in IIS: "At least one other site is using the same HTTPS binding and the binding is configured with a different certificate. Are you sure that you want to reuse this HTTPS binding and reassign the other site or sites to use the new certificate?" If I click 'yes', my.corporate.portal.com stops using the proper SSL cert. Could you suggest something?
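
    IIS 7 cannot serve two different certificates from a single IP:443 binding (SNI only arrived with IIS 8), which is also why the host-name box is grayed out for a non-wildcard certificate. The usual workaround is a second IP address dedicated to the second certificate; a sketch using appcmd, with a hypothetical address:

        appcmd set site /site.name:"client.portal.com" /+bindings.[protocol='https',bindingInformation='10.0.0.2:443:']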

  • startup cassandra layout

    - by davidkomer
    We've got a relatively low-traffic site (~1K pageviews/day) hosted on a single server, and expect it to grow significantly over the next few years. I'm thinking of moving over to Rackspace Cloud Servers or EC2 and firing up 3 nodes (all on CentOS): 2 x web (Apache), behind a load balancer, and 1 x MySQL (for the Wordpress-powered part). The question is where to put Cassandra right now: should it sit on each web node, or on the MySQL node? My thought right now is to put it on the web nodes. It's my understanding that Cassandra offers fault-tolerance (i.e. if we take a node down, the site is still operational), so even with only two nodes we'd have that benefit, as opposed to putting it on the MySQL node. Also, as we scale up and add another web node, a Cassandra instance can come along with it, and the PHP can always run its queries against localhost. Is this a good idea?

  • nginx rewrite for /blah/(.*) /$1

    - by skrewler
    I'm migrating from mod_php to nginx. I got everything working except for this rewrite; I'm just not familiar enough with nginx configuration to know the correct way to do it. I came up with the following by looking at a sample on the nginx site:

        server {
            server_name test01.www.myhost.com;
            root /home/vhosts/my_home/blah;
            access_log /var/log/nginx/blah.access.log;
            error_log /var/log/nginx/blah.error.log;
            index index.php;

            location / {
                try_files $uri $uri/ @rewrites;
            }

            location @rewrites {
                rewrite ^ /index.php last;
                rewrite ^/ht/userGreeting.php /js/iFrame/index.php last;
                rewrite ^/ht/(.*)$ /$1 last;
                rewrite ^/userGreeting.php$ /js/iFrame/index.php last;
                rewrite ^/a$ /adminLogin.php last;
                rewrite ^/boom\/(.*)$ /boom/index.php?q=$1 last;
                rewrite ^favicon.ico$ favico_ry.ico last;
            }

            # This block will catch static file requests, such as images, css, js.
            # The ?: prefix is a 'non-capturing' mark, meaning we do not require
            # the pattern to be captured into $1, which should help improve performance.
            location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
                # Some basic cache-control for static files to be sent to the browser
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            }

            include php.conf;
        }

    The issue I'm having is with this rewrite:

        rewrite ^ht\/(.*)$ /$1 last;

    99% of the requests that hit this rewrite are static files, so I think maybe they're being sent to the static-files section, and that's where things are being messed up. I tried adding the following, but it didn't work:

        location ~* ^ht\/.*\.(?:ico|css|js|gif|jpe?g|png)$ {
            # Some basic cache-control for static files to be sent to the browser
            expires max;
            add_header Pragma public;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }

    Any help would be appreciated. I know the best thing to do would be to just change the references from /ht/whatever.jpg to /whatever.jpg in the code, but that's not an option for now.
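
    One plausible culprit: both patterns quoted in the prose drop the leading slash (^ht\/ instead of ^/ht/), so they never match, and the generic static-file regex location wins for those URLs before @rewrites is ever consulted. A hedged sketch that short-circuits the regex locations for the /ht/ prefix:

        # ^~ makes this prefix match win over the regex locations,
        # so /ht/foo.png is rewritten to /foo.png and then re-matched
        # against the locations, landing in the static-file block
        location ^~ /ht/ {
            rewrite ^/ht/(.*)$ /$1 last;
        }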

  • How to install Reddit Open Source on a web server

    - by Shubz
    I have been playing around with the Reddit open source software and have been getting nowhere fast. I was wondering if anybody can instruct me on how to install the software on a web server. I know how to install normal PHP scripts etc., but I've never installed software such as a Python or Rails app before. I'm not very good with commands, but I know how to run them, if that makes sense. Thanks!

  • Is Page-Loading Time Relevant?

    - by doug
    Take this (ServerFault) page, for instance. It has about 20 elements. When the last of these has loaded, the page is deemed "loaded" - but not before. This is certainly the protocol used by our testing service (which is among the small group of well-known vendors that offer that sort of service). Obviously this method is based on a clear, definite endpoint, so it's easy to apply with concomitant reliability. I think it's also the metric used by the popular Firefox plugin YSlow. For my employer's website, the last-to-load items are nearly always tracking code, tracking pixels, etc., so from the user's point of view - their perception - the page was "loaded" well before it had actually loaded by the criterion our testing service uses (15-20% earlier is a rough estimate). I'm sure I'm not the first person to consider this, nor the first to wonder whether it causes micro-optimization while ignoring overall system-level, or user-perceived, performance. So my question is: are there other, more practical (yet still reasonably precise) measures of page loading time?

  • Using Arch Linux computer as a server for Rack Apps

    - by wxl
    What would be the best way to go about using an Arch Linux computer as a Rack (as in Ruby Rack, not an actual rack server) server? Here's what I want to be able to do: (1) automatically deploy on a git push to the server (I already have this worked out: on post-receive, the server checks out the app to /home/git/app from /home/git/app.git); (2) run a Rack server application to serve up this app, one that can be restarted on demand; (3) run a MongoDB server; (4) be able to access the app by going to my-server.local/app or something similar (it's really only going to be used on the local network; no port forwarding or outside use). Any ideas would be greatly appreciated. I apologize if this seems too "do it for me".
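
    A minimal sketch of one common arrangement, assuming nginx in front and the app served by rackup on a local port (thin, unicorn, or a process supervisor would be the usual upgrade for clean restarts on demand):

        # serve the checked-out app on a local port (run from /home/git/app)
        rackup config.ru -p 9292

        # nginx: expose it on the local network under /app/
        location /app/ {
            proxy_pass http://127.0.0.1:9292/;
            proxy_set_header Host $host;
        }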

  • Apache crashing at random intervals. Can not find a reason in log files

    - by Nick Downton
    We are having an issue with a VPS running Plesk 9.5 on Ubuntu 8.04. At seemingly random intervals Apache will disappear and needs to be started manually. I have checked the Apache error log, /var/log/messages, and the individual virtual hosts' Apache error files, and cannot find anything that coincides with the time of the failure. dmesg is empty, which is a bit odd. We have also had the psa service go down for no apparent reason while Apache stayed up. I'm at a loss to diagnose this, because none of the log files I can find point to any issues. Are there any others I can look at? Memory usage sits at about 55% (out of 400 MB) and it isn't a particularly high-trafficked server. Any pointers as to where else I can find out what is going on would be very much appreciated. Nick
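
    One hedged way to catch the failure in the act: a cron watchdog that restarts Apache when it vanishes and snapshots memory state at that moment. On a 400 MB VPS, silent logs plus an empty dmesg sometimes point at the host's out-of-memory handling, which a snapshot taken at failure time can help confirm or rule out:

        # /etc/cron.d/apache-watchdog - every minute, restart apache if it is down,
        # recording the time and memory state for later diagnosis
        * * * * * root pidof apache2 > /dev/null || { date >> /var/log/apache-watchdog.log; free -m >> /var/log/apache-watchdog.log; /etc/init.d/apache2 start; }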

  • Do all domains on the same shared hosting server have the same IP or ID

    - by silow
    Here's what I've got: siteA.com and siteB.com are hosted on HostGator, on the same account of a shared hosting server (not VPS or dedicated). script.php is an external site that each of these two sites accesses. I noticed that when siteA.com or siteB.com accesses the outside script.php, the script identifies them both as 1a.12.12ab.static.theplanet.com (apparently because HostGator uses theplanet.com servers). The fact that they're identified as the same value isn't surprising, because after all they're hosted under the same account, /home/user123/public_html. What I'm wondering about is other websites that are hosted on the same shared server but under other accounts - basically, websites under another developer's control that just happen to share the same hardware. Do they also have the exact same identifier, 1a.12.12ab.static.theplanet.com, or does it change by account?

  • IIS7 port 80 doesn't work from outside

    - by ihorko
    I have created a web site and added it to IIS 7, and in the binding I set the host name to "mysite.com" (here "mysite.com" is my registered domain, which points to my IP address). When I assigned port 8095 and opened the site as mysite.com:8095, it successfully opened both on my local PC and on PCs outside my network; but if I set the port to 80 there, http://mysite.com opens only on my PC, not on outside PCs. The firewall is disabled. How can I resolve this problem? Please help!
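
    Since port 8095 works from outside and 80 does not, the usual suspects are the router's port-forwarding rules or an ISP that blocks inbound port 80 on consumer lines; a quick check from a machine outside the network:

        # does anything answer on port 80 at all?
        telnet mysite.com 80
        # compare with the port that is known to work
        telnet mysite.com 8095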

  • Sandbox on a linux server for group members

    - by mgualt
    I am a member of a large group (an academic department) using a central GNU/Linux server. I would like to be able to install web apps like Instiki, run version control repositories, and serve content over the web, but the admins won't permit this due to security concerns. Is there a way for them to sandbox me, protecting their servers in case I am hacked? What is the standard solution for a problem like this?

  • server performance metrics report and practicality

    - by Anjesh
    I need to prepare a web server (Apache/PHP) performance report containing important metrics like CPU usage, disk I/O, and memory usage on a per-user basis. Several domains are hosted on the same server, and they run as separate users via FastCGI. The reason: sometimes some hosted applications use a lot of CPU, making the server slow for the other applications (running as separate users). I am planning to develop scripts for this, as I can't seem to find any simple utilities for the purpose. The script would take snapshots of the per-user metrics at defined periods, say every 15 minutes, and record them; any abnormalities would be reported via email. How practical is that? It would also be interesting to know what else should be recorded.
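
    A minimal sketch of the sampling part, assuming GNU awk and a script invoked from cron every 15 minutes; it aggregates CPU and memory share per user from a ps snapshot (per-user disk I/O is harder to get on stock kernels and usually needs something like iotop or per-process /proc accounting):

        # append per-user CPU and memory share with a timestamp
        ps -eo user,%cpu,%mem --no-headers | \
            awk '{cpu[$1]+=$2; mem[$1]+=$3}
                 END {for (u in cpu) printf "%s %s cpu=%.1f mem=%.1f\n", strftime("%F %T"), u, cpu[u], mem[u]}' \
            >> /var/log/user-metrics.log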

  • Allow SFTP to a single folder not in home directory

    - by Brandon
    I have a web server that I use to host my websites; all the websites live in folders under /srv/www. There is a Wordpress site that I want to give another developer SFTP access to. How would I go about doing this so that they only have access to /srv/www/thesite.com and not to any other directories? Running Ubuntu 9.04.
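
    A common approach is OpenSSH's built-in chroot support, available in the OpenSSH 5.1 that Ubuntu 9.04 ships; a sketch with a hypothetical username. Note that sshd insists the chroot directory and all of its parents be owned by root and not group- or world-writable, so the developer's writable content usually has to live in a subdirectory they own (e.g. wp-content):

        # /etc/ssh/sshd_config
        Subsystem sftp internal-sftp

        Match User sitedev
            ChrootDirectory /srv/www/thesite.com
            ForceCommand internal-sftp
            AllowTcpForwarding no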
