Search Results

Search found 10640 results on 426 pages for 'apache2 module'.

Page 110/426 | < Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >

  • Can Apache be configured to specify more than one docroot per virtualhost?

    - by syn4k
    I have a vhost which specifies <VirtualHost *:80> DocumentRoot "/private/var/www/html/cms/sites/" ServerName localhost.com </VirtualHost> I would like to know if localhost.com can also point to /private/var/www/html/wordpress/. This seems like a no-brainer, but Apache is like black magic; these things often turn out to be possible. Anyway, I already know that I could specify a new ServerName entry and set a new docroot. The problem is that both directories need to be available as roots. If I need to provide more info, I will gladly do so.
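
    A sketch of one way to do this (not from the original thread), assuming the second directory can live under a URL prefix on the same host via mod_alias; the /wordpress prefix is an assumption:

        <VirtualHost *:80>
            ServerName localhost.com
            DocumentRoot "/private/var/www/html/cms/sites/"
            # hypothetical: expose the WordPress tree under its own URL prefix
            Alias /wordpress "/private/var/www/html/wordpress/"
            <Directory "/private/var/www/html/wordpress/">
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>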

    Read the article

  • Apache redirect some requests to another server

    - by mucie
    We just bought a new server. We want our old server to respond to the HTTPS connections (because of the SSL certificate) and the new server to handle the rest. The new server is ready, but I don't know how to redirect requests to the new one. mydomain.com => old machine, 10.10.10.41 => new machine. Requests will come through mydomain.com: if a request is HTTPS, respond; otherwise redirect to 10.10.10.41. How should I configure Apache for this situation?
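
    A minimal sketch of the old server's port-80 side, assuming mod_alias is available and that sending clients straight to the new machine's IP is acceptable; the *:443 virtual host with the certificate stays untouched and keeps answering HTTPS:

        <VirtualHost *:80>
            ServerName mydomain.com
            # anything that arrives over plain HTTP goes to the new machine
            Redirect temp / http://10.10.10.41/
        </VirtualHost>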

    Read the article

  • http 301 redirection issue

    - by Guilhem Soulas
    I'm a little bit lost with a redirection. I want mysite.com, www.mysite.com and www.mysite.co.uk to redirect to mysite.co.uk. In Apache, I wrote this for mysite.co.uk in order to redirect www to the root domain: RewriteEngine on RewriteCond %{HTTP_HOST} ^www RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301] And for mysite.com, I wrote this redirect to mysite.co.uk: ServerName www.mysite.com RewriteEngine on RewriteRule ^/(.*) http://mysite.co.uk/$1 [L,R=301] This way, I can make the redirection work properly from www.mysite.com to mysite.co.uk, but it doesn't work for mysite.com to mysite.co.uk (without the www) at the same time. Could someone tell me how to make all my redirections work in all cases?
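
    A sketch of one way to collapse all three hostnames into a single rule set, assuming they can all be served (or aliased) by the mysite.co.uk virtual host:

        <VirtualHost *:80>
            ServerName mysite.co.uk
            ServerAlias www.mysite.co.uk mysite.com www.mysite.com
            RewriteEngine On
            # anything that is not exactly the canonical host gets a 301
            RewriteCond %{HTTP_HOST} !^mysite\.co\.uk$ [NC]
            RewriteRule ^/?(.*)$ http://mysite.co.uk/$1 [L,R=301]
        </VirtualHost>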

    Read the article

  • Is there a way to use something like RewriteRule ... [PT] for an external URL?

    - by nbolton
    I have a non-Apache web server running on port 8000, but it cannot be accessed from behind corporate firewalls. So, I would like to use my Apache 2 server as a proxy to this other web server. I've tried using: RewriteEngine On RewriteRule /.* http://buildbot.synergy-foss.org:8000/builders/ [PT] ... but this does not work; I get: Bad Request Your browser sent a request that this server could not understand. However, it worked fine with [R]. Update: Also, when using ProxyPass, I get this error: Forbidden You don't have permission to access / on this server.
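
    [PT] only hands the rewritten URI back to Apache's own URL-to-file mapping, so it cannot target another host; that is mod_proxy's job. A sketch, assuming mod_proxy and mod_proxy_http are loaded (the Forbidden error usually means proxying has not been allowed yet):

        ProxyRequests Off
        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>
        # map a local path onto the buildbot instance on port 8000
        ProxyPass        /builders/ http://buildbot.synergy-foss.org:8000/builders/
        ProxyPassReverse /builders/ http://buildbot.synergy-foss.org:8000/builders/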

    Read the article

  • PHP CPU utilization limit

    - by knightrider
    I have done some research on the net regarding the problem. My question is NOT how to reduce CPU utilization by improving the algorithm, improving performance through multitasking, or limiting CPU per system user. I have a website where a user logs in, does some processing and logs out. The site runs on a Linux server with PHP and Apache. The problem is that I can't control the amount of CPU allocated to each user, i.e. I want to guarantee that a user will get, say, at least 5% of the CPU (assume the total number of users is less than 20). How can I do this? Any solution (PHP code, Apache server settings, or anything out of the box) is welcome. Thank you very much for reading this :)
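
    Apache and PHP cannot guarantee a CPU share by themselves, but the kernel can approximate one with cgroups. A rough sketch using the cgroup v1 tools from the libcgroup/cgroup-tools package; the group name and the idea of launching each user's processing through cgexec are assumptions for illustration:

        # create a CPU control group for one user's jobs and give it a relative weight
        cgcreate -g cpu:/siteuser1
        cgset -r cpu.shares=512 siteuser1
        # run that user's processing inside the group
        cgexec -g cpu:/siteuser1 php /var/www/site1/process.php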

    Read the article

  • uploading files greater than 1MB = connection resets

    - by Legit
    I'm using nginx on the frontend as a "proxy cache" and Apache on the backend. I've set my PHP settings to the following: error_log = /var/www/site1/php_error.log error_reporting = 22527 file_uploads = On log_errors = On max_execution_time = 0 max_file_uploads = 20 max_input_time = -1 memory_limit = 512M post_max_size = 0 upload_max_filesize = 1000M What's the problem? Uploading files smaller than 1MB is successful, but for anything larger Google Chrome outputs: Error 101 (net::ERR_CONNECTION_RESET): The connection was reset. I already checked for the error log file but it doesn't exist in the directory. I also checked /var/log/httpd/error_log but found no upload-related problems. I don't know what else might have caused the problem, so I'm reaching out for your helping hand. Thanks!
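
    Since nginx sits in front, its own request-body limit (1 MB by default) resets the connection before PHP ever sees the upload, which matches the 1MB symptom. A sketch of the nginx side; the backend address is an assumption:

        server {
            listen 80;
            # allow uploads up to the PHP upload_max_filesize of 1000M
            client_max_body_size 1000m;

            location / {
                proxy_pass http://127.0.0.1:8080;   # backend Apache; address is an assumption
                proxy_set_header Host $host;
            }
        }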

    Read the article

  • Too Many ESTABLISHED connections from a single IP address in Apache

    - by ananthan
    netstat -ntp | grep 80 shows too many ESTABLISHED connections from a single IP address. Around 300 of them; it is not an attack, and the user is accessing Apache over a 2G connection. This is the case with other 2G connections as well. As a result, Apache is running out of children. Earlier it was showing too many CLOSE_WAIT connections; after enabling tcp_tw_reuse and tcp_tw_recycle there is not much CLOSE_WAIT, but the number of ESTABLISHED connections increased. We are using Ubuntu 11.04 with 48 GB RAM, KeepAlive On, KeepAliveTimeout 10, MaxClients 800, MaxRequestsPerChild 4000, Timeout 300. I have set syn_ack to 1 and syn_retries to 2. Over WiFi there is no such issue; connections close properly, but with 2G connections Apache runs out of children and there are too many ESTABLISHED connections. I have also tried lowering Timeout from the default 300 to 30, but since our project is image hosting for mobile phones, clients couldn't upload images properly because they kept timing out. There were also a lot of 408 responses, so I changed it back to the default 300.
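
    With slow 2G clients, each phone can hold a worker for the whole keep-alive window and the whole request read. A sketch of settings that trade connection reuse for worker availability, plus mod_reqtimeout (available from Apache 2.2.15) to cap stalled reads without touching the global Timeout; the exact numbers are assumptions to tune:

        KeepAlive On
        KeepAliveTimeout 3          # release idle keep-alive connections quickly
        MaxKeepAliveRequests 100

        <IfModule mod_reqtimeout.c>
            # give slow uploaders time, but require minimal progress so stalled
            # connections do not pin workers until the global Timeout expires
            RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
        </IfModule>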

    Read the article

  • Rewrite 2 different directories with htaccess?

    - by jason
    I have a tricky problem (for me at least). I'm trying to rewrite / to the folder /webroot/www. I have some simple code and it works: RewriteRule ^$ /webroot/www/ [L] However, at the same time, if the URL starts with components followed by anything else (e.g. foo, as in /components/foo), and foo is an actual directory that exists inside components, I should rewrite to /components/foo/www instead. How can I achieve that? I can't seem to figure it out. I'm using Apache with .htaccess.
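
    A sketch of one way to write the second rule in the same .htaccess, assuming the directory check can be done with a filesystem test against DOCUMENT_ROOT; the REQUEST_URI condition is just a loop guard:

        RewriteEngine On

        # / -> /webroot/www/
        RewriteRule ^$ /webroot/www/ [L]

        # /components/foo/... -> /components/foo/www/... when "foo" is a real directory
        RewriteCond %{REQUEST_URI} !^/components/[^/]+/www/
        RewriteCond %{DOCUMENT_ROOT}/components/$1 -d
        RewriteRule ^components/([^/]+)/?(.*)$ /components/$1/www/$2 [L]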

    Read the article

  • OOMK kills mysql and apache when there is still a lot[?] of mem

    - by Flyer
    Let me first say that I'm pretty new to *nix systems and even more so to server management. Anyway, I've got a little problem. I have a VPS with 1 GB of memory running Debian 6. I have a few sites running on it, though significant load can only come from one of them. Recently, the OOM killer started killing MySQL, causing WordPress and phpBB to report that they can't connect to the MySQL server. The error itself is bad enough, especially when it happens at night and the site stays unavailable until I wake up and restart MySQL. I probably have a bad line in my crontab that could be the cause of it all (again, I'm new to this): */20 * * * * sync; echo 3 > /proc/sys/vm/drop_caches Well, if you need any information, let me know, since I don't really know which information would be useful here. Also, I'd like to know whether it's a bad idea to have the above cron task.
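
    Two things worth checking (a sketch, not a confirmed diagnosis): the drop_caches cron only evicts page cache and will not stop the OOM killer, so it can simply be removed, and MySQL's default buffers are easily too large for a 1 GB VPS. Hypothetical my.cnf values to cap MySQL's footprint:

        [mysqld]
        # keep MySQL well under the 1 GB total; the numbers are assumptions to tune
        innodb_buffer_pool_size = 128M
        key_buffer_size         = 16M
        max_connections         = 50
        tmp_table_size          = 16M
        max_heap_table_size     = 16M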

    Read the article

  • Apache forwarding without redirecting (application won't follow redirects)

    - by DrewVS
    Recently we had to move /task to /public/task, and I'd like to configure Apache to redirect accordingly. However, using mod_rewrite, though it works in the browser, seems to break applications making API calls to the above location. What happens is that the application gets back a page with a message saying the page was moved, but the app doesn't follow the redirect. So, is there a way to simply forward any traffic for /task to /public/task without 'redirecting', i.e. without returning a redirect status code? EDIT: Here's a little more information. I've found a simple test that clarifies what I'm trying to fix. Here is the URL path that needs forwarding: https://mydomain.com/task Needs to go to: https://mydomain.com/public/task If I use curl against the original domain, it just returns a redirect notice page. If I add the -L flag, which tells curl to follow redirects, it then follows the redirect successfully. I assume something very similar is happening in the application (which I don't have access to) that makes calls to the /task URL path. Since I cannot modify the application to make it follow redirects properly, I'm looking for a solution I can implement in Apache.
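
    A sketch of an internal mapping that never returns a 3xx to the caller, assuming /public/task is served by the same Apache instance and mod_rewrite is available; [PT] keeps it an internal rewrite rather than an external redirect:

        <VirtualHost *:443>
            # ... existing SSL and application configuration ...

            RewriteEngine On
            # internal rewrite: the client still sees /task in the URL
            RewriteRule ^/task(.*)$ /public/task$1 [PT,L]
        </VirtualHost>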

    Read the article

  • htaccess rewriting all subdomains to subdirectories

    - by indorock
    I'm trying to build a catch-all for any subdomains (not captured by previous rewrite rules) of a certain domain, and serve a website from a subdirectory that resides in the same folder as the .htaccess file. I already have my vhosts.conf sending all unmapped requests to a "playground" folder, where I want to easily create new subdomains by simply adding a subfolder. So, my structure looks like this: /var/www/playground |-> /foo |-> /bar The .htaccess lives inside the /playground folder, and /foo and /bar are separate websites. I want http://foo.domain.com to point to /foo and http://bar.domain.com to /bar. Here is my .htaccess file: RewriteEngine On RewriteCond %{HTTP_HOST} ^([^.]+).domain.com$ [NC] RewriteCond %{REQUEST_URI} !^/%1/(.*) RewriteRule ^(.*) /%1/$1 [L] This is supposed to capture the subdomain, add it as a subfolder in the RewriteRule, and then append the path information after the slash. The second RewriteCond is there to prevent an infinite loop. My idea was that %1 in the second RewriteCond would be able to reference the capture group in the first RewriteCond. But so far I haven't had any success; it always ends up in a redirect loop. If I replace %1 in the second RewriteCond with a hardcoded 'foo' or 'bar', it works, which leads me to believe that you cannot refer to a capture group inside a RewriteCond. Is this true? Or am I missing something?
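
    %1 does expand in a RewriteCond test string (the left-hand side), but not inside the condition pattern on the right, which is why the guard never matches and the loop continues. A sketch of one workaround that joins both values into a single test string; the # separator is arbitrary:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^([^.]+)\.domain\.com$ [NC]
        # guard: the URI must not already start with the captured subdomain
        RewriteCond %1#%{REQUEST_URI} !^([^#]+)#/\1/ [NC]
        RewriteRule ^(.*)$ /%1/$1 [L]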

    Read the article

  • how to serve php files on an Apache server (localhost) running ColdFusion/MySQL?

    - by frequent
    I'm still learning my way around my localhost server, which is running Apache 2.2, ColdFusion 8 and MySQL Server 5.5 (on Windows XP). I need to work on a site I inherited, which also ran some PHP scripts under the same setup. I have installed PHP 5 on my localhost, but when I open a dummy page with: <?php phpinfo();?> I only get plain text returned, so I guess I haven't configured Apache correctly to also serve PHP (while defaulting to ColdFusion). Question: Where do I need to get started if I want PHP to work on my current setup, too? Is there something I need to add to the httpd.conf file? If possible I don't want to uninstall/reinstall everything, because it took forever to get everything to work (excluding PHP). Thanks for any pointers!
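
    A sketch of the httpd.conf additions that usually make mod_php handle .php files while leaving the ColdFusion connector in charge of .cfm; the paths assume a typical Windows PHP 5 install and will differ on your machine:

        # load PHP as an Apache 2.2 module (path is an assumption)
        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"

        # let index.php act as a directory index alongside the existing defaults
        DirectoryIndex index.php index.cfm index.html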

    Read the article

  • Can't access Apache from outside my local network

    - by valter
    UPDATED: Now, when I type my external IP like xxx.xxx.xxx.xxx:8079, I can access the XAMPP default page. But the strange thing is that when someone else from outside my network tries to access it using the same IP, it doesn't work. I think it should, because it's the external IP. It's driving me crazy. I have tried for hours to access the XAMPP default page from outside my local network. My ISP blocks ports 80 and 8080, so I changed Apache to listen on port 8079: Listen 8079 My local computer's IP is 10.1.1.2. I can access the web server from any computer on my local network when I type http://10.1.1.2:8079 I also opened port 8079 on my modem (I think I did it right). When Apache is running on my computer, if I test port 8079 at http://canyouseeme.org/ I get the message "Success: I can see your service on xxx.xxx.xxx.xxx on port (8079) Your ISP is not blocking port 8079". If Apache is not running I get "Error: I could not see your service on xxx.xxx.xxx.xxx on port (8079) Reason: Connection refused". So, it's clear that port 8079 is open. But when I type xxx.xxx.xxx.xxx:8079 in Google Chrome, for example, I get: Oops! Google Chrome could not connect to xxx.xxx.xxx.xxx:8079 What can I do to solve this and allow Apache to serve the pages? I don't know what else I should configure. Please, help me. Thanks.

    Read the article

  • How to set up Apache with Parallels Plesk?

    - by Ran Gualberto
    I'm working with Windows Server 2008 (a GoDaddy Windows dedicated server). My problem is that .htaccess is not working on the server, and I just figured out that Apache is not installed. I would like to know how to run Apache with Plesk (with the existing PHP setup), and how to run Apache with the current site directory C:\inetpub\vhosts. My goal is to make .htaccess work on the server with Plesk and with the directory C:\inetpub\vhosts.

    Read the article

  • Centos 6.2, Apache 2 and Listen port for socket connection

    - by salvosav
    I'm trying to make a socket connection between a client and my server through a PHP script. To do this, I opened a port in iptables and configured a virtual host in Apache to direct all traffic on that port to the folder that contains the file index.php, which is my script. In my tests the port is open, but using the command netstat -ltn I see ':::35100'. From what I've read on the net, I understand that this way it only listens internally and not externally. But I don't understand how I can turn this ':::35100' into '0.0.0.0:35100', or, better yet, how to add this rule. Any ideas? Thanks
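
    The ':::35100' line simply means Apache is bound to the IPv6 any-address, which on most Linux builds also accepts IPv4 connections. If an explicit IPv4 socket is wanted anyway, the Listen directive can name the address; a one-line sketch:

        # bind explicitly to all IPv4 interfaces on the custom port
        Listen 0.0.0.0:35100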

    Read the article

  • What is the alternative of Apache's global Alias in IIS? (e.g. Alias /phpMyAdmin "c:/AppServ/www/phpMyAdmin")

    - by Sk8erPeter
    I know there's an "Add Virtual Directory..." option in every given sites in IIS with which I can set up e.g. phpMyAdmin's path to be reached with prepending /phpmyadmin to the address (e.g. http://example.com/phpmyadmin), but isn't there a "global" setting similar to Apache's Alias? For example, in Apache this setting looks like this: <IfModule mod_alias.c> Alias /phpMyAdmin "c:/AppServ/www/phpMyAdmin" Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin" </IfModule> This way I reach phpmyadmin with every hosts. (http://example1.com/phpmyadmin, http://example2.com/phpmyadmin also does work) But in IIS, do I have to add a virtual directory to every sites? I'm just curious, because we would like to serve some domain's content, so there would be multiple sites. It would be more comfortable to do it once (or have the opportunity to remove it once), but if I have to, I do add a virtual directory for each sites. (I know, maybe it's the better solution, because I can have a site where I don't want phpmyadmin to be available, but I was just curious.) Thanks in advance!

    Read the article

  • Issue with domain redirection

    - by phphunger
    I have a Flash games website which is hosted on a GoDaddy server, but because of heavy traffic on my site the GoDaddy server goes down, so I changed the hosting to a Midphase server. The strange thing is that I have created new name servers and a new database on my Midphase server, but the website is still being served from the GoDaddy server. When I make modifications via the GoDaddy FTP I see the changes, but when I make modifications on the Midphase server, no changes appear. Another strange thing is that who.is shows the new name servers and new server details, but I'm not getting the new server's content. Can anyone help me in this regard?

    Read the article

  • Virtual folder for multiple sites

    - by Cups
    I am creating a very simple flat-file CMS for small (multilingual) websites. The little file writing that goes on is handled by 4 scripts in a publicly available folder named /edit in each site. Given that I have 2 websites now working on that simple system: websiteA/index.php (etc) websiteA/edit/ websiteB/index.php (etc) websiteB/edit/ What is the best way of making that /edit folder "virtual" so that these and each subsequent website owner can log in to their own view of /edit, and yet the code exists in only one place? I do not want the website owners to have to log in from a central website, but from their own /edit directory. I have already read about different solutions, seemingly using the <Directory> directive in my httpd.conf declaration for each website, and also using straight mod_rewrite, but I admit to now becoming confused about some of the terminology. Each website has its own config file which contains path settings and so on. What in your opinion is the best way to handle this? EDIT In light of a reply, I suppose that given a virtual host directive such as this: <VirtualHost 00.00.00.00:80> DocumentRoot /var/www/html/websitea.com ServerName www.websitea.com ServerAlias websitea.com DirectoryIndex index.htm index.php CustomLog logs/websitea combined </VirtualHost> Is it possible to create an alias inside that directive for the folder websitea.com/edit?
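
    For what it's worth, mod_alias directives are allowed inside <VirtualHost>, so each site's block can map /edit onto a single shared copy of the scripts; a sketch, with the shared path being an assumption:

        <VirtualHost 00.00.00.00:80>
            DocumentRoot /var/www/html/websitea.com
            ServerName www.websitea.com
            ServerAlias websitea.com
            DirectoryIndex index.htm index.php
            CustomLog logs/websitea combined

            # shared editor code kept in one place outside the per-site docroots
            Alias /edit /var/www/shared/edit
            <Directory /var/www/shared/edit>
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>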

    Read the article

  • Apply rewrite rule to all but the files (recursive) in a subdirectory?

    - by user784637
    I have an .htaccess file in the root of the website that looks like this: RewriteRule ^some-blog-post-title/ http://website/read/flowers/a-new-title-for-this-post/ [R=301,L] RewriteRule ^some-blog-post-title2/ http://website/read/flowers/a-new-title-for-this-post2/ [R=301,L] <IfModule mod_rewrite.c> RewriteEngine On ## Redirects for all pages except for files in wp-content to website/read RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_URI} !/wp-content RewriteRule ^(.*)$ http://website/read/$1 [L,QSA] #RewriteRule ^http://website/read [R=301,L] RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> My intent is to redirect people to the new blog post location if they request one of those special blog posts. If that's not the case, they should be redirected to http://website.com/read. Nothing from http://website.com/wp-content/* should be redirected. So far conditions 1 and 3 are being met. How can I meet condition 2?
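
    A sketch of how the catch-all block could be rearranged so that everything except wp-content (and /read itself, to avoid a loop) is sent to /read; the !-f/!-d conditions are dropped here on the assumption that existing files and directories were what kept condition 2 from firing:

        RewriteEngine On

        RewriteRule ^some-blog-post-title/  http://website/read/flowers/a-new-title-for-this-post/  [R=301,L]
        RewriteRule ^some-blog-post-title2/ http://website/read/flowers/a-new-title-for-this-post2/ [R=301,L]

        # everything else goes to /read, except wp-content and /read itself
        RewriteCond %{REQUEST_URI} !^/wp-content
        RewriteCond %{REQUEST_URI} !^/read
        RewriteRule ^(.*)$ http://website/read/$1 [L,R=301,QSA]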

    Read the article

  • Multiple SSL domains on the same IP address and same port?

    - by John
    This is a Canonical Question about hosting multiple SSL websites on the same IP. I was under the impression that each SSL certificate required its own unique IP address/port combination. But the answer to a previous question I posted is at odds with this claim. Using information from that question, I was able to get multiple SSL certificates to work on the same IP address and on port 443. I am very confused as to why this works, given the assumption above, reinforced by others, that each SSL website on the same server requires its own IP/port. I am suspicious that I did something wrong. Can multiple SSL certificates be used this way?
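
    What makes this work is either Server Name Indication (the client sends the hostname during the TLS handshake; Apache supports it from 2.2.12 with OpenSSL 0.9.8j or later) or a single certificate that covers several names (wildcard/SAN). A sketch of the name-based SSL layout, with placeholder hostnames and paths:

        # one IP, one port, several certificates selected via SNI
        NameVirtualHost *:443

        <VirtualHost *:443>
            ServerName siteA.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/siteA.crt
            SSLCertificateKeyFile /etc/ssl/private/siteA.key
        </VirtualHost>

        <VirtualHost *:443>
            ServerName siteB.example.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/siteB.crt
            SSLCertificateKeyFile /etc/ssl/private/siteB.key
        </VirtualHost>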

    Read the article

  • Unable to connect to site on a different port

    - by JohnMerlino
    I have a domain registered at GoDaddy named http://mysite.com/. I logged into GoDaddy and went to All Products > Domains > Domain Management. I clicked on the appropriate domain and it took me to the Domain Details page. I clicked Launch under DNS Manager and it took me to the Zone File Editor. I noticed that notify.mysite.com was pointing to the IP address of a dead server, so I switched it to an operating server. Then I pinged the domain to see where it was pointing and it was correctly pointing to the working server. So I copied the default configuration under sites-available: sudo cp default notify.mysite.com. Then I made some edits to it so it would point to a different document root and serve files on a different port: Listen 1740 Listen 64.135.xx.xxx:1740 #I also tried this as well: NameVirtualHost 64.135.xx.xxx:1740 <VirtualHost 64.135.xx.xxx:1740> ServerAdmin [email protected] ServerName notify.mysite.com DocumentRoot /var/www/test/public <Directory /var/www/test/public> Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Then I enabled the virtual host, went to the document root and added an index.html file with some text in it, and restarted Apache. The restart gave no errors. Then I typed the correct domain in the URL, http://notify.mysite.com:1740/, and I get: Oops! Google Chrome could not connect to notify.mysite.com:1740 Somehow it took out all my other sites. Now even the ones that were responding on port 80 are no longer responding, even though I did not touch their virtual hosts. I get this message now: Oops! Google Chrome could not connect to mysite.com However, ping responds: ping mysite.com PING mysite.com (64.135.12.134): 56 data bytes 64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms 64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms The result of telnet: $ telnet guarddoggps.com 80 Trying 64.135.12.134... telnet: connect to address 64.135.12.134: Connection refused telnet: Unable to connect to remote host
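
    One thing that stands out (a guess, not a confirmed diagnosis): having both "Listen 1740" and "Listen 64.135.xx.xxx:1740" makes Apache try to bind the same port twice, which typically aborts startup with "Address already in use" and would explain every site going down at once. A sketch of the minimal shape:

        # declare the extra port exactly once (e.g. in ports.conf)
        Listen 1740

        <VirtualHost *:1740>
            ServerName notify.mysite.com
            DocumentRoot /var/www/test/public
        </VirtualHost>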

    Read the article

  • Apache Virtualhosts with PHP and custom logs - how to isolate PHP errors?

    - by Repox
    I'm trying to set up a simple hosting environment for my application on an Ubuntu server. I created a virtual host like this: <VirtualHost *:80> ServerAdmin [email protected] ServerName www.example.com ServerAlias example.com DocumentRoot /home/owner/example.com/docs <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/owner/example.com/docs/> Options -Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /home/owner/example.com/logs/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /home/owner/example.com/logs/access.log combined php_flag log_errors on php_value error_log /home/owner/example.com/logs/php-error.log </VirtualHost> Now, my problem is that PHP errors and warnings are written to error.log, not to php-error.log as I was hoping. How can I achieve this?
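
    One common cause (an assumption, not something confirmed by the config above): if the target file does not exist or is not writable by the Apache user, PHP falls back to the SAPI log, which is the vhost's error.log. A sketch of the check on Ubuntu:

        # the per-vhost PHP log must exist and be writable by www-data
        sudo touch /home/owner/example.com/logs/php-error.log
        sudo chown www-data:www-data /home/owner/example.com/logs/php-error.log
        sudo chmod 640 /home/owner/example.com/logs/php-error.log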

    Read the article

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache web server running a mod_perl application is exhibiting abnormal memory usage: after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, memory usage normalizes, probably because Apache workers get recycled periodically once a sufficient number of hits is generated (the graph of Apache hits per second correlates with this). The remaining 2 hits per second throughout the night are induced by HAProxy checks: it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half second, with "running" being a static file (i.e. not invoking any Perl code). It also seems that disabling these checks remedies the memory usage problem, but that obviously cannot be a solution. All 3 similarly configured servers (behind HAProxy) exhibit this behavior. The OS is Ubuntu 10.10, Apache version 2.2.16. This seems to be a memory leak, but I have no idea how to start debugging it. Any hints?
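
    Since the workers only shed memory when they are recycled by request count, one mitigation (a sketch that works around, rather than fixes, the leak) is to recycle them on a schedule that does not depend on traffic, either with a lower MaxRequestsPerChild or with a size-based limit such as Apache2::SizeLimit inside the mod_perl application:

        <IfModule mpm_prefork_module>
            # recycle each worker after N requests even during low-traffic hours;
            # the ~2 req/s of HAProxy checks alone will then turn workers over overnight
            MaxRequestsPerChild 2000
        </IfModule>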

    Read the article

  • Apache + Passenger, requests not forwarded to Rails

    - by olalonde
    I'm trying to set up Apache + Passenger (mod_rails) and everything works fine, except Apache won't forward requests to Rails. From my Apache log: File does not exist: /var/www/rails_apps/domain.com/public/controllerName My Rails app works fine with Mongrel, and I also have some PHP/MySQL sites running fine. I have no .htaccess in my Rails app directory. What is going wrong? How can I troubleshoot this problem?
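
    The "File does not exist" line suggests Apache is treating the request as a static file instead of handing it to Passenger, which usually happens when DocumentRoot does not point at the app's public/ directory or the directory block forbids access. A sketch of the usual Passenger vhost shape for Apache 2.2:

        <VirtualHost *:80>
            ServerName domain.com
            # DocumentRoot must be the Rails app's public/ directory for Passenger to claim it
            DocumentRoot /var/www/rails_apps/domain.com/public
            <Directory /var/www/rails_apps/domain.com/public>
                AllowOverride all
                Options -MultiViews
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>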

    Read the article
