Search Results

Search found 10640 results on 426 pages for 'apache2 module'.


  • IPv4 NameVirtualHost, IPv6 VirtualHost

    - by MadHatter
    Like many of us, I have an Apache server (2.2.15, plus patches) with a lot of virtual hosts on it. More than I have IPv4 addresses, to be sure, which is why I use NameVirtualHost to run lots of them on the same IPv4 address. I'm busily trying to get everything I do IPv6-enabled. This server now has a routed /64, which gives me an awful lot of v6 addresses to throw around. What I'm trying to find is a simple way to tell each v4 NameVirtualHost that it should also function as a VirtualHost on a unique IPv6 address. I really, really don't want to have to define each virtual host twice. Does anyone know of an elegant way to do this? Or to do something comparable, in case I've embedded any dangerously ignorant assumptions in my question?
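
    One approach that avoids defining everything twice (a minimal sketch; the addresses 192.0.2.1 and 2001:db8::1 are placeholders, not from the question) is to list both the shared v4 socket and the vhost's unique v6 socket in a single <VirtualHost> block, so one definition answers on both:

        # example addresses only -- substitute your own
        NameVirtualHost 192.0.2.1:80
        Listen [2001:db8::1]:80

        <VirtualHost 192.0.2.1:80 [2001:db8::1]:80>
            ServerName www.example.org
            DocumentRoot /var/www/example
        </VirtualHost>

    Only the shared IPv4 address needs NameVirtualHost; each v6 address is unique to its vhost, so it can be matched by address alone.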

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for web pages that I or my colleagues make from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2). Does anyone have an idea? Update: I've got some more information now. The referrer of the replicated request is set to the URL I requested before, and it's not the exact same request, as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one; it's one of a subnet (80.40.134.*). It's only the first request to a resource that gets repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried random URLs with different HTTP status codes and different file patterns: 301s and 200s are redone, 404s are not. Image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common, as I found other clients visiting just after the first requests:

        66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google"
        50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)"

    I wasn't aware of this practice, so I don't see it as much of a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll check later whether this also happens when I query some other server where I have access to the access logs, and will update here then.

    Read the article

  • How to limit the number of concurrent CGI script invocations in Apache 2.2?

    - by hsivonen
    How can I limit the number of concurrent CGI invocations in Apache 2.2.x? More specifically, my problem is this: I have Apache hosting a Bugzilla instance and other stuff on one server. There's very little legitimate concurrent use of Bugzilla. However, it's trivial to mount a Denial of Service attack on the whole server by ignoring robots.txt and simply fetching a lot of bug pages that fork a process and hit a database.
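
    One possible approach (a sketch only; it assumes the third-party mod_qos module is installed, and the path and limit below are made-up values) is to cap the number of concurrent requests allowed into the expensive CGI locations:

        # mod_qos sketch: allow at most 10 concurrent requests to Bugzilla bug pages
        <IfModule mod_qos.c>
            QS_LocRequestLimitMatch "^/show_bug\.cgi" 10
        </IfModule>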

    Read the article

  • Apache 2.0 is showing only text and not php files

    - by denonth
    I have a web application written in PHP, HTML and JavaScript. On my PC I have EasyPHP installed, which bundles Apache and everything else. But I wanted to put this web app on my server, where I have installed Apache 2.0, and my PHP files are displayed as plain text or the browser starts downloading them automatically. I have tried several things; one of them is to add this to my conf file:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    But it's still not working. What else can I do? Thank you.
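
    AddType alone will not help if the PHP module itself is not loaded. A minimal sketch of what the relevant httpd.conf lines might look like (the module path and PHP version are assumptions; use whatever your distribution actually installed):

        # load the PHP module -- path and version are examples only
        LoadModule php5_module modules/libphp5.so
        AddHandler application/x-httpd-php .php
        DirectoryIndex index.php index.html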

    Read the article

  • PHP unable to allocate memory.

    - by AlReece45
    On my way to the office this morning, every website on our shared VPS started giving the same error (several times; not the typical memory_limit error, which is fatal): "Warning: Unknown: Unable to allocate memory for pool. in Unknown on line 0". The shared server is a 64-bit OpenVZ container running cPanel. There are only ~6 VPSes on the host -- this is the largest one at only 4GB. The host itself has 24GB RAM. As the graphs below show, the memory usage on the host and the VPS are both rather low, and CPU usage, disk and the host all seem to be normal. RlimitMem was set to 583653034, yet the memory usage is about the same as it usually is. Apache 2.2, PHP 5.2 (mod_php). Restarting Apache has corrected the problem for now. However, I'd like to prevent it from happening again and I'm not sure what was limiting the memory. There seems to be plenty of memory: what caused this error?
    [Graphs: VPS memory usage, host memory usage]
    APC information: apc.ttl=0, apc.shm_size=0, apc.mmap_file_mask=(blank), 1 segment with 32.0 MBytes (mmap memory, pthread mutex locking)
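
    The warning comes from APC running out of its shared-memory segment (the "pool"), not from PHP's memory_limit; with a single 32 MB segment it can fill up or fragment over time. A hedged sketch of the php.ini/apc.ini change that usually addresses it (the values are examples to size against your own usage, not a recommendation):

        ; older APC builds take a plain number of megabytes (e.g. 128) instead of "128M"
        apc.shm_size = 128M
        apc.ttl      = 7200   ; let APC evict stale entries instead of filling up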

    Read the article

  • Nginx, Apache, MySQL, Memcached with a 4G RAM server. How to optimize to have enough memory?

    - by TomSawyer
    I have one dedicated server with Nginx as a proxy in front of Apache, plus Memcached and MySQL, and 4G of RAM. Lately the number of visitors to my site hasn't increased, but the server always gets overloaded at certain times of day (9AM - 3PM). RAM in use grows second by second until it is full; at that moment the server gets overloaded, and I have to kill all the Apache and MySQL services and reboot to get free memory. That's the cycle. Here is my RAM in use at the moment: 160 (nginx), 220 (Apache), 512 (memcache), 924 (MySQL). Here are the process counts: 4 (nginx), 14 (Apache), 5 (memcache), 20 (MySQL). And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
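
    A rough back-of-the-envelope check points at the per-connection buffers: MySQL can allocate them on top of the global caches for every connection, so the worst case is roughly global buffers + max_connections x per-connection buffers. With the values above that is about (4M read + 4M join + 4M sort + 4M read_rnd + 0.5M thread_stack) x 200 connections, close to 3.3 GB on its own. A hedged sketch of more conservative values to experiment with (starting points, not a tuned recommendation):

        max_connections      = 60
        read_buffer_size     = 1M
        join_buffer_size     = 1M
        sort_buffer_size     = 1M
        read_rnd_buffer_size = 1M
        tmp_table_size       = 64M
        max_heap_table_size  = 64M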

    Read the article

  • Server downtime - are these APC warnings the cause?

    - by DisgruntledGoat
    Yesterday I had a problem with my dedicated server (Ubuntu 10.04, LAMP). It wasn't down per se, but running incredibly slowly as if we had a massive overload of visitors (though I don't think we did). It's running smoothly again now. I've been checking through log files etc to see if I can find any issues, the only strange thing is a bunch of these errors, occurring at about the same time as the downtime: [apc-warning] Unable to allocate memory for pool. in [file] on line 49. And a bit later on: [apc-warning] GC cache entry '[file1]' (dev=2056 ino=8988092) was on gc-list for 3601 seconds in [file2] on line 746. Could these errors indicate the cause of the server slowdown, or are they simply a result of the server being slow in the first place? What would be the solution?

    Read the article

  • Clustering with vSphere and Apache load balancing

    - by Anonymous2011
    I was looking at vSphere to automatically load balance my Apache web servers and MySQL servers, but will this actually do the job? I know it says it does automatic load balancing, but I'm not sure that means quite what I want to achieve. Is there an easier way to set up a PHP/Apache and MySQL cluster within virtual machines? Or are there any guides to clustering? (I have tried googling without much luck.) Any help towards understanding or setting up clustering and load balancing within virtual machines would be appreciated.
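
    For the HTTP tier specifically, one common pattern (a minimal sketch; the backend names web1/web2 are placeholders) is an Apache mod_proxy_balancer front end distributing requests across the PHP/Apache VMs, independently of anything vSphere does at the hypervisor level:

        # requires mod_proxy, mod_proxy_http and mod_proxy_balancer to be loaded
        <Proxy balancer://phpcluster>
            BalancerMember http://web1.internal:80
            BalancerMember http://web2.internal:80
        </Proxy>
        ProxyPass        / balancer://phpcluster/
        ProxyPassReverse / balancer://phpcluster/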

    Read the article

  • Apache shuts down from time to time

    - by Dugi
    I'm having trouble with my VPS, as it keeps shutting Apache down at least twice a day. The server is running CentOS 6 with the latest Apache. By "shutting down" I mean I have to go into SSH and type this command to bring it up again: /sbin/service httpd start. I'm not very good with servers and my host doesn't seem to have good customer service. Any help would be appreciated, as these unexpected downtimes really kill the mood.
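
    Until the root cause shows up (check /var/log/httpd/error_log and dmesg for signs of the OOM killer), a stopgap some admins use is a tiny cron watchdog. A sketch, assuming the stock CentOS 6 init-script layout:

        # /etc/cron.d/httpd-watchdog -- restart httpd every 5 minutes if it is not running
        */5 * * * * root /sbin/service httpd status >/dev/null 2>&1 || /sbin/service httpd start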

    Read the article

  • Apache not finding index.php by default, set rule for routing through index.php

    - by eoinoc
    Apache on the server is set to find index.php by default, and that works for a normal folder. However, I have a .htaccess rule to route all requests through my routing script:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ index.php [QSA,L]

    With these .htaccess contents, the server returns a 404 error. Only by specifying /index.php does the routing script get called. Any tips on what I am doing wrong?
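
    One thing worth checking (a hedged sketch; whether it applies depends on the rest of the server config) is adding an explicit DirectoryIndex and a !-d condition, so directory requests and real files are handled before everything else is funneled into the router:

        DirectoryIndex index.php
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php [QSA,L]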

    Read the article

  • Apache rewrite rules behind a nginx proxy

    - by Tuinslak
    Hi, I am running nginx (:80) in front of an Apache web server (:8080). Nginx config (snippet):

        location / {
            proxy_pass http://www.domain.tld:8080;
            proxy_set_header X-Real-IP $remote_addr;

    If I set localhost instead of www.domain.tld, my browser gets redirected to http://localhost:8080. Apache rewrite rules:

        RewriteEngine On
        Options +FollowSymlinks
        RewriteBase /
        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_URI} !\..+$
        RewriteCond %{REQUEST_URI} !/$
        RewriteRule (.*) http://%{HTTP_HOST}/$1/ [L,R=301]
        RewriteCond %{REQUEST_URI} !v2/
        RewriteRule ^(.*)$ v1/$1 [L]

    So far, so good. However, every link (which uses relative paths) appears as http://www.domain.tld:8080/page instead of staying on port 80. Is there any way to solve this through the rewrite rules? I don't want to use absolute paths. Thanks
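
    The port leaks in because Apache builds its redirects from the Host header it receives, which here still carries :8080. A hedged sketch of the nginx-side fix: pass the original host through, and have nginx rewrite any Location headers that still mention the backend port:

        location / {
            proxy_pass       http://www.domain.tld:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_redirect   http://www.domain.tld:8080/ http://www.domain.tld/;
        }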

    Read the article

  • Full Apache config migration

    - by Victor Rashkov
    I searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine and I have to migrate to a new server in a different country. The old server is 11.10, the new server is 12.04 LTS. My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install. It is Apache with FastCGI, suEXEC, a GD library, the worker MPM, and all of it sitting on top of an mhddfs file system. There are also other configs I've changed and I cannot recall what they are. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permissions errors, CGI problems, etc. Therefore my question is: Is there a sane way to simply tar a full backup of the current web server installation, including MySQL, PHP and the Apache server with all configs, and then move it to the new machine? I shall be forever thankful for any advice. So far none of the answers I found here gave me one. Thanks!
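
    Copying binaries from 11.10 onto 12.04 rarely ends well, but the package list plus the config and data trees usually can be carried across. A hedged sketch of that approach (paths are the Debian/Ubuntu defaults and assumed to match your layout):

        # on the old server: capture packages, configs and data
        dpkg --get-selections > packages.txt
        sudo tar czf etc-backup.tar.gz /etc/apache2 /etc/php5 /etc/mysql
        mysqldump --all-databases -u root -p > all-databases.sql
        sudo tar czf www-backup.tar.gz /var/www

        # on the new server: reinstall the same packages, then restore configs and data
        sudo dpkg --set-selections < packages.txt && sudo apt-get dselect-upgrade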

    Read the article

  • How do I configure ubuntu server's iptables to allow java without opening the floodgates?

    - by rofls
    I'm new to servers, so please bear with me. I have my amateur site running. The problem is, I followed Rackspace's instructions on setting up iptables and am pretty sure that's why the Java server I'm trying to use on port 8080 isn't working (it runs the script, but my Android test app doesn't connect to it). When I try running the same Java server script on port 80 it doesn't even start. I also ran nmap on my domain and saw that indeed only ports 80 and 22 (for SSH) are responding. Is it possible to run Java and Apache happily on the same server? If so, how can I configure my iptables correctly? (I'm aware that I should probably do some sort of filtering in the Java server itself, but I will figure that out later.)
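
    Running both side by side is fine: Apache keeps port 80 and the Java service listens on 8080, which just needs its own ACCEPT rule inserted before the final REJECT/DROP (the script probably refuses to start on port 80 simply because binding ports below 1024 requires root). A hedged sketch, assuming a Rackspace-style ruleset saved to /etc/iptables.rules:

        # allow inbound TCP 8080 for the Java server, then persist the rules
        sudo iptables -I INPUT -p tcp --dport 8080 -m state --state NEW -j ACCEPT
        sudo iptables-save | sudo tee /etc/iptables.rules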

    Read the article

  • How to set a custom error message for Apache 2.2

    - by ffffff
    Apache 2.2's default 414 message is "Request-URI Too Large: The requested URL's length exceeds the capacity limit for this server." I want to set a custom message, so I put this in httpd.conf: ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var But it does not work. How can I set a custom error message for Apache 2.2?
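
    The .html.var variant documents only resolve if the pieces from the default config (the Alias for /error/ and its MultiViews/type-map options) are still in place. A simpler hedged sketch that avoids that dependency, using either a plain HTML page or an inline string (the file path is an example):

        # plain file -- must exist under the DocumentRoot or an Alias
        ErrorDocument 414 /errors/414.html

        # or an inline message: quoted text is served directly
        ErrorDocument 414 "The requested URL is too long for this server."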

    Read the article

  • mod_pagespeed drops headers

    - by kuboslav
    I have installed mod_pagespeed in Apache, and it drops the X-UA-Compatible header: when I turn mod_pagespeed off, X-UA-Compatible appears in the headers again. Does anyone have an idea how to disable this behavior in mod_pagespeed? My .htaccess file:

        <IfModule mod_headers.c>
            Header set X-UA-Compatible "IE=Edge,chrome=1"
        </IfModule>
        <IfModule mod_rewrite.c>
            Options -MultiViews
            RewriteEngine On
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^ index.php [L]
        </IfModule>

    Thanks a lot!
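
    It is hard to say which rewriter is responsible, but a hedged way to narrow it down is to toggle mod_pagespeed per scope and then switch off individual filters (the directive names below are real mod_pagespeed directives; which filter to disable is an assumption you would confirm by testing):

        # confirm mod_pagespeed is the cause
        ModPagespeed off

        # then re-enable it and disable suspects one at a time, e.g.
        ModPagespeed on
        ModPagespeedDisableFilters convert_meta_tags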

    Read the article

  • How to reverse proxy with or without trailing slash

    - by DM
    I have an Apache web server that needs to reverse proxy a site, so that example.com/test/ and example.com/test both pull from the same other web server. I have set up a reverse proxy for the one without the trailing slash like this:

        ProxyPass /test http://othersite.com/test
        ProxyPassReverse /test http://othersite.com/test

    But it doesn't work with a trailing slash. Any ideas? I have tried redirecting from /test/ to /test with no luck. Thanks.
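
    A hedged sketch of one arrangement that handles both forms: proxy the trailing-slash form, and bounce the bare /test onto it with a redirect (othersite.com is the backend placeholder from the question):

        # send /test to the canonical /test/ first, then proxy that prefix
        RedirectMatch permanent ^/test$ /test/
        ProxyPass        /test/ http://othersite.com/test/
        ProxyPassReverse /test/ http://othersite.com/test/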

    Read the article

  • BEAST / CRIME / BREACH attacks and stopping them

    - by user2143356
    I have read so much on all this, but I'm not entirely sure I understand what has gone on. Also, is this one, two or three problems? It looks to me like three, but it's all very confusing: BEAST, CRIME, BREACH. It seems the solution may be simply not to use compression with HTTPS traffic (or is that just for one of them?). I use GZIP compression. Is that okay, or is that part of the problem? I also use Ubuntu 12.04 LTS. Also, is non-HTTPS traffic okay? So after reading all the theory I just want the solution. I think this may be it, but can someone please confirm I have understood everything, so I am not likely to suffer from these attacks? SOLUTION: Use GZIP compression on HTTP traffic, but don't use any compression on HTTPS traffic.
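
    For the TLS-level part (CRIME), the usual Apache-side control is the SSLCompression directive, available from Apache 2.2.24 / 2.4.3 with a recent OpenSSL; whether your build has it is an assumption. BREACH concerns HTTP-level gzip of responses that mix secrets with attacker-influenced input, which no single directive switches off. A minimal sketch:

        # in the SSL vhost: disable TLS-level compression (mitigates CRIME)
        SSLCompression off

        # BREACH sketch: keep mod_deflate only for static asset types rather than
        # compressing dynamic HTML that may reflect user input alongside secrets
        AddOutputFilterByType DEFLATE text/css application/javascript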

    Read the article

  • Apache LocationMatch throws 500 and AddOutputFilterByType does nothing

    - by tackleberry
    I need to add the directives below to Apache, but I get a 500 when I add these lines:

        <LocationMatch "^/assets/.*$">
            Header unset ETag
            FileETag None
            # RFC says only cache for 1 year
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
        </LocationMatch>

    Additionally, the response is not gzipped when I add:

        AddOutputFilterByType DEFLATE text/html text/css application/javascript application/x-javascript

    Apache version: Server version: Apache/2.2.22 (Unix). App: Rails 3.2. When I checked the request and response for the gzip problem, I saw that the browser requested gzip (Accept-Encoding: gzip, deflate) but the response was not gzipped.
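
    A 500 on those lines most often means the modules behind the directives (mod_headers, mod_expires) are not loaded, and the missing gzip points the same way for mod_deflate; the error_log will name the unrecognized directive. Note also that <LocationMatch> is not allowed inside .htaccess at all, which produces the same 500. A hedged sketch of the checks and LoadModule lines (the module file paths are typical defaults and an assumption for this build):

        # see which modules this httpd actually has compiled in or loaded
        apachectl -t -D DUMP_MODULES | grep -E 'headers|expires|deflate'

        # httpd.conf sketch if any are missing
        LoadModule headers_module modules/mod_headers.so
        LoadModule expires_module modules/mod_expires.so
        LoadModule deflate_module modules/mod_deflate.so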

    Read the article

  • Secure against c99 and similar shells

    - by Amit Sonnenschein
    I'm trying to secure my server as much as I can without limiting my options, so as a first step I've disabled dangerous functions with PHP's disable_functions:

        disable_functions = "apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, mysql_pconnect, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode"

    But I'm still fighting directory traversal, and I can't seem to limit it: using a shell script like c99, I can travel from my /home directory to anywhere on the disk. How can I limit it once and for all?
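
    The traversal part is usually handled with open_basedir rather than disable_functions: it confines each site's PHP file access to its own tree. A hedged sketch of a per-vhost setting (the path is an example, and php_admin_value only applies when PHP runs as mod_php):

        <VirtualHost *:80>
            DocumentRoot /home/example/public_html
            # confine PHP file access to this account plus the system tmp dir
            php_admin_value open_basedir "/home/example/:/tmp/"
        </VirtualHost>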

    Read the article

  • Fixed ruby/mysql connection with new libmysql.dll, and broke Apache in the process

    - by jmtoporek
    OK, so a bit of background: all my development has been on a local Windows 7 machine. I had Apache with PHP/MySQL running with no issues. I've been using Ruby (1.9.3 and the latest Rails release, 3.2.9) with the built-in WEBrick server, but had a devil of a time connecting to MySQL. Did some research, updated my libmysql.dll file in c:/ruby/bin, and it worked! Very happy... except now Apache stopped working. In my attempt to resolve the issue I found an older copy of libmysql.dll, renamed the new file, copied the old file back to c:/ruby/bin, and now Apache works but Ruby does not. So I can take this backwards approach, but obviously it seems pretty stupid. I was surprised that Apache was using the DLL file in the ruby/bin folder; I presume this is related to path variables? I guess I was hoping someone could direct me as to how I can use one DLL file for Apache and another for Ruby, or suggest some other, smarter approach. I'm smart enough to follow directions to install Apache from scratch and enable PHP on Windows as well as Ubuntu, but I'm not much of a sysadmin, just a semi-competent web developer.

    Read the article

  • Apache's htcacheclean doesn't scale: How to tame a huge Apache disk_cache?

    - by flight
    We have an Apache setup with a huge disk_cache (500,000 entries, 50 GB of disk space used). The cache grows by 16 GB every day. My problem is that the cache seems to be growing nearly as fast as it's possible to remove files and directories from the cache filesystem! The cache partition is an ext3 filesystem (100GB, "-t news") on iSCSI storage. The Apache server (which acts as a caching proxy) is a VM. The disk_cache is configured with CacheDirLevels=2 and CacheDirLength=1, and includes variants. A typical file path is "/htcache/B/x/i_iGfmmHhxJRheg8NHcQ.header.vary/A/W/oGX3MAV3q0bWl30YmA_A.header". When I try to call htcacheclean to tame the cache (non-daemon mode, "htcacheclean -t -p /htcache -l 15G"), IOwait goes through the roof for several hours, without any visible action. Only after hours does htcacheclean start to delete files from the cache partition, which takes a couple more hours. (A similar problem was brought up on the Apache mailing list in 2009, without a solution: http://www.mail-archive.com/[email protected]/msg42683.html) The high IOwait leads to problems with the stability of the web server (the bridge to the Tomcat backend server sometimes stalls). I came up with my own prune script, which removes files and directories from random subdirectories of the cache, only to find that the deletion rate of the script is just slightly higher than the cache growth rate. The script takes ~10 seconds to read a subdirectory (e.g. /htcache/B/x) and frees some 5 MB of disk space; in those 10 seconds, the cache has grown by another 2 MB. As with htcacheclean, IOwait goes up to 25% when running the prune script continuously. Any idea? Is this a problem specific to the (rather slow) iSCSI storage? Should I choose a different file system for a huge disk_cache? ext2? ext4? Are there any kernel parameter optimizations for this kind of scenario? (I already tried the deadline scheduler and a smaller read_ahead_kb, without effect.)
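
    One hedged mitigation, independent of the filesystem question, is to stop running htcacheclean as a one-shot job and let it run continuously in daemon mode with a low priority, so deletions keep pace with growth instead of arriving as one huge IO burst (the interval and limit below are guesses to tune, not recommendations):

        # wake every 30 minutes, be nice about IO, only run when the cache changed,
        # delete empty directories, keep the cache under 40 GB
        htcacheclean -d30 -n -i -t -p /htcache -l 40G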

    Read the article

  • How to use chain.p7b with Apache?

    - by Debianuser
    I wanted to set up an SSL website on Apache and applied for a certificate from my local ISP. All they sent me was a single file named chain.p7b. I have always used certificates from other vendors without any issues, but they usually provide two files, to be configured as SSLCertificateFile and SSLCertificateChainFile in Apache. Following instructions from several online resources, I opened the p7b file in Windows and extracted 4 certificates from it. I then tried configuring Apache with one of the files and it worked, but it shows a warning: "The certificate is not trusted because no issuer chain was provided." I thought I had to use the remaining 3 files as SSLCertificateChainFile and/or SSLCACertificateFile. I tried that, but it didn't work, so I am assuming it might be something completely different. Has anyone faced this issue before? The following page http://www-01.ibm.com/support/docview.wss?uid=swg21458997 talks about using a keystore, but is that relevant to Apache?
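
    A PKCS#7 (.p7b) bundle usually holds the server certificate plus the intermediates, and OpenSSL can split it into the PEM files Apache expects. A hedged sketch (add -inform DER if the file turns out to be binary rather than Base64; file names are examples):

        # dump every certificate in the bundle as PEM
        openssl pkcs7 -print_certs -in chain.p7b -out certs.pem

        # then split certs.pem: the server certificate goes to SSLCertificateFile,
        # the remaining intermediates go to SSLCertificateChainFile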

    Read the article

  • Trouble with Apache starting on boot with SSL API key

    - by molleman
    I'm running on CentOS. The trouble is that when I restart my server I need to start my Apache and Varnish services, and I use this to start both of them: service httpd restart && service varnish restart. But I would like both of them to start when I reboot the server. I read I could use chkconfig httpd on, but this is only for Apache; could I also do chkconfig varnish on? Finally, when I do my usual start of httpd, I am asked for my API key for SSL. Am I able to incorporate this into restarting both Varnish and httpd on startup, or am I doomed to run this command every time I restart?
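
    A hedged sketch of both pieces: chkconfig works the same way for any service that ships an init script, and the prompt at httpd startup (assuming it is the usual SSL private-key passphrase prompt) can be answered automatically with SSLPassPhraseDialog pointing at a small root-only script that prints the passphrase; weigh the security trade-off of keeping it on disk:

        # enable both services at boot
        chkconfig httpd on
        chkconfig varnish on

        # in the SSL config (e.g. /etc/httpd/conf.d/ssl.conf):
        #   SSLPassPhraseDialog exec:/etc/httpd/passphrase.sh
        # where passphrase.sh simply echoes the key's passphrase to stdout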

    Read the article

  • One specific VirtualHost in MAMP getting all the requests

    - by julien_c
    I'm pulling my hair out over a seemingly trivial issue... I'm using MAMP 2.0 and want to configure a virtual host for local development. Here's my httpd-vhosts.conf:

        NameVirtualHost *:80
        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs/mysite/public
            ServerName mysite.local
        </VirtualHost>

    As soon as I add the VirtualHost directive, every request to http://localhost gets redirected to the DocumentRoot specified by mysite.local. Why?
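
    With name-based virtual hosts, the first <VirtualHost> in the file becomes the default and catches every request whose Host header doesn't match another vhost, including plain http://localhost. A hedged sketch of the usual fix: declare a vhost for localhost first (its DocumentRoot is the stock MAMP default and an assumption), then the project vhost, and make sure mysite.local resolves via /etc/hosts:

        NameVirtualHost *:80

        # first vhost = default: keeps http://localhost serving the normal htdocs
        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs
            ServerName localhost
        </VirtualHost>

        <VirtualHost *:80>
            DocumentRoot /Applications/MAMP/htdocs/mysite/public
            ServerName mysite.local
        </VirtualHost>

        # and in /etc/hosts (assumption):  127.0.0.1  mysite.local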

    Read the article
