Search Results

Search found 10640 results on 426 pages for 'apache2 module'.


  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of its caller website. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect this won't matter for sysadmins):

        gethostbyaddr($_SERVER['REMOTE_ADDR'])

    I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. What do I need to do to really get the domain of the site making a call to my script? -- Thanks for the explanations. Some have advised I use $_SERVER['HTTP_REFERER'], but that's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later presents it to the user. So the HTTP referer comes up empty, since callersite.com is making a direct call to me. Any hope?
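
    A sketch of what is going on, and one hedged workaround. The caller_domain parameter below is a hypothetical name, not an existing API:

        <?php
        // gethostbyaddr() does a reverse-DNS (PTR) lookup on the caller's IP.
        // PTR records are set by whoever owns the IP block -- on shared
        // hosting that is the data centre (ThePlanet hosted HostGator boxes),
        // so the result will never be the calling site's domain.
        $ptr = gethostbyaddr($_SERVER['REMOTE_ADDR']);

        // One workaround: have callers identify themselves explicitly, then
        // verify the claim by resolving the claimed domain back to the IP.
        $claimed  = isset($_GET['caller_domain']) ? $_GET['caller_domain'] : '';
        $verified = ($claimed !== ''
                     && gethostbyname($claimed) === $_SERVER['REMOTE_ADDR']);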

  • Rails connection to MAMP's MySQL server switches sockets

    - by Newben
    I have just posted a question, but I'm posting another one because the problem is not the one I had in mind when asking the first. I am running a Rails app on OS X; when I run rails s, everything works fine. If I shut down the Apache server (MAMP) and run rails s again, I get this message:

        Can't connect to local MySQL server through socket '/Applications/MAMP/tmp/mysql/mysql.sock'

    which is surely normal. For info, my MAMP server is running, and the connection must pass through /Applications/MAMP/Library/bin/mysql, so I aliased it in my bash profile:

        alias mysql="/Applications/MAMP/Library/bin/mysql"

    Now, when I launch a rails generate command, I get this message:

        /$root/vendor/bundle/ruby/2.0.0/gems/mysql2-0.3.11/lib/mysql2/client.rb:44:in `connect': Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (Mysql2::Error)

    How can that be?
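
    The shell alias only affects the interactive mysql client; the mysql2 gem reads its socket from config/database.yml and otherwise falls back to its compiled-in default of /tmp/mysql.sock. A minimal sketch, assuming MAMP's default socket path (the database name is hypothetical):

        development:
          adapter: mysql2
          database: myapp_development        # hypothetical name
          username: root
          password: root
          # point the gem at MAMP's socket instead of /tmp/mysql.sock
          socket: /Applications/MAMP/tmp/mysql/mysql.sock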

  • Apache RewriteEngine, redirect sub-directory to another script

    - by Niklas R
    I've been trying to achieve this for about 1.5 hours now. I want the following transformations when requesting pages on my website:

        homepage.com/                    => index.php
        homepage.com/archive             => index.php?archive
        homepage.com/archive/site-01     => index.php?archive/site-01
        homepage.com/files/css/main.css  => requestfile.php?css/main.css

    The first three transformations can be done with:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?(.*)$ index.php?$1

    However, I'm stuck at the point where all requests to the files subdirectory should be redirected to requestfile.php. This is one of the attempts I've made:

        RewriteEngine on
        RewriteRule ^/?$ index.php
        RewriteRule ^/?files/(.+)$ requestfile.php?$1
        RewriteRule ^/?(.*)$ index.php?$1

    But that does not work. I've also tried putting [L] after the third line, but that didn't help: I'm using this configuration in .htaccess, so sub-requests get transformed again, and so on. I fiddled with RewriteCond but couldn't get it to work. What does the configuration need to look like to achieve what I want?
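
    One way out of the loop, as a sketch: let the /files rule fire first and stop, then guard the catch-all so the internal sub-request for requestfile.php (an existing file) is not rewritten a second time:

        RewriteEngine on
        # /files/... goes to requestfile.php, and this pass stops here.
        RewriteRule ^/?files/(.+)$ requestfile.php?$1 [L]
        # Catch-all only for URLs that are not real files, so the rewritten
        # requestfile.php and index.php requests pass through untouched.
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^/?(.*)$ index.php?$1 [L]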

  • Why are some web clients requesting a page named "cache"?

    - by Toto
    We see errors like this in the Apache error log:

        [Thu May 17 14:32:35 2012] [error] [client 192.168.1.1] File does not exist: /home/www-data/mywebsite.com/r/cache, referer: http://www.mywebsite.com/r/1010

    It is strange because: there is no reference in the code or URLs to a folder/file named "cache"; the folder/file "cache" does not exist; and the client is randomly trying to access a "cache" folder all over the website. It always follows this pattern:

        /level1/.../levelwhatever/filename   (referer)
        /level1/.../levelwhatever/cache      (requested)

    We run a LAMP stack on Debian stable (PHP 5.3.3-7+squeeze9, with APC 3.1.3p1). We use Google Analytics and AdSense. We do not know how to reproduce the problem. Note: I replaced the user's IP in the log excerpt for privacy.
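
    That pattern (the same path with the last segment replaced by "cache") is typical of a client-side add-on or "web accelerator" probing for cached copies, not of anything in the site itself. One way to confirm, as a sketch: tag those requests and log them with their user agent to identify the client software:

        # Tag requests ending in /cache and write them to a separate log
        # with Referer and User-Agent, to see which client generates them.
        SetEnvIf Request_URI "/cache$" cache_probe
        CustomLog /var/log/apache2/cache_probe.log "%h %{Referer}i \"%{User-Agent}i\"" env=cache_probe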

  • Logical move of a server to the UK: what do I do with the SSL certificates?

    - by flyfishr64
    I have been asked to move a Rails application from the US to the UK. This involves bringing up the Rails stack on Ubuntu 8.04.4; that's completed. I'm stumped by the SSL configuration, though. The plan was to bring this server up with the same domain name but temporarily use a subdomain (app2.xxx.com instead of app.xxx.com) during the move and for testing, then rename it to app.xxx.com when we're ready for the cutover (does that make sense?). In the meantime, we need a new cert for the app2 subdomain. So to generate a CSR I need a server key -- but do I need a new one, or should I copy the one from the existing production server?
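
    Generating a fresh key for the new CSR is the cleaner option, since the private key then never has to leave either machine; reusing the production key also works but couples the two certificates. A sketch with openssl:

        # new 2048-bit key and a CSR for the temporary subdomain
        openssl genrsa -out app2.xxx.com.key 2048
        openssl req -new -key app2.xxx.com.key -out app2.xxx.com.csr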

  • Apache Alias / VirtualHost run as different user

    - by inx
    I tried to make an alias or virtual host run as a different user. Below is the part of Apache's httpd.conf that doesn't work. Is this even possible?

        <VirtualHost blah:80>
            user DifferentUser
            group DifferentGroup
            ServerAdmin blah
            DocumentRoot blah
            ServerName blah
            ServerAlias blah
            ScriptAlias /cgi-bin/ blah
            DirectoryIndex index.html index.htm default.htm index.shtml index.php
            ErrorLog logs/blah-error_log
            CustomLog logs/blah-access_log common
            <Directory "/blah/">
                Options Indexes FollowSymLinks MultiViews ExecCGI
                AllowOverride all
                Order Deny,Allow
                Deny from none
                Allow from all
            </Directory>
        </VirtualHost>
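
    Stock Apache MPMs cannot switch the worker's user per vhost; User/Group are server-wide, and suexec only covers CGI scripts. Per-vhost users need the third-party mpm-itk MPM. A sketch, assuming mpm-itk is installed:

        <IfModule mpm_itk_module>
            # mpm-itk's directive: run this vhost's requests as another user
            AssignUserID DifferentUser DifferentGroup
        </IfModule>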

  • Deploying a git project and a permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place via a post-receive hook. I have this hook content:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook can't finish without an error message:

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    meaning git does not have sufficient rights. The git source path is /var/lib/gitolite/project.git/, owned by gitolite:gitolite, and with these permissions Redmine (running as the www-data user) can't reach the git repository to fetch changes. The whole project should land in /var/www_virt.hosting/domain_name/htdocs/, owned by www-data:www-data. What should I change so that the post-receive hook works properly and Redmine can read the repository? What I did:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    That did not help. I want Apache to serve the project, Redmine to read the source files, and git to deploy to the www-data-accessible path, all without problems. What should I do?
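
    Two things stand out: `exec` replaces the hook's shell, so nothing after the first exec ever ran, and chown(1) is root-only, so the gitolite user can never give files away to www-data. A sketch of the hook without those traps, assuming the target tree is handed to gitolite once, as root (chown -R gitolite:www-data /var/www_virt.hosting/domain_name/htdocs):

        #!/bin/sh
        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        export GIT_WORK_TREE="$WWW_ROOT"
        git checkout -f
        # group read/execute so Apache and Redmine (www-data) can read it
        chmod -R u=rwX,g=rX,o= "$WWW_ROOT"
        echo "finished"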

  • Force HTTPS with an AWS Elastic Load Balancer

    - by panos2point0
    I need to redirect all incoming HTTP traffic to HTTPS on my Elastic Load Balancer. I tried using Apache mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

    Taking advantage of the X-Forwarded-Proto header added by the load balancer, this rule should instruct the user's browser to request the HTTPS version of the same URL. So far it doesn't work (no redirection happens). What am I doing wrong? Is there a better way to do this?
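
    The rule is close to canonical; the common gotchas are that it must live in the vhost or .htaccess that actually answers the ELB's HTTP listener, and that the health-check path must stay reachable over plain HTTP. A hedged sketch of the usual working form:

        RewriteEngine On
        # only when the ELB says the original request was plain HTTP
        RewriteCond %{HTTP:X-Forwarded-Proto} =http
        # leave the health check alone, redirect everything else
        RewriteRule !^/?status https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]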

  • Why is my server performance periodically degrading to the point of stopping?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out: It's not the CPU, because every time I check the server load it's less than 6 for all three numbers. It's not the memory, because that's fairly low too, less than 50%. It's not the I/O, because the graphs Joyent sent back when I requested them show less than 3 MB of I/O (mostly reads). It's not SQL performance: I've profiled every last SQL command, and they're all (99.9% of them, anyway) running in less than 30 ms; most run in less than 5 ms. I've also been profiling script execution times, and even when the problem occurs, the script always manages to finish in 50 ms or less (1/20th of a second).

    Now, I do run a lot of Ajax calls: one every 2 seconds per user, with 300+ DAU. But even if all 300 were playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent: 99% of the time everything works perfectly, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas? Note: this is on Joyent, VPS, intro package, 256 MB of RAM with bursting. Here is the MySQL status info:

        Traffic                            ø per hour
        Received       18 MiB              29 MiB
        Sent           134 MiB             221 MiB
        Total          151 MiB             251 MiB

        Connections                        ø per hour    %
        max. concurrent connections  5     ---           ---
        Failed attempts              0     0.00          0.00%
        Aborted                      0     0.00          0.00%
        Total                        9,418 15.59 k       100.00%
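
    With a stall this intermittent, a snapshot taken at the moment it happens is worth more than any average. A sketch of what is worth grabbing before restarting (the last line assumes mod_status is enabled):

        vmstat 1 5                          # run queue, swap, block I/O
        netstat -an | grep -c SYN_RECV      # half-open TCP connections
        mysqladmin -u root -p processlist   # queries stuck at that instant
        apachectl status                    # busy/idle worker breakdown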

  • .htaccess: how to rewrite to clean URLs and redirect old URLs to the new clean ones?

    - by Sebastian
    With .htaccess I'm trying to make my site's URLs clean. I use very basic URLs like www.mysite.com/pagename.php ("pagename" varies). I want www.mysite.com/pagename to display the content of /pagename.php. So this is in my .htaccess file now:

        Options +FollowSymlinks
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([^\.]+)$ $1.php [NC,L]

    But I also want my old URLs (/pagename.php), when requested, to be redirected to www.mysite.com/pagename. How do I do this? I can't figure it out (I get loops all the time)... Thanks in advance!
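
    The loop happens because the internal rewrite to pagename.php immediately matches the redirect added for the old URLs. The standard guard is to key the external redirect on THE_REQUEST, which only ever contains the browser's original request line, never an internal rewrite. A sketch:

        Options +FollowSymlinks
        RewriteEngine On
        # externally redirect only genuine /pagename.php requests
        RewriteCond %{THE_REQUEST} ^[A-Z]+\s/([^.\s]+)\.php[\s?] [NC]
        RewriteRule ^ /%1 [R=301,L]
        # then map the clean URL back onto the script internally
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([^\.]+)$ $1.php [NC,L]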

  • How to limit the number of concurrent CGI script invocations in Apache 2.2?

    - by hsivonen
    How can I limit the number of concurrent CGI invocations in Apache 2.2.x? More specifically, my problem is this: I have Apache hosting a Bugzilla instance and other stuff on one server. There's very little legitimate concurrent use of Bugzilla. However, it's trivial to mount a denial-of-service attack on the whole server by ignoring robots.txt and simply fetching lots of bug pages, each of which forks a process and hits the database.
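
    Stock Apache 2.2 has no per-URL concurrency cap, but the third-party mod_qos module provides one. A sketch, assuming mod_qos is installed and Bugzilla lives under /bugzilla (adjust the path and the limit):

        <IfModule qos_module>
            # at most 5 requests inside /bugzilla at any one time;
            # excess requests are rejected instead of forking more CGIs
            QS_LocRequestLimit /bugzilla 5
        </IfModule>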

  • Allowing users in from an IP address without client certificate authentication

    - by John
    I need to allow access to my site without SSL client certificates from my office network, and with certificates from outside. Here is my configuration:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            # office network static IP
            Allow from xxx.xxx.xxx.xxx
            SSLVerifyClient require
            SSLOptions +FakeBasicAuth
            AuthName "My secure area"
            AuthType Basic
            AuthUserFile /etc/httpd/ssl/index
            Require valid-user
            Satisfy Any
        </Directory>

    When I'm inside the network and have a certificate, I can access. When I'm inside the network without a certificate, I can't access; it requires a certificate. When I'm outside the network, with or without a certificate, I can't access; it shows me the basic login screen. Meanwhile, the following configuration works perfectly:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from xxx.xxx.xxx.xxx
            AuthUserFile /srv/www/htpasswd
            AuthName "Restricted Access"
            AuthType Basic
            Require valid-user
            Satisfy Any
        </Directory>
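
    Satisfy Any only arbitrates between the host rules (Allow) and the Require auth; SSLVerifyClient require sits outside that mechanism, so the office IP can never bypass it. The usual trick, as a sketch: make the certificate optional at the TLS layer and let +FakeBasicAuth turn a presented cert into the basic-auth user that Require valid-user wants:

        <Directory /srv/www>
            AllowOverride All
            Order deny,allow
            Deny from all
            Allow from xxx.xxx.xxx.xxx       # office network
            # ask for a cert but do not abort the handshake without one
            SSLVerifyClient optional
            SSLOptions +FakeBasicAuth
            AuthName "My secure area"
            AuthType Basic
            AuthUserFile /etc/httpd/ssl/index
            Require valid-user
            # IP match OR cert-derived user -- either one is enough
            Satisfy Any
        </Directory>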

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit with the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I have installed expat, expat-devel and neon-devel using yum. When installing Subversion 1.6.19, this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Subversion 1.7.7 with the same configure command, I get this error after running make:

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    I found out I can get past that by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    So it then looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might be needlessly linked against additional unused libraries. It can be avoided by specifying exact location of Expat in argument of --with-expat option.

    I want to solve that. I have experimented a lot but haven't been able to figure out how to specify the exact location of Expat in the configure command, or how to find out what that location should be. After a lot of searching I found http://subversion.tigris.org/issues/show_bug.cgi?id=3997, where a FreeBSD user says:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat

    As that is the default location of expat on that platform, it would be nice if configure detected it automatically. But I am not running FreeBSD; I am on CentOS 6.3 64-bit, with expat and expat-devel installed via yum. I tried the path posted by the FreeBSD user anyway, and it seems to work: no errors from configure or make. This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this is a production server, so I need your advice: is this also correct on a CentOS server? Is the following path in the --with-expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
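
    /usr/local/... is where FreeBSD's ports tree installs things; yum's expat-devel on 64-bit CentOS puts the header in /usr/include and the library in /usr/lib64. So the exact-location form would be the following sketch (verify the paths with `rpm -ql expat-devel` first):

        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config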

  • How do I host multiple domains on Ubuntu Server (Hardy Heron)?

    - by markle976
    I am trying to figure out the best way to host multiple domains on my Ubuntu server. I have tried several options, but I can't get everything to work the way I want. I want to be able to add domains without restarting Apache each time. I tried mod_vhost_alias (see below), but that maps www.domain.com and domain.com to different folders. I also need mod_rewrite to map requests for domain.com/app/* to domain.com/somescript.php. Current httpd.conf:

        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0

    Any thoughts?
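
    One way to keep mod_vhost_alias and still fold both names into one folder, as a sketch: 301 the www name onto the bare domain in the main server config, so %0 only ever sees the bare hostname. The /app/* mapping can then live in a per-directory .htaccess so it applies under every domain:

        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0
        # collapse www.domain.com onto domain.com so %0 maps one folder
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]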

  • Apache, error logs and logrotate: what is the best method?

    - by OlivierDofus
    Here's an example vhost from my sites:

        <VirtualHost *:80>
            DocumentRoot /datas/web/woog
            ServerName woog.com
            ServerAlias www.woog.com
            ErrorLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/error_log 86400"
            CustomLog "|/httpd-2.2.8/bin/rotatelogs /logs/woog/access_log 86400" combined
            DirectoryIndex index.php index.htm
            <Location />
                Allow from All
            </Location>
            <Directory /*>
                Options FollowSymLinks
                AllowOverride Limit AuthConfig
            </Directory>
        </VirtualHost>

    I've got 12 sites running now, which gives something like this:

        [Shake]:/sources/software/mod_log_rotate# ps x | grep rotate
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden site]/error_log 86400
        ... (as many error_log processes as virtual hosts)
        /httpd-2.2.8/bin/rotatelogs /logs/[hidden site]/access_log 86400
        ... (as many access_log processes as virtual hosts)

    I've been looking everywhere, but I've only found mod_log_rotate, and its author (a very good C developer) explains: "Unfortunately Apache error logs are handled in such a way that we can't work the same log rotation magic on them. Like transfer logs they support piped logging though so you can still use rotatelogs for them." So my question is: what would be the best way to handle multiple logs? If I just used classic log files and the system's logrotate program, wouldn't that be a good deal? How would/do you deal with this? Thank you!
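
    Classic files plus the system logrotate are indeed a perfectly good deal, and they remove every rotatelogs process. A sketch, assuming the vhosts are first switched to plain ErrorLog/CustomLog file paths:

        # /etc/logrotate.d/apache-vhosts (sketch)
        /logs/*/error_log /logs/*/access_log {
            daily
            rotate 14
            compress
            missingok
            notifempty
            sharedscripts
            postrotate
                /httpd-2.2.8/bin/apachectl graceful > /dev/null 2>&1
            endscript
        }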

  • BEAST / CRIME / BREACH attacks and stopping them

    - by user2143356
    I have read so much on all this but am not entirely sure I understand what has gone on. Also, is this one, two or three problems? It looks like three to me, but it's all very confusing: BEAST, CRIME, BREACH. It seems the solution may be simply not to use compression with HTTPS traffic (or is that just for one of them?). I use gzip compression: is that okay, or is that part of the problem? I run Ubuntu 12.04 LTS. Also, is non-HTTPS traffic okay? After reading all the theory I just want the solution. I think this may be it, but can someone please confirm I have understood everything, so I am not likely to suffer from these attacks? Proposed solution: use gzip compression on HTTP traffic, but don't use any compression on HTTPS traffic.
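
    These are indeed three distinct problems: BEAST targets the CBC cipher suites (addressed client-side and by TLS 1.1+), CRIME targets TLS-level compression, and BREACH targets HTTP-level compression of response bodies that reflect secrets. The CRIME half has a one-line fix on Apache builds that support it (2.2.24+ / 2.4.3+ -- a hedged sketch, since 12.04's packaged version may predate the directive):

        # disable TLS-level compression (CRIME); HTTP gzip is unaffected
        SSLCompression off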

  • How to use chain.p7b with Apache?

    - by Debianuser
    I wanted to set up an SSL website on Apache and applied for a certificate from my local ISP. All they sent me was a single file named chain.p7b. I have always used certificates from other vendors without any issues, but they usually provide two files, to be configured as SSLCertificateFile and SSLCertificateChainFile in Apache. Following instructions from several online resources, I opened the p7b file in Windows and extracted 4 certificates from it. I then tried configuring Apache with one of them, and it worked, but shows a warning: "The certificate is not trusted because no issuer chain was provided." I thought I should use the remaining 3 files as SSLCertificateChainFile and/or SSLCACertificateFile. I tried that, but it didn't work, so I assume it might be something completely different. Has anyone faced this issue before? The page http://www-01.ibm.com/support/docview.wss?uid=swg21458997 talks about using a keystore, but is that relevant to Apache?
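
    A .p7b file is a PKCS#7 bundle holding the site certificate and its issuer chain in one envelope; openssl can unpack it into the PEM certificates Apache expects (add `-inform DER` if the file is binary rather than base64):

        # dump every certificate in the bundle as PEM
        openssl pkcs7 -print_certs -in chain.p7b -out chain.pem
        # in chain.pem: your own cert  -> SSLCertificateFile,
        # the remaining issuer certs   -> SSLCertificateChainFile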

  • Full Apache config migration

    - by Victor Rashkov
    I searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine and have to migrate to a new server in a different country. The old server is 11.10; the new one is 12.04 LTS. My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install: it's Apache with FastCGI, suEXEC, the GD library and the worker MPM, all sitting on top of an mhddfs file system. There are also other configs I've changed that I cannot recall. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permission errors, CGI problems, etc. So my question is: is there a sane way to simply tar a full backup of the current web server installation, including MySQL, PHP and Apache with all configs, and move it to the new machine? I shall be forever thankful for any advice. So far none of those I found here gave me an answer. Thanks!
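
    A raw tar of the whole root is risky across releases (11.10 and 12.04 binaries and package layouts differ), but configs plus data plus a package list usually migrate cleanly. A rough sketch, assuming stock Debian-style paths ("newhost" is a placeholder):

        dpkg --get-selections > packages.txt            # reinstall list
        rsync -a /etc/apache2/ newhost:/etc/apache2/    # vhosts, mods, suexec
        rsync -a /etc/php5/ newhost:/etc/php5/
        rsync -a /var/www/ newhost:/var/www/
        mysqldump --all-databases -u root -p > all.sql  # load on the new box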

  • Difference between SSLCertificateFile and SSLCertificateChainFile?

    - by chrisjlee
    Normally, SSL for a virtual host is set up with the following directives:

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    (From: "For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?") What is the difference between SSLCertificateFile and SSLCertificateChainFile? The client has purchased a certificate from GoDaddy. It looks like GoDaddy only provides an SSLCertificateFile (.crt file) and an SSLCertificateKeyFile (.key file), and no SSLCertificateChainFile. Will my SSL still work without an SSLCertificateChainFile path specified? Also, is there a canonical path where these files should be placed?
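
    In short: SSLCertificateFile is the server's own certificate, and SSLCertificateChainFile is the intermediate CA certificate(s) linking it to a trusted root; without the chain, clients that don't already know the intermediate will distrust the cert. GoDaddy does ship its intermediates, usually as a bundle in the download or from its certificate repository; a hedged sketch (the bundle file name varies):

        Listen 443
        SSLCertificateFile      /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile   /home/web/certs/domain1.private.key
        # GoDaddy's intermediate bundle (commonly named gd_bundle.crt)
        SSLCertificateChainFile /home/web/certs/gd_bundle.crt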

  • How to use mod_proxy to send Apache's index to Tomcat's ROOT while still serving my other Apache sites

    - by Dagvadorj
    I am trying to make my Tomcat application (deployed at ROOT) viewable from Apache on port 80. To do this I used mod_proxy, since mod_jk made me try harder. I used something like this in httpd.conf:

        <location http://www.example.com>
            Order deny,allow
            Allow from all
            PassProxy http://localhost:8080/
            PassProxyReverse http://localhost:8080/
        </location>

        <Proxy *>
            Order deny,allow
            Allow from all
        </Proxy>

    And now I cannot retrieve the other sites on Apache that were running before this configuration. How can I have both running?
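
    The directive names are ProxyPass/ProxyPassReverse (not PassProxy), and <Location> takes a URL path rather than a full URL. Scoping the proxy to one name-based vhost leaves the other Apache sites alone; a sketch:

        <VirtualHost *:80>
            ServerName www.example.com
            ProxyPreserveHost On
            # everything on this hostname goes to Tomcat's ROOT app
            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
        </VirtualHost>
        # the other sites keep their own <VirtualHost> blocks untouched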

  • How to log the size of cookies in the request header with Apache

    - by chrisst
    We have an issue on our site with cookies growing too large. We have already expanded the acceptable header size and throttled the cookie sizes for now, but I'd like to figure out what the average client's header sizes are, specifically the cookies. I've created an Apache log that captures the cookies sent on each request:

        LogFormat "%{Cookie}i" cookies

    But this just spits out the entire contents of all cookies in the header. Is there a way to have Apache log just the size (or the length of the string) per request?
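
    mod_log_config has no length function, but since that log already contains exactly one Cookie header per line, the lengths can be computed offline. A sketch (the log path is hypothetical; use wherever the CustomLog with the "cookies" format writes):

        # largest cookie headers seen
        awk '{ print length($0) }' /var/log/apache2/cookies.log | sort -n | tail -5
        # mean cookie header size in bytes
        awk '{ s += length($0) } END { if (NR) print s / NR }' /var/log/apache2/cookies.log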

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago, but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for administration. PHP runs over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and runs under its own user. The application itself is in a common directory owned by another user, like so:

        # path to my_application (owned by web1)
        /var/www/clients/client1/web1/web/my_application/
        # symlink to my_application from the my_site.com web root (owned by web5)
        /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/
        # symlink to my_application from my_site.net (owned by web4)
        /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/

    With a setup like this I have encountered a few permission problems when performing filesystem operations from PHP. For instance, if the application is called via my_site.com, the user web5 tries to write to the application folder; but that folder is owned by web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to consolidate onto one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases of it. That way there would be only one user, with all the necessary permissions. However, this would mean we'd have to buy a multi-domain SSL certificate, and we already have an SSL certificate for each domain. We were able to use them with our previous (managed hosting) provider, where we also had one web directory and multiple domains. So if that was possible, I wonder: is putting all the domains into one vhost, with one main domain and several alias domains, the right approach in this case? Or have I misunderstood something?
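
    The one-vhost-with-aliases approach does solve the permission problem, since every request then runs as the same FastCGI user. A minimal sketch of the shape (the SSL question stands apart: separate certificates need one IP per name, otherwise a single UCC/SAN certificate covers all the names on one vhost):

        <VirtualHost *:80>
            ServerName  my_site.com
            ServerAlias my_site.net my_site.org
            # one web root, one FastCGI/suexec user for all three names
            DocumentRoot /var/www/clients/client1/web1/web
        </VirtualHost>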

  • Mod_rewrite on Alias directory

    - by crunchline
    I am moving from a WIMP setup to a WAMP setup. My images are on a separate drive, and I need a default image served when a file is not found. Currently a 404 is returned for missing images in the /images directory, while 404.jpg is displayed for missing files in all the other directories. I tried changing [L] to [PT], but that did not do anything. In httpd.conf:

        Alias /images "D:/images"

    In .htaccess:

        RewriteCond %{REQUEST_FILENAME} !-s
        RewriteRule ^.*\.(gif|jpg|png)$ /404.jpg [L]
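
    A likely catch: a .htaccess under the DocumentRoot never applies to requests the Alias maps onto D:/images; the rules have to live in that directory's own context. A sketch for httpd.conf, using an external redirect so the substitution stays unambiguous in per-directory context:

        Alias /images "D:/images"
        <Directory "D:/images">
            Order allow,deny
            Allow from all
            RewriteEngine On
            # empty or missing image -> serve the placeholder instead
            RewriteCond %{REQUEST_FILENAME} !-s
            RewriteRule \.(gif|jpg|png)$ /404.jpg [R=302,L]
        </Directory>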

  • Apache getting bogged down by a certain script (wp-cron.php): how to kill the processes automatically

    - by user50037
    I have a server running a number of WordPress blogs, several of which have hundreds or thousands of posts. Every couple of days, the server slows to a crawl because of a WordPress file called wp-cron.php. My entire Apache process log turns into this: http://imgur.com/A7K9k.png (times quite a bit), and the server stops responding. Each process takes up about 1.1% of RAM, and when we have 50 of them going it gets insane. They don't all come from the same blog; they are pretty widespread. In the Apache process page of WHM, they are usually ALL in status "C", which means closing. But they can sit there until they crash the server, and they still hold the memory. Just google "wp-cron.php load" and you will find plenty of people with similar issues. In any case, we think it is down to users adding a tonne of dead "ping lists" to their WordPress installations, which WordPress then loops through endlessly.

    Problem number 1: does anyone have any other suggestions about what would cause wp-cron.php to loop endlessly? I still think it is down to pings, because everyone we have contacted about their account load going sky-high has had a massive ping list.

    Problem number 2: even if it is down to excessive ping lists, we cannot babysit every single account on the server waiting for it to start spawning wp-cron processes. It often happens overnight, and I start getting SMS alerts at 2 am about the load. I have CSF installed, which apparently would have ended the processes if they ran over a set time, but I have been told it won't catch them because they end up in this "closing" state (shown as "C" on the Apache page of WHM); apparently CSF will only kill processes that are "running", and "C" does not count. I have seen various other scripts, such as http://dltj.org/article/die-apache-die/. I took a look at the stats in /proc, but I was boggled by which delimited field is the running time, and whether there is any way to connect it back to an actual Apache process, so that I could close only connections running wp-cron.php in state "C". Overall I know problem 2 glosses over the real cause, and I do put the whole thing down to excessive ping lists in WordPress, but I just cannot sit there and babysit every installation 24/7. So I need a way to save the server when I am not available. Any help would be much appreciated.
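
    Whatever the root cause, the spawning itself can be tamed: WordPress fires wp-cron.php from ordinary page loads, and that can be disabled per blog and replaced with one real cron job, capping the process count at one per blog per interval. A sketch (the URL is a placeholder):

        // wp-config.php: stop page loads from spawning wp-cron.php
        define('DISABLE_WP_CRON', true);

        // then run it on a schedule instead, one crontab line per blog:
        //   */15 * * * * wget -q -O - "http://blog.example.com/wp-cron.php?doing_wp_cron"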
