Search Results

Search found 3247 results on 130 pages for 'apache2'.

Page 81/130

  • Logical move of a server to the UK - what do I do with the SSL certificates?

    - by flyfishr64
    I have been asked to move a Rails application from the US to the UK. This involves bringing up the Rails stack on Ubuntu 8.04.4; that's completed. I'm stumped by the SSL configuration, though. The plan is to bring this server up with the same domain name but temporarily use a subdomain (app2.xxx.com instead of app.xxx.com) during the move and for testing, then rename it to app.xxx.com when we're ready for the cutover (does that make sense?). In the meantime, we need a new cert for the app2 subdomain. So to generate a CSR I need a server key: do I need a new one, or should I copy the one from the existing production server?
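
    Either works in principle, since a private key is not tied to a hostname, but generating a fresh key per certificate is the usual practice. A minimal sketch with the openssl CLI, using the app2.xxx.com placeholder from the question:

        # Generate a new 2048-bit key and a CSR for the temporary subdomain
        openssl genrsa -out app2.xxx.com.key 2048
        openssl req -new -key app2.xxx.com.key -out app2.xxx.com.csr

    The second command prompts for the subject fields; the Common Name must be app2.xxx.com for the new cert to match the test hostname.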

    Read the article

  • Apache Request IP Based Security

    - by connec
    I run an Apache server on my home system that I've made available over the internet, as I'm not always at home. Naturally I don't want all my home server files public, so until now I've simply had:

        Order allow,deny
        Deny from all
        Allow from 127.0.0.1

    in my core configuration, and just Allow from all in the .htaccess of any directories I wanted publicly viewable. However, I've decided a better system would be to centralise all the access control and just require authentication (HTTP Basic) for requests not from 127.0.0.1/localhost. Is this achievable with Apache/modules? If so, how would I go about it? Cheers.
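
    On Apache 2.2 this is commonly done with Satisfy Any, which grants access when either the host-based rule or the Basic auth succeeds. A sketch, assuming an htpasswd file at a path of your choosing:

        <Directory /var/www>
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
            AuthType Basic
            AuthName "Private"
            AuthUserFile /etc/apache2/.htpasswd
            Require valid-user
            # Either the Allow matches (local request) or the login succeeds
            Satisfy Any
        </Directory>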

    Read the article

  • Windows 7 - XP Mode - Apache

    - by Howard
    I've set up Virtual PC and XP Mode on my Windows 7 Pro machine. Using Apache 2.0.52 I have no problem getting my website up and running on the Windows 7 host. But under VPC/XP Mode the best I can do is localhost-only access. What do I need to do to enable external HTTP connections? I need XP Mode because, besides the website, I also run a web BBS and a DOS-based (via telnet) BBS. Some of the apps in the DOS BBS just won't work under 64-bit, no matter what (compatibility) settings are used. Thanks in advance...

    Read the article

  • Solr security help

    - by Camran
    I have Solr set up with Jetty on my Ubuntu server. From any computer, I can now type my_ip:8983/solr/ and the page will show up to anybody. How can I disable this so that only I can access that port and the Solr admin? Thanks
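
    One common approach is to leave Solr alone and firewall the port so only local clients reach it. A sketch with iptables, assuming the app talking to Solr connects over the loopback interface:

        # Accept Solr traffic from the machine itself, drop everything else
        iptables -A INPUT -p tcp --dport 8983 -s 127.0.0.1 -j ACCEPT
        iptables -A INPUT -p tcp --dport 8983 -j DROP

    Alternatively, Jetty can be bound to 127.0.0.1 only, so it never listens on the public interface in the first place.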

    Read the article

  • Enable mod_deflate per directory level

    - by z1_jabbar
    I am using the following code. When I access the site it compresses all the JSPs under the /abc path, but it ignores all the JS and CSS files. I want to compress the JS and CSS files under all the subfolders of /abc as well. How can I do this? Thanks!

        <LocationMatch "/abc">
            <IfModule mod_deflate.c>
                SetOutputFilter DEFLATE
                # Don't compress images
                SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
                # Don't compress PDFs
                SetEnvIfNoCase Request_URI \.pdf$ no-gzip dont-vary
                # Don't compress compressed file formats
                SetEnvIfNoCase Request_URI \.(?:7z|bz|bzip|gz|gzip|ngzip|rar|tgz|zip)$ no-gzip dont-vary
                <IfModule mod_headers.c>
                    Header append Vary User-Agent
                </IfModule>
            </IfModule>
        </LocationMatch>
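
    If the filter is running but JS/CSS still go out uncompressed, an easy cross-check is to target the text types explicitly instead of filtering everything and carving out exceptions. A sketch (AddOutputFilterByType is core in Apache 2.2, mod_filter in 2.4):

        <LocationMatch "/abc">
            AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript application/x-javascript
        </LocationMatch>

    If this compresses the files, the original rules were fine and the issue is more likely the Content-Type Apache assigns to .js and .css responses, which is worth verifying in the headers.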

    Read the article

  • Complex Apache Logging

    - by Shishant
    Hello, I have a file hosting site and I want to know what the Apache log format should be to record only downloads of files above 5 MB, so the log looks like this:

        visitor_ip filepath(or filename) output_bandwidth

    One more thing: data should be recorded ONLY FOR COMPLETED DOWNLOADS, which I believe is checked through %X. I think output bandwidth is the same as the file size that was served, if the whole file is downloaded. Thank you
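
    Apache's stock log directives cannot easily filter on response size at write time, but logging the needed fields and filtering afterwards gets the same result. A sketch (format string and paths are illustrative):

        # %a = client IP, %f = filename on disk, %b = response body bytes,
        # %X = connection status ('X' means aborted before completion)
        LogFormat "%a %f %b %X" downloads
        CustomLog /var/log/apache2/downloads.log downloads

    Completed transfers over 5 MB can then be pulled out with something like awk '$3 > 5242880 && $4 != "X"' on the resulting log.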

    Read the article

  • Apache user owns git project root, with git-http-backend setup, but still having permissions problems

    - by Luke
    I've set up git-http-backend on my VPS server (CentOS), under one of its users. The apache user owns the git project root directory, /home/theuser/git/, as below:

        drwxrwxr-x 3 apache apache

    The apache user also owns everything inside that directory. But I'm still getting the following error in git when trying to push:

        error: unpack failed: unpack-objects abnormal exit

    The Apache error log shows the following error:

        error: insufficient permission for adding an object to repository database ./objects

    I've tried every combination of user permissions and enabled read/write access, but I'm not getting anywhere. Should the git user own this folder? Can someone explain exactly what user should own this folder, or what steps I might take to fix this problem?
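
    The process that needs write access is whatever Apache runs git-http-backend as, so apache ownership is right in principle; the usual remaining culprits are a stray root-owned object directory from an earlier manual push, or missing group/setgid bits. A sketch of the usual cleanup, with /home/theuser/git/repo.git standing in for the real repository path:

        # Re-assert ownership over the whole object database
        chown -R apache:apache /home/theuser/git/repo.git
        cd /home/theuser/git/repo.git
        # Keep future objects group-writable
        git config core.sharedRepository group
        find objects -type d -exec chmod g+ws {} \;

    On CentOS it is also worth checking whether SELinux is denying the writes, independently of the classic permission bits.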

    Read the article

  • Difference between SSLCertificateFile and SSLCertificateChainFile?

    - by chrisjlee
    Normally, SSL for a virtual host is set up with the following directives:

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    (From: "For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?") What is the difference between SSLCertificateFile and SSLCertificateChainFile? The client has purchased a certificate from GoDaddy. It looks like GoDaddy only provides an SSLCertificateFile (.crt file) and an SSLCertificateKeyFile (.key file), and not an SSLCertificateChainFile. Will my SSL still work without an SSLCertificateChainFile path specified? Also, is there a canonical path where these files should be placed?
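
    In short: SSLCertificateFile is the server's own certificate, while SSLCertificateChainFile holds the intermediate CA certificates that link it to a trusted root. GoDaddy normally ships those intermediates in a bundle file; a sketch of the resulting vhost, with gd_bundle.crt as the assumed bundle name:

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile      /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile   /home/web/certs/domain1.private.key
            # Intermediate chain from the CA; without it, clients that lack
            # the intermediate may show trust warnings even though the
            # handshake itself still works
            SSLCertificateChainFile /home/web/certs/gd_bundle.crt
        </VirtualHost>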

    Read the article

  • Why is my server performance degrading to the point of stopping, periodically?

    - by Pascal Aschwanden
    So, once in a while, I see in Firebug that a request takes over 15 or even 60 seconds to respond, and sometimes never does. Here is what I've ruled out: It's not the CPU, because every time I check the server load it's less than 6 for all 3 numbers. It's not the memory, because that's fairly low too, less than 50%. It's not the I/O, because I've seen the graphs that Joyent sent back to me when I requested them, and they show less than 3 MB of I/O (mostly all reads). It's not the SQL performance: I've profiled every last SQL command that runs, and they're all (99.9% of them, anyway) running in less than 30 ms; most run in less than 5 ms. I've also been profiling all the script execution times, and even when the problem occurs, the script always manages to finish in 50 ms or less (that's 1/20th of a second).

    Now, I do run a lot of Ajax calls: 1 every 2 seconds per user, and I have 300+ DAU. But even if all 300 are playing simultaneously, that's still only 150 calls per second max. The only other thing I can think of is that one of my neighbors is funky. The problem is highly intermittent: 99% of the time it works perfectly and there's excellent performance, but 99%+ is not good enough. Eventually the performance gets so bad I have to restart the server, at which point everything is fine again. I've done this about 4 times now. Any ideas?

    Note: this is on Joyent, VPS, intro package, 256 MB of RAM with bursting. Here is the MySQL status info:

        Traffic                        Total      ø per hour
        Received                       18 MiB     29 MiB
        Sent                           134 MiB    221 MiB
        Total                          151 MiB    251 MiB

        Connections                    Total      ø per hour   %
        max. concurrent connections    5          ---          ---
        Failed attempts                0          0.00         0.00%
        Aborted                        0          0.00         0.00%
        Total                          9,418      15.59 k      100.00%

    Read the article

  • Apache 2.0 is serving PHP files as plain text

    - by denonth
    I have a web application written in PHP, HTML and JavaScript. On my PC I have EasyPHP installed, which includes Apache and everything else. But I wanted to put this web app on my server, where I have installed Apache 2.0, and my PHP files are displayed as plain text or the browser starts to download them. I have tried several things, one of them being to add this to my conf file:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php-source .phps

    But it's still not working. What else can I do? Thank you
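
    AddType only maps the extension; if PHP itself is not installed and loaded into Apache, the file still goes out as plain text. A sketch of what is typically needed in httpd.conf for mod_php (module path and PHP 5 naming are assumptions based on the Apache 2.0 era):

        LoadModule php5_module modules/libphp5.so
        AddHandler application/x-httpd-php .php
        DirectoryIndex index.php index.html

    If the LoadModule line is missing and there is no libphp*.so on disk, PHP needs to be installed first through the distribution's package manager.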

    Read the article

  • How to limit the number of concurrent CGI script invocations in Apache 2.2?

    - by hsivonen
    How can I limit the number of concurrent CGI invocations in Apache 2.2.x? More specifically, my problem is this: I have Apache hosting a Bugzilla instance and other stuff on one server. There's very little legitimate concurrent use of Bugzilla. However, it's trivial to mount a Denial of Service attack on the whole server by ignoring robots.txt and simply fetching a lot of bug pages that fork a process and hit a database.
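
    Stock Apache 2.2 has no per-path concurrency cap for plain CGI, so this usually means reaching for a third-party module. A sketch with mod_qos, assuming it is installed and the Bugzilla CGIs live under /bugzilla (directive name per mod_qos; the path is a placeholder):

        LoadModule qos_module modules/mod_qos.so
        # Allow at most 5 concurrent requests into the Bugzilla location
        QS_LocRequestLimit /bugzilla 5

    Requests beyond the limit are rejected with an error instead of forking more CGI processes, which bounds the damage a crawler can do.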

    Read the article

  • Hosts file keeps changing on Red Hat Linux with Apache [on hold]

    - by jack f
    I have installed an Apache server and two clients, e.g. client_1 and client_2. Operations that we perform on client_1 are reflected on client_2. We have an /etc/hosts file in our software install location which keeps changing: client_2's entry is overwritten with client_1's IP address. If I correct the entries in client_2's hosts file, within the next few minutes it changes back automatically to client_1 (if we start the client_1 service). Please explain the use of the hosts file, and where and when it would be changed by the Apache service. The hosts file at /etc/hosts is the same for both clients:

        # Do not remove the following line, or various programs
        # that require network functionality will fail.
        127.0.0.1    localhost.localdomain localhost
        # Local LAN
        190.0.0.1    client_1.Example.com client_1
        190.0.0.2    client_2.Example.com client_2
        # HR LAN
        10.1.74.2    client_1hr peer
        10.1.74.3    client_2hr
        # ESP LAN
        10.69.69.1   client_1esp
        10.69.69.2   client_2esp

    Any help will be appreciated. Thanks in advance, Jack F

    Read the article

  • BEAST / CRIME / BREACH attacks and stopping them

    - by user2143356
    I have read so much on all this but I'm not entirely sure I understand what has gone on. Also, is this one, two or three problems? It looks to me like three, but it's all very confusing: BEAST, CRIME, BREACH. It seems the solution may be to simply not use compression with HTTPS traffic (or is that just for one of them?). I use gzip compression. Is that okay, or is that part of the problem? I also use Ubuntu 12.04 LTS. Also, is non-HTTPS traffic okay? So after reading all the theory I just want the solution. I think this may be it, but can someone please confirm I have understood everything, so I am not likely to suffer from these attacks? SOLUTION: Use gzip compression on HTTP traffic, but don't use any compression on HTTPS traffic.
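
    These are indeed three distinct attacks: BEAST targets CBC ciphers in TLS 1.0, CRIME targets TLS-level compression, and BREACH targets HTTP-level (gzip) compression of secret-bearing responses over TLS. A sketch of the CRIME-related piece in Apache (directive available from 2.2.24/2.4.3; assumes mod_ssl):

        # Disable TLS-level compression (CRIME)
        SSLCompression off

    Turning gzip off for HTTPS responses is the blunt mitigation for BREACH, though it only matters for responses that reflect attacker-influenced input alongside secrets. Plain HTTP traffic is outside the scope of all three attacks, since they target what TLS is protecting.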

    Read the article

  • Full Apache config migration

    - by Victor Rashkov
    I searched a lot and didn't find an applicable answer. I have a working LAMP setup on an Ubuntu machine, and I have to migrate to a new server in a different country. The old server is 11.10, the new server is 12.04 LTS. My problem is that I simply cannot remember the steps I followed when I configured the current server, which is not a basic LAMP install: it is Apache with FastCGI, suEXEC, the GD library and the worker MPM, all sitting on top of an mhddfs file system. There are also other configs I've changed and I cannot recall what they are. Because of the complexity of the setup, my attempts to migrate to the new server fail: I get permissions errors, CGI problems, etc. Therefore my question is: is there a sane way to simply tar a full backup of the current web server installation, including MySQL, PHP and the Apache server with all configs, and then move it to the new machine? I shall be forever thankful for any advice. So far none of those I found here gave me an answer. Thanks!
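
    A straight tar of binaries across Ubuntu releases tends to break, but archiving the configuration and data and reinstalling packages on the new host is usually workable. A sketch, assuming a standard Debian/Ubuntu layout:

        # On the old server: dump databases and archive configs + site data
        mysqldump --all-databases -u root -p > all-databases.sql
        tar czf webstack-backup.tar.gz /etc/apache2 /etc/php5 /etc/mysql /var/www all-databases.sql

    On the new server, reinstall the same packages (apache2, FastCGI and suEXEC modules, php5, mysql-server, mhddfs and so on), unpack the configs, and restore the SQL dump; diffing /etc/apache2 against a stock install is a quick way to rediscover forgotten customisations.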

    Read the article

  • Force HTTPS with AWS Elastic load balancer

    - by panos2point0
    I need to redirect all incoming HTTP traffic to HTTPS on my Elastic Load Balancer. I tried using Apache mod_rewrite:

        RewriteEngine On
        RewriteCond %{HTTP:X-Forwarded-Proto} !https
        RewriteRule !/status https://%{SERVER_NAME}%{REQUEST_URI} [L,R]

    Taking advantage of the X-Forwarded-Proto header added by the load balancer, this rule should instruct the user's browser to request the HTTPS version of the same URL. So far it doesn't work (no redirection happens). What am I doing wrong? Is there a better way to do this?
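
    The rule itself is close to the standard pattern, so the first things to check are that mod_rewrite is enabled and that the rules sit in the vhost (or an .htaccess that is actually read) serving port 80 behind the ELB. A sketch of the commonly used variant:

        RewriteEngine On
        # Only redirect requests the ELB received over plain HTTP
        RewriteCond %{HTTP:X-Forwarded-Proto} =http
        RewriteRule ^/?(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]

    Using %{HTTP_HOST} rather than %{SERVER_NAME} preserves whatever hostname the client used, and the 301 makes browsers cache the redirect; the ELB health check target may still need excluding, as the original !/status pattern was doing.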

    Read the article

  • Nagios LDAP-group-based front-end login permission issues

    - by Eleven-Two
    I want to grant users access to the Nagios 3 Core front end by using an Active Directory group ("NagiosWebfrontend" in the code below). The login works fine like this:

        AuthType Basic
        AuthName "Nagios Access"
        AuthBasicProvider ldap
        AuthzLDAPAuthoritative on
        AuthLDAPURL "ldap://ip-address:389/OU=user-ou,DC=domain,DC=tld?sAMAccountName?sub?(objectClass=*)"
        AuthLDAPBindDN CN=LDAP-USER,OU=some-ou,DC=domain,DC=tld
        AuthLDAPBindPassword the_pass
        Require ldap-group CN=NagiosWebfrontend,OU=some-ou,DC=domain,DC=tld

    Unfortunately, every Nagios page just shows "It appears as though you do not have permission to view information for any of the services you requested...". I got the hint that I am missing a contact in the Nagios configuration matching my login, but creating one with the same name as the domain user had no effect on this issue. However, it would be great to find a solution without manually editing the Nagios config for every new user, so the admins could grant access to Nagios by just putting the user into the "NagiosWebfrontend" group. What would be the best way to solve it?
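
    Apache is authenticating fine here; the second gate is Nagios's own authorization, which by default only shows a user the objects whose contact name matches the login. If group membership at the Apache level is meant to be the only gate, one hedged option is to authorize every authenticated user in cgi.cfg:

        # /etc/nagios3/cgi.cfg (path varies by distro)
        use_authentication=1
        authorized_for_all_services=*
        authorized_for_all_hosts=*

    Anyone who gets past the LDAP group check then sees everything, so this trades per-user granularity inside Nagios for the convenience of AD-managed access.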

    Read the article

  • Fix Fatal Error Condition showing system path

    - by JMC
    I've noticed there are a large number of servers running Magento Commerce that will return a fatal error exposing the system path:

        Fatal error: Uncaught exception 'Exception' with message 'File '/usr/local/www/magento/data1702/media/css' does not exists.' in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php:96
        Stack trace:
        #0 /usr/local/www/magento/data1702/get.php(205): Varien_File_Transfer_Adapter_Http->send('/usr/local/www/...')
        #1 /usr/local/www/magento/data1702/get.php(165): sendFile('/usr/local/www/...')
        #2 {main}
          thrown in /usr/local/www/magento/data1702/lib/Varien/File/Transfer/Adapter/Http.php on line 96

    Magento as an application is generally good about suppressing error messages. How can a Linux server running Apache be configured to avoid returning this error message, since the app has problems suppressing it?
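
    Uncaught fatal errors bypass the application, so the suppression has to happen at the PHP level. A sketch for a mod_php vhost (the same settings can go in php.ini):

        # Never print errors to the client; log them instead
        php_admin_flag display_errors off
        php_admin_flag log_errors on
        php_admin_value error_log /var/log/php_errors.log

    With display_errors off, the client just gets a blank or generic error response and the path only appears in the server-side log.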

    Read the article

  • REMOTE_USER not getting set?

    - by landed
    I am trying to set up LDAP authentication in Joomla using a plugin called JMapMyLDAP (in fact 4 plugins, each doing a different job). I need to pull a part of a string out of the server variable REMOTE_USER, which should be visible in phpinfo() (see http://timplummer.com.au/4-how-to-integrate-joomla-3-with-active-directory-using-ldap.html). The issue is that REMOTE_USER is not set, or at least not appearing. A few things to note (if you don't mind): conceptually I am not really understanding authentication as a whole; it appears to be a vast subject despite my years working on websites. Yes, I have built ASP and PHP pages to check that a user is who they say they are, with a token/session given to just them, so they are identified when a stateless request is made to the server. That's my level of understanding. This sounds different from basic authentication in Apache, where a username and password sit in a file and the user needs to log in via a basic form to get access to the folder/docs, configured via an .htaccess file. OK, so for the LDAP to work I need to get REMOTE_USER; this sounds very reasonable, as how else do we know who is making the request? Thank you.
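
    REMOTE_USER is only populated when Apache itself authenticates the request; a page served without any Apache-level auth will never have it, no matter what the application does afterwards. A sketch of protecting the Joomla directory with mod_authnz_ldap so the variable appears (URL and DNs are placeholders):

        <Directory /var/www/joomla>
            AuthType Basic
            AuthName "Login"
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dc.example.com:389/DC=example,DC=com?sAMAccountName?sub"
            AuthLDAPBindDN CN=binduser,OU=some-ou,DC=example,DC=com
            AuthLDAPBindPassword secret
            Require valid-user
        </Directory>

    After a successful login, REMOTE_USER carries the authenticated sAMAccountName, which is the string the JMapMyLDAP plugin can then parse.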

    Read the article

  • Why does domain.com appear as theplanet.com when hosted on HostGator?

    - by silow
    I have a script that's supposed to detect the URL of its caller website. If the caller is another website, it should give something like http://callersite.com. I'm using this line of PHP code (though I suspect this won't matter to sysadmins):

        gethostbyaddr($_SERVER['REMOTE_ADDR'])

    I'm testing with a caller site that's hosted on HostGator. What I'm noticing, though, is that I don't get callersite.com; I get something like 1a.12.12ab.static.theplanet.com. I don't know what theplanet.com is or why I'm not getting callersite.com. Also, what do I need to do to really get the domain of the site making a call to my script? -- Thanks for the explanation. Some have advised I use $_SERVER['HTTP_REFERER'], but that's not what I'm after. My script acts as an API: another website makes a curl request to it, gets an output, and later presents it to the user. So the HTTP referrer comes back empty, since callersite.com is making a direct call to me. So, any hope?
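
    gethostbyaddr() does a reverse DNS (PTR) lookup on the caller's IP, and PTR records are controlled by whoever owns the IP block; The Planet was the data-centre company behind much of HostGator's address space, hence the generic name. The same lookup can be reproduced from the shell:

        # Reverse (PTR) lookup for an address - returns the host's rDNS name,
        # not the domains of sites hosted on it (IP below is illustrative)
        dig -x 174.122.1.1 +short

    Since one IP can host many domains and a bare curl request sends no caller hostname, the only reliable way to identify the calling site is to have it identify itself, e.g. with an API key.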

    Read the article

  • Using sudo /etc/init.d/httpd start complains about log file permissions

    - by SCO
    I created a custom log directory with the root account, and chmoded it to 777 temporarily:

        ls -la /var/mylogs/log/
        total 16
        drwxrwxrwx 2 root root 4096 Jun 24 06:27 .
        drwxr-xr-x 5 root root 4096 Jun 24 06:25 ..

    When I try to start the service from a user (let's say "myuser", which is in the sudoers file as myuser ALL=(ALL) ALL), it fails because of the permissions:

        sudo /etc/init.d/httpd start
        Starting httpd: (13)Permission denied: httpd: could not open error log file /var/mylogs/log/httpd_error.log.
        Unable to open logs

    However, the following is successful:

        sudo bash /etc/init.d/httpd start

    So I guess these two methods are not equivalent, although to me doing sudo was the same as logging into the root account and issuing the commands. Any clue? Thank you!
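
    With 777 on the directory, classic permissions cannot be the blocker, which points at SELinux: httpd runs in a confined domain that is only allowed to write to directories labelled for logs, and a custom directory lacks that label (one common explanation for the asymmetry is that the two invocation paths land httpd in different SELinux contexts). A sketch of the check and fix, assuming SELinux is enforcing:

        # Show the SELinux context on the new directory
        ls -Z /var/mylogs/log/
        # Label it as an httpd log location and apply the label
        semanage fcontext -a -t httpd_log_t "/var/mylogs/log(/.*)?"
        restorecon -Rv /var/mylogs/log

    If getenforce reports Disabled, SELinux is not the cause and the init script's own environment handling would be the next place to look.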

    Read the article

  • Accessing localhost (xampp) from another computer over LAN network - how to?

    - by vr3690
    I have just set up a Wi-Fi network at home. I have all my files on my desktop computer (192.168.1.56) and want to access the localhost there from another computer (192.168.1.2). On my desktop I can access localhost through the normal http://localhost. Apache is running on port 80 as usual. What exactly do I have to do to achieve this? There is documentation on the net, but it either doesn't work or is too fragmented and confusing to understand. I think I have to make changes to Apache's httpd.conf file and the hosts file. Any ideas as to what changes to make?
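
    From the other machine, "localhost" refers to that machine itself, so the site must be requested by the desktop's LAN address: http://192.168.1.56. Apache in XAMPP normally listens on all interfaces already, so often the only real blocker is the Windows firewall on port 80. If the XAMPP admin pages are also needed from the LAN, the "security concept" access block has to be relaxed; a sketch (file path assumes a standard XAMPP layout, and the exact block varies by version):

        # In apache/conf/extra/httpd-xampp.conf, inside the restricting block:
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1 192.168.1

    No hosts-file change is needed just to reach the server by IP; that only matters if the other computer should reach it by a friendly name instead.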

    Read the article

  • Serving index.html from a subdirectory

    - by xbonez
    In my document root, I have two directories, home and foobar, both with their own index.html files. How can I set it up so that when someone visits my site at example.com, they see the contents of home/index.html? I tried using an index.php with a redirect in the document root, as well as an .htaccess redirect, but both of them change the URL in the browser to example.com/home/, which I would ideally like to avoid.
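
    An external redirect always changes the address bar; an internal rewrite serves the file without the client seeing a new URL. A minimal sketch for an .htaccess in the document root (assumes mod_rewrite is enabled and .htaccess overrides are allowed):

        RewriteEngine On
        # Serve home/index.html for the bare root, without redirecting
        RewriteRule ^$ home/index.html [L]

    Pointing the vhost's DocumentRoot at the home directory would achieve the same visible result, at the cost of putting foobar outside the web root.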

    Read the article

  • Subversion 1.7.x and expat location in configure

    - by ditto
    I am running CentOS 6.3 64-bit and the DirectAdmin control panel. Currently I have Apache Subversion 1.6.19 installed without any problems. I have installed expat, expat-devel and neon-devel using yum. When installing Apache Subversion 1.6.19, this configure command works fine:

        ./configure --prefix=/usr --with-ssl --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, when installing Apache Subversion 1.7.7 using the same configure command as above, I get this error after running "make":

        /etc/httpd/lib/libaprutil-1.so: undefined reference to `XML_StopParser'
        collect2: ld returned 1 exit status
        make: *** [subversion/svnadmin/svnadmin] Error 1

    I found out I can solve that problem by adding this to the configure command:

        --with-expat=includes:lib_search_dirs:libs

    So it then looks like this:

        ./configure --prefix=/usr --with-ssl --with-expat=includes:lib_search_dirs:libs --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    However, that configure command then gives this warning:

        configure: WARNING: Expat found amongst libraries used by APR-Util, but Subversion libraries might be needlessly linked against additional unused libraries. It can be avoided by specifying exact location of Expat in argument of --with-expat option.

    So I want to solve that. I have experimented a lot, but have not been able to figure out how to specify the exact location of Expat in the configure command, or how to find out what that location should be. After a lot of searching I found this: http://subversion.tigris.org/issues/show_bug.cgi?id=3997 - that is a FreeBSD user saying this:

        Building Subversion 1.7.x on FreeBSD currently requires a configure flag:
        --with-expat=/usr/local/include:/usr/local/lib:expat
        As that is the default location of expat on that platform, it would be nice if configure detected it automatically.

    However, I am not using FreeBSD; I am running CentOS 6.3 64-bit. Also, remember I said I have installed expat, expat-devel and neon-devel using yum. I tried the expat path posted by the FreeBSD user, and it seems to work: it gives no errors when running configure, and no errors when running "make". This is what I used:

        ./configure --prefix=/usr --with-ssl --with-expat=/usr/local/include:/usr/local/lib:expat --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    But this server is a production server, and therefore I need your help: is this also correct to run on a CentOS server? Is the following path in the expat option correct on CentOS?

        --with-expat=/usr/local/include:/usr/local/lib:expat

    If not, please advise what it should be changed to. Thanks in advance for any confirmation or help on this!
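
    On CentOS, yum's expat-devel does not install under /usr/local at all, so the FreeBSD paths most likely "work" only because configure silently falls back to its own detection. A hedged guess at the exact-location form for CentOS 6 x86_64 (verify the two paths first):

        # Confirm where the yum packages actually put the header and library
        rpm -ql expat-devel | grep -E 'expat\.h|libexpat\.so'
        # Then point configure at those directories, e.g.:
        ./configure --prefix=/usr --with-ssl \
            --with-expat=/usr/include:/usr/lib64:expat \
            --with-apxs=/usr/sbin/apxs --with-apr=/usr/bin/apr-config

    The triple is include-dir:lib-dir:library-name, so it should mirror whatever rpm -ql reports rather than the FreeBSD defaults.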

    Read the article

  • How do I host multiple domains on Ubuntu Server (Hardy Heron)?

    - by markle976
    I am trying to figure out the best way to host multiple domains on my Ubuntu server. I have tried multiple options, but I can't get everything to work the way I want it to. I want to be able to add domains without having to restart Apache each time. I tried using mod_vhost_alias (see below), but that maps www.domain.com and domain.com to different folders. I also need to be able to use mod_rewrite to map requests for domain.com/app/* to domain.com/somescript.php. Current httpd.conf:

        UseCanonicalName Off
        VirtualDocumentRoot /var/www/%0

    Any thoughts?
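
    One way to keep the restart-free mass hosting while collapsing the www/non-www split is to canonicalise the hostname with mod_rewrite before mod_vhost_alias maps it, so only the bare domain ever needs a directory. A sketch (server-level config; assumes mod_rewrite and mod_vhost_alias are enabled):

        UseCanonicalName Off
        RewriteEngine On
        # Redirect www.domain.com/... to domain.com/... so %0 below is always bare
        RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
        RewriteRule ^(.*)$ http://%1$1 [R=301,L]
        VirtualDocumentRoot /var/www/%0

    The per-domain /app/* mapping can then live in an .htaccess file inside each /var/www/domain.com directory, which also avoids Apache restarts when rules change.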

    Read the article

  • proxy pass domain FROM default apache port 80 TO nginx on another port

    - by user10580
    I'm still learning server things, so I hope the title is descriptive enough. Basically, I have sub.domain.com that I want to run on nginx on port 8090. I want to leave Apache alone and have it catch all default traffic on port 80. So I am trying something with a name-based virtual host to proxy-pass to sub.domain.com:8090. Nothing is working yet and I have no idea what the right syntax could be. Any ideas? Most of what I found was for passing TO Apache FROM nginx, but I want to do the opposite.

        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_http_module modules/mod_proxy_http.so

        <VirtualHost sub.domain.com:80>
            ProxyPreserveHost On
            ProxyRequests Off
            ServerName sub.domain.com
            DocumentRoot /home/app/public
            ServerAlias sub.domain.com
            # (also tried localhost and sub.domain.com in place of appname)
            ProxyPass / http://appname:8090/
            ProxyPassReverse / http://appname:8090/
        </VirtualHost>

    When I do this I get:

        [warn] module proxy_module is already loaded, skipping
        [warn] module proxy_http_module is already loaded, skipping
        [error] (EAI 2)Name or service not known: Could not resolve host name sub.domain.com -- ignoring!

    And yes, the app is working (I have it running on port 80 with another subdomain), and it works at sub.domain.com:8090.
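
    The error is Apache failing to resolve the name in the <VirtualHost> line itself: name-based vhosts should use *:80 plus ServerName, not a hostname Apache has to look up. A sketch of the usual shape, assuming nginx listens on the same machine:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName sub.domain.com
            ProxyPreserveHost On
            ProxyRequests Off
            # Proxy to an address the server can resolve
            ProxyPass / http://127.0.0.1:8090/
            ProxyPassReverse / http://127.0.0.1:8090/
        </VirtualHost>

    Any other vhost (or the default one) keeps serving the rest of the port-80 traffic, so Apache stays the catch-all while this one subdomain is handed to nginx. (NameVirtualHost is needed on Apache 2.2; 2.4 ignores it.)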

    Read the article
