Search Results

Search found 12289 results on 492 pages for 'apache license'.


  • Ubuntu unattended-upgrades stops apache

    - by Robbie
    This morning I was alerted to the fact that both Apache instances serving my app were not responding to requests from my load balancer. I attempted apachectl restart and it said Apache was not running. So I started Apache on both instances and got the service up again. I then followed the logs and worked out that both had performed upgrades via the unattended-upgrades package moments before they stopped responding.

    /var/log/unattended-upgrades/unattended-upgrades.log:

        2013-07-02 06:30:51,875 INFO Starting unattended upgrades script
        2013-07-02 06:30:51,875 INFO Allowed origins are: ['o=Ubuntu,a=precise-security']
        2013-07-02 06:33:57,771 INFO Packages that are upgraded: accountsservice apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common apparmor apport apt apt-transport-https apt-utils bind9-host binutils dbus dnsutils gnupg gpgv isc-dhcp-client isc-dhcp-common krb5-locales libaccountsservice0 libapt-inst1.4 libapt-pkg4.12 libbind9-80 libc-bin libc-dev-bin libc6 libc6-dev libcurl3-gnutls libdbus-1-3 libdbus-glib-1-2 libdns81 libdrm-intel1 libdrm-nouveau1a libdrm-radeon1 libdrm2 libexpat1 libfreetype6 libgc1c2 libgnutls-dev libgnutls-openssl27 libgnutls26 libgnutlsxx27 libisc83 libisccc80 libisccfg82 liblwres80 libruby1.8 libx11-6 libx11-data libxcb1 libxext6 libxml2 linux-firmware linux-image-virtual linux-libc-dev linux-virtual multiarch-support openssl perl perl-base perl-modules python-apport python-crypto python-keyring python-problem-report python-software-properties ri1.8 ruby1.8 ruby1.8-dev sudo tzdata update-manager-core
        2013-07-02 06:33:57,772 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-02_06:33:57.772399.log'
        2013-07-02 06:36:10,584 INFO All upgrades installed

    I'm running Ubuntu 12.04 on Amazon EC2 servers. I have unattended-upgrades installed and configured as follows:

    /etc/apt/apt.conf.d/50unattended-upgrades:

        // Automatically upgrade packages from these (origin:archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "${distro_id}:${distro_codename}-security";
        //  "${distro_id}:${distro_codename}-updates";
        //  "${distro_id}:${distro_codename}-proposed";
        //  "${distro_id}:${distro_codename}-backports";
        };
        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        };

    /etc/apt/apt.conf.d/20auto-upgrades:

        APT::Periodic::Update-Package-Lists "1";
        APT::Periodic::Unattended-Upgrade "1";

    I've struggled to find documentation about what happens to running processes during an upgrade.
    - Is this expected behaviour? Or should unattended-upgrades restart Apache after upgrading it?
    - What can I do to ensure Apache is restarted correctly? Should I just blacklist the apache package?
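
    A minimal sketch, assuming the goal is simply to keep unattended-upgrades away from the web-server packages so they can be upgraded and restarted by hand; the package names are taken from the upgrade log above, and whether the package postinst scripts were supposed to restart apache2 is something to verify on a test instance:

        // /etc/apt/apt.conf.d/50unattended-upgrades -- hold back Apache so it can
        // be upgraded and restarted manually during a maintenance window
        Unattended-Upgrade::Package-Blacklist {
            "apache2";
            "apache2-mpm-prefork";
            "apache2.2-bin";
            "apache2.2-common";
        };

    Either way, a monitoring check or a cron job that runs "service apache2 status || service apache2 start" shortly after the daily upgrade window would catch the failure case.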

    Read the article

  • Getting 502 instead of 503 when all backend servers are down running HAProxy behind Apache

    - by scarba05
    I'm testing running HAProxy as a dedicated load balancer behind Apache 2.2, replacing our current configuration where we use Apache's load balancer. In our current, Apache-only set-up, if all the backend (origin) servers are down, Apache will serve a 503 Service Unavailable message. With HAProxy I get a 502 Bad Gateway response instead.

    I'm using a simple reverse proxy rewrite rule in Apache:

        RewriteRule ^/(.*) http://127.0.0.1:8000/$1 [last,proxy]

    In HAProxy I have the following (running in default tcp mode):

        defaults
            log global
            option tcp-smart-accept
            timeout connect 7s
            timeout client 60s
            timeout queue 120s
            timeout server 60s

        listen my_server 127.0.0.1:8000
            balance leastconn
            server backend1 127.0.0.1:8001 check observe layer4 maxconn 2
            server backend1 127.0.0.1:8001 check observe layer4 maxconn 2

    Testing connecting directly to the load balancer when the backend servers are down:

        [root@dev ~]# wget http://127.0.0.1:8000/ test.html
        --2012-05-28 11:45:28-- http://127.0.0.1:8000/
        Connecting to 127.0.0.1:8000... connected.
        HTTP request sent, awaiting response... No data received.

    So presumably this is down to the fact that HAProxy accepts the connection and then closes it.
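
    Not an authoritative fix, but one way to get the 503 behaviour back is to let HAProxy speak HTTP so it can answer with its own error page when no server is up. A minimal sketch; the health-check URL and error-file path are assumptions, and the second server line assumes the duplicated backend1 entry above was meant to be a second backend:

        listen my_server 127.0.0.1:8000
            mode http
            balance leastconn
            option httpchk GET /
            # served when no backend passes its health check
            errorfile 503 /etc/haproxy/errors/503.http
            server backend1 127.0.0.1:8001 check maxconn 2
            server backend2 127.0.0.1:8002 check maxconn 2

    With mode http and no servers available, HAProxy answers the request itself with the 503 error file instead of accepting and dropping the connection, so Apache's proxy no longer turns the empty reply into a 502.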

    Read the article

  • Apache 2 Fails to Start After Upgrade with No Errors

    - by Mark Davidson
    Hi all, hoping someone can help me with a server issue. Recently we upgraded to the latest Apache on two boxes within our organisation, one being the master box and the other being for failover. The upgrade went fine on the master box, but on the failover box Apache fails to start with no errors being output or logged. Both boxes have the exact same configuration, so I found this a bit strange. I've reinstalled Apache and have been through checking the configs and did not find any obvious errors.

    Eventually I ran a syntax check on each config file being included and found that one of the files apparently has syntax errors:

        Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration
        Invalid command 'php_value', perhaps misspelled or defined by a module not included in the server configuration
        Invalid command 'GeoIPEnable', perhaps misspelled or defined by a module not included in the server configuration

    I've triple-checked that all the modules are enabled but it still fails. I've googled these errors a lot but have been unable to find a solution. I was wondering if anyone had encountered such a problem before and could point me towards a solution. Thanks for your help in advance.

    P.S. Apache-related versions on the server:

        ii apache2               2.2.3-4+etch10       Next generation, scalable, extendable web se
        ii apache2-mpm-prefork   2.2.3-4+etch10       Traditional model for Apache HTTPD 2.1
        ii apache2-utils         2.2.3-4+etch10       utility programs for webservers
        ii apache2.2-common      2.2.3-4+etch10       Next generation, scalable, extendable web se
        ii libapache2-mod-geoip  1.1.8-2              GeoIP support for apache2
        ii libapache2-mod-php5   5.2.0+dfsg-8+etch15  server-side, HTML-embedded scripting languag
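
    One thing worth checking, offered as a sketch rather than a diagnosis: those three directives come from modules (typically mod_authz_host, mod_php5 and mod_geoip on Apache 2.2), and a per-file syntax check run outside the normal wrapper will not have them loaded, so the errors can be misleading. Assuming a Debian-style layout, something like this compares the two boxes:

        # check the full configuration, not a single file in isolation
        apache2ctl configtest

        # list the modules Apache actually loads (compare master vs. failover;
        # supported on most 2.2 builds)
        apache2ctl -t -D DUMP_MODULES

        # confirm the module symlinks exist in mods-enabled on the failover box
        ls -l /etc/apache2/mods-enabled/
        a2enmod php5 geoip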

    Read the article

  • Apache: how to set custom 401 error page and save original behaviour

    - by petRUShka
    I have Kerberos-based authentication with Apache/2.2.3 (Linux/SUSE). When a user tries to open some URL, the browser asks for a domain login and password, as with HTTP Basic Auth. If the user cancels the prompt three times, Apache returns the 401 Authorization Required error page. My current virtual host config is:

        <Directory /home/user/www/current/public/>
            Options -MultiViews +FollowSymLinks
            AllowOverride None
            Order allow,deny
            Allow from all

            AuthType Kerberos
            AuthName "Domain login"
            KrbAuthRealms DOMAIN.COM
            KrbMethodK5Passwd On
            Krb5KeyTab /etc/httpd/httpd.keytab
            require valid-user
        </Directory>

    I want to set a nice custom 401 error page with some instructions for users, so I added this line to the virtual host config:

        ErrorDocument 401 /pages/401

    It works: when the user can't authorize, Apache redirects him to my nice page. But Apache no longer asks the user for a login and password as it did before. I want this functionality and the nice error page at the same time. Is it possible to make it work properly?
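
    A sketch of the usual workaround, with the caveat that the page path and exact directives here are assumptions: Apache only keeps the 401 status (and the WWW-Authenticate/Negotiate challenge that makes the browser prompt) when the ErrorDocument is a local path that can be served without another authentication round, so the custom page has to live outside the Kerberos-protected area:

        # keep ErrorDocument pointing at a local path, not a full http:// URL,
        # otherwise Apache answers with a redirect and the 401 challenge is lost
        ErrorDocument 401 /errors/401.html

        # let the error page itself be fetched without authentication
        Alias /errors/ /home/user/www/errors/
        <Location /errors/>
            Order allow,deny
            Allow from all
            Satisfy any
        </Location>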

    Read the article

  • Apache 2.2.14: SSLCARevocation location

    - by Doc
    I am installing a .crl in my Apache config. It looks like this:

        <VirtualHost default>
            DocumentRoot "web"
            ServerName example.com
            SSLEngine on
            SSLCertificateFile "cert.crt"
            SSLCertificateKeyFile "key.key"
            SSLCertificateChainFile "cert.ca-bundle"
            SSLProtocol -all +SSLv3
            SSLCipherSuite SSLv3:+HIGH:+MEDIUM
            <Directory>
                Order deny,allow
                Allow from all
                SSLCACertificateFile "ClientRootCert.crt"
                SSLVerifyClient require
                SSLVerifyDepth 3
                SSLCARevocationFile "CRLList.crl"
            </Directory>
        </VirtualHost>

    When Apache is started, I get the error:

        SSLCARevocationFile not allowed here

    When I place SSLCARevocationFile above the Directory tag, Apache starts, but all client certs are rejected with the message ssl_error_expired_cert_alert (both revoked and active certs). How do I solve this?
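
    For what it's worth, a sketch of the usual arrangement under Apache 2.2, where the CRL and client-CA directives live at the virtual-host level rather than inside <Directory> (mod_ssl does not accept SSLCARevocationFile in directory context); the file names are the ones from the question:

        # virtual-host level
        SSLCACertificateFile  "ClientRootCert.crt"
        SSLCARevocationFile   "CRLList.crl"

        <Directory "/path/to/web">
            SSLVerifyClient require
            SSLVerifyDepth 3
        </Directory>

    As for every certificate being rejected afterwards: an out-of-date CRL (nextUpdate in the past) makes OpenSSL fail verification for all clients, so it is worth checking the CRL's own validity window:

        openssl crl -in CRLList.crl -noout -lastupdate -nextupdate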

    Read the article

  • Apache-style multiviews with Nginx

    - by Kenn
    I'm interested in switching from Apache/mod_php to Nginx for some non-CMS sites I'm running. The sites in question are either completely static HTML files or simple PHP, but the one thing they have in common is that I'm currently using Apache's mod_negotiation to serve them up without file extensions. I'm not concerned with actual content negotiation; I'm using this just so I don't have to use file extensions in my URLs. For example, the file at /info/contact.php is accessed via a URL of just /info/contact. The actual file is a .php file in that location, but I don't use the extension in the URLs. This gives me slightly shorter, cleaner URLs and also doesn't expose what's essentially a meaningless implementation detail to the user.

    In Apache, all this takes is enabling mod_negotiation and adding +MultiViews to the Options for the site. In Nginx I gather I'll be rewriting somehow, but being new to Nginx I'm not exactly sure how to do it. These sites are currently working fine proxied from Nginx to Apache, but I'd like to try running them solely with Nginx/fastcgi. They work fine this way as long as I'm using the extensions, so the fastcgi aspect is working great. My concern now is just with removing those extensions. It's important to keep in mind that the filename is not always in the URL, in the case of subdirectories. That is:

        /foo/bar should look for /foo/bar.php or /foo/bar/index.php
        /foo/    should look for /foo/index.php

    Is there a simple way to achieve this with Nginx, or should I stick with proxying to Apache?
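
    A minimal sketch of the extensionless-PHP pattern in Nginx, assuming a FastCGI backend is already working; the named location, index line and socket path are assumptions to adapt rather than a drop-in config:

        index index.php index.html;

        location / {
            # try the literal file, then file.html, then a directory index,
            # and finally fall back to the .php lookup
            try_files $uri $uri.html $uri/ @extensionless;
        }

        location @extensionless {
            rewrite ^(.*)$ $1.php last;
        }

        location ~ \.php$ {
            try_files $uri =404;                        # don't pass missing files to PHP
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php-fpm.sock;    # assumed socket path
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }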

    Read the article

  • Apache 2.4 and PHP 5.4 getting connection reset errors in the browser

    - by zuallauz
    Over the weekend I upgraded my development web server to Apache 2.4 and PHP 5.4. My web application, which was previously working great on Apache 2.2 and PHP 5.3, now starts getting messages saying "the connection was reset" in Firefox (see screenshot). I am connecting to the Linux machine via the local LAN. I'm assuming it might be something to do with the new version of Apache or PHP, or the new LAMP stack which I downloaded from BitNami. It seems to happen every 5-10 requests, and sending a POST request from a page seems more likely to trigger it. Is it timing out the script or something? These are just basic dynamic pages I'm loading and they worked perfectly in Apache 2.2 and PHP 5.3. Here are my httpd.conf and PHP.ini if they hold any clues. Any ideas? Any help much appreciated.

    Read the article

  • Microsoft Basic Office 2007 Activation Keys won't work after re-installing on my Laptop

    - by Rolnik
    So, I've upgraded the hard drive on my laptop and proceeded to grab my trusty copper-faced official MS Office disk to do an install. I have three licenses with the fancy green-blue paper that identifies the license keys. The problem is that for each of these license keys, when the Office 2007 software asks me to enter the "Product Key", it states:

        The key is incorrect. Verify that you have the correct key, and then retype it.

    Why would Microsoft want to inhibit/prohibit re-installs on the same machine that the software was initially installed on? Incidentally, the same goofy error happens with each of the three valid product keys (activation keys) that I enter.

    Read the article

  • How to speed up apache

    - by Zen_silence
    We have a server with 8 cores, 16GB of RAM and RAID 0 SAS 10K drives. Our goal is to use this to serve a fairly simple PHP application quickly. We have tested all the other components and we think we have narrowed the bottleneck down to Apache. I am no Apache guru; I have done some research and tested a couple of things, but when I test with JMeter, launching 100 concurrent connections against the server, the first 10-20 come back quickly (30-100ms) but the rest take between 1000ms and 3000ms. Anyone have any ideas on what to change in our Apache config to make this faster? Right now it's a vanilla install of Apache.
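
    As a starting point only, and assuming the default prefork MPM with mod_php: the stock prefork settings start very few children, so a burst of 100 concurrent connections queues behind process creation. The numbers below are illustrative assumptions, not tuned values:

        <IfModule mpm_prefork_module>
            StartServers          20
            MinSpareServers       20
            MaxSpareServers       40
            ServerLimit          256
            MaxClients           256      # bound this by available RAM per child
            MaxRequestsPerChild 4000
        </IfModule>

        KeepAlive          On
        KeepAliveTimeout   2

    For a simple PHP application, an opcode cache such as APC usually matters more than Apache settings, so it is worth confirming one is installed before tuning further.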

    Read the article

  • apache server mapping to tomcat issues

    - by Karthick
    I am working on a Flex application; the back end is Spring/Hibernate. The application works fine on my local XP system, but when I try to deploy it on the server I run into an issue: how to map the Java application in Apache. When I bypass Apache and go straight to Tomcat, my application works fine, but it does not work through Apache. This should be fixable by mapping the Java application in Apache, but I don't know how to set up that mapping. Can you help me out?

    My server properties: Linux lumiin.ch 2.6.18-028stab095.1 i686

    Regards, Karthick
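
    A sketch of one common way to do this, assuming Tomcat's AJP connector is enabled on its default port 8009 and the webapp is deployed under /myapp (both assumptions); mod_proxy_ajp ships with Apache 2.2 and later:

        # load the proxy modules (on Debian/Ubuntu: a2enmod proxy proxy_ajp)
        LoadModule proxy_module     modules/mod_proxy.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

        # forward the Java application to Tomcat over AJP
        ProxyPass        /myapp ajp://localhost:8009/myapp
        ProxyPassReverse /myapp ajp://localhost:8009/myapp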

    Read the article

  • Installing Apache Lucene for LAMP server

    - by Pawan
    I have Ubuntu running as a LAMP (Linux, Apache, MySQL and PHP) server. To provide better search capability, one of my friends recommended installing "Apache Lucene". While reading about it I came to know that Apache Lucene requires Tomcat and Java to run. Please let me know whether it is feasible to use it here, or whether there are better alternatives for a LAMP stack. I am looking for a proven solution. Thanks :)

    Read the article

  • Securing php on a shared apache

    - by Jack
    I'm going to install Apache+PHP on a server where two users, A and B, will deploy their websites. I'm trying to achieve isolation of each user's space for security reasons: no script from site A should be able to read files in site B. To achieve this I installed suPHP. Website files of user A are owned by A:A with perm=700, and those of user B are owned by B:B with perm=700. suPHP works great, but Apache complains about permissions when reading .htaccess. How can I let Apache read .htaccess in every directory of A and B while keeping the isolation between site A and site B? I played with ownership (group = www-data) and permissions (750) but found no way to keep the isolation guaranteed. Any ideas? Maybe by running Apache as root, but in that case are there any drawbacks?
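
    One possible arrangement, sketched under the assumption that PHP runs via suPHP as the owning user, that only .htaccess and static files need to be readable by the www-data Apache user, and that neither A nor B is a member of www-data (the paths are illustrative):

        # site A: user A owns everything, the Apache group may traverse and read,
        # but user B (who is not in www-data) cannot
        chown -R A:www-data /home/A/www
        find /home/A/www -type d -exec chmod 750 {} \;
        find /home/A/www -type f -exec chmod 640 {} \;

        # same scheme for site B
        chown -R B:www-data /home/B/www
        find /home/B/www -type d -exec chmod 750 {} \;
        find /home/B/www -type f -exec chmod 640 {} \;

    Isolation should still hold because under suPHP site A's scripts execute as user A, which is not in the www-data group, so they cannot read B's group-readable files; only the Apache parent process can.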

    Read the article

  • Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    - by John
    A PCI compliance scan on a CentOS LAMP server fails with this finding, even though the Server header and ServerSignature don't expose the Apache version:

        Apache httpOnly Cookie Information Disclosure CVE-2012-0053

    Can this be resolved by simply specifying a custom ErrorDocument for the 400 Bad Request response? How is the scanner determining this vulnerability? Is it invoking a bad request and then checking whether the response is the default Apache 400 page?
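
    A rough way to reproduce what such scanners typically probe for, offered as a sketch (the scanner's exact detection logic is an assumption, and www.example.com is a placeholder): CVE-2012-0053 is about the default 400 Bad Request page echoing an over-long Cookie header back in the error text, which can expose httpOnly cookie values. Sending an oversized cookie and inspecting the body shows whether the installed httpd is affected:

        # send a Cookie header larger than the default 8190-byte field limit
        curl -sk -D - -o body.txt \
             -H "Cookie: probe=$(head -c 9000 /dev/zero | tr '\0' 'a')" \
             https://www.example.com/
        # a vulnerable httpd includes the oversized header value in the 400 error body
        grep -c 'probe=aaaa' body.txt

    The usual resolution is updating the distribution httpd package (the fix is backported on CentOS), since a custom ErrorDocument only masks the default page rather than removing the behaviour scanners look for.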

    Read the article

  • Apache httpd workers retry

    - by David Newcomb
    I have an Apache httpd web server running mod_proxy and mod_proxy_balancer. The whole of /somedir is sent to 2 worker machines which service the requests using the round robin scheduler. Each worker machine is running IIS, but I don't think that is important. I can demonstrate the load balancer working by repeatedly requesting a single page which contains the IP address of the machine, and I can see that it switches from one to the other in a predictable round robin fashion. If I switch off one of the IIS servers and keep requesting the same page, then each page only contains the IP address of the machine that is up. However, if I start IIS but don't run my IIS application, then /somedir returns 500 (as it should). I've added 500 to failonstatus (Apache 2.4), so when it hits the error Apache places the worker machine into the error state. Apache still returns the proxy error to the client, though. How can I make Apache catch the proxy failure and retry using a different worker, in the same way that it does for a connection failure?

    Update: there is almost the same question asked on Stack Overflow, so joining them together: http://stackoverflow.com/questions/11083707/httpd-mod-proxy-balancer-failover-failonstatus-transperant-switching

    Read the article

  • WAMP starts Apache or Mysql, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host; I'll call him WampUser. When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either one successfully. When I then attempt to start the other, I get:

        Error 1053: The service did not respond to the start or control request in a timely fashion.

    This error appears immediately; there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can therefore solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place. I'm especially curious because this situation does not occur on other servers, so something I've done to this server has made these two services exclusive when running as the same user. Here are some miscellaneous data points:

        - I am using Windows Server 2003.
        - I've given WampUser recursive Full Control over the C:\wamp directory.
        - Nothing appears in the event log after the service fails.
        - No log entries appear in either the MySQL log or the Apache error log.
        - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?

    Read the article

  • mod_rpaf problems with Nginx front, Apache back-end after Ubuntu upgrade

    - by Kenn
    I'm running an Nginx front end for static files, proxying to an Apache backend for PHP and Passenger, and using Apache's mod_rpaf to set the correct remote IP address on the backend. Everything worked fine until I upgraded to Ubuntu 12.04 (Precise). Now Apache reports all connections coming from 127.0.0.1. Here's the relevant configuration; nothing here changed with the upgrade.

    Nginx:

        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    mod_rpaf:

        <IfModule mod_rpaf.c>
            RPAFenable On
            RPAFsethostname On
            RPAFproxy_ips 127.0.0.1 ::1
            RPAFheader X-Forwarded-For
        </IfModule>

    I'm using %{X-Forwarded-For}i in my Apache LogFormat directive and the access logs are showing the correct remote address, so I know Nginx is passing the address along properly. In a phpinfo() test, HTTP_X_FORWARDED_FOR shows the correct remote address, but REMOTE_ADDR is 127.0.0.1. This is reflected in PHP applications as well, such as WordPress comments. I've tried switching Nginx and mod_rpaf to X-Real-IP with no effect. Did something change that I missed?

    Relevant version info, everything installed from the Ubuntu repository:

        Nginx 1.1.19
        Apache 2.2.22
        mod_rpaf 0.6

    Read the article

  • Keepalived with apache unable to bind interface on Backup server

    - by davideagle
    I have two Debian 6 servers running keepalived 1.1.20, with one server acting as the master and the other as the backup. Both servers host Apache 2.4, which has a global listener on all interfaces on port 80 (Listen *:80). However, I have some sites that require a listener on port 443 (SSL), and that is configured per VirtualHost in the Apache config, since I do not want every VirtualHost to listen on port 443. The problem is that when I try to start Apache on the backup machine, which does not hold the virtual interface the VirtualHost is supposed to be listening on, I get:

        AH00072: make_sock: could not bind to address 1.1.1.1:443

    I know this is expected behaviour for Apache. The real question is: are there any known workarounds or solutions to this scenario?
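
    One commonly used workaround, sketched here rather than prescribed: allow the backup node to bind sockets to addresses it does not currently own, so Apache can start before keepalived fails the VIP over. This is a kernel-wide setting, which may or may not be acceptable in your environment:

        # allow binding to non-local IP addresses (the keepalived VIP) on the backup node
        echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
        sysctl -p

    The other common approach is to use a global Listen *:443 and let VirtualHost matching decide which sites answer on the VIP, so no Listen directive has to bind to an address the backup does not hold.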

    Read the article

  • Relation between Apache and DNS configuration

    - by KayKay
    I configured my DNS server (BIND 9) to accept every subdomain, using a wildcard 'A' record:

        *.mydomain.tld. IN A xx.xx.xx.xx

    I configured Apache to serve some specific subdomains using virtual hosts:

        <VirtualHost *:80>
            ServerName sub1.mydomain.tld
            ServerAlias sub1.mydomain.tld
            JkMount / sub1JK
            JkMount /* sub1JK
        </VirtualHost>

    When I ping a subdomain that is configured in Apache from a remote computer, I get a response. When I ping a subdomain that is not configured in Apache, the host is not found. I don't understand why the Apache configuration would affect DNS resolution like this. I would appreciate any information that helps me understand it. Thanks a lot.
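
    A quick way to separate the two layers, sketched with dig (substitute the real zone and nameserver names): querying the authoritative server directly shows whether the wildcard record is actually being served, independently of Apache or of caching at the client's resolver:

        # ask the authoritative nameserver directly for a subdomain that has no vhost
        dig @ns1.mydomain.tld random-test.mydomain.tld A +short

        # compare with what the client's normal resolver returns
        dig random-test.mydomain.tld A +short

    If the first query answers and the second does not, the problem is resolver-side caching or delegation rather than Apache; Apache never participates in name resolution.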

    Read the article

  • Mounted NFS directory not writable by Apache / PHP

    - by phpfour
    Need some help here with NFS. Here's what I have (all servers running CentOS 5.6 with SELinux):

        172.17.20.1 - Primary server with static IP. Varnish redirects requests to the web servers.
        172.17.20.2 - Web server 1
        172.17.20.3 - Web server 2

    The application residing on the web servers is running Drupal, and I need both of them to share the same files directory. I have created a folder on 172.17.20.1 called /var/nfs, owned by the root user. Here is my /etc/exports content:

        /var/nfs 172.17.20.2(rw,sync,no_root_squash) 172.17.20.3(rw,sync,no_root_squash)

    On both web servers (172.17.20.2/3), I have it mounted like below:

        [root@web2 ~]# mount
        ...
        172.17.20.1:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,sync,hard,intr,addr=172.17.20.1)

    On all the servers, I've added the user apache to the root group to get the desired write access:

        [root@main ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

        [root@web1 ~]# cat /etc/group
        root:x:0:root,apache
        ....
        apache:x:48:

    Despite all this, when I try to write files into the /mnt/nfs/var/nfs folder from Drupal/PHP, it cannot write to it. I even tried with a simple PHP upload script, but it doesn't work, so the problem is not with Drupal. Any help you guys can give is much appreciated. I've spent hours and hours on it without any success :( Thanks in advance.
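
    Two things worth checking, sketched below under the assumption that the apache UID/GID (48) is the same on all three machines: the exported directory is still only writable by its owner (adding apache to the root group does not help unless the directory is group-writable), and on CentOS with SELinux enforcing, httpd is blocked from writing to NFS mounts unless the relevant boolean is set:

        # on the NFS server: make the export writable by the apache user
        chown -R apache:apache /var/nfs
        chmod -R 775 /var/nfs

        # on each web server: allow httpd to read/write NFS-mounted filesystems
        setsebool -P httpd_use_nfs 1

        # if it still fails, look for SELinux denials
        grep denied /var/log/audit/audit.log | tail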

    Read the article

  • How to use chain.p7b with Apache?

    - by Debianuser
    I wanted to set up an SSL website on Apache and applied for a certificate from my local ISP. All they sent me was a single file named chain.p7b. I have always used certificates from other vendors without any issues, but they usually provide two files to be configured as SSLCertificateFile and SSLCertificateChainFile in Apache. Following instructions from several online resources, I opened the p7b file in Windows and extracted 4 certificates from it. I then tried configuring Apache with one of the files and it worked, but browsers show a warning:

        The certificate is not trusted because no issuer chain was provided.

    I thought I had to use the remaining 3 files as SSLCertificateChainFile and/or SSLCACertificateFile. I tried that but it didn't work, so I am assuming it might be something completely different. Has anyone faced this issue before? The following page http://www-01.ibm.com/support/docview.wss?uid=swg21458997 talks about using a keystore, but is that relevant to Apache?
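
    A sketch of the usual conversion path, with the file names here being assumptions: a PKCS#7 (.p7b) bundle can be unpacked with OpenSSL on the server itself, and the resulting certificates split between the server certificate and the chain:

        # extract all certificates from the PKCS#7 bundle into PEM format
        openssl pkcs7 -print_certs -in chain.p7b -out allcerts.pem
        # for a binary (DER) bundle, add: -inform DER

        # split allcerts.pem: the server (leaf) certificate goes in one file,
        # the intermediate/root certificates in another, then reference them:
        #   SSLCertificateFile      /etc/ssl/certs/server.pem
        #   SSLCertificateChainFile /etc/ssl/certs/intermediates.pem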

    Read the article

  • Executing a command as apache

    - by Lord Loh.
    This script keeps outputting a 1 and I cannot understand why:

        <?php
        passthru("nohup sudo rndc reload sd.example.com", $op);
        print_r($op);
        ?>

    I have also tried the above code without the nohup. I have the following line in my sudoers file:

        apache ALL = NOPASSWD: /usr/sbin/rndc reload sd.example.com

    Just to test, I temporarily allowed apache a shell, logged in as apache via sudo su apache, and successfully managed to execute sudo rndc reload sd.example.com. I do not see any error messages in my log files either. What could I possibly be doing wrong? None of the similar threads have pointed me to anything that solved my problem or helped me debug it.
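
    Two quick checks, sketched under the assumption that this is a RHEL/CentOS-style sudoers default: capture stderr so the real error becomes visible, and look for "requiretty", which makes sudo refuse to run from a process with no terminal (such as Apache) even when NOPASSWD is set.

        <?php
        // hypothetical debug version: merge stderr into the captured output
        passthru("sudo /usr/sbin/rndc reload sd.example.com 2>&1", $rc);
        echo "exit code: $rc\n";
        ?>

    And in /etc/sudoers (edited via visudo):

        # let the apache user run its allowed sudo command without a tty
        Defaults:apache !requiretty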

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will allow me to interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when requesting a specific URL. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like:

        Allow from 192.168.0.3   <-- From <Location /foo/bar> in myserver.com vhost
        Require valid user       <-- From <Directory /var/www/foo> in global configuration
        Satisfy any              <-- From <File bar.html> in global configuration

    [Background: why do I want this? The Apache merging rules for configuration directives are quite complex to get right. It would be great to have a tool which allows you to check that your rules are doing exactly what you want, and it would be a good learning tool.]

    If there isn't such a tool, is there a debug option in Apache that will log such information for each incoming request?
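
    Not a full per-request explainer, but a sketch of the built-in introspection that gets part of the way there: mod_info shows the merged configuration per module, and apachectl -S dumps how virtual hosts were parsed. Enabling it with access restricted (the allowed network below is an assumption) looks roughly like this on a 2.2-style setup:

        # enable mod_info (Debian/Ubuntu: a2enmod info) and restrict who can see it
        <Location /server-info>
            SetHandler server-info
            Order deny,allow
            Deny from all
            Allow from 192.168.0.0/24
        </Location>

        # then browse http://myserver.com/server-info?config for the merged config,
        # and run "apachectl -S" to see how virtual hosts and ports were parsed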

    Read the article

  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now and for the most part it has been working great.

    The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change. The problem appears to be twofold. First, after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is that upon doing this the file names that mod_pagespeed creates change, but the old files appear to still be cached client-side for many people, leading to very unexpected results. Second, if we do not restart Apache, the changes to the files may or may not appear client-side due to the cached minified assets.

    The simple solution is to disable mod_pagespeed, but I would rather not do that as it has made a fairly large impact on performance. I feel there must be a better way to deal with the inconsistencies in cache between the client and server, to prevent people having to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses:

        http://wellplayed.org
        http://wellplayed.org/tv

    Thanks in advance!
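
    A sketch of the flush mechanism mod_pagespeed provides for exactly this deploy scenario, instead of restarting Apache; the path below is only an assumption, use whatever ModPagespeedFileCachePath is set to on these servers:

        # invalidate mod_pagespeed's rewritten resources after a code push;
        # touching cache.flush marks everything older than the file as stale
        sudo touch /var/cache/mod_pagespeed/cache.flush

    Since the rewritten asset URLs are content-hashed, the usual failure mode is stale HTML (in Varnish or the browser) still pointing at old hashes, so keeping the HTML itself short-lived in Varnish alongside the cache.flush step tends to avoid the Apache restart entirely.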

    Read the article
