Search Results

Search found 10359 results on 415 pages for 'apache bench'.

  • How to deal with malicious domain redirections?

    - by user359650
    It is possible for anybody to buy a domain name containing negative terms and point it at someone's website in order to damage their reputation. For instance, someone could buy the domain child-pornography.com and point it to the address 64.34.119.12, which is the address behind stackoverflow.com, and people navigating to the domain in question would end up seeing content from Stack Exchange, which would be detrimental to Stack Exchange's image. To illustrate this, I added the entry 64.34.119.12 child-pornography.com to my /etc/hosts file and tested: the offending domain served Stack Exchange's 404 page. I personally found this user experience terrible, as someone could think that Stack Exchange is in favor of child pornography and awaiting support from the community to create a Q&A site about it.

    I tested with other websites and experienced other behaviors that I would categorize as follows:

    1 - Useful 404 page (happens with stackoverflow.com): for me, the worst way of handling this, as the image of the targeted website is directly associated with the offending domain. The more useful the 404 page, the stronger the impression that the targeted website would be willing to help with child pornography.

    2 - Redirection (happens with microsoft.com): for instance, when accessing child-pornography.com you get redirected to www.microsoft.com. It isn't as bad as the above, as the offending domain name never appears alongside the targeted website's content, but it is still bad in my opinion, as it gives the impression the targeted website bought the offending domain and redirected it to their website to get more traffic.

    3 - Server error (happens with lemonde.fr): you get an error from the web server whose page doesn't contain any content that can be associated with the targeted website (e.g. a default Apache 404 page, or a completely blank page). I believe that is good, as the identity of the targeted website isn't revealed.

    Above are the various behaviors I experienced, but I also thought about a fourth way of dealing with this, described below.

    4 - Disclaimer page (I haven't found any website implementing this technique): display a message such as: "You ended up here because someone bought and linked the child-pornography.com domain to our website. We do not own this domain and do not associate ourselves with it. This request has been logged by our servers and we will raise this issue with the competent authorities to have this domain taken down. If you want to access our website, please click here." The good thing about this method is that it can be implemented at the application layer (good if you don't have control over the web server, which happens with some hosting solutions), allows you to protect yourself from any liability, and offers to redirect the visitor to your own website.

    Which of the above options would you implement to deal with malicious domain linking (IMO only options 3 and 4 are worth considering)?
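
    A minimal sketch of option 4 done at the web-server layer instead, using Apache's mod_rewrite in the VirtualHost (the domain name is a placeholder; a real rule would whitelist every domain you legitimately own):

        RewriteEngine On
        # If the Host header is not one of our own domains...
        RewriteCond %{HTTP_HOST} !^(www\.)?ourdomain\.com$ [NC]
        # ...send the visitor to the disclaimer page (the page itself is
        # excluded from the rule to avoid a redirect loop)
        RewriteRule !^/disclaimer\.html$ /disclaimer.html [R=302,L]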

  • Server-infrastructure recommendations

    - by Tim van Elsloo
    Here's the thing: I need a cheap, fast, reliable infrastructure that can dynamically scale (like Amazon S3: cloud storage). I'm thinking of 3 different types of 'servers':

    Application server
    - Should be able to run CentOS (or another light Linux distro)
    - Should be able to run Apache
    - Should be able to run PHP
    - Should be able to run GD (so it does rely on its CPU)
    - Should be extremely reliable and fast

    Database server
    - Should be able to run MySQL
    - Should be able to... well, do nothing else :P
    - Should be extremely reliable and fast

    Storage server
    - Should be able to run some kind of file-transfer daemon (like FTP, CouchDB, etc.)
    - Should be able to do nothing else
    - Should be extremely reliable and fast

    So technically, by transferring all static data to 2 different servers/services, the application server can totally focus on the web pages.

    My questions:

    1. What services do you recommend?
    2. Which is cheaper, faster and more reliable: using my own server, or using some cloud-storage/cloud-computing service (like Amazon S3, CloudFiles, etc.)?
    3. How can I prevent bandwidth abuse (such as DoS attacks causing the bill to be extremely high)?
    4. What's the difference between "including CDN" and "excluding CDN"? It seems the price doesn't differ at CloudFiles. Do you have to pay "including CDN" + "excluding CDN" when you decide to enable the delivery network, or do you only pay "including CDN"?
    5. Should I use my own nameserver too, or can I use my domain hoster's nameservers? What are the minimum software specifications for a nameserver? Can I write some software myself? Does anyone have a good protocol description?

    I hope you can answer my questions.

    Answers: I shouldn't write my own nameserver software. Instead, I should use something like bind (http://osspro.com/2010/05/04/linux-create-your-own-domain-name-server-dns/).
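
    On the nameserver question, one quick way to see what currently answers for a domain before deciding to run your own (hostnames below are placeholders):

        dig NS example.com +short            # which nameservers the registrar delegates to
        dig @ns1.example.com example.com A   # ask one of them directly for the A record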

  • How exactly is Google Webmaster Tools measuring "Site Performance"?

    - by Rémi
    I've been working for two months now on improving our response time (mainly server side) on a new forum (a brand-new product from a technical point of view) we launched in Germany a few months ago, and I'm quite surprised by the results I get. I monitor our response time using Apache logs and our own implementation of a Boomerang beacon. Using my stats, I can see that our new product responds in about 680 ms, where our old product was responding in about 1050 ms. On the other side, Google Webmaster Tools tells us that our pages have an average response time of about 1500 ms today, where it was 700 three months ago with our old product.

    I figured that GWT was taking client-side metrics into account, so I added some measures to our Boomerang beacon, and everything looks just fine. I've also run some random pages through YSlow and Google's Page Speed, and everything looks better than it was before. We even have an 82% score on Google's Page Speed tool, which is quite cool for a site with some ads in it :)

    Lately, we signed a deal with Akamai to use two of their products: CDN for our static files (we were using another CDN before, but it wasn't very effective) and RMA to improve network routes. We have also introduced a new aggressive cache mechanism to ensure that most of the pages served to crawlers are cached by our memcache grid. After checking my metrics, it seems these changes have improved response time from about 650 ms to about 500 ms, which is good (still not great, but definitely an improvement). But Webmaster Tools continues to report an increasing average response time, where we see it decreasing over the same period.

    Have you ever seen the same kind of weird behavior on your sites while doing performance improvements? Do you have any idea how to monitor the same thing Google does with Site Performance in Google Webmaster Tools, so that we could improve our site and constantly check whether it is what Google wants?

    Edit 2011/07/26: Thanks for your answers, guys! Nevertheless, I was not precise enough. The main issue we have is not with the Site Performance page but with the Crawl Stats one, for now. We probably found an issue on our side with some very slow pages (around 3000 ms!) and we are trying to fix them. I'll keep you posted as soon as I have more information. Thanks again!
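
    One rough way to sample what a crawler-like client sees, independent of the Boomerang beacon, is to time full responses with curl from outside your own network (the URL is a placeholder):

        curl -s -o /dev/null \
             -w 'dns:%{time_namelookup}  connect:%{time_connect}  ttfb:%{time_starttransfer}  total:%{time_total}\n' \
             http://www.example.de/some-forum-page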

  • Unable to connect to mail server via IMAP and roundcube

    - by mrhatter
    I am having trouble getting the final parts of my mail server up and working. I followed this tutorial to get everything set up on the mail server side. I have installed Roundcube for webmail and configured it, but it says "error connecting, connection refused" when attempting to connect via IMAP. This is through the "test imap" section of its installer. It is also giving me an error message about permissions for its log and temp folders, but that's not as important as actually getting mail to work. I have also tried connecting to the mail server using Thunderbird, but it cannot establish a connection either, and I know my login information is correct. I know that the databases are working correctly, based on the Roundcube installer telling me that they have been "successfully initialized".

    Here are my firewall rules, which I set up in iptables (modified from the ones in the tutorial):

        -A INPUT -i lo -j ACCEPT
        -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 487 -j ACCEPT
        -A OUTPUT -p tcp -m tcp --dport 993 -j ACCEPT
        -A INPUT -j DROP

    I'm not sure what to try next. Any help would be wonderful! I am using Ubuntu 14.04 server, Apache 2.4.7, Roundcube 1.0.1, and the latest versions of Dovecot and Postfix. The email databases are in MySQL. I am running this on a VPS server.

    UPDATE: I have changed from iptables to ufw. I ran the following commands to set up a basic firewall with ufw:

        ufw default deny
        ufw allow ssh
        ufw allow http
        ufw allow https
        ufw allow imap
        ufw allow imaps
        ufw allow smtp

    I then used telnet to check all of the mail ports, but port 993 isn't working even though ufw says both 993 and 993/tcp are open. What am I missing?
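
    To separate a firewall problem from a daemon problem, a hedged first check is whether anything is listening on 993 at all, and then whether a TLS handshake actually completes (the hostname is a placeholder):

        sudo netstat -tlnp | grep 993                    # is dovecot bound to 993 at all?
        openssl s_client -connect mail.example.com:993   # attempt a real IMAPS handshake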

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with Hostgator. GWT notified me about many 403 Access Denied pages. I've checked with Firebug too: even though the browser displays the full page correctly, the HTTP status returned is 403. The home page, however, correctly returns a 200 response. The 403 is also what Fetch as Google shows in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I copied the files and database "AS IS". I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain, but now it's an add-on one. What could be the issue?

    This is how Googlebot fetched the page.

    Fetch as Google:

        URL: http://MYSITE.COM/-----------------REMOVED.html
        Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 3899

        HTTP/1.1 403 Forbidden
        Date: Fri, 21 Jun 2013 05:32:15 GMT
        Server: Apache
        P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
        Expires: Mon, 1 Jan 2001 00:00:00 GMT
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
        Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
        Keep-Alive: timeout=5, max=75
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" >
        <head>
        <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
        <meta http-equiv="content-type" content="text/html; charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="keywords" content="" />
        <<<<<<TRIMMED>>>>>>>>>>>>>>
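
    A quick way to reproduce what GWT sees versus a normal browser is to request the same URL with different User-Agent headers and compare the status lines (the URL is a placeholder). If they differ, something on the new host (commonly mod_security or .htaccess rules) is filtering by user agent:

        curl -I http://MYSITE.COM/page.html
        curl -I -A 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)' \
             http://MYSITE.COM/page.html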

  • PHP - Internal APIs/Libraries - What makes sense?

    - by Mark Locker
    I've been having a discussion lately with some colleagues about the best way to approach a new project, and thought it'd be interesting to get some external thoughts thrown into the mix. Basically, we're redeveloping a fairly large site (written in PHP) and have differing opinions on how the platform should be set up.

    Requirements: the platform will need to support multiple internal websites, as well as external (non-PHP) projects, which at the moment consist of a mobile app and a toolbar. We have no plans/need in the foreseeable future to open up an API externally (for use in products other than our own).

    My opinion: we should have a library of well-documented native model classes which can be shared between projects. These models will represent everything in our database and can take advantage of object-oriented features such as inheritance, traits, magic methods, etc., as well as employing ORM. We can then add an API layer on top of these models which can basically accept requests and route them to the appropriate methods, translating the response so that it can be used platform-independently. This routing for each method can be set up as and when it's required.

    Their opinion: we should have a single HTTP API which is used by all projects (internal PHP ones or otherwise).

    My thoughts: to me, there are a number of issues with using the sole HTTP API approach:

    - It will be very expensive performance-wise. One page request will result in several additional HTTP requests (which, although local, are still ones that Apache will need to handle).
    - You'll lose all of the best features PHP has for OO development, from simple inheritance to employing the likes of ORM, which can save you writing a lot of code.
    - For internal projects, the actual process makes me cringe. To get a user's name, for example, a request would go out of our box, over the LAN, back in, then run through a script which calls a method, JSON-encodes the output and feeds that back. That would then need to be JSON-decoded and presented as an array ready to use. Working with arrays, as opposed to objects, makes me sad in a modern PHP framework.

    Their thoughts (and my responses):

    - Having one method of doing things keeps things simple. (You'd only do things differently if you were using a different language anyway.)
    - It will become robust. (Seeing as the API will run off the library of models, I think my option would be just as robust.)

    What do you think? I'd be really interested to hear the thoughts of others on this, especially as opinions on both sides are not founded on any past experience.
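
    A minimal sketch of the shared-model idea described above (all class, method and route names here are hypothetical, and the ORM plumbing is stubbed out):

        <?php
        // Shared model library: internal PHP projects call these classes directly.
        abstract class BaseModel
        {
            protected $attrs;

            public function __construct(array $attrs) { $this->attrs = $attrs; }

            public function __get($name)
            {
                return isset($this->attrs[$name]) ? $this->attrs[$name] : null;
            }

            public function toArray() { return $this->attrs; }
        }

        class User extends BaseModel
        {
            // In the real library this would be backed by the ORM, not a literal.
            public static function find($id)
            {
                return new self(array('id' => $id, 'firstName' => 'Ada', 'lastName' => 'Lovelace'));
            }

            public function fullName() { return $this->firstName . ' ' . $this->lastName; }
        }

        // Internal project: a plain method call, no HTTP round trip.
        $user = User::find(42);
        echo $user->fullName(), "\n";

        // Thin HTTP layer on top of the same models, for the mobile app/toolbar
        // (routing omitted): GET /api/users/42 would effectively run:
        echo json_encode(User::find(42)->toArray()), "\n";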

  • Teamcity build agent gives 504 gateway timeout

    - by Anthony
    I have a new TeamCity build agent machine which, when started up, tries to connect to the build server and fails. It never shows up in the connected, disconnected or unauthorised agents tabs of the build server web interface. The logs on the build agent show that it fails to connect with a 504 gateway timeout. This is from teamcity-agent.log (I have edited some identifying data out of this excerpt):

        [2012-09-04 15:34:59,776] INFO - buildServer.AGENT.registration - Registering on server http://10.0.10.16, AgentDetails{Name='my-local', AgentId=null, BuildId=null, AgentOwnAddress='10.0.1.14', AlternativeAddresses=[10.0.10.32], Port=8080, Version='21424', PluginsVersion='21424-md5-somechecksum', AvailableRunners=[ABunchOfPlugins], AvailableVcs=[SomeRunners], AuthorizationToken='sometoken'}
        [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Call http://10.0.10.16/RPC2 buildServer.registerAgent3: org.apache.xmlrpc.XmlRpcClientException: Server returned incorrect status code: 504 Gateway Time-out
        [2012-09-04 15:35:53,606] WARN - buildServer.AGENT.registration - Connection to TeamCity server is probably lost. Will be trying to restore it. Take a look at logs/teamcity-agent.log for details (unless you're using custom logging).

    But I can reach the build server. In fact, tracert shows that it is very nearby:

        Tracing route to TEAMCITYSERVER [10.0.10.16]
        over a maximum of 30 hops:

          1    <1 ms    <1 ms    <1 ms  10.0.2.1
          2    <1 ms    <1 ms    <1 ms  TEAMCITYSERVER [10.0.10.16]

        Trace complete.

    I can see a TeamCity login page if I hit http://10.0.10.16 in the browser. The TeamCity service is logging in as the same (local administrator) account as I used to log in and test the network. The build agent is a Windows 2008 Server VM hosted on Ubuntu 12.04 under Oracle VirtualBox. I have disabled firewalls on both the Windows and Ubuntu machines. Other VMs with similar configuration can connect fine and do not report this error. What can possibly be preventing this connection?
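
    A 504 is generated by an intermediary, not by the end server, so one hedged check is to speak HTTP to the server by hand from the agent and see who answers. If the response comes straight back from the TeamCity server, something between the agent process and the server (for example, a proxy configured for the agent's JVM) is producing the 504:

        telnet 10.0.10.16 80
        GET /RPC2 HTTP/1.0
        Host: 10.0.10.16

    (finish with a blank line to end the headers)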

  • Invalid keystore format with SSL in Tomcat 6

    - by strauberry
    I'm trying to set up SSL in my local Tomcat 6 installation. For this, I followed the official How-To, doing the following:

        $JAVA_HOME/bin/keytool -genkey -v -keyalg RSA -alias tomcat -keypass changeit -storepass changeit
        $JAVA_HOME/bin/keytool -export -alias tomcat -storepass changeit -file /root/server.crt

    Then I changed $CATALINA_BASE/conf/server.xml, un-commenting this:

        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/root/.keystore" keystorePass="changeit" />

    After starting Tomcat, I get this exception:

        INFO: Initializing Coyote HTTP/1.1 on http-8080
        30.06.2011 10:15:24 org.apache.tomcat.util.net.jsse.JSSESocketFactory getStore
        SCHWERWIEGEND: Failed to load keystore type JKS with path /root/.keystore due to Invalid keystore format
        java.io.IOException: Invalid keystore format
            at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:633)
            at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:38)
            at java.security.KeyStore.load(KeyStore.java:1185)

    When I look into the keystore with keytool -list, I get:

        root@host:~# $JAVA_HOME/bin/keytool -list
        Enter key store password: changeit
        Key store type: gkr
        Key store provider: GNU-CRYPTO

        Key store contains 1 entry(ies)

        Alias name: tomcat
        Creation timestamp: Donnerstag, 30. Juni 2011 - 10:13:40 MESZ
        Entry type: key-entry
        Certificate fingerprint (MD5): 6A:B9:...C:89:1C

    Obviously, the keystore types are different. How can I change the type, and will this fix my problem? Thank you!
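
    The keytool that generated the keystore is GNU Classpath's (key store type "gkr", provider GNU-CRYPTO), while Tomcat is reading it with a Sun JVM that expects JKS. A hedged fix is to regenerate the keystore with the keytool of the same JDK Tomcat actually runs on, forcing the JKS type (the JDK path below is an assumption; adjust it to your install):

        # check which keytool is actually first on the PATH
        readlink -f "$(which keytool)"

        # generate with the Sun JDK's keytool, explicitly requesting JKS
        /usr/lib/jvm/java-6-sun/bin/keytool -genkey -v -keyalg RSA -alias tomcat \
            -keystore /root/.keystore -storetype JKS \
            -keypass changeit -storepass changeit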

  • MySQL won't start or isn't installed

    - by Owen
    Hi there, I'm trying to get a local LAMP setup on my Ubuntu desktop. I've successfully got PHP installed, but I'm having trouble with MySQL. If PHP tries to connect to MySQL I get this error:

        Warning: mysql_connect() [function.mysql-connect]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) in /var/www/testing.php on line 3
        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    If I try via the command line I get much the same error:

        owen@desktop:~$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (13)

    Weirdly, /var/run/mysqld does not exist. Running a whereis command I get the following:

        owen@desktop:~$ whereis mysqld.sock
        mysqld: /usr/sbin/mysqld /usr/share/man/man8/mysqld.8.gz

    So is MySQL even installed? Well, according to dpkg:

        owen@desktop:~$ dpkg -l | grep mysql
        ii  libapache2-mod-auth-mysql  4.3.9-13ubuntu1              Apache 2 module for MySQL authentication
        ii  libdbd-mysql-perl          4.016-1                      Perl5 database interface to the MySQL database
        ii  libmysqlclient15off        5.1.30really5.0.83-0ubuntu3  MySQL database client library
        ii  libmysqlclient16           5.1.49-1ubuntu8.1            MySQL database client library
        ii  mysql-admin                5.0r14+openSUSE-2.1          GUI tool for intuitive MySQL administration
        ii  mysql-client-5.1           5.1.49-1ubuntu8.1            MySQL database client binaries
        ii  mysql-client-core-5.1      5.1.49-1ubuntu8.1            MySQL database core client binaries
        ii  mysql-common               5.1.49-1ubuntu8.1            MySQL database common files, e.g. /etc/mysql/my.cnf
        ii  mysql-gui-tools-common     5.0r14+openSUSE-2.1          Architecture independent files for MySQL GUI Tools
        ii  mysql-query-browser        5.0r14+openSUSE-2.1          Official GUI tool to query MySQL database
        ii  mysql-server               5.1.49-1ubuntu8.1            MySQL database server (metapackage depending on the latest version)
        ii  mysql-server-5.1           5.1.49-1ubuntu8.1            MySQL database server binaries and system database setup
        ii  mysql-server-core-5.0      5.1.30really5.0.83-0ubuntu3  MySQL database core server files
        ii  mysql-server-core-5.1      5.1.49-1ubuntu8.1            MySQL database server binaries
        ii  php5-mysql

    Can someone please help? I'm really confused as to what to do next. I'm not a Linux expert at all; most of these commands I've run I found on different blogs and help forums.
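
    The dpkg output shows mysql-server-5.1 is installed, so the likely gap is simply that the daemon isn't running: error 2002/13 on the socket usually means no mysqld process (or a permissions problem on /var/run/mysqld). A hedged first round of checks:

        sudo service mysql status                  # SysV equivalent: sudo /etc/init.d/mysql status
        sudo service mysql start
        ls -l /var/run/mysqld/                     # mysqld.sock should appear once mysqld runs
        sudo tail -n 50 /var/log/mysql/error.log   # startup errors land here, if it still fails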

  • Apache2, FastCGI, PHP-FPM, APC on virtualmin panel with nginx front end reverse proxy

    - by Ünsal Korkmaz
    My dream setup: PHP 5.3.6 + MySQL 5.5.10 on Apache2, FastCGI, PHP-FPM and APC, with nginx 1.0 as a front-end reverse proxy, and Virtualmin GPL on CentOS 5.6 as a free server-management panel, all on a fresh CentOS 5.6 install. I used this to install Virtualmin:

        wget http://software.virtualmin.com/gpl/scripts/install.sh
        chmod +x install.sh
        ./install.sh

    After setup, I see PHP is version 5.1 and MySQL is 5.0. The system does not support PHP-FPM, but does support the fcgid wrapper. I made the following changes:

        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/ius-release-1.0-6.ius.el5.noarch.rpm
        wget http://dl.iuscommunity.org/pub/ius/stable/Redhat/5/x86_64/epel-release-1-1.ius.el5.noarch.rpm
        rpm -Uvh ius-release*.rpm epel-release*.rpm
        yum install yum-plugin-replace
        yum remove mysql.i386
        yum replace mysql --replace-with mysql55
        service mysqld restart
        chkconfig mysqld on
        mysql_upgrade --password=1234
        yum replace php --replace-with php53u
        yum install php53u-fpm php53u-pecl-apc
        service httpd restart
        chkconfig php-fpm on
        service php-fpm start

    I am not sure why Virtualmin installs both mysql.i386 and the 64-bit version together, but I needed to remove one of them to use yum replace. So I now have PHP 5.3.6 + MySQL 5.5.10 with PHP-FPM and APC installed. But Virtualmin does not support PHP-FPM + FastCGI, and it is still running on fcgid. I am an ultra-newbie at server management, so I couldn't find a workaround after this. I want to switch the fcgid wrapper to PHP-FPM + FastCGI, at least for 1 virtual server. And if I can find a fix for this part, I want to set up nginx 1.0 as a front-end reverse proxy to serve static files and pass PHP files to Apache. http://nginxcp.com/ is what I want, but it's for cPanel.
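
    Outside Virtualmin's control, the usual hand-wired pattern for a single virtual server is an Apache mod_fastcgi "external server" pointing at the PHP-FPM listener. A hedged sketch (mod_fastcgi must be installed and loaded; the paths and the 127.0.0.1:9000 address are assumptions matching PHP-FPM's default):

        # inside the <VirtualHost> for the one site
        FastCgiExternalServer /var/www/fcgi/php5.external -host 127.0.0.1:9000
        Alias /fcgi /var/www/fcgi
        AddHandler php-fcgi .php
        Action php-fcgi /fcgi/php5.external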

  • Postfix unable to create lock file, permission denied

    - by John Bowlinger
    I thought I had my Postfix configuration all set up on my Amazon Ubuntu server, but I guess not. I'm trying to set up an admin email account for 3 virtually hosted Apache websites. Here's my Postfix main.cf:

        myhostname = ip-XX-XXX-XX-XXX.us-west-2.compute.internal
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = ip-XX-XXX-XX-XXX.us-west-2.compute.internal, localhost.us-west-2.compute.internal, , localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all
        virtual_mailbox_domains = example1.com, example2.com, example3.com
        virtual_mailbox_base = /var/mail/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmailbox
        virtual_minimum_uid = 100
        virtual_uid_maps = static:115
        virtual_gid_maps = static:115
        virtual_alias_maps = hash:/etc/postfix/virtual

    Here's my vmailbox file:

        [email protected]    example1.com/admin
        [email protected]    example2.com/admin
        [email protected]    example3.com/admin
        @example1.com       example1.com/catchall
        @example2.com       example2.com/catchall
        @example3.com       example3.com/catchall

    And finally my virtual file:

        [email protected]    postmaster
        [email protected]    postmaster
        [email protected]    postmaster

    When I try to send an email through netcat to one of my domains, I get:

        unable to create lock file /var/mail/vhosts/example1.com/admin.lock: Permission denied

    This is despite the fact that I set the example1.com group to postfix, and my virtual_uid_maps and virtual_gid_maps are both set to the Postfix group ID of 115.
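
    Given virtual_uid_maps = static:115, Postfix's virtual delivery agent writes as UID 115, so the mailbox tree needs to be writable by that UID and not merely owned by the postfix group. A hedged fix:

        sudo chown -R 115:115 /var/mail/vhosts   # match virtual_uid_maps / virtual_gid_maps
        sudo chmod -R u+rwX /var/mail/vhosts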

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from the /home/$username/public_html directory), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide and adapting it to my needs: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html

    I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:

        SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php"
        [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory

    The thing is, if I try running either of the following commands in the terminal, it works without any issues:

        /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
        /usr/bin/php-cgi /home/jimmy/public_html/test.php

    I've been trying for hours to get this going, and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
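
    execve() failing with "No such file or directory" inside a chroot usually means the dynamic loader or a shared library the binary needs is missing from the jail, even when php-cgi itself was copied in. A hedged check (library paths vary by distro and architecture):

        ldd /usr/bin/php-cgi
        # every file ldd lists must also exist at the same path under the jail, e.g.:
        ls -l /home/jimmy/lib64/ld-linux-x86-64.so.2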

  • Issues with ProxyPass and ProxyPassReverse when proxying to localhost and a different TCP port

    - by mbrownnyc
    I am attempting to use ProxyPass and ProxyPassReverse to proxy requests through Apache to another server instance that is bound to localhost on a different TCP port than the one the VHost listens on (the VHost is bound to :80, while the target is bound to :5000). However, I repeatedly receive HTTP 503 when accessing the Location. My config, following the ProxyPass documentation:

        <VirtualHost *:80>
            ServerName apacheserver.domain.local
            DocumentRoot /var/www/redmine/public
            ErrorLog logs/redmine_error

            <Directory /var/www/redmine/public>
                Allow from all
                Options -MultiViews
                Order allow,deny
                AllowOverride all
            </Directory>
        </VirtualHost>

        PassengerTempDir /tmp/passenger

        <Location /rhodecode>
            ProxyPass http://127.0.0.1:5000/rhodecode
            ProxyPassReverse http://127.0.0.1:5000/rhodecode
            SetEnvIf X-Url-Scheme https HTTPS=1
        </Location>

    I have tested binding the alternate server to the interface IP address, and the same issue occurs. The server servicing the request is an instance of python paste:httpserver, and it has been configured to use the /rhodecode suffix (as I saw this mentioned in other posts about ProxyPass). The documentation from the project itself, RhodeCode, reports to use the above. The issue persists if I target another server serving on a different port. Does ProxyPass allow proxying to a different TCP port?

    [update] I won't delete this, in case someone comes across the same issue. I had set an ErrorLog, and in that ErrorLog the following error was reported:

        [Wed Nov 09 11:36:35 2011] [error] (13)Permission denied: proxy: HTTP: attempt to connect to 127.0.0.1:5000 (192.168.100.100) failed
        [Wed Nov 09 11:36:35 2011] [error] ap_proxy_connect_backend disabling worker for (192.168.100.100)

    After some more research, I attempted to set SELinux to permissive (echo 0 >/selinux/enforce) and tried again. It turns out the SELinux boolean httpd_can_network_connect must be set to 1. For persistence on reboot:

        setsebool -P httpd_can_network_connect=1

  • Unable to install ffmpeg-php

    - by matt74tm
    I followed the instructions on http://www.mysql-apache-php.com/ffmpeg-install.htm, but ffmpeg-php does not show up in my phpinfo(). The commands I ran (in order):

        # yum install ffmpeg ffmpeg-devel
        ...
        Public key for faac-1.26-1.el5.rf.x86_64.rpm is not installed

        # rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/rpmforge/RPMS/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
        ...
        1:rpmforge-release ########################################### [100%]

        # yum install ffmpeg
        ...
        Complete!

        # wget http://space.dl.sourceforge.net/project/ffmpeg-php/ffmpeg-php/0.6.0/ffmpeg-php-0.6.0.tbz2
        ...
        # tar -xjf ffmpeg-php-0.6.0.tbz2
        # cd ffmpeg-php-0.6.0
        # phpize
        ...
        configure: error: ffmpeg headers not found. Make sure ffmpeg is compiled as shared libraries using the --enable-shared option

        # yum install ffmpeg-devel
        ...
        Complete!

        # ./configure
        ...
        config.status: creating config.h

        # make
        ...
        Build complete.
        Don't forget to run 'make test'.

        # make install
        Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20090626/

        # ls -al /usr/local/lib/php/extensions/no-debug-non-zts-20090626/
        ...
        -rwxr-xr-x 1 root root 185285 Sep 20 03:36 ffmpeg.so*
        ...

        # nano /usr/local/lib/php.ini

    In php.ini I put these two lines at the end of the file:

        [ffmpeg]
        extension=ffmpeg.so

    Then:

        # service httpd restart

    But phpinfo() still does not show any 'ffmpeg' section. This is the correct php.ini because:

        # php -i | grep php\.ini
        Configuration File (php.ini) Path => /usr/local/lib
        Loaded Configuration File => /usr/local/lib/php.ini
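
    Since the module builds and installs cleanly, the usual suspects are a mismatched extension_dir, or Apache reading a different php.ini than the CLI (php -i proves the CLI's config, not mod_php's). A hedged check:

        php -i | grep extension_dir    # must match the directory ffmpeg.so landed in
        php -m | grep -i ffmpeg        # does the CLI load the extension at all?
        # then compare 'Loaded Configuration File' and extension_dir in the
        # phpinfo() page served through Apache; they can differ from the CLI's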

  • PECL compiles .so extensions for OSX built-in PHP and not MAMP

    - by Camsoft
    I've installed the Sphinx binaries and libraries and am now trying to install the PECL sphinx module. My system is running OS X 10.6 with MAMP 1.8.2 installed. I try to install sphinx using the following command:

        sudo pecl install sphinx

    The PECL command outputs the following:

        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626

    The versions above don't match the versions listed when doing a phpinfo(). It seems that PECL is trying to compile against the built-in version of PHP. If I ignore the errors and continue, it will successfully compile and place the sphinx.so file in:

        /usr/lib/php/extensions/no-debug-non-zts-20090626/sphinx.so

    when in fact it should be:

        /Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/

    I've tried copying the sphinx.so file to the MAMP extensions dir, but when I restart Apache, PHP displays the following warning:

        PHP Startup: Unable to load dynamic library '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/sphinx.so

    I think this is because MAMP is 32-bit and the built-in PHP is 64-bit, so PECL compiles for 64-bit. I might be completely wrong, but I did read this when I googled the topic. Does anyone know how to get PECL to map to the MAMP version of PHP instead of the built-in version?
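
    A hedged way to point pecl at MAMP's PHP is to put MAMP's binaries first on the PATH, so its phpize and php-config get picked up during the build (the path below is an assumption based on MAMP's usual layout, and MAMP must actually ship those dev tools; if it doesn't, building against a matching standalone 32-bit PHP source tree is the fallback):

        export PATH=/Applications/MAMP/bin/php5/bin:$PATH
        which phpize               # should now print the MAMP copy
        sudo pecl install sphinx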

  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    Hi mates, we have 2 HTTP load balancers with HAProxy and heartbeat, and there are 4 Apache nodes in this cluster doing round-robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO: we need sticky connection support in our HAProxy, and we also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

        global
            # to have these messages end up in /var/log/haproxy.log you will
            # need to:
            #
            # 1) configure syslog to accept network log events. This is done
            #    by adding the '-r' option to the SYSLOGD_OPTIONS in
            #    /etc/sysconfig/syslog
            #
            # 2) configure local2 events to go to the /var/log/haproxy.log
            #    file. A line like the following can be added to
            #    /etc/sysconfig/syslog
            #
            #    local2.* /var/log/haproxy.log
            #
            log         127.0.0.1 local0
            log         127.0.0.1 local1 notice
            chroot      /var/lib/haproxy
            pidfile     /var/run/haproxy.pid
            maxconn     4000
            user        haproxy
            group       haproxy
            daemon

            # turn on stats unix socket
            stats socket /var/lib/haproxy/stats

        #---------------------------------------------------------------------
        # common defaults that all the 'listen' and 'backend' sections will
        # use if not designated in their block
        #---------------------------------------------------------------------
        defaults
            mode                    http
            log                     global
            option                  httplog
            option                  dontlognull
            option                  http-server-close
            option                  forwardfor except 127.0.0.0/8
            option                  redispatch
            retries                 3
            timeout http-request    10s
            timeout queue           1m
            timeout connect         10s
            timeout client          1m
            timeout server          1m
            timeout http-keep-alive 10s
            timeout check           10s
            maxconn                 3000

        #---------------------------------------------------------------------
        # main frontend which proxys to the backends
        #---------------------------------------------------------------------
        frontend main *:5000
            acl url_static path_beg -i /static /images /javascript /stylesheets
            acl url_static path_end -i .jpg .gif .png .css .js
            use_backend static if url_static
            default_backend app

        #---------------------------------------------------------------------
        # static backend for serving up images, stylesheets and such
        #---------------------------------------------------------------------
        backend static
            balance roundrobin
            server static 127.0.0.1:4331 check

        #---------------------------------------------------------------------
        # round robin balancing between the various backends
        #---------------------------------------------------------------------
        backend app

        listen ha-http 10.190.1.28:80
            mode http
            stats enable
            stats auth admin:xxxxxx
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /haproxy.txt HTTP/1.0
            server apache1 portal-04:80 cookie A check
            server apache2 im-01:80 cookie B check
            server apache3 im-02:80 cookie B check
            server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
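
    The posted config already inserts stickiness via cookie JSESSIONID prefix. For HTTPS, HAProxy of that era (pre-1.5) cannot terminate SSL itself, so a hedged approach is a TCP-mode listener that passes 443 straight through and sticks on the source IP, since cookies are invisible inside TLS (backends must then serve SSL themselves):

        listen ha-https 10.190.1.28:443
            mode tcp
            option ssl-hello-chk          # health check via an SSL hello
            balance source                # stickiness by client IP
            server apache1 portal-04:443 check
            server apache2 im-01:443 check
            server apache3 im-02:443 check
            server apache4 im-03:443 check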

  • Apache2 Permission denied: access to / denied

    - by futureled
    Hi, after installing and starting Apache2 I can't open the website and get the error "Forbidden: You don't have permission to access / on this server." I tried some different options in httpd.conf, but nothing helped me solve this problem. All permissions for every directory are drwxr-xr-x. The directory /www contains a file named index.html with the same permissions. Please don't mind that the time in the error log is not correct. I have no idea what the problem is; I hope someone can help me.

    My httpd.conf:

        ServerRoot "/etc/apache2"
        Listen 80
        User daemon
        Group daemon
        ServerAdmin [email protected]
        DocumentRoot "/var/www"

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order Deny,Allow
            Deny from all
        </Directory>

        <Directory "/var/www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

        DirectoryIndex index.html

        <FilesMatch "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </FilesMatch>

        ErrorLog /var/apache2/logs/error_log
        LogLevel warn

        LogFormat "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        LogFormat "%h %l %u %t \"%r\" %s %b" common
        <IfModule logio_module>
            LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio
        </IfModule>
        CustomLog /var/apache2/logs/access_log common

        ScriptAlias /cgi-bin/ "/usr/share/apache2/cgi-bin/"
        <Directory "/usr/share/apache2/cgi-bin">
            AllowOverride None
            Options None
            Order allow,deny
            Allow from all
        </Directory>

        DefaultType text/plain
        TypesConfig /etc/apache2/mime.types
        AddType application/x-compress .Z
        AddType application/x-gzip .gz .tgz

        SSLRandomSeed startup builtin
        SSLRandomSeed connect builtin

    My error_log:

        [Sat Jan 01 00:50:26 2000] [notice] caught SIGTERM, shutting down
        [Sat Jan 01 00:50:33 2000] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sat Jan 01 00:50:34 2000] [notice] Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.8j configured -- resuming normal operations
        [Sat Jan 01 00:50:36 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
        [Sat Jan 01 00:50:37 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
        [Sat Jan 01 00:50:37 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
        [Sat Jan 01 00:50:37 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
        [Sat Jan 01 00:50:38 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
        [Sat Jan 01 00:50:38 2000] [error] [client 192.168.1.44] (13)Permission denied: access to / denied
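
    When the on-disk permissions really are drwxr-xr-x all the way down, two hedged checks narrow it down: walk the whole path exactly as the daemon would, and confirm which config file the running httpd actually loaded:

        namei -m /var/www/index.html             # permissions of every path component
        apachectl -V | grep SERVER_CONFIG_FILE   # which httpd.conf is really in use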

  • PHP-APC Installation

    - by Leo
    Trying to get my head around the way to install the APC cache on PHP 5.3.13. It's a VPS with Apache, configured preferably through WHM/cPanel (although not only). I read a bunch of articles which suggested using FastCGI with APC, since suPHP doesn't do well with opcode caching, and fcgid_module doesn't do it right for APC either. I noted that fcgid_module is a newer package than FastCGI and that's what WHM/cPanel installs for you, but OK, that can be solved I guess.

    Then I read that php-fpm is a much better alternative for managing the PHP processes, especially for APC. Then I realised that php-fpm has been included in the PHP core since 5.3, and got confused. Does that mean I don't have to use FastCGI/fcgid_module (and what should I use instead of them: mod_php or CGI)? Or does that mean that I still need to get the older FastCGI module and configure it to use one process per user (or just one process)? Or would fcgid_module work as well?

    And how bad would it be to just go with mod_php/APC to avoid the trouble of installing php-fpm and FastCGI (WHM/cPanel supports neither), given that Varnish would serve most of the static content anyway, so no PHP process needs to be created for static content? Any examples of your FastCGI/fcgid_module/php-fpm/APC configurations would be greatly appreciated as well.
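
    Whichever SAPI ends up in front, the APC piece itself is small. A minimal sketch of an ini snippet (values are illustrative; some older APC builds expect the size as a bare megabyte count):

        extension = apc.so
        apc.enabled = 1
        apc.shm_size = 64M   ; e.g. "64" on older APC releases
        apc.stat = 1         ; keep 1 unless deployed files never change in place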

  • How can I get HTTPD to serve the html/php files and not list/index them when they are in folder for virtual host. Using Centos 6.0

    - by LaserBeak
    My virtual hosts are configured as below. Initially I could not even get to the /public_html/ directory when typing example.com; Apache would just serve me the default welcome page, and I would also get this error in the log:

        Directory index forbidden by Options directive: /var/www/html/example.com/public_html/

    After editing welcome.conf (- Index) so it no longer shows, typing example.com now gives me an index listing of the /public_html/ contents (index.php) in the browser, whereas I want it to actually execute and display the index.php page.

    vhost.conf, located in etc/httpd/vhost.d/:

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName localhost
            ServerAlias localhost.example.com
            DocumentRoot /var/www/html/example.com/public_html/
            ErrorLog /var/www/html/example.com/logs/error.log
            CustomLog /var/www/html/example.com/logs/access.log combined
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName example.org
            ServerAlias www.example.org
            DocumentRoot /var/www/html/example.org/public_html/
            ErrorLog /var/www/html/example.org/logs/error.log
            CustomLog /var/www/html/example.org/logs/access.log combined
        </VirtualHost>

    httpd.conf: settings at their defaults, with this added at the end:

        Include /etc/httpd/vhosts.d/*.conf
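
    Two hedged checks for the "listed instead of executed" symptom: DirectoryIndex must list index.php, and a PHP handler has to be installed so Apache executes .php files at all. For example:

        # in httpd.conf or the vhost
        DirectoryIndex index.php index.html
        Options -Indexes              # fail rather than list if no index is found

        # and on CentOS, make sure mod_php exists and is loaded
        yum install php
        service httpd restart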

  • What kind of “sysadmin stuff” should I show to students during a talk?

    - by Gregory Eric Sanderaon
    A teacher asked me if I could talk about my job as a Linux sysadmin in his class. The course is called "Introduction to Operating Systems" and I've been given 45 minutes to talk. The students are beginning their second year, so they've had a bit of experience with programming in different languages. What I'd like to do is show a series of hands-on examples of the kinds of things I do on a regular basis. I've already got a few ideas jotted down, but I'm afraid that they might be either too advanced or too simple for the students to appreciate. Another concern is that a topic might take too long to explain and use up too much time overall. Here are a few ideas:

    - Program deployment using version control (git in my case)
    - Filtering Apache logs using grep, awk, uniq, tail
    - A couple of bash scripts that I've made for various stuff on servers
    - Live monitoring (htop, iotop, iptraf)
    - Creating databases and assigning roles in MySQL/PostgreSQL

    So, are these ideas any good? Do you have better ideas? Are the ideas too simple, and should I go for more "advanced" stuff?
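
    For the log-filtering demo, one classic pipeline tends to land well with students (the log path varies by distro):

        # top 10 client IPs in an Apache access log
        awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head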

  • How can I solve the apache2 httpd error "mixing * ports and non-* ports with a NameVirtualHost address"?

    - by rrc7cz
    Here is the error I get when booting up Apache2:

        * Starting web server apache2
        apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
        [Wed Oct 21 16:37:26 2009] [warn] NameVirtualHost *:80 has no VirtualHosts

    I first followed this guide on setting up Apache to host multiple sites: http://www.debian-administration.org/articles/412. I then found a similar question on ServerFault and tried applying the solution, but it didn't help. Here is an example of my final VirtualHost config, with the domain X'd out to protect the innocent :-)

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName www.xxx.com
            ServerAlias xxx.com

            # Indexes + Directory Root.
            DirectoryIndex index.html
            DocumentRoot /var/www/www.xxx.com

            # Logfiles
            ErrorLog /var/www/www.xxx.com/logs/error.log
            CustomLog /var/www/www.xxx.com/logs/access.log combined
        </VirtualHost>

    Also, I have the conf.d/virtual.conf file mentioned in the guide looking like this:

        NameVirtualHost *

    The odd thing is that everything appears to work fine for two of the three sites.
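
    The error usually comes from exactly this mismatch: NameVirtualHost * (no port) does not pair with <VirtualHost *:80> blocks; both must use the identical address:port form. A hedged fix:

        # conf.d/virtual.conf
        NameVirtualHost *:80

        # each vhost, unchanged:
        <VirtualHost *:80>
            ...
        </VirtualHost>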

  • kickstart: reference floppy drive via %ksappend or %include

    - by virtualeyes
    Having trouble getting %ksappend or %include to work when referencing a local floppy drive. Booting off the remote server's CD-ROM drive, I am able to load the CentOS 6 minimal install image, and then add ks=hd:fd0/ks-jvm.cfg to the boot params to load the kickstart init file from the floppy disk. That works fine.

    The problem is that I want to load a streamlined generic init file off the floppy and then, within the init, %ksappend or %include specific config files relative to the type of server I'm building (JVM, MySQL, Apache, etc.). I do not have DHCP, and networking needs to be specified statically, so %ksappend and %include both fail when attempting to reference http://some-LAN-IP/foo.cfg, since networking has not yet been set up.

    The kickstart setup only works when I glob the entire config into a single file, which is great, but ugly and difficult to maintain when I return later, having forgotten the original setup. At this point I'd be happy if I could get %ksappend or %include working with a floppy drive reference in the %post section; that would consolidate a lot of common boilerplate that all kickstarts will rely on (sshd_config, rsync config, resolv.conf, and so on). Thanks for providing the magic floppy drive reference that is eluding me!
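
    One hedged workaround: %ksappend is resolved when the kickstart file itself is fetched (before the static network is up), but a %post --nochroot script runs in the installer environment, where the floppy can be mounted by hand and files copied into the installed system under /mnt/sysimage (the filesystem type and file names below are assumptions):

        %post --nochroot
        mkdir -p /mnt/floppy
        mount -t vfat /dev/fd0 /mnt/floppy
        cp /mnt/floppy/sshd_config /mnt/sysimage/etc/ssh/sshd_config
        cp /mnt/floppy/resolv.conf /mnt/sysimage/etc/resolv.conf
        umount /mnt/floppy
        %end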

  • How to troubleshoot web server lock-up (Debian Squeeze)

    - by Ryan
    Every once in a while, my web server slows so significantly it seems locked up: I can't SSH in, and no sites are being served. It's a VPS that started out as Debian 5, which I upgraded to testing (squeeze). It's a typical LAMP set-up with the sole purpose of running a couple of WordPress sites. One time when it locked up, I got to one of the sites, but it was WordPress complaining it couldn't establish a database connection. So it seemed as if something was really chewing up the CPU and mysqld either timed out, or possibly failed and couldn't restart. But since I couldn't SSH in, I feel more inclined to attribute it to CPU.

    The only processes running now, aside from OS and kernel stuff:

    - apache
    - mysqld
    - python (for fail2ban)
    - sshd
    - exim4

    It has 512M of RAM and 1.5 GB of swap. Every time I check on it, it has plenty of free memory and is using virtually no swap (usually 2-3M). And since I am running fail2ban, I don't think I'm getting DDoSed. I did find this in my logwatch email this morning (it locked up late last night, when there would have been very little traffic):

        6 Time(s): [<ffffffff810a0ebc>] ? oom_kill_process+0x7e/0x23d
        6 Time(s): [<ffffffff810a1505>] ? __out_of_memory+0x12a/0x141
        6 Time(s): [<ffffffff810a1586>] ? out_of_memory+0x6a/0x94

    I didn't find anything else suspicious. It can't be my provider's host, because I can SSH in and restart the VM, and everything seems fine. Anybody know which logs I should start poring through to find the core of my problem? Thanks guys.
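
    Those logwatch lines are kernel OOM-killer stack frames, which fits the symptoms (SSH refused, MySQL gone) even though memory looks fine between incidents. A hedged starting point for the post-mortem, plus the usual pressure valve on a 512 MB LAMP box:

        # when and what the OOM killer fired on
        grep -i -e oom -e 'out of memory' /var/log/kern.log /var/log/syslog

        # cap Apache so a traffic burst can't fork more workers than RAM allows;
        # the value is illustrative - size it to your per-process footprint:
        #   <IfModule mpm_prefork_module>
        #       MaxClients 12
        #   </IfModule>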

  • Web based KVM management for Ubuntu

    - by Tim
    Hey all, we've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web-based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server.

    I tested ConVirt on my desktop, but it failed to pick up KVM machines from virsh/virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10 (and I'd really rather not waste another few days on testing stuff that might not work in the end).

    Can anyone recommend any good web-based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like Apache and PostgreSQL besides hosting virtual machines, so preferably fairly lightweight and no dedicated OS installs. We don't need any professional clustering/migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page.

    Best regards, Tim
