Search Results

Search found 6253 results on 251 pages for 'apache2 ssl'.


  • How can I set up Bluepill to monitor a Rails app running via Passenger (mod_rails)

    - by Jim Jeffers
    I recently launched a site running Phusion Passenger. Unfortunately, the site went down due to a frozen thread. I was able to save the server by running kill -9 on the specific PID. Still, I thought Passenger was able to manage this automatically. I have a server with 1 GB of memory running one Rails app, with Passenger allotted up to 7 instances. However, when I discovered the site was down, I found that Passenger had spawned 6 instances, one of them using over 800 MB of memory and causing the server to swap. As a result I am hoping to set up something like Bluepill on the server, but I'm slightly confused about how to go about it, mainly because Bluepill expects to start/stop the processes it's monitoring. In our case Passenger already restarts processes for us, so we only need to monitor the PIDs of Passenger's instances and kill them once they've grown too large. Has anyone here set up Bluepill to monitor a Rails app running under Phusion Passenger? Any insight would be useful.
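
    For context, a minimal sketch of what a Bluepill config of this shape looks like. It watches the Apache parent rather than individual Passenger workers (Bluepill generally only monitors processes it starts itself), and every path and threshold below is an assumption:

        # sketch.pill - a minimal Bluepill config; paths and limits are assumptions
        Bluepill.application("rails_site") do |app|
          app.process("apache") do |process|
            process.start_command = "/usr/sbin/apachectl start"
            process.stop_command  = "/usr/sbin/apachectl stop"
            process.pid_file      = "/var/run/apache2.pid"
            # restart if memory stays above 700MB for 3 of the last 5 checks
            process.checks :mem_usage, :every => 30.seconds,
                           :below => 700.megabytes, :times => [3, 5]
          end
        end

    Killing individual bloated Passenger workers, rather than restarting Apache, usually ends up in a separate watchdog script instead, since Bluepill cannot easily adopt PIDs it did not spawn.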

    Read the article

  • .htaccess deny from all does not work?

    - by jeffery_the_wind
    I am running Apache 2.2.20 on an Ubuntu 11.04 web server. I have a Joomla site running on it, but I have also added some custom content. In the main web directory I have added a folder /images/sub_folder, and in this sub_folder I have put a bunch of pictures. I do not want anyone to be able to access these pictures directly from the web, so I made a .htaccess file in that sub_folder containing just the following line:

        deny from all

    There doesn't seem to be any effect; I can still access the images directly from a web browser. I have restarted the Apache service. What am I doing wrong? Thanks, Tim
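
    A sketch of the two usual fixes for this symptom - the paths here are assumptions: either the enclosing vhost has AllowOverride None (so the .htaccess is silently ignored), or the rule can go straight into the server config:

        # in the vhost/server config - works even if AllowOverride is None (2.2 syntax)
        <Directory /var/www/images/sub_folder>
            Order allow,deny
            Deny from all
        </Directory>

        # or, to let the .htaccess approach take effect at all:
        <Directory /var/www>
            AllowOverride Limit
        </Directory>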

    Read the article

  • Protect all XML-RPC calls with HTTP basic auth but one

    - by bodom_lx
    I set up a Django project that serves XML-RPC methods to smartphones over HTTPS, using basic auth. All XML-RPC methods require a username and password. I would like to implement an XML-RPC method that provides registration to the system. Obviously, this method should not require a username and password. The following is the Apache conf section responsible for basic auth:

        <Location /RPC2>
            AuthType Basic
            AuthName "Login Required"
            Require valid-user
            AuthBasicProvider wsgi
            WSGIAuthUserScript /path/to/auth.wsgi
        </Location>

    This is my auth.wsgi:

        import os
        import sys

        sys.stdout = sys.stderr
        sys.path.append('/path/to/project')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'project.settings'

        from django.contrib.auth.models import User
        from django import db

        def check_password(environ, user, password):
            """
            Authenticates apache/mod_wsgi against Django's auth database.
            """
            db.reset_queries()
            kwargs = {'username': user, 'is_active': True}
            try:
                # checks that the username is valid
                try:
                    user = User.objects.get(**kwargs)
                except User.DoesNotExist:
                    return None
                # verifies that the password is valid for the user
                if user.check_password(password):
                    return True
                else:
                    return False
            finally:
                db.connection.close()

    There are two dirty ways to achieve my aim in the current situation: have a dummy username/password to be used when trying to register with the system, or have a separate Django/XML-RPC application on another URL (e.g. /register) that is not protected by basic auth. Both of them are very ugly, as I would also like to define a standard protocol to be used for services like mine (it's an open Dynamic Ridesharing Architecture). Is there a way to unprotect a single XML-RPC call (i.e. a specific POST request) even if all XML-RPC calls over /RPC2 are protected?
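
    Since every XML-RPC call is a POST to the same /RPC2 URL, Apache by itself cannot distinguish methods; a hedged sketch of the usual compromise is to mount the registration view on its own sub-path and exempt just that. The /RPC2/register path is an assumption, and this Location must appear after the /RPC2 one so it wins the merge:

        <Location /RPC2/register>
            # Apache 2.2: Satisfy Any lets the Allow rule bypass basic auth
            Satisfy Any
            Order deny,allow
            Allow from all
        </Location>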

    Read the article

  • Can ping between Host and Guest, but can't access web server with VirtualBox

    - by Gastoni
    How come I can ping back and forth between host and guest using VirtualBox, but I can't access the web server installed on the guest from the host? I'm using a host-only network.

    Host: Ubuntu 10.10, vboxnet0 - 192.168.56.1
        ping to self: works
        ping to guest: works
        access to web server on guest: FAILS

    Guest: Fedora 13, eth1 - 192.168.56.101
        ping to self: works
        ping to host: works
        access to web server on host: works
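
    The usual first checks on the guest for exactly this symptom, as a sketch (commands assume Fedora 13's classic net-tools and iptables):

        # is the web server listening on the host-only interface (or 0.0.0.0)?
        netstat -tlnp | grep ':80'
        # is the guest firewall dropping inbound HTTP?
        iptables -L -n | grep -i 80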

    Read the article

  • How Do I Use Multiple Versions of OpenSSL ... One for Apache and one for PHP

    - by Ken S.
    I have an Apache 2.2 (self-compiled) server that is getting dinged during a PCI scan because it does not support TLS 1.1 or 1.2 ciphers. After some digging I found that the installed version of OpenSSL (0.9.8e) does not contain the newest TLS ciphers, so I downloaded and compiled the latest version of OpenSSL (1.0.1c) and installed it in an alternate location within /opt so it wouldn't interfere with the installed version. What I would like to do is compile Apache against the 1.0.1 libraries and keep the system-installed libraries for use with PHP, cURL, openssh, etc. I'm hoping that doing it this way will let Apache use the newest TLS without breaking any other programs that require the old libraries. I thought I could do this by adding an entry to /etc/ld.so.conf that points to the new libraries, but I think this will conflict with the existing ones, i.e. two references to libcrypto could cause issues everywhere. The main reason for doing this is issues with PHP cURLing to external servers under the latest OpenSSL libs, which would otherwise require edits to our PHP code. Would love some guidance on how best to accomplish this.
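
    For reference, the usual way to bind a self-compiled httpd to one specific OpenSSL without touching /etc/ld.so.conf is an rpath at configure time - a sketch, with the /opt path assumed:

        # build httpd 2.2 against the alternate OpenSSL; the rpath bakes the /opt
        # library directory into the binary, so the system libcrypto stays default
        ./configure --enable-ssl \
                    --with-ssl=/opt/openssl-1.0.1c \
                    LDFLAGS="-Wl,-rpath,/opt/openssl-1.0.1c/lib"
        make && make install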

    Read the article

  • Flush mod_pagespeed cache in Debian

    - by Ivar
    I need a way to flush the mod_pagespeed cache while developing. According to the mod_pagespeed documentation, I should run the following command:

        sudo touch /var/mod_pagespeed/cache/cache.flush

    In Debian it's "su" instead of "sudo". However, it doesn't work for me; there's no "touch" command, nor is there any "cache.flush" file in the given directory. Have I missed something? You kick-ass Linux users, please be humble - I'm pretty new to this stuff. Thank you in advance!
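
    For what it's worth, touch is part of coreutils on every Debian install, and it creates the file when it doesn't exist (the parent directory must exist, though, and some builds use /var/cache/mod_pagespeed instead) - a sketch:

        su -                                        # become root on Debian without sudo
        touch /var/mod_pagespeed/cache/cache.flush  # creates the file if missing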

    Read the article

  • Apache Virtual Host with directory aliases

    - by brechtvhb
    I'm trying to set up a dynamic virtual host in Apache with a directory alias pointing to a different path for every domain. Here's what I'm trying to achieve. Say I have 2 domains:

        www.domain1.com
        www.domain2.com

    I want both to point to the same index.php file (C:/cms/index.php). Now the hard part: I want directories or certain file types to point to a different path for each domain. Example:

        www.domain1.com/layout    -> C:/store/www.domain1.com/layout
        www.domain2.com/layout    -> C:/store/www.domain2.com/layout
        www.domain1.com/image.png -> C:/store/www.domain1.com/image.png
        www.domain2.com/image.png -> C:/store/www.domain2.com/image.png

    However, the admin directory should point to the same path for all sites:

        www.domain1.com/admin -> C:/cms/admin
        www.domain2.com/admin -> C:/cms/admin

    Is there a way to achieve this kind of behaviour in Apache 2.2 without having to create a VirtualHost entry for each new domain?
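
    A hedged sketch of one way to do this without per-domain vhosts, using mod_rewrite's ability to substitute %{HTTP_HOST} into a filesystem path (Apache 2.2; the patterns and extensions listed are assumptions to adapt):

        RewriteEngine On
        # /layout and image requests come from a per-domain store
        RewriteCond %{REQUEST_URI} !^/admin
        RewriteRule ^/(layout/.*|.+\.(png|jpe?g|gif))$ C:/store/%{HTTP_HOST}/$1 [L]
        # the admin area is shared by all domains
        Alias /admin "C:/cms/admin"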

    Read the article

  • Amazon EC2: Instances, IPs and a wordpress blog (LAMP)

    - by JustinXXVII
    I had a link to my blog posted on Reddit yesterday and MySQL crashed on my EC2 Micro instance. I know I didn't have that many visitors because I used a marketing link that tracks hits. The link got 167 hits over the course of the last 18 hours, and MySQL crashed twice. 167 visits is not a lot, so I've done some short-term optimizations like restricting the number of Apache threads to limit the MySQL calls. I also set up WP Super Cache to serve static content. Soon I'm going to offload all of my images to S3 or CloudFront. So this leads me to my question: if this doesn't seem to help, and if I have another traffic "spike", how do AMIs work when you have a MySQL database? I think I understand that if you have more than one instance and assign the same Elastic IP to both of them, the incoming traffic gets distributed among both. But what happens when the MySQL database gets updated on one of the instances? I just need to wrap my mind around what happens when I create an AMI and then launch a new instance to help with traffic. Thanks for your suggestions.

    Read the article

  • Multiple Rails apps on same subdomain?

    - by Derek
    I recently decided to try out Rails. When working with PHP, I simply had all of my PHP projects in the same directory; for example, I might have http://ubuntu/app1, http://ubuntu/app2, etc. I created a subdomain for Rails (http://ruby.ubuntu), installed Rails and Passenger, and everything is working. However, I may be wrong, but it looks like I can only have one Rails app per subdomain? My VirtualHost is as follows:

        <VirtualHost *:80>
            ServerName ruby.ubuntu
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/ruby/blog/public

            <Directory /var/www/ruby/blog/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
                RailsEnv development
            </Directory>

            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>

            ErrorLog ${APACHE_LOG_DIR}/error.log

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn

            CustomLog ${APACHE_LOG_DIR}/access.log combined
        </VirtualHost>

    All of my PHP and misc. files are stored in /var/www/main. I want to be able to store all of my Rails apps in /var/www/ruby. I tried changing DocumentRoot to /var/www/ruby, but I don't think it's as simple as that: when I browse to a Rails app's Welcome Aboard page and click on "About my application's environment," I get a 404 page, whereas with DocumentRoot set to the public directory I get the expected result. I don't want to have to create a new subdomain every time I create a new project. Is there any way to store all of my apps in /var/www/ruby so that browsing to http://ruby.ubuntu lets me access all of my Rails apps? That way, if I want to create a new app, all I have to do is rails new app - no Apache .htaccess or VirtualHost configuration required.
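
    The documented Passenger answer to this is sub-URI deployment: one vhost whose DocumentRoot is a plain directory, a symlink to each app's public folder inside it, and one RailsBaseURI per app. A sketch, with the app names assumed:

        # shell, once per app:  ln -s /var/www/ruby/blog/public /var/www/ruby/blog_app
        DocumentRoot /var/www/ruby
        <Directory /var/www/ruby>
            Options FollowSymLinks
            AllowOverride All
        </Directory>
        RailsBaseURI /blog_app
        RailsBaseURI /another_app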

    Read the article

  • cPanel: Every URL is being redirected to http://:2083

    - by Frank
    On my cPanel server, I restored about 50 accounts from a crashed cPanel server. All of the sites were working fine, but suddenly, without my changing anything, every site started getting redirected to the URL "http://:2083/". There is nothing in the logs, no errors. When I run wget it says:

        wget grinfeld.com.br
        --2012-09-04 13:18:23-- http://grinfeld.com.br/
        Resolving grinfeld.com.br... 198.101.221.254
        Connecting to grinfeld.com.br|198.101.221.254|:80... connected.
        HTTP request sent, awaiting response... 301 Moved
        Location: https://:2083/ [following]
        https://:2083/: Invalid host name.

    Read the article

  • Thoughts on MPM-ITK?

    - by Rich
    I have several sites on my server set up as virtual hosts. What are your thoughts on MPM-ITK? Are the tradeoffs, including the potential root-exploit vulnerability, worth the isolation it gives each vhost's files? http://mpm-itk.sesse.net/

    Read the article

  • Apache local versus external (domain)

    - by Jessy Houle
    I have an Apache server running on Ubuntu Server 10, using Passenger for Ruby on Rails. I have configured my site under Apache's sites-enabled directory, and I can hit the server at an internal IP address (192.168.x.x) and the site comes back as expected. However, whenever I try to hit the site externally, either through the domain name or the IP address tied to the domain name, the site will not come back. I have a router in the middle with a static IP address, with port forwarding turned on (forwarding 80/443) to the server, and I'm quite confident the issue isn't there; in fact, I even DMZed to the Ubuntu server just to make sure. Also, all router firewall options have been turned off. So here is the question: is there something else I have to do with Ubuntu Server to allow externally requested port 80 traffic? Otherwise, are there settings in Apache that need to be set to let domain or external-IP port 80 traffic through? I'm pretty new to Apache, so please take it a bit easy on me :-) Thank you for your responses. -Jessy Houle
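
    Two quick checks that usually narrow this down, as a sketch (the domain is a placeholder; note that hitting your external IP from inside the LAN needs NAT loopback, which many routers lack, so test from outside the network):

        # on the server: is Apache bound to all interfaces rather than just the LAN IP?
        sudo netstat -tlnp | grep ':80'
        # from a host outside the network:
        telnet your-domain.example 80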

    Read the article

  • How do I digitally sign an HTTPS request in .NET?

    - by Endy Tjahjono
    Is there a built-in procedure to digitally sign an HTTPS request with the client's SSL private key in .NET? Also, is there a built-in procedure to verify the digital signature against an SSL certificate? Or do I have to roll my own? Or is there a third-party library? I need the request to be digitally signed because the client moves money around, so I want to be sure that the request really comes from the client and that nobody tampers with the content of the request. I'm also considering using an SSL client certificate, but it can only provide confidentiality and authentication, not data integrity.

    Read the article

  • What are the risks in putting website files in the "root" folder of a shared web hosting server?

    - by Obay Ouano
    A site I've been asked to manage is hosted (shared) on GoDaddy, with this folder structure:

        /
            public_html
            public_ftp
            mail
            stats
            logs
            etc...

    However, the website files are stored in the / folder, NOT in public_html. I'm not sure if this is how GoDaddy sets up their customers' accounts, or if the old web developer accidentally changed it from public_html to the root. But when we call GoDaddy to ask them to correct this (move the files to public_html), they won't change it, and they insist that there is no security risk unless someone gets hold of the FTP password. Is this true? (I have always read that website files should be inside public_html.) If not, where could this setting be changed? The .htaccess is empty.

    Read the article

  • Understanding RewriteCond in .htaccess files

    - by Paulo Bu
    I'm having problems understanding how the RewriteCond directive works. So far, it's clear that it compares two strings to decide whether to apply a RewriteRule. I have this file:

        <IfModule rewrite_module>
            RewriteEngine on
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteRule ^(.*)$ app_dev.php
        </IfModule>

    This works for me, but I don't know why it works. So far I understand the RewriteCond directive as: if the value of REQUEST_FILENAME is NOT a file on the hard drive, then allow the rule. That doesn't make sense to me, because app_dev.php - the result after substituting - IS a file on the hard drive. Could someone enlighten me on this? I am having a very hard time figuring out how it works.
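
    The detail that usually resolves this confusion: the condition is tested against the incoming request before any substitution happens, and per-directory rewriting re-runs on the internally redirected URL. An annotated sketch of the flow (the docroot path is an assumption):

        # request /foo      -> pattern ^(.*)$ matches
        #                   -> RewriteCond tests /var/www/foo: not a file -> rule fires
        #                   -> internal redirect to app_dev.php
        # internal re-run   -> /var/www/app_dev.php IS a file -> condition fails -> stop
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^(.*)$ app_dev.php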

    Read the article

  • Error configuring virtual hosts

    - by user148351
    I have a problem with my virtual hosts: when I try to connect to my server by its direct IP address, for example http://111.11.11.111/, I see the following error in the Apache error log:

        script '/var/www/html/mmm/public/index.php' not found or unable to stat

    The file index.php exists and has the correct access rights. I have a virtual host configured:

        <VirtualHost *:80>
            DocumentRoot /var/www/html/mmm/public
            ServerName example.com
            ServerAlias example.com www.example..com
            <Directory var/www/html/mmm/public>
                AllowOverride All
            </Directory>
        </VirtualHost>

    Why does a request to the bare IP address look for index.php in the virtual host's document root instead of the server's root directory?

    Read the article

  • Syntax for piping varnish logs to rotatelogs

    - by jetboy
    Ubuntu 12.04 Server x64, Varnish 3.0.2. I'm trying to pipe varnishncsa's logs through Apache's rotatelogs. Running from the shell, things work fine:

        sudo varnishncsa -a -P /var/run/varnishncsa/varnishncsa.pid \
            | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600

    This creates a new logfile in /var/log/varnish, with rotation every hour (3600 seconds). However, I'm struggling to get things working the same way inside /etc/init.d/varnishncsa:

        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/usr/bin/$NAME
        PIDFILE=/var/run/$NAME/$NAME.pid
        LOGFILE=/var/log/varnish/varnishncsa.log
        USER=varnishlog
        DAEMON_OPTS="-a -P ${PIDFILE}"
        DAEMON_PIPE="|/usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"
        ...
        start_varnishncsa() {
            output=$(/bin/tempfile -s.varnish)
            log_daemon_msg "Starting $DESC" "$NAME"
            create_pid_directory
            if start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
                --chuid $USER --exec ${DAEMON} -- ${DAEMON_OPTS} \
                > ${output} 2>&1; then
                log_end_msg 0
            else
                log_end_msg 1
                cat $output
                exit 1
            fi
            rm $output
        }

    Where should I put DAEMON_PIPE in the above code? I've tried it at the end of the start-stop-daemon --start line, which is where additional command-line parameters usually go, but it isn't creating a logfile.
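
    One approach that's often suggested for init scripts like this - a hedged sketch, untested here: start-stop-daemon cannot set up a pipeline itself, so the pipe has to live inside a shell that it executes; varnishncsa still writes its own pidfile via -P:

        start-stop-daemon --start --verbose --pidfile ${PIDFILE} \
            --chuid $USER --exec /bin/sh -- -c \
            "${DAEMON} ${DAEMON_OPTS} | /usr/sbin/rotatelogs /var/log/varnish/varnish.log.%Y%m%d%H 3600"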

    Read the article

  • How to exclude a sub-folder from an .htaccess RewriteRule

    - by amb9800
    I have WordPress installed in my root directory, for which a RewriteRule is in place. I need to password-protect a subfolder ("blue"), so I set up the .htaccess in that folder accordingly. The problem is that the root .htaccess RewriteRule is applying to "blue", so I get a 404 from the main WordPress site instead of the password dialog for the subfolder. Here's the root .htaccess:

        RewriteEngine on
        <Files 403.shtml>
            order allow,deny
            allow from all
        </Files>

        <IfModule mod_rewrite.c>
            RewriteEngine On
            RewriteBase /
            RewriteCond %{REQUEST_FILENAME} !-f
            RewriteCond %{REQUEST_FILENAME} !-d
            RewriteRule . /index.php [L]
        </IfModule>

    I tried inserting this as the second line, to no avail:

        RewriteRule ^(blue)($|/) - [L]

    I also tried inserting this before the index.php RewriteRule:

        RewriteCond %{REQUEST_URI} !^/blue/

    That didn't work either. Nor did inserting this into the subfolder's .htaccess:

        <IfModule mod_rewrite.c>
            RewriteEngine off
        </IfModule>

    Any ideas?
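
    A hedged sketch of the combination commonly suggested for this exact WordPress symptom - exclude the folder before the catch-all, and keep mod_rewrite from hijacking the 401 handler that the password dialog relies on:

        ErrorDocument 401 default
        RewriteEngine On
        RewriteRule ^blue(/.*)?$ - [L]
        # ... the existing WordPress rules follow here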

    Read the article

  • CentOS + cPanel results in PHP always running as fcgi, instead of cli as expected

    - by quamis
    I'm having problems with a server configured by someone else. It uses cPanel, with Apache and PHP. For some reason, running php -v as a user reports the "cli" handler:

        # php -v | head -n 1
        PHP 5.3.27 (cli) (built: Oct 15 2013 16:06:48)

    But if I make a PHP script containing echo shell_exec("php -v | tail -n 1"), I get:

        PHP 5.3.27 (cgi-fcgi) (built: Oct 15 2013 16:22:16)

    Why the cli/fcgi difference? I need to fix this so that scripts run by the apache user run as "cli", not "fcgi". As a side note, I'm not sure how PHP got installed, because the package list shows a different build than the one reported by php -v:

        # rpm -qa | grep php
        [.... other packages...]
        cpanel-php53-5.3.17-5.cp1136.x86_64

    rebuild_phpconf --current:

        /usr/local/cpanel/bin/rebuild_phpconf --current
        Available handlers: suphp dso fcgi cgi none
        DEFAULT PHP: 5
        PHP4 SAPI: none
        PHP5 SAPI: dso
        SUEXEC: enabled
        RUID2: not installed

    redhat-release:

        # cat /etc/redhat-release
        CentOS release 6.4 (Final)

    Possibly related to http://superuser.com/questions/665809/centos-6-4-running-php-for-root-as-cgi-fcgi-not-cli
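
    A quick way to see what each context is actually executing, as a sketch - the CLI path shown is an assumption; cPanel builds often map the bare "php" on the web user's PATH to the CGI binary:

        which php                 # what does this user's PATH resolve "php" to?
        ls -l $(which php)        # often a symlink to the cgi-fcgi build
        /usr/local/bin/php -v     # a separate CLI build may live elsewhere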

    Read the article

  • Google Chrome and kerberos authentication against Apache

    - by Lars
    I've managed to get Kerberos authentication working with Apache and Likewise Open, but so far Google Chrome doesn't seem to play fair. Unless I start it with chrome.exe --auth-server-whitelist="*company.com", it only pops up a login window and will not accept any credentials at all. As far as I know, the --auth-server-whitelist option should only be needed when trying to get single sign-on (SSO) to work; if you are fine with a log-in window, it should work directly out of the box, but so far it doesn't. This is the error I get in the Apache logs:

        [Tue Dec 13 08:49:04 2011] [error] [client 192.168.1.15] failed to verify krb5 credentials: Unknown code krb5 7

    Read the article

  • Node.js apps and wordpress on the same vps

    - by Msencenb
    So currently my Linode (Ubuntu 11.10) serves up three Node.js apps using Connect's vhost middleware listening on port 80. Here is an example of how vhost sets up a domain:

        var portfolio = require('./bootstrap-portfolio/lib/app.js');
        var server = express();
        server.use(express.vhost('sencedev.com', portfolio));
        server.use(express.vhost('www.sencedev.com', portfolio));
        server.listen(80);

    However, I would now like to add a WordPress installation to my VPS as well. In the past this has meant a traditional Apache installation for me, but I'm a bit unsure of how Node.js plus a different web server (Apache or nginx) should interact. Any thoughts on how I should approach hosting WordPress + Node.js on the same box?
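
    One common pattern, sketched under the assumption that Apache hosts the WordPress side: Apache takes port 80, WordPress runs as a normal vhost, and each Node app moves to a high port and is reverse-proxied (requires mod_proxy and mod_proxy_http; 127.0.0.1:3000 is an assumption):

        <VirtualHost *:80>
            ServerName sencedev.com
            ServerAlias www.sencedev.com
            ProxyPreserveHost On
            ProxyPass / http://127.0.0.1:3000/
            ProxyPassReverse / http://127.0.0.1:3000/
        </VirtualHost>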

    Read the article

  • Secure against c99 and similar shells

    - by Amit Sonnenschein
    I'm trying to secure my server as much as I can without limiting my options, so as a first step I've disabled dangerous PHP functions:

        disable_functions = "apache_child_terminate, apache_setenv, define_syslog_variables, escapeshellarg, escapeshellcmd, eval, exec, fp, fput, ftp_connect, ftp_exec, ftp_get, ftp_login, ftp_nb_fput, ftp_put, ftp_raw, ftp_rawlist, highlight_file, ini_alter, ini_get_all, ini_restore, inject_code, mysql_pconnect, openlog, passthru, php_uname, phpAds_remoteInfo, phpAds_XmlRpc, phpAds_xmlrpcDecode, phpAds_xmlrpcEncode, popen, posix_getpwuid, posix_kill, posix_mkfifo, posix_setpgid, posix_setsid, posix_setuid, posix_setuid, posix_uname, proc_close, proc_get_status, proc_nice, proc_open, proc_terminate, shell_exec, syslog, system, xmlrpc_entity_decode"

    But I'm still fighting directory traversal; I can't seem to limit it. Using a shell script like c99, I can travel from my /home/dir to anywhere on the disk. How can I limit this once and for all?
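
    For the traversal half specifically, the standard php.ini lever is open_basedir, which confines every PHP file operation to the listed trees - a sketch, with paths assumed:

        ; in php.ini, or per-vhost via php_admin_value open_basedir
        open_basedir = "/home/user/public_html:/tmp"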

    Read the article

  • Windows share mounted then symlinked on LAMP server serves up HTML, but not images

    - by Samuurai
    This has really got me befuddled... I've mounted a share like this (in /etc/fstab):

        //srv1/UserUploads /mount/UserUploads cifs rw,user,exec,uid=wwwrun,gid=www,username=shareuser,password=sharepw 0 0

    I then have a symlink here:

        WEBSVR:/Web/htdocs/public_html # ls -l useruploads
        lrwxrwxrwx 1 wwwrun www 18 Dec  7 09:18 useruploads -> /mount/UserUploads

    Oddly, if I ls inside the mounted area, items appear with a capital S in the permission bits:

        -rwxrwSrwx 1 wwwrun www 4077 Dec 30 14:54 prop9.jpg
        -rwxrwSrwx 1 wwwrun www    4 Jan 12 15:57 test.html

    If I bring up test.html in a browser, it works fine, but if I go to prop9.jpg, Chrome gives me this error:

        This web page is not available.
        The web page at http://10.1.64.100/useruploads/webteam/help2let/prop6-1.jpg might be temporarily down or it may have moved permanently to a new web address.
        Below is the original error message:
        Error 100 (net::ERR_CONNECTION_CLOSED): Unknown error.

    Has anyone seen this behaviour, where binary files (images) aren't displayed but html/text is?
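
    A widely cited workaround when served content sits on CIFS/NFS, as a sketch: Apache's sendfile and mmap fast paths often misbehave on network filesystems, and small text files can survive while larger binaries fail:

        EnableSendfile Off
        EnableMMAP Off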

    Read the article

  • Using AWStats, cannot get MaxNbOfExtraX to limit rows in Extra Report

    - by user137519
    Folks, I've got something really odd here that I'd like to resolve. I've been using AWStats and have a couple of extra reports, but I cannot get any of them to limit the number of rows using MaxNbOfExtraX. Here are two examples:

        ExtraSectionName1="Top 100 Searches"
        ExtraSectionCodeFilter1="200 304"
        ExtraSectionCondition1="URL,/search/search_post.php"
        ExtraSectionFirstColumnTitle1="Search Parameters"
        ExtraSectionFirstColumnValues1="QUERY_STRING,(.*)"
        ExtraSectionFirstColumnFormat1="QueryParameters: %s"
        ExtraSectionStatTypes1=HL
        ExtraSectionAddAverageRow1=0
        ExtraSectionAddSumRow1=1
        MaxNbOfExtra1=100
        MinHitExtra1=4

        ExtraSectionName2="Top 100 Downloads"
        ExtraSectionCodeFilter2="200 304"
        ExtraSectionCondition2="URL,/filedownload.php"
        ExtraSectionFirstColumnTitle2="File Downloads"
        ExtraSectionFirstColumnValues2="QUERY_STRING,(.[0-9]{5})(h|p)?."
        ExtraSectionFirstColumnFormat2="File ID: %s"
        ExtraSectionStatTypes2=HL
        ExtraSectionAddAverageRow2=0
        ExtraSectionAddSumRow2=1
        MaxNbOfExtra2=100
        MinHitExtra2=3

    According to all the documentation I've read, MaxNbOfExtra1 should cap the report at 100 rows. However, when I run this with debug messages enabled, I get a message indicating that the query would exceed 500 rows and therefore would not run. I increased ExtraTrackedRowsLimit to 2000 and it worked, but the option I set should have lowered that. I even tried MaxNbOfExtra1=100 without ExtraTrackedRowsLimit, but got the same error: no limit at 100, and the "excess of 500" error. I have URLWithQuery=1, and my reports run properly along with my regex filters. I am using MinHitExtraX to limit rows and that works - but why can I not get the MaxNbOfExtraX option to work? Any ideas? Thanks in advance.

    Read the article

  • Apache URL / filename with special characters

    - by Mario Delgado
    I have this URL:

        http://domain.com/wp-content/uploads/2012/10/Hvilke-vilkår-følger-med-når-du-bestiller-nyt-bredbånd.png

    If I FTP/SSH in or just browse to that folder (Apache's index feature), I see the file Hvilke-vilkår-følger-med-når-du-bestiller-nyt-bredbånd.png. If I click on the link from the Apache index, I can see the file. However, if I copy the URL and try to browse to it directly, I get the error:

        The requested URL /wp-content/uploads/2012/10/Hvilke-vilkÃ¥r-følger-med-nÃ¥r-du-bestiller-nyt-bredbÃ¥nd.png was not found on this server.

    My error log also says:

        File does not exist: /wp-content/uploads/2012/10/Hvilke-vilk\xc3\xa5r-f\xc3\xb8lger-med-n\xc3\xa5r-du-bestiller-nyt-bredb\xc3\xa5nd.png

    Read the article
