Search Results

Search found 6200 results on 248 pages for 'lib'.


  • Permission forbidden on localhost with apache2

    - by N Alex
    Here is what I am trying to do. I tried to add another folder to Apache and I get the following error when trying to access testing/index.html. The idea is that I would like to have for every customer a folder like /home/neagoe/Work/InterWebs/Projects/[PROJECT NAME]/CustomerProjects/website/dist. Forbidden You don't have permission to access /index.html on this server. Apache/2.2.22 (Ubuntu) Server at testing Port 80 Here are the steps that I followed: Step1: sudo chmod a+x /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist Step2: sudo chown -R www-data:www-data /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist sudo chmod -R 775 /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist Step3: sudo adduser $USER www-data Step4: sudo a2enmod userdir Step5: sudo cp /etc/apache/sites-available/default /etc/apache/sites-available/testing I edited the file /etc/apache/sites-available/testing so it looks like this: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName testing DocumentRoot /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist/ > Options Indexes FollowSymLinks MultiViews AllowOverride All Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Step6: I edited hosts ("/etc/hosts") so it looks like this: 127.0.0.1 localhost 127.0.0.1 testing # The following lines are desirable for IPv6 capable hosts ::1 ip6-localhost ip6-loopback fe00::0 ip6-localnet ff00::0 ip6-mcastprefix ff02::1 ip6-allnodes ff02::2 ip6-allrouters Step7: sudo a2ensite testing sudo service apache2 restart I searched for about 2 hours on the internet but I can't figure out what went wrong. All the pages that I found describe the same steps as above. I know there are similar questions here on the internet, but the answer is to change permissions on the directory, which I did in Step 2. I am sorry if this is really a duplicate but I couldn't find the right answer. Thank you! PS. I asked this also on AskUbuntu but didn't get any answers so I'm trying my luck here. Edit: There isn't much in the error log or the access log.
On the access.log: ::1 - - [10/Aug/2013:11:23:28 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:29 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:31 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:32 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:33 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:34 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:35 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:23:23 +0300] "POST /wordpress-testing/wp-cron.php?doing_wp_cron=1376123003.7026669979095458984375 HTTP/1.0" 200 705 "-" "WordPress/3.6; http://localhost/wordpress-testing" ::1 - - [10/Aug/2013:11:23:36 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:37 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" ::1 - - [10/Aug/2013:11:23:38 +0300] "OPTIONS * HTTP/1.0" 200 126 "-" "Apache/2.2.22 (Ubuntu) (internal dummy connection)" 127.0.0.1 - - [10/Aug/2013:11:31:32 +0300] "GET /index.html HTTP/1.1" 200 485 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:23.0) Gecko/20100101 Firefox/23.0" And the last line repeats for about 200 rows. On the error.log: 1. This lines repeat from time to time. PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525 /msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:06:42 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations [Sat Aug 10 13:07:36 2013] [notice] caught SIGTERM, shutting down PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/msql.so' - /usr/lib/php5/20100525/msql.so: cannot open shared object file: No such file or directory in Unknown on line 0 [Sat Aug 10 13:07:37 2013] [notice] Apache/2.2.22 (Ubuntu) PHP/5.4.9-4ubuntu2.2 configured -- resuming normal operations 2. And this is the predominant error. (hundreds of lines) [Sat Aug 10 13:07:40 2013] [error] [client 127.0.0.1] (13)Permission denied: access to /index.html denied
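    A likely cause, given the steps above, is directory traversal: the Apache worker (www-data) needs the execute (search) bit on every parent directory of the DocumentRoot, not only on the final dist directory that Step 1 touched. A hedged sketch of a check and fix:

        # Show the permissions of every component of the path (namei is part of util-linux)
        namei -m /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website/dist

        # Grant the search bit on each parent directory so www-data can reach the DocumentRoot
        sudo chmod o+x /home/neagoe /home/neagoe/Work /home/neagoe/Work/InterWebs \
            /home/neagoe/Work/InterWebs/Projects /home/neagoe/Work/InterWebs/Projects/testing \
            /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects \
            /home/neagoe/Work/InterWebs/Projects/testing/CustomerProjects/website
        sudo service apache2 restart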

    Read the article

  • Can't run command with sudo, even with the full path, I got an error

    - by Keating Wang
    The command starling is /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling. When I run starling, I get the error Permission denied. When I run rvmsudo starling, it works well. When I run sudo starling, I get the error sudo: starling: command not found. When I run sudo /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling, I get the error: /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find starling (>= 0) amongst [minitest-1.6.0, rake-0.8.7, rdoc-2.5.8] (Gem::LoadError) from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec' from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems.rb:1229:in `gem' from /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling:18:in' I really want to run the command with sudo, because the error above is the same as running rvmsudo service starling start (I had set starling up as a service of the OS).
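    One explanation, offered as a sketch: plain sudo resets PATH and drops the GEM_HOME/GEM_PATH environment that RVM sets up, so root's Ruby cannot see gems installed under ~/.rvm. Two hedged options (the wrapper prefix and resulting script name below are assumptions about RVM's wrapper convention):

        # Option 1: run the gem binary with RVM's environment preserved under root
        rvmsudo starling

        # Option 2: generate a self-contained wrapper that a root-run service script can call directly
        rvm wrapper ruby-1.9.2-p290 boot starling
        sudo /home/keating/.rvm/bin/boot_starling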

    Read the article

  • ERROR: Failed to build gem native extension (mysql2 on rails 3.2.3)

    - by Ryan Arneson
    I'm trying to install the mysql2 gem with Rails 3.2.3 and it's failing: ? bundle install Fetching gem metadata from https://rubygems.org/......... Using rake (0.9.2.2) Using i18n (0.6.0) Using multi_json (1.2.0) Using activesupport (3.2.3) Using builder (3.0.0) Using activemodel (3.2.3) Using erubis (2.7.0) Using journey (1.0.3) Using rack (1.4.1) Using rack-cache (1.2) Using rack-test (0.6.1) Using hike (1.2.1) Using tilt (1.3.3) Using sprockets (2.1.2) Using actionpack (3.2.3) Using mime-types (1.18) Using polyglot (0.3.3) Using treetop (1.4.10) Using mail (2.4.4) Using actionmailer (3.2.3) Using arel (3.0.2) Using tzinfo (0.3.32) Using activerecord (3.2.3) Using activeresource (3.2.3) Using bundler (1.1.3) Using coffee-script-source (1.2.0) Using execjs (1.3.0) Using coffee-script (2.2.0) Using rack-ssl (1.3.2) Using json (1.6.6) Using rdoc (3.12) Using thor (0.14.6) Using railties (3.2.3) Using coffee-rails (3.2.2) Using jquery-rails (2.0.2) Installing mysql2 (0.3.11) with native extensions Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension. /Users/rarneson/.rvm/rubies/ruby-1.9.3-p125/bin/ruby extconf.rb checking for rb_thread_blocking_region()... yes checking for rb_wait_for_single_fd()... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lm... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lz... yes checking for mysql_query() in -lmysqlclient... no checking for main() in -lsocket... no checking for mysql_query() in -lmysqlclient... no checking for main() in -lnsl... no checking for mysql_query() in -lmysqlclient... no checking for main() in -lmygcc... no checking for mysql_query() in -lmysqlclient... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/Users/rarneson/.rvm/rubies/ruby-1.9.3-p125/bin/ruby --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib --with-mlib --without-mlib --with-mysqlclientlib --without-mysqlclientlib --with-zlib --without-zlib --with-mysqlclientlib --without-mysqlclientlib --with-socketlib --without-socketlib --with-mysqlclientlib --without-mysqlclientlib --with-nsllib --without-nsllib --with-mysqlclientlib --without-mysqlclientlib --with-mygcclib --without-mygcclib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /Users/rarneson/.rvm/gems/ruby-1.9.3-p125/gems/mysql2-0.3.11 for inspection. Results logged to /Users/rarneson/.rvm/gems/ruby-1.9.3-p125/gems/mysql2-0.3.11/ext/mysql2/gem_make.out An error occured while installing mysql2 (0.3.11), and Bundler cannot continue. Make sure that `gem install mysql2 -v '0.3.11'` succeeds before bundling. I'm running bundle install and this is in my Gemfile: gem 'mysql2', '~> 0.3.11' I've currently got MySQL running through MAMP. I'm not sure if this is a bad idea and I should run a vanilla MySQl but it seems my current problem is just getting the gem installed. 
I've seen quite a few of these problems here on stackoverflow but all seem a bit different or have really complicated solutions. Is there something I'm missing? Something simple? Something stupid? I can provide additional info from the out file if necessary. I've read that some people use SQLite for dev and test then MySQL in prod but that sounds like a pretty horrible idea.
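    One common cause is that the MySQL client headers and libraries are not where the gem's extconf.rb looks, which matches the repeated "checking for mysql_query() in -lmysqlclient... no" lines above. A hedged sketch; the MAMP path is an assumption about where MAMP ships mysql_config, and Homebrew is only one alternative:

        # Point the mysql2 build at a real mysql_config (path is an assumption; verify it exists)
        bundle config build.mysql2 --with-mysql-config=/Applications/MAMP/Library/bin/mysql_config
        bundle install

        # Or install a standalone MySQL and build against that instead of MAMP's copy
        brew install mysql
        gem install mysql2 -v '0.3.11' -- --with-mysql-config=$(brew --prefix mysql)/bin/mysql_config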

    Read the article

  • Printer Brother DCP-110C Linux 64-bit drivers

    - by Ondra Žižka
    Hi, I need a 64-bit Linux driver for the DCP-110C (for Ubuntu 10.04 64-bit). I found only a 32-bit one here: http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/index.html I've tried to follow those instructions. During the installation, I got this: ondra@ondra-doma:~/Downloads$ sudo dpkg -i --force-all dcp110clpr-1.0.2-1.i386.deb dpkg: warning: overriding problem because --force enabled: package architecture (i386) does not match system (amd64) (Reading database ... 257283 files and directories currently installed.) Preparing to replace dcp110clpr 1.0.2-1 (using dcp110clpr-1.0.2-1.i386.deb) ... Unpacking replacement dcp110clpr ... Setting up dcp110clpr (1.0.2-1) ... ln: creating symbolic link `/usr/lib/libbrcompij2.so.1.0': File exists ln: creating symbolic link `/usr/lib/libbrcompij2.so.1': File exists ln: creating symbolic link `/usr/lib/libbrcompij2.so': File exists After installation, the printer is listed on the CUPS server, but does not work (no command has any effect on the printer, which is, of course, on and connected). Has anyone found a working solution? Thanks, Ondra
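    Since the .deb contains 32-bit binaries, one thing worth ruling out is that the CUPS filter it installs simply cannot execute on a 64-bit system without the 32-bit runtime. A heavily hedged diagnostic sketch for Ubuntu 10.04; the filter name is a guess, so list /usr/lib/cups/filter to find the real one:

        # Is the Brother filter a 32-bit executable?
        file /usr/lib/cups/filter/br*

        # Install the 32-bit compatibility libraries shipped with Ubuntu 10.04 amd64, then restart CUPS
        sudo apt-get install ia32-libs
        sudo service cups restart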

    Read the article

  • Not able to apt-get update from terminal, what to do now?

    - by Utkarsh
    Whenever I try to update from terminal, I get this error: root@Utkarsh[utkarsh]#apt-get update Hit http://packages.bosslinux.in anokha Release.gpg Hit http://packages.bosslinux.in anokha Release Hit http://packages.bosslinux.in anokha/contrib Sources Hit http://packages.bosslinux.in anokha/non-free Sources Hit http://packages.bosslinux.in anokha/main Sources Hit http://packages.bosslinux.in anokha/contrib i386 Packages Hit http://packages.bosslinux.in anokha/non-free i386 Packages Hit http://packages.bosslinux.in anokha/main i386 Packages Ign http://packages.bosslinux.in anokha/contrib Translation-en_US Ign http://packages.bosslinux.in anokha/contrib Translation-en Ign http://packages.bosslinux.in anokha/main Translation-en_US Ign http://packages.bosslinux.in anokha/main Translation-en Ign http://packages.bosslinux.in anokha/non-free Translation-en_US Ign http://packages.bosslinux.in anokha/non-free Translation-en Reading package lists... Done W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/main i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_main_binary-i386_Packages) W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/contrib i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_contrib_binary-i386_Packages) W: Duplicate sources.list entry http://packages.bosslinux.in/boss/ anokha/non-free i386 Packages (/var/lib/apt/lists/packages.bosslinux.in_boss_dists_anokha_non-free_binary-i386_Packages) W: You may want to run apt-get update to correct these problems
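    The warnings say the same BOSS repository is listed twice across the APT source lists, so the fix is to comment out one of the duplicate lines. A sketch of how to find them:

        # Show every line that references the BOSS repository, with file names and line numbers
        grep -rn "packages.bosslinux.in" /etc/apt/sources.list /etc/apt/sources.list.d/

        # After commenting out the duplicates, refresh the package lists
        sudo apt-get update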

    Read the article

  • Where to obtain openssl-devel for SunOS 5.10

    - by user35949
    So I am having an issue that I have seen other people have on many different systems. I have to build Subversion on a SunOS 5.10 box and have run into issues. I have the openssl source code installed and in the subversion-1.6.9 folder, I run the following: ./configure --with-ssl --with-libs=/opt/exp/lib/openssl/lib and receive the error: checking for library containing RSA_new... not found configure: error: could not find library containing RSA_new configure failed for neon I have also tried running the command without the "lib" on the end of the --with-libs path. I read online that I need the openssl-devel packages, but I have been unable to find them for SunOS 5.10, and they do not show up already installed on my system when running pkginfo. I have looked online including http://www.sunfreeware.com/ which I was told was a good SunOS software source. Any help you can provide would be welcomed. Thanks, Sean
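    One thing worth checking, as a hedged sketch: configure (via neon) has to find both the OpenSSL headers and the shared libraries under the prefix you pass, so confirm they exist and hand the paths over explicitly (the paths below just reuse the one from the question):

        # Does the tree actually contain the libraries and headers?
        ls /opt/exp/lib/openssl/lib/libssl* /opt/exp/lib/openssl/lib/libcrypto* \
           /opt/exp/lib/openssl/include/openssl/ssl.h

        # Pass the include and library directories to configure explicitly
        CPPFLAGS="-I/opt/exp/lib/openssl/include" \
        LDFLAGS="-L/opt/exp/lib/openssl/lib" \
        ./configure --with-ssl --with-libs=/opt/exp/lib/openssl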

    Read the article

  • Trying to install SawMill and getting the following error:

    - by Itai Ganot
    [root@sawmill sawmill]# ./sawmill ./sawmill: error while loading shared libraries: libldap-2.3.so.0: cannot open shared object file: No such file or directory Using yum provides libldap_r-2.3.so.0, I found that the package which includes this file is compat-openldap-2.3.43-2.el6.i686. After installing it I still get the error. If I use locate, I can find the file in /usr/lib, so I tried to create a symbolic link to the file from /usr/lib to /usr/lib64, but I still get the same error. I also tried setting LD_LIBRARY_PATH=/usr/lib/ and LD_LIBRARY_PATH=/usr/lib64, but it still doesn't allow me to run the sawmill installation script. Does anyone know how to solve this issue?
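    A hedged diagnostic sketch: the architecture of the sawmill binary decides whether the loader searches /usr/lib (32-bit) or /usr/lib64 (64-bit), and the compat package installed above is the i686 one, so an x86_64 binary will not see it. Worth confirming before adding more symlinks:

        # 32-bit or 64-bit binary?
        file ./sawmill

        # Which libldap 2.3 sonames are actually present in each library directory?
        ls -l /usr/lib/libldap*2.3* /usr/lib64/libldap*2.3* 2>/dev/null

        # If sawmill is 64-bit, install the 64-bit compat package instead of (or alongside) the i686 one
        sudo yum install compat-openldap.x86_64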

    Read the article

  • Plesk directory structure problems

    - by johnnietheblack
    I have an entire website with the following directory structure: /example.com /html (public) /css /js index.php /lib session.php other_lib_files.php /views index.php /models /controllers As illustrated, the html is public, and anything above it is private. My site now needs to upgrade servers, and the new server (Linux w/ Plesk) has the following structure (reduced to the problematic parts below): /myplesksite.com /httpdocs /css /js index.php /private /lib /models /views What I would THINK is that I should be able to put my /lib, /views, /models, etc in the directory directly above /httpdocs, the same way I had it in my previous server. Is that possible? Or do I have to put it in private? I would really love not to have to adjust my internal paths throughout the site if not necessary...

    Read the article

  • How to connect with MySQL server if it won't connect via the socket?

    - by cwd
    I have an account on a shared server. I have jailshell access and also PhpMyAdmin. I want to run mysql commands via SSH but I'm getting an error: $ mysql -u mySqlUser -p mySqlPw Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' I can connect with PHP and phpMyAdmin, so would it be possible to call mysql from the shell and have it connect via an ip and port instead of the socket? The file /var/lib/mysql/mysql.sock does not exist - maybe that is intentional, and the only thing in /etc/my.cnf is [mysqld] skip-innodb More Info I don't have access to change system settings. I did a search in /var for mysql.sock but found nothing. However, phpMyAdmin might be connecting via a socket somehow: Really it would just be great if I could connect via IP. Also tried these two syntaxes: $ mysql -u mySqlUser -p mySqlPw -h localhost $ mysql -u mySqlUser -p mySqlPw -h localhost -P 3306 Both with the same result: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)
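    Two details may help here, as a sketch: with -h localhost the client still prefers the Unix socket, so 127.0.0.1 has to be spelled out to force TCP, and -p followed by a space makes mysql treat the next word as a database name rather than the password:

        # Force a TCP connection instead of the missing socket (note: no space after -p)
        mysql -u mySqlUser -pmySqlPw -h 127.0.0.1 -P 3306 --protocol=TCP

        # Ask PHP which socket it uses, in case the server only listens on a non-default socket path
        php -r 'echo ini_get("mysqli.default_socket"), "\n", ini_get("pdo_mysql.default_socket"), "\n";'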

    Read the article

  • How to delete files quicker than rm -rf?

    - by Byakugan
    Is there any way to delete folders/files more quickly than with the command rm -rf? It seems my disk is filled with billions of files (PHP5 session files) which were not deleted by cron, so I need to delete them manually, but it takes hours and is barely reducing the amount. Thank you. My command: rm -rf /var/lib/php5/* I also tried these commands: find /var/lib/php5 -name "sess_*" -exec rm {} \; And perl -e 'chdir "/var/lib/php5/" or die; opendir D, "."; while ($n = readdir D) { unlink $n }'
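    Two approaches that are usually faster than rm -rf over millions of small files, offered as a sketch: let find unlink entries as it walks the tree (no per-file rm process), or rsync an empty directory over the target:

        # Delete matching files in place while traversing, without spawning one rm per file
        find /var/lib/php5 -name 'sess_*' -type f -delete

        # Or mirror an empty directory over the target, which is often the fastest option in practice
        mkdir -p /tmp/empty
        rsync -a --delete /tmp/empty/ /var/lib/php5/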

    Read the article

  • How to suppress "Not collecting exported resources without storeconfigs"?

    - by Andy Shinn
    I'm getting the following in my Puppet master syslog over and over: Sep 27 11:52:05 puppet1 puppet-master: Not collecting exported resources without storeconfigs Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs Sep 27 11:52:06 puppet1 puppet-master: Not collecting exported resources without storeconfigs I'm not actually using storeconfigs: [ashinn@puppet1 ~]$ cat /etc/puppet/puppet.conf [agent] server = puppet.mydomain.com environment = production report = true [main] logdir = /var/log/puppet vardir = /var/lib/puppet ssldir = /var/lib/puppet/ssl rundir = /var/run/puppet factpath = $vardir/lib/facter pluginsync = true certname = puppet1.mydomain.com [master] modulepath = $confdir/environments/$environment/modules manifest = $confdir/environments/$environment/manifests/site.pp templatedir = $confdir/templates autosign = $confdir/autosign.conf ssl_client_header = SSL_CLIENT_S_DN ssl_client_verify_header = SSL_CLIENT_VERIFY report = true reports = hipchat Any way I can suppress these messages? What do they actually come from?
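    The message is logged when a catalog tries to collect exported resources (the <<| |>> collector syntax) and the master has no storeconfigs backend to answer from, so one approach is to locate and remove those collectors (or enable storeconfigs/PuppetDB). A sketch for finding them, using the modulepath from the config above:

        # Find exported-resource declarations (@@) and collectors (<<| |>>) in manifests and modules
        grep -rn '@@\|<<|' /etc/puppet/environments/production/manifests \
            /etc/puppet/environments/production/modules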

    Read the article

  • installing SQLite3 gem on remote FreeBSD server using RVM - root permissions needed?

    - by atmosx
    I am trying to install the Ruby sqlite3 gem on a remote FreeBSD server. I'm using RVM, which in theory does not need root permissions to compile gems, but I get a permission error here: [user ~]$ gem install sqlite3 -- --with-sqlite3-dir=/home/www/atma/opt/ [...] make install /usr/bin/install -c -o root -g wheel -m 0755 sqlite3_native.so /home/www/atma/.gems/gems/sqlite3-1.3.6/lib/sqlite3 install: /home/www/atma/.gems/gems/sqlite3-1.3.6/lib/sqlite3/sqlite3_native.so: chown/chgrp: Operation not permitted make: *** [/home/www/atma/.gems/gems/sqlite3-1.3.6/lib/sqlite3/sqlite3_native.so] Error 71 Gem files will remain installed in /home/www/atma/.gems/gems/sqlite3-1.3.6 for inspection. Results logged to /home/www/atma/.gems/gems/sqlite3-1.3.6/ext/sqlite3/gem_make.out Any ideas how to approach this? Maybe reinstalling RVM? Best regards, PA

    Read the article

  • Installing php(suexec) for Apache

    - by John
    I've got Apache installed and running, but how do I install and run PHP as FastCGI so it runs as its own user? Here is my Apache configure line: ./configure --prefix=/usr/local/apache2 --enable-rewrite=shared --enable-so --enable-suexec --disable-asis --disable-autoindex --enable-cache --enable-deflate --enable-disk-cache --enable-expires --enable-file-cache --enable-mem-cache --enable-ssl --enable-vhost-alias --with-mpm=prefork --with-port=8080 Here is my PHP configure line: ./configure prefix=/usr/local/php --without-pear --enable-safe-mode --enable-magic-quotes --with-apxs2=/usr/local/apache2/bin/apxs --disable-cli --disable-cgi --enable-force-cgi-redirect --enable-fastcgi --with-mysql --with-gd --with-jpeg-dir=/usr/lib --with-png-dir=/usr/lib --with-freetype-dir=/usr/lib --enable-calendar --with-curl --enable-mbstring --with-mcrypt
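    One common pattern for per-user PHP under suexec is mod_fcgid plus a small wrapper script that suexec launches as the site's user; note this assumes PHP is built with the CGI/FastCGI SAPI (i.e. without --disable-cgi above), since mod_php via --with-apxs2 always runs as the Apache user. A hedged sketch of the wrapper only (the vhost, SuexecUserGroup and FcgidWrapper directives are not shown, and all paths are assumptions):

        #!/bin/sh
        # /var/www/fcgi-bin/php-wrapper -- executed by suexec as the vhost's user
        PHPRC=/usr/local/php/lib              # directory containing php.ini (assumption)
        export PHPRC
        export PHP_FCGI_CHILDREN=4            # PHP worker processes per wrapper
        export PHP_FCGI_MAX_REQUESTS=1000     # recycle workers periodically
        exec /usr/local/php/bin/php-cgi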

    Read the article

  • How do I start mysqld with options

    - by xiankai
    I need to start up mysqld with command-line options, as described here: http://dev.mysql.com/doc/refman/5.1/en/server-options.html#option_mysqld_skip-grant-tables I normally do sudo service mysqld start, but passing the option as sudo service mysqld start --skip-grant-tables does not seem to work. Alternatively, I have tried starting it as a daemon, sudo mysqld_safe --skip-grant-tables & But it seems to terminate too soon: 131101 04:59:57 mysqld_safe Logging to '/var/lib/mysql/vagrant.example.com.err'. 131101 04:59:57 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql 131101 05:00:03 mysqld_safe mysqld from pid file /var/lib/mysql/vagrant.example.com.pid ended My last resort seems to be specifying the option in /etc/my.cnf instead, but is there any way to do it via the command line?
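    One sequence that usually works, as a sketch: stop the normal service first (an already-running mysqld holds the data directory and port, which makes a second mysqld_safe exit almost immediately, as in the log above), then start mysqld_safe with the option, and check the .err file if it still ends:

        # Free the datadir and port, then start a temporary instance without the grant tables
        sudo service mysqld stop
        sudo mysqld_safe --skip-grant-tables --skip-networking &

        # ... perform the maintenance (e.g. reset a password) ...

        # Shut the temporary instance down and return to normal operation
        sudo mysqladmin shutdown
        sudo service mysqld start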

    Read the article

  • Upgrading Fedora on Amazon to 12 but getting libssl.so.* & libcrypto.so.* are missing

    - by bateman_ap
    I am upgrading to Fedora 12 on an Amazon EC2 instance using the help here: http://www.ioncannon.net/system-administration/894/fedora-12-bootable-root-ebs-on-ec2/ I managed to do a 64-bit instance OK; however, I am facing some problems with a standard one. On the final bit of the install from 11 to 12 I am getting an error: Error: Missing Dependency: libcrypto.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed) Error: Missing Dependency: libssl.so.8 is needed by package httpd-tools-2.2.1.5-1.fc11.1.i586 (installed) This is referenced in the comments from the link above, but all it says is: Q: Apache failed, or libssl.so.* & libcrypto.so.* are missing A: These versions are missing the symlinks they require. Easy fix, go symlink them to the newest versions in /lib However, I am afraid I don't know how to do this. If it is any help, I tried running the command locate libssl.so and got: /lib/libssl.so.0.9.8b /lib/libssl.so.6
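    Following the hint quoted above and the locate output, a sketch of the symlinks that comment describes; the libcrypto file name is an assumption that mirrors the libssl one, so confirm it with locate libcrypto.so first:

        # Point the sonames that httpd-tools wants at the libraries that actually exist in /lib
        sudo ln -s /lib/libssl.so.0.9.8b    /lib/libssl.so.8
        sudo ln -s /lib/libcrypto.so.0.9.8b /lib/libcrypto.so.8
        sudo ldconfig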

    Read the article

  • Strange thing on IPv6 multicast program on Windows

    - by zhanglistar
    I have written an ipv6 multicast program on windows xp sp3. But a problem bothers me a lot. The sendto function implies no error, but I can't capture the packet using wireshark. I am sure the filter is right. Thanks in advance. And the code is as follows: #include "stdafx.h" #include <stdio.h> /* for printf() and fprintf() */ #include <winsock2.h> /* for socket(), connect(), sendto(), and recvfrom() */ #include <ws2tcpip.h> /* for ip_mreq */ #include <stdlib.h> /* for atoi() and exit() */ #include <string.h> /* for memset() */ #include <time.h> /* for timestamps */ #include <pcap.h> #include <Iphlpapi.h> #pragma comment(lib, "Ws2_32.lib") #pragma comment(lib, "wpcap.lib") #pragma comment(lib, "Iphlpapi.lib") int _tmain(int argc, _TCHAR* argv[]) { int sfd; int on, length, iResult; WSADATA wsaData; struct addrinfo Hints; struct addrinfo *multicastAddr, *localAddr; char buf[46]; // Initialize Winsock iResult = WSAStartup(MAKEWORD(2, 2), &wsaData); if (iResult != 0) { printf("WSAStartup failed: %d\n", iResult); return 1; } /* Resolve destination address for multicast datagrams */ memset(&Hints, 0, sizeof (Hints)); Hints.ai_family = AF_INET6; Hints.ai_socktype = SOCK_DGRAM; Hints.ai_protocol = IPPROTO_UDP; Hints.ai_flags = AI_NUMERICHOST; iResult = getaddrinfo("FF02::1:2", "547", &Hints, &multicastAddr); if (iResult != 0) { /* error handling */ printf("socket error: %d\n", WSAGetLastError()); return -1; } /* Get a local address with the same family (IPv4 or IPv6) as our multicast group */ Hints.ai_family = multicastAddr->ai_family; Hints.ai_socktype = SOCK_DGRAM; Hints.ai_flags = AI_PASSIVE; /* Return an address we can bind to */ if ( getaddrinfo(NULL, "546", &Hints, &localAddr) != 0 ) { printf("getaddrinfo() failed: %d\n", WSAGetLastError()); exit(-1); } // Create sending socket //sfd = socket (multicastAddr->ai_family, multicastAddr->ai_socktype, multicastAddr->ai_protocol); sfd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDP); if (sfd == -1) { printf("socket error: %d\n", WSAGetLastError()); return 0; } /* Bind to the multicast port */ if ( bind(sfd, localAddr->ai_addr, localAddr->ai_addrlen) != 0 ) { printf("bind() failed: %d\n", WSAGetLastError()); exit(-1); } if (multicastAddr->ai_family == AF_INET6 && multicastAddr->ai_addrlen == sizeof(struct sockaddr_in6)) /* IPv6 */ { on = 1; if (setsockopt (sfd, IPPROTO_IPV6, IPV6_MULTICAST_IF, (char *)&on, sizeof (on) /*(char *)&interface_addr, sizeof(interface_addr)*/) == -1) { printf("setsockopt error:%d\n", WSAGetLastError()); return -1; } if (setsockopt (sfd, IPPROTO_IPV6, IPV6_MULTICAST_LOOP, (char *)&on, sizeof (on) /*(char *)&interface_addr, sizeof(interface_addr)*/) == -1) { printf("setsockopt error:%d\n", WSAGetLastError()); return -1; } struct ipv6_mreq multicastRequest; /* Multicast address join structure */ /* Specify the multicast group */ memcpy(&multicastRequest.ipv6mr_multiaddr, &((struct sockaddr_in6*)(multicastAddr->ai_addr))->sin6_addr, sizeof(struct in6_addr)); /* Accept multicast from any interface */ multicastRequest.ipv6mr_interface = 0; /* Join the multicast address */ if ( setsockopt(sfd, IPPROTO_IPV6, IPV6_JOIN_GROUP, (char*) &multicastRequest, sizeof(multicastRequest)) != 0 ) { printf("setsockopt() failed: %d\n", WSAGetLastError()); return -1; } on = 1; if (setsockopt (sfd, IPPROTO_IPV6, IPV6_MULTICAST_IF, (char *)&on, sizeof (on)) == -1) { printf("setsockopt error:%d\n", WSAGetLastError()); return 0; } } memset(buf, 0, sizeof(buf)); strcpy(buf, "hello world"); iResult = sendto(sfd, buf, strlen(buf), 0, (LPSOCKADDR) 
multicastAddr->ai_addr, multicastAddr->ai_addrlen); if (iResult == SOCKET_ERROR) { printf("setsockopt error:%d\n", WSAGetLastError()); return -1; /* Error handling */ } return 0; }

    Read the article

  • Tomcat directly serve static (css, js) files shared by multiple applications

    - by Josvic Zammit
    I'm using the ExtJS framework, which has a large set of js and css files that are used for all apps. I intend to share these between a number of web applications (different war files). For this reason I would like to serve the ExtJS js and css directly from the web server, in my case Tomcat6, which can be used to serve static files, as in this helpful link. Therefore I put my files under /var/lib/tomcat6/webapps/ROOT/extjs/. The static files that are directly under that directory are served correctly, e.g. /extjs/ext.js correctly serves the file at /var/lib/tomcat6/webapps/ROOT/extjs/ext.js. However, files in lower-level directories, for example /extjs/welcome/css/welcome.css, which should serve the file at /var/lib/tomcat6/webapps/ROOT/extjs/welcome/css/welcome.css, return a 404. TL;DR: Tomcat serves static files only in the top-level directory. A 404 is returned for files deeper in the hierarchy. Config file contents: server.xml application's web.xml

    Read the article

  • Installing multiple versions of a shared library

    - by nsfyn55
    I am running Ubuntu 10.04 and I want to use tmux 1.6. tmux has a dependency on libevent 2. My solution was to compile libevent 2 and drop it into /usr/local/lib, then compile tmux against this lib and drop it into /usr/local/bin. This works great until... I restart. This is just an assumption on my part, but it seems that other binaries are now linking to the libevent 2 library, presumably because it's on the library path. Because there are 60+ packages with libevent 1 dependencies, this causes my install to basically lose its mind. Is there an idiomatic way to approach running an application that has a core library dependency on a different version? Should I just statically link the lib?
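    One way to avoid putting /usr/local/lib on the global library path at all, offered as a sketch: bake the location into the tmux binary with an rpath at build time, so only tmux resolves libevent 2 from /usr/local/lib and every distro package keeps using libevent 1:

        # Build tmux against the locally installed libevent 2 and record its location in the binary
        ./configure CFLAGS="-I/usr/local/include" \
            LDFLAGS="-L/usr/local/lib -Wl,-rpath,/usr/local/lib"
        make && sudo make install

        # Confirm which libevent the installed binary will actually load
        ldd /usr/local/bin/tmux | grep libevent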

    Read the article

  • What is this PHP process? It is crippling my server

    - by user1019588
    This process has been using 65% of my server's CPU and has lasted for about 10 minutes now (aren't processes only supposed to run for a couple of seconds?). It is obviously something to do with MySQL. This makes sense because I have a lot of queries going, but something still seems a bit odd... This could have something to do with the bad PDO connection that I mentioned in a previous question. Perhaps I am opening too many connections or something like that? Here are the stats on it: Owner: mysql Priority: 0 CPU %: 61.1 Memory %: 0.4 Command: /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/cvps54834319.myhost.com.err --pid-file=/var/lib/mysql/cvps54834319.myhost.com.pid Thanks for any help on this. I have over 10 GHz on my server, so this is very concerning to me.
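    That command line is just the normal mysqld server process, so sustained CPU usually means a long-running query rather than a stuck process. A sketch of how to see what it is doing:

        # Show every statement the server is currently executing and how long it has been running
        mysql -u root -p -e 'SHOW FULL PROCESSLIST;'

        # Log statements slower than 2 seconds for later inspection (the threshold is an arbitrary choice)
        mysql -u root -p -e "SET GLOBAL slow_query_log = 'ON'; SET GLOBAL long_query_time = 2;"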

    Read the article

  • Postgresql fails to start on Ubuntu 10.04.4 LTS

    - by cancerballs
    I installed postgresql 9.2 from add-apt-repository ppa:pitti/postgresql using apt-get install postgresql-9.2 At the end of the install and every time I try to launch postgresql by using the following command /etc/init.d/postgresql start or service postgresql start I get this error: Error: could not exec /usr/lib/postgresql/9.2/bin/pg_ctl /usr/lib/postgresql/9.2/bin/pg_ctl start -D /var/lib/postgresql/9.2/main -l /var/log/postgresql/postgresql-9.2-main.log -s -o -c config_file="/etc/postgresql/9.2/main/postgresql.conf" : [fail] invoke-rc.d: initscript postgresql, action "start" failed. dpkg: error processing postgresql-9.2 (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: postgresql-9.2 E: Sub-process /usr/bin/dpkg returned an error code (1) I have tried everything found here: How to thoroughly purge and reinstall postgresql on ubuntu and here: Eliminating non working postgresql installations on ubuntu 10-04 and starting af. I have also done dpkg -P --force-remove-reinstreq postgresql-client-9.2 in my attempt to remove everything postgres related from my server. After removing postgresql I have used dpkg --get-selections | grep postg To be sure there is nothing left and I can do a clean install. I have also made sure that the files and folders mentioned in the error message have the right permissions. The /var/log/postgresql/postgresql-9.2-main.log file is empty. I have tried installing every postgresql version from 8.3 to 9.2 and I get the same error on every time. I once managed to compile postgresql from the source provided on their website but then I encountered weird errors with psycopg2 so I figured I'd install postgresql this way and avoid those errors. Also when I type apt-get install postgresql it by default tries to install the 8.3 version even when I can find the package by typing apt-get install postgresql-9.2.
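    Since the init script swallows pg_ctl's output, running the same command by hand as the postgres user often shows the real startup error (missing or mis-owned data directory, a bad postgresql.conf, and so on). A diagnostic sketch reusing the exact paths from the error message:

        # Run pg_ctl directly so the failure reason is printed to the terminal
        sudo -u postgres /usr/lib/postgresql/9.2/bin/pg_ctl start \
            -D /var/lib/postgresql/9.2/main \
            -o '-c config_file=/etc/postgresql/9.2/main/postgresql.conf'

        # If the cluster was never initialised, create it with the Debian/Ubuntu packaging tools
        sudo pg_createcluster 9.2 main --start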

    Read the article

  • What does "error reading login count from pmvarrun" mean?

    - by n3rd
    I have the above mentioned error in my /var/log/auth.log file and just try to figure out if this is a harmelss statement. As far as I understand does pmvarrun tells the system how many active session (e.g. logins) a user has on the system. Full output of auth.log Jan 24 17:44:42 P835 lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0) Jan 24 17:44:42 P835 lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0 Jan 24 17:44:49 P835 lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "user" Jan 24 17:44:51 P835 dbus[1289]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.31" (uid=104 pid=1882 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1561 comm="/usr/sbin/console-kit-daemon --no-daemon ") Jan 24 17:45:04 P835 lightdm: pam_unix(lightdm:session): session closed for user lightdm Jan 24 17:45:04 P835 lightdm: pam_mount(pam_mount.c:691): received order to close things Jan 24 17:45:04 P835 lightdm: pam_mount(pam_mount.c:693): No volumes to umount Jan 24 17:45:04 P835 lightdm: command: 'pmvarrun' '-u' 'user' '-o' '-1' Jan 24 17:45:04 P835 lightdm: pam_mount(misc.c:38): set_myuid<pre>: (ruid/rgid=0/0, e=0/0) Jan 24 17:45:04 P835 lightdm: pam_mount(misc.c:38): set_myuid<post>: (ruid/rgid=0/0, e=0/0) Jan 24 17:45:04 P835 lightdm: pam_mount(pam_mount.c:438): error reading login count from pmvarrun Jan 24 17:45:04 P835 lightdm: pam_mount(pam_mount.c:728): pam_mount execution complete Jan 24 17:45:08 P835 lightdm: pam_unix(lightdm:session): session opened for user user by (uid=0) Jan 24 17:45:08 P835 lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0 Jan 24 17:45:25 P835 polkitd(authority=local): Registered Authentication Agent for unix-session:/org/freedesktop/ConsoleKit/Session2 (system bus name :1.54 [/usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1], object path /org/gnome/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) Jan 24 17:45:47 P835 dbus[1289]: [system] Rejected send message, 2 matched rules; type="method_call", sender=":1.59" (uid=1000 pid=4748 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.17" (uid=0 pid=1561 comm="/usr/sbin/console-kit-daemon --no-daemon ") Thanks for any help

    Read the article

  • MPI Project Template for VS2010

    If you are developing MS MPI applications with Visual Studio 2010, you are probably tired of following some tedious steps for every new C++ project that you create, similar to the following:
    1. In Solution Explorer, right-click YourProjectName, then click Properties to open the Property Pages dialog box.
    2. Expand Configuration Properties and then, under VC++ Directories, place the cursor at the beginning of the list that appears in the Include Directories text box and specify the location of the MS MPI C header files, followed by a semicolon, e.g. C:\Program Files\Microsoft HPC Pack 2008 SDK\Include;
    3. Still under Configuration Properties and under VC++ Directories, place the cursor at the beginning of the list that appears in the Library Directories text box and specify the location of the Microsoft HPC Pack 2008 SDK library file, followed by a semicolon, e.g. if you want to build/debug a 32-bit application: C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\i386; if you want to build/debug a 64-bit application: C:\Program Files\Microsoft HPC Pack 2008 SDK\Lib\amd64;
    4. Under Configuration Properties and then under Linker, select Input and place the cursor at the beginning of the list that appears in the Additional Dependencies text box and type the name of the MS MPI library, i.e. msmpi.lib;
    5. In the code file, #include "mpi.h"
    6. To debug the MPI project you have just set up, under Configuration Properties select Debugging and then switch the Debugger to launch combo value from Local Windows Debugger to MPI Cluster Debugger.
    Wouldn't it be great if, at C++ project creation time, you could choose an MPI Project Template that included the steps/configurations above? If you answered "yes", I have good news for you courtesy of a developer on our team (Qing). Feel free to download the MPI Project Template from the Visual Studio Gallery. Comments about this post are welcome at the original blog.

    Read the article

  • How do I install the newest Flash beta for Minefield on 64-bit Ubuntu?

    - by Øsse
    Hi, I'm using a fully updated Ubuntu 10.10 64-bit which is pretty much bog standard except I'm using Minefield from the Mozilla daily PPA in addition to Firefox as provided by Ubuntu. I want to try the newest beta of Flash (10.3 as of writing). The installation instructions simply say "drop libflashplayer.so into the plugin folder of your browser". This the 32-bit version. Currently I'm using Flash as provided by the package flashplugin-installer (ver. 10.2.152.27ubuntu0.10.10.1). Going to about:plugins in Minefield/Firefox says the version of Flash I'm running is 10.2 r152 and the file responsible is npwrapper.libflashplayer.so. I have two files with that name on my system. One is /usr/share/ubufox/plugins/npwrapper.libflashplayer.so which is a broken link to /usr/lib/flashplugin-installer/npwrapper.libflashplayer.so. The other is /var/lib/flashplugin-installer/npwrapper.libflashplayer.so (note var instead of usr). I also have a file simply called libflashplayer.so in /usr/lib/flashplugin-installer/. So it seems Firefox/Minefield gets its Flash plugin from a file that doesn't exist, and replacing libflashplayer.so with the one in the archive from Adobe has no effect. Since I want to try the 32-bit version I have to use the wrapper. The only way I know how is through the flashplugin-installer package. How would I go about installing the newest 32-bit beta Flash if possible at all? And where is "the plugin folder of my browser"?
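    The npwrapper.libflashplayer.so files come from nspluginwrapper, which is what lets the 32-bit Flash plugin run inside a 64-bit browser, so a new 32-bit beta has to be registered through it rather than dropped into a plugin folder directly. A hedged sketch; keeping the beta in the flashplugin-installer directory is just one choice, and it will be overwritten if that package updates:

        # Put the 32-bit beta in a stable location, then register it with nspluginwrapper
        sudo cp libflashplayer.so /usr/lib/flashplugin-installer/libflashplayer.so
        sudo nspluginwrapper -v -i /usr/lib/flashplugin-installer/libflashplayer.so

        # List the plugins nspluginwrapper currently manages
        nspluginwrapper -l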

    Read the article

  • Crashplan not starting. Fails to find swt-gtk

    - by Pibben
    I've installed Crashplan on my (K)Ubuntu computer. However, it fails to start and the ui_output.log says: [09.02.12 15:24:43.518 INFO main root ] ************************************************************* [09.02.12 15:24:43.519 INFO main root ] ************************************************************* [09.02.12 15:24:43.524 INFO main root ] Loading lib/swt-64.jar, exists=true [09.02.12 15:24:43.525 INFO main root ] [file:/usr/local/crashplan/lib/com.backup42.desktop.jar, file:/usr/local/crashplan/lang/, file:/usr/local/crashplan/skin/, file:/usr/local/crashplan/lib/swt-64.jar] [09.02.12 15:24:43.527 INFO main root ] STARTED CrashPlanDesktop [09.02.12 15:24:43.528 INFO main root ] CPVERSION = 3.2.1 - 1332824401321 (2012-03-27T05:00:01:321+0000) [09.02.12 15:24:43.529 INFO main root ] ARGS = [ ] [09.02.12 15:24:43.531 INFO main root ] LOCALE = English (United States) [09.02.12 15:24:43.570 ERROR main com.backup42.desktop.CPDesktop ] Failed to launch CPDesktop; java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file, java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source) at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source) at org.eclipse.swt.internal.C.<clinit>(Unknown Source) at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source) at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source) at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source) at com.backup42.desktop.CPDesktop.<init>(CPDesktop.java:231) at com.backup42.desktop.CPDesktop.main(CPDesktop.java:161) [09.02.12 15:24:43.570 ERROR main root ] Failed to launch CPDesktop; java.lang.UnsatisfiedLinkError: no swt-gtk-3557 or swt-gtk in swt.library.path, java.library.path or the jar file I've installed the relevant (I think) swt-gtk packages.
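    The stack trace says the SWT GTK JNI library cannot be found on java.library.path. One heavily hedged approach on (K)Ubuntu is to install the distribution's SWT JNI package and point CrashPlan's launcher at it; the package name and paths below are assumptions to verify locally, and the SWT version must roughly match the swt-64.jar that CrashPlan ships:

        # Install the GTK SWT native library (package name may differ by release)
        sudo apt-get install libswt-gtk-3-jni
        # See where the libswt-*.so files ended up (commonly /usr/lib/jni)
        dpkg -L libswt-gtk-3-jni | grep '\.so'

        # Then add something like  -Djava.library.path=/usr/lib/jni  to the java invocation
        # in /usr/local/crashplan/bin/CrashPlanDesktop so the JVM can find the SWT native library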

    Read the article

  • How to properly deny Railo directory access through Apache

    - by Sn3akyP3t3
    I've been battle tested on this and failed to achieve my goal which is to deny all access to all directories except the Public directory and only allow access to all all other directories with specific IP addresses. To get Railo+Apache+Tomcat installed I pretty much followed this script: https://github.com/talltroym/Railo-Ubuntu-Installer-Script then verified settings with this tutorial: http://blog.nictunney.com/2012/03/railo-tomcat-and-apache-on-amazon-ec2.html From the installation script these mods are enabled: sudo a2enmod ssl sudo a2enmod proxy sudo a2enmod proxy_http sudo a2enmod rewrite sudo a2ensite default-ssl Outside of the script I copied the sites-available to sites-enabled then reloaded Apache. I have a directory created for Railo cmfl located at /var/www/Railo/ Navigating the browser to http ://Server_IP_Address/Railo forces ssl and relocates to https ://Server_IP_Address/Railo which shows off index.cfm. Not providing index.cfm and omitting https indicates that the DirectoryIndex directive and RewriteCond of Apache appears to be working for the sites-enabled VirtualHost. The problem I'm encountering is that I cannot seem to deny access to all directories except Public. My directory structure is rather simple and looks like this: Railo error Public NotPublic Sandbox These are my sites-enabled configurations: <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www #Default Deny All to prevent walking backwards in file system Alias /Railo/ "/var/www/Railo/" <Directory ~ ".*/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html index.cfc RewriteEngine on RewriteCond %{SERVER_PORT} !^443$ RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R] </VirtualHost> and <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost DocumentRoot /var/www Alias /Railo/ "/var/www/Railo/" <Directory ~ "/var/www/Railo/(?!Public).*"> Order Deny,Allow Deny from All </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. 
SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. 
# o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown DirectoryIndex index.cfm index.cfml default.cfm default.cfml index.htm index.html #Proxy .cfm and cfc requests to Railo ProxyPassMatch ^/(.+.cf[cm])(/.*)?$ http://127.0.0.1:8888/$1 ProxyPassReverse / http://127.0.0.1:8888/ #Deny access to admin except for local clients <Location /railo-context/admin/> Order deny,allow Deny from all # Allow from <Omitted> # Allow from <Omitted> Allow from 127.0.0.1 </Location> </VirtualHost> </IfModule> The apache2.conf includes the following: # Include the virtual host configurations: Include sites-enabled/ <IfModule !mod_jk.c> LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so </IfModule> <IfModule mod_jk.c> JkMount /*.cfm ajp13 JkMount /*.cfc ajp13 JkMount /*.do ajp13 JkMount /*.jsp ajp13 JkMount /*.cfchart ajp13 JkMount /*.cfm/* ajp13 JkMount /*.cfml/* ajp13 # Flex Gateway Mappings # JkMount /flex2gateway/* ajp13 # JkMount /flashservices/gateway/* ajp13 # JkMount /messagebroker/* ajp13 JkMountCopy all JkLogFile /var/log/apache2/mod_jk.log </IfModule> I believe I understand most of this except the jk_module inclusion which I've noticed has an error that shows up in the logs that I can't sort out: [warn] No JkShmFile defined in httpd.conf. 
Using default /etc/apache2/logs/jk-runtime-status I've checked my Regular expression against the paths of the directories with RegexBuddy just to be sure that I wasn't correct. The problem doesn't appear to be Regex related although I may have something incorrect in the Directory directive. The Location directive seems to be working correctly for blocking out Railo admin site access.

    Read the article
