Search Results

Search found 11993 results on 480 pages for 'define syntax'.


  • Bind a key to a commandline command in Mac OS X?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open Terminal.app and run the command

        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

    which activates the screensaver and locks my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up global keyboard shortcuts, but it seems that this won't allow me to bind a key to an arbitrary shell command:

        Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general-purpose tasks such as opening an application or switching between applications.

    I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to count as applications. Note that this screensaver/lock-screen command is just one example; I have come across other nifty commands which I might want to bind to a key combination as well. I can do this in GNOME and Windows (with varying success). How about in Leopard? Should I be looking at doing this with AppleScript? (I haven't used that since the HyperCard days...)
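    One direction worth trying (a sketch, not a tested recipe for Leopard): wrap the command in a small script, then use a third-party launcher such as Quicksilver or Spark to attach a global hotkey to it, since the built-in shortcut pane only targets menu items. The script name is made up:

        #!/bin/sh
        # lock-screen.sh -- runs the screensaver engine mentioned above;
        # bind this script to a global hotkey via Quicksilver, Spark, or similar
        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine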


  • Create a mailbox in qmail, then forward all incoming message to Gmail

    - by lorenzo-s
    I needed to let PHP send mail from my webserver to my web app users, so I installed qmail on my Debian server with sudo apt-get install qmail. I also updated the files in /etc/qmail, specifying my domain name, and then ran sudo qmailctl reload and sudo qmailctl restart:

        /etc/qmail/defaultdomain  # Contains 'mydomain.com'
        /etc/qmail/defaulthost    # Contains 'mydomain.com'
        /etc/qmail/me             # Contains 'mail.mydomain.com'
        /etc/qmail/rcpthosts      # Contains 'mydomain.com'
        /etc/qmail/locals         # Contains 'mydomain.com'

    Emails are sent without any problem from my PHP script to any email address, using the standard mail PHP library. Now the problem: if I send mail from PHP using [email protected] as the sender address, I want customers to be able to reply to that address! And ideally, all mail sent to this address should be forwarded to my personal Gmail address. At the moment qmail seems to reject all incoming mail because of an "invalid mailbox name". Here is a complete SMTP session I established with my server:

        me@MYPC:~$ nc mydomain.com 25
        220 ip-XX-XX-XXX-XXX.xxx.xxx.xxx ESMTP
        HELO [email protected]
        250 ip-XX-XX-XXX-XXX.xxx.xxx.xxx
        MAIL FROM:<[email protected]>
        250 ok
        RCPT TO:<[email protected]>
        250 ok
        DATA
        554 sorry, invalid mailbox name(s). (#5.1.1)
        QUIT

    I'm sure I'm missing something related to mailbox or alias creation; in fact I did nothing to define the mailbox [email protected] anywhere. I tried to search the net and the numerous qmail man pages, but I found nothing.
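    A dot-qmail alias file is the usual way to make qmail accept mail for an address and forward it (a sketch; the alias directory path assumes a standard qmail layout, and the local part "info" plus the Gmail address are placeholders for the real ones):

        # forward everything sent to info@mydomain.com to a Gmail account
        echo '&yourname@gmail.com' > /var/qmail/alias/.qmail-info
        chmod 644 /var/qmail/alias/.qmail-info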


  • Excel concatenate strings from cells listed in third cell

    - by Puddingfox
    I have an Excel 2007 workbook that has five columns:

        A. A list of machines
        B. A list of service numbers for each machine
        C. A list of service names for each machine
        ...(nothing here)
        I. A list of service numbers
        J. A list of service names

    Each machine listed in column A has one or more services running on it from the list in column J. I would like to be able to add services to a machine (i.e. update the cell in column C) by simply adding another comma-separated number to column B. For example, the first row would look like this, assuming Machine1 has the first three services:

        |    A     |   B   |       C
        | Machine1 | 1,2,3 | HTTP,HTTPS,DNS

    Right now I have to manually update the formula in column C for each change I make. The current formula is:

        =CONCATENATE(J1,",",J2,",",J3)

    I would like to use something like this (please forgive my syntax; I'm a coder and I'm treating cell B1 as if it were an indexed array):

        =CONCATENATE(CELL("J"+B1[0] , "," , "J"+B1[1] , "," "J"+B1[2])

    Although having variable numbers of services makes this even more difficult. Is there any way of doing this? For reference, this is columns I and J:

        |  I |   J
        |  1 | HTTP
        |  2 | HTTPS
        |  3 | DNS
        .....
        | 16 | Service16

    I don't know very much about Excel, so any help is greatly appreciated.
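    Excel 2007 has no built-in split-and-join function, so a small VBA user-defined function is probably the cleanest route for a variable number of services (a sketch; the function name JoinServices is made up):

        ' Paste into a VBA module (Alt+F11, Insert > Module), then use in C1:
        '   =JoinServices(B1, $J$1:$J$16)
        Function JoinServices(indices As String, names As Range) As String
            Dim part As Variant, result As String
            For Each part In Split(indices, ",")
                If result <> "" Then result = result & ","
                ' look up the Nth service name within the J-column range
                result = result & names.Cells(CLng(Trim(part)), 1).Value
            Next part
            JoinServices = result
        End Function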


  • mysql command line not working

    - by Sandeepan Nath
    I have MySQL running on my Fedora system. I have XAMPP set up on the system, and the PHP projects present in the webspace are working fine. phpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows MySQL enabled. But running the MySQL client from the command line:

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows MySQL is installed. What exactly do I have to do?

    Additional details: this system was someone else's and he is not available here. Maybe PHP/MySQL was already set up on the system. I just freshly extracted XAMPP for Linux into /opt/lampp/ and put all the above-mentioned things (PHP projects and phpMyAdmin) there. After doing that I had a socket problem (phpMyAdmin was not working and showed this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart but the problem remained. Then after turning on the system today, I started lampp and everything worked just fine. No project issues anymore; only the command-line MySQL client is not working.
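    With XAMPP for Linux the client binary normally lives under /opt/lampp/bin, which is not on the shell's PATH by default, so adding it is usually all that's needed (a sketch assuming a default XAMPP install):

        # confirm the client is where XAMPP puts it, then add it to PATH
        ls /opt/lampp/bin/mysql
        echo 'export PATH=$PATH:/opt/lampp/bin' >> ~/.bashrc
        source ~/.bashrc
        mysql -u username -p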


  • Facing error "Could not open a connection to your authentication agent" when trying to add an ssh key

    - by Kaustubh P
    I use Ubuntu Server 10.04. ssh-add /foo/cert.pem gave the following output:

        Could not open a connection to your authentication agent.

    These are my running processes:

        ps -aux | grep ssh
        Warning: bad ps syntax, perhaps a bogus '-'? See http://procps.sf.net/faq.html
        root      1523  0.0  0.0  49260   632 ?      Ss   Dec25   0:00 /usr/sbin/sshd
        root     10023  0.0  0.3 141304  6012 ?      Ss   12:58   0:00 sshd: padmin [priv]
        padmin   10117  0.0  0.1 141304  2400 ?      S    12:58   0:00 sshd: padmin@pts/1
        padmin   11867  0.0  0.0   7628   964 pts/1  S+   13:06   0:00 grep --color=auto ssh
        root     31041  0.0  0.3 141264  5884 ?      Ss   11:24   0:00 sshd: padmin [priv]
        padmin   31138  0.0  0.1 141264  2312 ?      S    11:25   0:00 sshd: padmin@pts/0
        root     31382  0.0  0.3 139240  5844 ?      Ss   11:26   0:00 sshd: padmin [priv]
        padmin   31475  0.0  0.1 139372  2488 ?      S    11:27   0:00 sshd: padmin@notty
        padmin   31476  0.0  0.0  12468   964 ?      Ss   11:27   0:00 /usr/lib/openssh/sftp-server

    These are my environment variables:

        $ env | grep SSH
        SSH_CLIENT=192.168.1.13 42626 22
        SSH_TTY=/dev/pts/1
        SSH_CONNECTION=192.168.1.13 42626 192.168.1.2 22

    What is wrong? Why can't I add any identities? Thanks.
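    This error usually just means no ssh-agent is running in the current shell; notably, env shows the SSH connection variables but no SSH_AUTH_SOCK. A minimal fix with standard OpenSSH commands:

        # start an agent for this shell session, then load the key
        eval "$(ssh-agent -s)"
        ssh-add /foo/cert.pem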


  • What is /etc/apache2/sites-available used for and is it necessary?

    - by Mariane
    I have 3 sites, each with a specific IP, running on apache2 (up-to-date Ubuntu). To put a site online, I just created a file in /etc/apache2/sites-enabled, and in this file I told Apache which directory was the root directory for the site and to which IP it should correspond. So I have these in that directory:

        000-default
        001-www.lapf.eu
        002-www.felkin.info
        003-www.seidhr.fr

    My first site, lapf, suddenly lost contact with its database after the domain name was transferred from another registrar to the registrar who is also hosting the site's data. Then I did an update, I reinstalled mysql-server and mysql-common, and I did I-have-forgotten-what to reinstall the locales (utf8 and such) which had vanished for some reason. This fixed my first site.

    Now I have noticed that the other 2 sites are offline. Pointing a browser at them just hangs until timeout. They used to function, and their domain names did not move; they are still registered at the same place. The files are still in /etc/apache2/sites-enabled. I noticed another directory, /etc/apache2/sites-available, with just default and default.ssl in it. Why are there 2 directories, sites-enabled and sites-available? Should I copy the files from "sites-enabled" into "sites-available"? Or should I put a modified version of each in "sites-available"?

    Output of the command apache2ctl -S:

        VirtualHost configuration:
        92.243.20.169:80  Charlotte (/etc/apache2/sites-enabled/001-www.lapf.eu:1)
        92.243.21.141:80  xvm-21-141.ghst.net (/etc/apache2/sites-enabled/002-www.felkin.info:1)
        92.243.4.114:80   xvm-4-114.ghst.net (/etc/apache2/sites-enabled/003-www.seidhr.fr:1)
        wildcard NameVirtualHosts and default servers:
        *:80  is a NameVirtualHost
              default server Charlotte (/etc/apache2/sites-enabled/000-default:1)
              port 80 namevhost Charlotte (/etc/apache2/sites-enabled/000-default:1)
        Syntax OK
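    On Debian/Ubuntu the intended layout is the reverse of what's described: the real vhost files live in sites-available, and sites-enabled holds symlinks to the ones Apache should actually load. A sketch of the standard workflow (a2ensite manages the symlinks):

        # move the real files into sites-available, then enable them as symlinks
        mv /etc/apache2/sites-enabled/001-www.lapf.eu /etc/apache2/sites-available/
        a2ensite 001-www.lapf.eu
        apache2ctl configtest && /etc/init.d/apache2 reload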


  • What do the readonly attributes in diskpart really mean?

    - by marzipan
    I am wondering exactly what the meaning is of the "Read-only" disk and volume attributes that you can twiddle in diskpart on Windows 7. I am trying to set up an external USB drive as an installation medium for my own software, so I'd like to protect it against casual or inadvertent changes by the users it is given to, so they don't screw up the installation files they might need in the future.

    From what I can tell by experimentation with diskpart, the volume read-only attribute is actually stored on the physical disk somewhere, because I can set it and it shows up when I take the drive to another machine. This is great because my users can't (easily) change any of the files on the volume, or format it from Windows Explorer.

    However, the disk read-only attribute seems to be just an aspect of how the current machine is accessing the drive. When I set it I can no longer delete the volume via Disk Management, but when I take the drive to another machine, the attribute is no longer set and in Disk Management I can delete the volume on the disk. I guess I'm not that worried about my users doing that, but I am annoyed that I don't understand what these attributes are really doing.

    Another thing that I don't understand is that the "volume" read-only attribute actually seems to be global to the disk: if I have two volumes on the disk, and I set the readonly flag on one of them, then it gets set on the other one too. ?!?

    I have the feeling I'm not searching for the right docs; all I'm finding is diskpart docs that give the syntax for twiddling these attributes, not what they really mean. Any pointers would be very welcome! Thanks, Asa
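    For reference, the attributes in question are set and cleared like this (standard diskpart syntax; the disk/volume numbers are examples):

        DISKPART> select disk 1
        DISKPART> attributes disk set readonly
        DISKPART> select volume 5
        DISKPART> attributes volume set readonly
        DISKPART> attributes volume clear readonly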


  • chkconfig creating service symlinks with the wrong order

    - by Robert
    On RHEL 6.3, I have a system service that should start after postgresql and httpd (order 64 and 85, respectively), but chkconfig always places it at order 50. I tried an experiment on a CentOS 6.0 virtual machine to make sure I understood the LSB stanza syntax. I created /etc/init.d/foo, owner root, permissions 755, with this text:

        ### BEGIN INIT INFO
        # Provides: foo
        # Required-Start: postgresql httpd
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Description: Foo init script
        ### END INIT INFO

    And then ran chkconfig --add foo. Result: /etc/rc5.d/S86foo is created, as expected. (The other runlevels are also as expected.) I repeated the exact same experiment on the RHEL machine, and it created /etc/rc5.d/S50foo instead. I can't see anything different between the two that would lead to different results. Both machines have postgresql and httpd starting at the same orders and runlevels.

    Any thoughts? I could just use # chkconfig: 2345 86 50, or manually rename the service symlinks to the correct order, but I'm trying to document an install process for later users, and I want to know how to do it right and understand why it's not working as expected.
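    For what it's worth, the fallback mentioned above is the traditional Red Hat header, which pins the priorities explicitly instead of deriving them from dependencies. A sketch combining both header styles (precedence between the two may differ across chkconfig versions, so verify on the target machine):

        #!/bin/sh
        # chkconfig: 2345 86 50
        # description: Foo init script
        ### BEGIN INIT INFO
        # Provides: foo
        # Required-Start: postgresql httpd
        # Default-Start: 2 3 4 5
        # Default-Stop: 0 1 6
        # Description: Foo init script
        ### END INIT INFO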


  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom Puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running Puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts the function that requires the cloudservers gem:

        package { "rubygems":
          ensure   => installed,
          provider => "gem";
        }

        package { "cloudservers":
          ensure   => installed,
          provider => "gem",
          require  => Package["rubygems"];
        }

        class hosts::us {
          $hosts = get_hosts("us")
          hostentry { $hosts: }
        }

        define hostentry() {
          $parts   = split($name, ',')
          $address = $parts[0]
          $ip      = $parts[1]
          $aliases = $parts[2]
          host { $address:
            ip           => $ip,
            host_aliases => $aliases,
          }
        }

    Is there a way to stop the function being run so early, or at least have its run depend upon the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
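    One common workaround (a sketch; the file path follows the usual plugin layout and the body is elided) is to defer loading the gem until the function body actually executes, so merely parsing the module does not require it:

        # lib/puppet/parser/functions/get_hosts.rb (assumed path)
        module Puppet::Parser::Functions
          newfunction(:get_hosts, :type => :rvalue) do |args|
            require 'cloudservers'  # loaded only when the function runs
            # ... query Rackspace and return the list of host strings ...
          end
        end

    Note this only avoids load-time failures; the gem must still be present wherever catalogs are compiled.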


  • Mindtouch broke my Apache2 virtual host configuration.

    - by grenade
    I installed MindTouch using the instructions here and it seems to have broken my virtual host configuration. I have several domains running off the same Apache instance, and this was working fine, but now all my domain names resolve to the virtual host where MindTouch was installed. So MindTouch made all my domain names point to the new MindTouch instance. Grrr! I use Debian's default virtual host mechanisms (sites-enabled, etc.).

    Does anyone know what Apache directive MindTouch is using to ruin my vh setup? I've scoured all the conf files and there is nothing obvious in apache2.conf or httpd.conf that would cause the behaviour. Did it create a symlink somewhere that I should destroy? I should add that I uninstalled the MindTouch packages already, but Apache persists in redirecting all domains to the first one mentioned in the sites-enabled folder.

        thini:~# apache2ctl -S
        [Wed Jan 05 13:39:11 2011] [warn] NameVirtualHost *:80 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        *:* www.openancestry.org (/etc/apache2/sites-enabled/openancestry.org:1)
        *:* www.pragmantra.com (/etc/apache2/sites-enabled/pragmantra.com:1)
        *:* services.pragmantra.com (/etc/apache2/sites-enabled/services.pragmantra.com:1)
        *:* www.subversionreports.com (/etc/apache2/sites-enabled/subversionreports.com:1)
        *:* www.thijssen.ch (/etc/apache2/sites-enabled/thijssen.ch:1)
        Syntax OK
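    The warning line is a clue: the vhosts are matching on *:* while name-based matching is declared for *:80, so Apache falls back to the first vhost for everything. A sketch of the usual Debian shape (a guess at the fix, not confirmed against this install):

        # /etc/apache2/ports.conf
        NameVirtualHost *:80
        Listen 80

        # each file in sites-enabled should then open with a matching address
        <VirtualHost *:80>
            ServerName www.openancestry.org
            # ...
        </VirtualHost>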


  • What is a good WordPress theme for long Objective-C code samples [closed]

    - by willc2
    As some of you iPhone developers know, Objective-C can be a verbose language. Long, descriptive variable and method names are the norm. I'm not complaining; it makes code easier to read, and code completion makes it easy to type. But damn! Check out this method name for getting a cell in a table view:

        -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath;

    I have a WordPress blog where I publish my code samples as I'm learning the language. One thing I hate on other blogs is how the code won't fit in a column without a scroll bar or without wrapping around. It really made it hard for me to read and comprehend method names back when I was a super-noob (six months ago). Right now I use the clean-looking Fazyvo 1.0 theme by noonnoo. I love the look of it, but the columns are just too narrow and it doesn't have support for wider ones. I could hand-modify it, but then I'd have to maintain/redo those changes every time I updated it. Instead, I'm looking for a nice theme that has width control built in and looks good at larger font sizes. Can anyone help?

    Note: I use WP-CodeBox for code syntax highlighting.


  • OpenLDAP replication fails, "syncrepl_entry: rid=666 be_modify failed (20)"

    - by Pavel
    I've configured a second host to replicate the main LDAP server via syncrepl in slapd.conf:

        syncrepl rid=666
          provider=ldaps://my-main-server.com
          type=refreshAndPersist
          searchBase="dc=Staff,dc=my-main-server,dc=com"
          filter="(objectClass=*)"
          scope=sub
          schemachecking=off
          bindmethod=simple
          binddn="cn=repadmin,dc=my-main-server,dc=com"
          credentials=mypassword

    When I restart slapd, it writes this to /var/log/debug:

        Jun 11 15:48:33 cluster-mn-04 slapd[29441]: @(#) $OpenLDAP: slapd 2.4.9 (Mar 31 2009 07:18:37) $ ^Ibuildd@yellow:/build/buildd/openldap2.3-2.4.9/debian/build/servers/slapd
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: slapd starting
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: null_callback : error code 0x14
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: syncrepl_entry: rid=666 be_modify failed (20)
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: do_syncrepl: rid=666 quitting

    I've looked into the sources for the return code and found only #define LDAP_TYPE_OR_VALUE_EXISTS 0x14 in include/ldap.h. Anyway, I don't quite get what the error message means. Can you help me debug this problem and figure out why the LDAP replication doesn't work? I've managed to put a "manual" copy into the database via slapcat and slapadd, but I'd like it to sync automatically.

    UPDATE: "Solved" by removing /var/lib/ldap/* and re-importing the database with slapadd.
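    For reference, a sketch of the clean re-import described in the update (the LDIF file name is a placeholder; the paths and ownership assume a Debian-style install):

        /etc/init.d/slapd stop
        rm -rf /var/lib/ldap/*
        slapadd -l provider-dump.ldif        # LDIF produced by slapcat on the provider
        chown -R openldap:openldap /var/lib/ldap
        /etc/init.d/slapd start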


  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background

    I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room, one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in Server Room A goes down, the one in Server Room B becomes active (and vice versa). Ditto for the MySQL servers/instances. In either of those cases, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question

    Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch its master node to the one (if available) that's in the same server room as the active Heartbeat/Corosync node? And, more challengingly, is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories

    Most likely I can get Corosync to run something that'd log in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that both the service and its MySQL service would migrate as one, if possible? Using the DB server that's behind the WAN would still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?
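    In Pacemaker terms, "migrate as one" is usually expressed with colocation and ordering constraints (a sketch in crm shell syntax; the resource names zabbix-server and mysql-master are made up, and this assumes the MySQL master role is itself modelled as a cluster resource):

        # keep the Zabbix service on the same node/site as the MySQL master,
        # and start the database before the application
        crm configure colocation zabbix-with-db inf: zabbix-server mysql-master
        crm configure order db-before-zabbix inf: mysql-master zabbix-server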


  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool, and I wanted to give the new pool the same name, 'testpool'. Basically I did the following:

        zfs send testpool@backup > /tmp/test-dump
        zpool export -f testpool
        zpool create -f testpool newdevice
        zfs receive -F testpool < /tmp/test-dump

    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices is 'newdevice'; they are a separate 3.) Is there any way I can recover the data on those devices? I'm thinking that since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know.

    Edit: more info. I did a 'zpool import' and got this:

        bash-3.00# zpool import
          pool: testpool
            id: 14781458723915654709
         state: ONLINE
        action: The pool can be imported using its name or numeric identifier.
        config:
                testpool   ONLINE
                  c5t8d0   ONLINE
                  c5t9d0   ONLINE
                  c5t10d0  ONLINE

    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
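    That syntax exists: zpool import accepts the numeric id in place of the name, plus an optional new name (standard ZFS usage; the new name here is just an example):

        # import the old pool by id, renaming it to avoid the clash
        zpool import 14781458723915654709 testpool-old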


  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so domain users get Gmail, Calendar, etc. But I also want to be able to send applicative notifications to users outside the domain via email (e.g. "someone commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp, to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.), but I can't send applicative messages that way.

    I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain. But my understanding of this is rather superficial, so someone please correct me if I'm wrong. What sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it this way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a subdomain, so as not to conflict with the bare domain being managed by Google?

    Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either Google's servers or any server with a DNS name ending in mydomain.com?

        "v=spf1 ptr mx:google.com mx:googlemail.com ~all"

    How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's IP address, does it mean that reverse lookup will only allow emails sent from [email protected]?
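    For comparison, the form Google's own documentation usually suggests is an include of their published SPF record, with the local MTA authorized by address rather than by ptr (a sketch; 203.0.113.10 is a placeholder IP for the Ubuntu server):

        v=spf1 include:_spf.google.com ip4:203.0.113.10 ~all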


  • Location directive in nginx configuration

    - by ryan
    I have an nginx server set up to act as a fileserver. I want to set the expires directive on images. This is what part of my config file looks like:

        http {
            include       /etc/nginx/mime.types;
            access_log    /var/log/nginx/access.log;

            sendfile on;
            #tcp_nopush on;

            #keepalive_timeout 0;
            keepalive_timeout 65;
            tcp_nodelay on;

            gzip on;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";

            location ~* \.(ico|jpg|jpeg|png)$ {
                expires 1y;
            }

            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    I get the following error when I reload the config: "Location directive not allowed here". Can someone tell me what the right syntax for this is? Thanks in advance.

    EDIT: Found the answer myself. Added it in a comment. Closing this.
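    The asker's own answer isn't visible here, but the error itself is well defined: location is only valid inside a server (or another location) block, not directly under http. A minimal sketch of the fix:

        http {
            # ... same settings as above ...
            server {
                listen 80;
                location ~* \.(ico|jpg|jpeg|png)$ {
                    expires 1y;
                }
            }
        }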


  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a Puppet module, and the content of init.pp was:

        class install(
          $common_instanceconfig = hiera_hash('common_instanceconfig'),
          $common_instances = hiera('common_instances')
        ) {
          define instances {
            common { $title:
              name       => $title,
              path       => $common_instanceconfig[$title]['path'],
              version    => $common_instanceconfig[$title]['version'],
              files      => $common_instanceconfig[$title]['files'],
              pre        => $common_instanceconfig[$title]['pre'],
              after      => $common_instanceconfig[$title]['after'],
              properties => $common_instanceconfig[$title]['properties'],
              require    => $common_instanceconfig[$title]['require'],
            }
          }
          instances { $common_instances: }
        }

    And the hieradata file was:

        classes:
          - install

        common_instances:
          - common_instance_1
          - common_instance_2

        common_instanceconfig:
          common_instance_1
            path : '/opt/common_instance_1'
            version : 1.0
            files : software-1.bin
            pre : pre_install.sh
            after : after_install.sh
            properties: "properties"
          common_instance_2:
            path : '/opt/common_instance_2'
            version : 2.0
            files : software-2.bin
            pre : pre_install.sh
            after : after_install.sh
            properties: "properties"

    I always get this error message when the Puppet agent runs:

        Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp

    It seems $common_instances is retrieved correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file, and got a correct hash object. Can anybody help?
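    One thing that stands out (assuming the transcription above is faithful): common_instance_1 has no colon, so YAML folds the following lines into a plain string instead of a nested mapping, which would produce exactly this "String is not an hash" error. A corrected sketch of that subtree:

        common_instanceconfig:
          common_instance_1:
            path: '/opt/common_instance_1'
            version: 1.0
            files: software-1.bin
            pre: pre_install.sh
            after: after_install.sh
            properties: "properties"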


  • rc scripts dependencies

    - by chris
    On a Ubuntu 10.04.1 LTS server install, certain services fail to start properly after a reboot. I have a couple of virtual interfaces defined on eth0 in /etc/network/interfaces:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 172.16.5.240
            netmask 255.255.255.0
            gateway 172.16.5.1

        auto eth0:1
        iface eth0:1 inet static
            address 172.16.5.241
            netmask 255.255.255.0
            gateway 172.16.5.1

        auto eth0:2
        iface eth0:2 inet static
            address 172.16.5.242
            netmask 255.255.255.0
            gateway 172.16.5.1

        auto eth0:3
        iface eth0:3 inet static
            address 172.16.5.243
            netmask 255.255.255.0
            gateway 172.16.5.1

    and so on... The services that try to bind to, for example, 172.16.5.243 fail during boot, complaining that there is no such IP address. My questions:

    1) Are the services started in parallel by default? Can I disable that so they run sequentially?

    2) Is there a way to define dependencies between rc scripts? I'm only familiar with defining the order of sequentially started scripts using the numbers in /etc/rc[0-6].d/.

    Any other fix or workaround appreciated.
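    One workaround, if the failing services tolerate it, is to let processes bind to addresses that are not yet configured, via a standard kernel knob (a sketch):

        # /etc/sysctl.conf
        net.ipv4.ip_nonlocal_bind = 1

        # apply immediately without a reboot
        sysctl -w net.ipv4.ip_nonlocal_bind=1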


  • Configure tomcat behind loadbalancer to respond on HTTP and HTTPS

    - by user253530
    I have 2 Tomcat machines behind a load balancer on Amazon EC2. Until now the load balancer was configured to respond only on HTTPS, so to access our services you would go to https://url. Tomcat was configured to listen on 8080, but the connector had additional params telling Tomcat that it is behind a proxy and that it should respond on HTTPS 443. The connector looks like this:

        <Connector scheme="https" secure="true" proxyPort="443"
                   proxyHost="my.domain.name" port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000" redirectPort="8443"
                   useBodyEncodingForURI="true" URIEncoding="UTF-8" />

    What I would like to do is open port 80 on the load balancer and allow traffic on both HTTP and HTTPS. I've configured the load balancer to redirect all HTTP traffic to the Tomcat machines on port 8088. I was thinking I could define a new connector so that all HTTPS traffic goes to 8080 and HTTP to 8088. Unfortunately I did not succeed. Here is my connector:

        <Connector port="8088" protocol="HTTP/1.1"
                   connectionTimeout="20000" redirectPort="8443"
                   useBodyEncodingForURI="true" URIEncoding="UTF-8" />

    Am I missing something? Thanks
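    Compared with the HTTPS connector, the plain-HTTP one carries no proxy settings, so Tomcat will generate self-referencing URLs with the internal port. A sketch of what the second connector might need (a guess, not verified against this setup):

        <Connector port="8088" protocol="HTTP/1.1"
                   scheme="http" proxyHost="my.domain.name" proxyPort="80"
                   connectionTimeout="20000" redirectPort="8443"
                   useBodyEncodingForURI="true" URIEncoding="UTF-8" />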


  • Apache22 on FreeBSD - Starts, does not respond to requests

    - by NuclearDog
    Hey folks! I'm running Apache 2.2.17 with the peruser MPM on FreeBSD 8.2-RC1 on Amazon's EC2 (so it's Xen). It was installed from ports. My problem is that, although Apache is running, listening for, and accepting connections, it doesn't actually respond to any requests or show them in the log at all. If I telnet to the port it's listening on and type out an HTTP request:

        GET / HTTP/1.1
        Host: asdfasdf

    and hit enter a couple of times, it just sits there... Nothing. No response when requesting with a browser either. There doesn't appear to be anything helpful in the error log:

        [Sun Jan 09 16:56:24 2011] [warn] Init: Session Cache is not configured [hint: SSLSessionCache]
        [Sun Jan 09 16:56:25 2011] [notice] Digest: generating secret for digest authentication ...
        [Sun Jan 09 16:56:25 2011] [notice] Digest: done
        [Sun Jan 09 16:56:25 2011] [notice] Apache/2.2.17 (FreeBSD) mod_ssl/2.2.17

    The access log stays empty:

        root:/var/log# wc httpd-access.log
        0 0 0 httpd-access.log

    I've tried with accf_http and accf_data both enabled and disabled, and with both the stock configuration and my customized config. I also tried uninstalling apache22-peruser-mpm and just installing straight apache22... Still no luck. I tried removing all of the LoadModule lines from httpd.conf and re-enabled only the ones necessary to parse the config. Ended up with only the following loaded:

        root:/usr/local/etc/apache22# /usr/local/sbin/apachectl -M
        Loaded Modules:
         core_module (static)
         mpm_peruser_module (static)
         http_module (static)
         so_module (static)
         authz_host_module (shared)
         log_config_module (shared)
         alias_module (shared)
        Syntax OK

    Same results. Apache is definitely what's listening on port 80:

        root:/usr/local/etc/apache22# sockstat -4 | grep httpd
        root httpd 43789 3 tcp4 6 *:80 *:*
        root httpd 43789 4 tcp4 *:* *:*

    And I know it's not a firewall issue, as there is nothing running locally, and connecting from the local box to 127.0.0.1:80 results in the same issue. Does anyone have any idea what's going on? Why would it be doing this? I've exhausted all of my debugging expertise. :/ Thanks for any suggestions!
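    A couple of lower-level probes that may show where requests get stuck (standard FreeBSD tools; the PID comes from the sockstat output above, and xn0 is the usual Xen interface name, so adjust both as needed):

        # trace the listening process's syscalls while a request is made
        truss -f -p 43789
        # confirm the packets actually arrive on the Xen interface
        tcpdump -n -i xn0 port 80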


  • WAMP server won't run with PHP 5.3.4 but will with PHP 5.2.11

    - by Ben Williams
    I have a 64-bit Windows 7 Professional machine. I'm running WampServer Version 2.1 with Apache 2.2.4. It was installed on a clean machine and I'm using the default ini/conf files as they come. Wamp is installed in C:\wamp\, with PHP 5.2 at C:\wamp\bin\php\php5.2.11 and PHP 5.3 at C:\wamp\bin\php\php5.3.4. Both folders have the same permissions.

    When I run WAMP with 5.2.11 picked, it starts fine. When I run it with 5.3.4 picked, there are no errors in the Apache or PHP error logs, but I get this in my system application error logs:

        The Apache service named reported the following error:
        httpd.exe: Syntax error on line 115 of C:/wamp/bin/apache/apache2.2.4/conf/httpd.conf:
        Cannot load C:/wamp/bin/php/php5.3.4/php5apache2_2.dll into server:
        The Apache service named is not a valid Win32 application.

    5.2.11 calls C:/wamp/bin/php/php5.2.11/php5apache2_2.dll and that doesn't throw an error. What am I doing wrong?
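    "Not a valid Win32 application" for a loadable DLL usually points to an architecture or compiler mismatch between httpd.exe and the PHP module (e.g. a 32-bit DLL in a 64-bit Apache, or a VC9 build against a VC6 Apache). One way to compare the two binaries, sketched with the dumpbin tool (assumes Visual Studio is installed):

        dumpbin /headers C:\wamp\bin\apache\apache2.2.4\bin\httpd.exe | findstr machine
        dumpbin /headers C:\wamp\bin\php\php5.3.4\php5apache2_2.dll | findstr machine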


  • lighttpd with multiple IPs, each with a UCC certificate and many hostnames

    - by Dave
    I'd like to get lighttpd working with UCC certificates, but I can't seem to figure out the correct syntax. Essentially, for each IP address I have one UCC certificate and a bunch of hostnames:

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

            $HTTP["host"] =~ "mywebsite.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }

    The above code works fine for one hostname, but as soon as I try to set up another hostname (note the same SSL cert):

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

            $HTTP["host"] =~ "anotherwebsite.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }

    ...I get this error:

        Duplicate config variable in conditional 6 global/SERVERsocket==10.0.0.1:443: ssl.engine

    Is there any way I can add a conditional so that ssl.engine is enabled only if it is not already enabled? Or do I have to put all my $HTTP["host"]s inside the same $SERVER["socket"] block (which will make config file management more difficult for me), or is there some entirely different way to do it? This has to be repeated for multiple IPs too (so I'll have a bunch of $SERVER["socket"] == "10.0.0.2:443" blocks, etc.), each with one UCC cert and many hostnames. Am I going about this the wrong way entirely? My goal is to conserve IP addresses when I have many websites that are related and can share an SSL certificate, but each still needs its own SSL-accessible version from the appropriate hostname (instead of a single secure.mywebsite.com).
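    One structure that avoids the duplicate-variable error is to declare the ssl.* settings once per socket and nest all the host conditionals inside that one block (a sketch; nested conditionals are valid lighttpd syntax):

        $SERVER["socket"] == "10.0.0.1:443" {
            ssl.engine  = "enable"
            ssl.ca-file = "/etc/ssl/certs/the.ca.cert.pem"
            ssl.pemfile = "/etc/ssl/private/websitegroup1.com.pem"

            $HTTP["host"] =~ "mywebsite.com" {
                server.document-root = "/var/www/mywebsite.com/htdocs"
            }
            $HTTP["host"] =~ "anotherwebsite.com" {
                server.document-root = "/var/www/anotherwebsite.com/htdocs"
            }
        }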


  • Translating IPTables rule to UFW

    - by Dario Fumagalli
    We are using an Ubuntu 12.04 x64 LTS VPS. The firewall being used is UFW. I have set up a Varnish + LEMP stack, along with other things, including an Openswan IPsec VPN from our office to the VPS data center. A second in-house Ubuntu box is to act as MySQL slave and fetch data from the VPS through the VPN. The master's ppp0 is seen as 10.1.2.1 from the slave; they ping, etc. I have done the various required tasks, but I can't get the client (slave) MySQL (nor telnet 10.1.2.1 3306) to access the master through the VPN unless I issue this fairly obvious iptables command:

        iptables -A INPUT -s 10.1.2.0/24 -p tcp --dport 3306 -j ACCEPT

    I willingly forced the accepted input to come from the last octet. With this rule everything works just fine! However, I want to translate this command to UFW syntax so as to keep everything in one place. Now, I admit being inexperienced with UFW. I prepared rules like:

        ufw allow proto tcp from 10.1.2.0/24 port mysql

    and 2-3 variations, involving specifying 3306 instead of mysql, specifying a target IP (MySQL's my.cnf at the moment is configured as 0.0.0.0), and similar, but I just don't seem to be able to replicate the simple iptables rule in a functional way. Could anyone kindly give me a suggestion that does not involve dumping UFW? Thanks in advance.
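    One detail worth noting: in UFW syntax, a port clause attached to from matches the source port, not the destination; the destination port needs a to clause. Two variants in standard ufw syntax (the interface name in the second is a guess based on the VPN setup described):

        ufw allow from 10.1.2.0/24 to any port 3306 proto tcp
        # or, pinned to the VPN-facing interface
        ufw allow in on ppp0 from 10.1.2.0/24 to any port 3306 proto tcp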


  • What steps should I take to debug this non-starting hvm virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless and I do all of my VM management via virsh from the command line. I have two HVM domUs:

        - A web server running CentOS 5.4
        - A mail server running Gentoo

    Both VMs are backed by LVs on the dom0 but do not use LVM in the domU. Both have virtually identical libvirt configurations (differing by expected things like name, UUID, NIC MAC, VNC port, etc.).

    The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By 'not start' I mean it appears to be running but it does not use any CPU cycles, does not bring up a graphical console, and does not respond on the network. The WSdomU is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally.

    Here are the steps I have taken so far that did not solve the problem:

        - Reboot the dom0 to see if things come up on their own
        - Check xen dmesg on dom0
        - Check xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
        - Attempt to connect to the WSdomU's graphical (VNC) console from the dom0
        - Shut down the mail server domU and attempt to start the WSdomU
        - Check the SELinux labels on the backing LVs (they're the same)
        - Set SELinux to permissive and attempt to start the WSdomU
        - Use virsh edit to try tweaking the WSdomU config
        - virsh undefine, reboot, virsh define the WSdomU config
        - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configuration requested in the comments.
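    A few lower-level probes beyond virsh that sometimes reveal more for a "no state" HVM domain (standard Xen tooling of that era; WSdomU stands in for the real domain name):

        # attach to the domU's text console, if one is configured
        xm console WSdomU
        # dump the full domain record and per-vcpu state
        xm list -l WSdomU
        xm vcpu-list WSdomU
        # the qemu-dm log for HVM domains often contains the real error
        cat /var/log/xen/qemu-dm-WSdomU.log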


  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could share sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path). I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set the session.save_path to point to localhost:

        session.save_path="tcp://localhost:11211"

    and on the slave webserver I set the session.save_path to point to the master:

        session.save_path="tcp://192.168.0.1:11211"

    Job done, I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

        Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance? i.e. save network traffic 50% of the time. Or is this technique only for failover (e.g. when one memcache daemon is unreachable)?

    Note: I'm not really asking specifically about memcache replication; it's more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in all the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol...

    Assume: no sticky sessions, round-robin load balancing, LAMP servers.
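    If the concern is loss on daemon restart rather than locality, the pecl memcache 3.x extension can mirror session writes across every listed server (a sketch; check memcache.session_redundancy against the installed extension version, as the exact value semantics have varied between releases):

        session.save_handler = memcache
        session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"
        ; mirror each session write to both servers
        memcache.session_redundancy = 2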

