Search Results

Search found 6079 results on 244 pages for 'define'.

Page 169 of 244

  • Nagios command not transmitting all arguments

    - by markus
    I'm using the following service definition to monitor our Postgres DB from Nagios:

        define service{
                use                     test-service    ; Name of servi$
                host_name               DEMOCGN002
                service_description     Postgres State
                check_command           check_nrpe!check_pgsql!192.168.1.135!test!test!test
                notifications_enabled   1
                }

    On the remote machine I've configured the command:

        command[check_pgsql]=/usr/lib/nagios/plugins/check_pgsql -H $ARG1$ -d $ARG2$ -l $ARG3$ -p $ARG4$

    In the syslog I can see that the command is executed, but only one argument is transmitted:

        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Running command: /usr/lib/nagios/plugins/check_pgsql -H 192.168.1.134 -d -l -p
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Command completed with return code 3 and output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Return Code: 3, Output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]

    Why are arguments 2, 3 and 4 missing?
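
    A hedged sketch of the usual fix (the commands.cfg path and the check_nrpe command definition below are assumptions to adapt): check_nrpe only forwards the arguments that the Nagios-side command definition hands it via -a, and the agent's nrpe.cfg must allow arguments at all.

        # commands.cfg on the Nagios server: pass every $ARGn$ on to the agent
        define command{
                command_name    check_nrpe
                command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$ $ARG3$ $ARG4$ $ARG5$
                }

        # nrpe.cfg on DEMOCGN002 (NRPE must also be built with --enable-command-args)
        dont_blame_nrpe=1

        # test from the Nagios server without waiting for the scheduler:
        /usr/lib/nagios/plugins/check_nrpe -H DEMOCGN002 -c check_pgsql -a 192.168.1.135 test test test

    If the existing check_nrpe definition only passes $ARG2$, the -d/-l/-p values never reach the agent, which matches the syslog output above.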

    Read the article

  • Can't back up to a NAS drive as an offline scheduled task

    - by imageng
    I have seen this issue discussed in several forums, including this one, but could not find a solution. On MS Server 2003 I configured a backup task; the backup target is a NAS disk (Seagate BlackArmor NAS 110). The backup task works well as a scheduled task or when run directly, as long as I am logged on. It does not work when the user (in this case Administrator) is logged off. I have already tried the following:

        1) Addressing the target as a mapped network drive (Y:location..)
        2) Using a UNC path instead
        3) Making the NAS a domain member (its admin software allows it to join a domain)

    The result log message for 1 and 2 is: "The operation was not performed because the specified media cannot be found." The result log for 3 is an empty file. The scheduled task "Run" command is:

        C:\WINDOWS\system32\ntbackup.exe backup "@C:\Documents and Settings\Administrator\Local Settings\Application Data\Microsoft\Windows NT\NTBackup\data\de-board.bks" /a /d "Set created 2/14/2010 at 5:10 PM" /v:yes /r:no /rs:no /hc:off /m incremental /j "de-board" /l:s /f "\\10.0.0.8\public\Backups\IBMServer\de-board.bkf"

    10.0.0.8 is the static IP of the NAS. "Run only if logged on" is NOT checked. The Administrator user's password is set. It is obvious that there is no access to the NAS when the user is logged out. Do you have any idea how I can solve it? Thanks

    Read the article

  • AWS document on number of databases allowed on an Amazon RDS instance

    - by user35042
    At the Amazon RDS FAQ there is the question "What is a database instance (DB Instance)?". The entire answer (as of mid-June 2012) is:

        You can think of a DB Instance as a database environment in the cloud with the compute and storage resources you specify. You can create and delete DB Instances, define/refine infrastructure attributes of your DB Instance(s), and control access and security via the AWS Management Console, Amazon RDS APIs, and Command Line Tools. Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas can be created on a given DB Instance.

    I interpret the last part of that quote, "Multiple MySQL databases or SQL Server databases (up to 30) or Oracle database schemas", to mean that you can have an "unlimited" number of databases on an RDS MySQL or Oracle instance but only 30 databases on an MS SQL Server instance ("unlimited" meaning not limited by the RDS infrastructure itself). This was asked in the Stack Overflow question "Does Amazon RDS support multiple databases per instance?", but the answer there quoted an older version of the FAQ. What I am looking for is an Amazon document that clarifies this question, or else someone with Amazon RDS experience who can attest to what the situation actually is.

    Read the article

  • Workstations cannot see new MS Server 2008 domain, but can access DHCP. (solved)

    - by Radix
    The XP Pro workstations do not see the new replacement domain upon boot; they only see their cached entry for the old (Server 2003) domain controller. The old server is not connected to the network. I have DHCP working with the same scope as the old server. In my before-asking search for a solution I came across the following two articles, and I recall doing things as they suggest:

        http://www.windowsreference.com/windows-server-2008/how-to-setup-dhcp-server-in-windows-server-2008-step-by-step-guide/
        http://www.windowsreference.com/windows-server-2008/step-by-step-guide-for-windows-server-2008-domain-controller-and-dns-server-setup/

    The only possible issue: I was under the impression that the domain NetBIOS name needed to match the DC's NetBIOS name. The DC's NetBIOS name is city01, while the domain's FQDN is city.domain.org (I think this is mistaken and should have been just domain.org). But the second link led me to a post which I believe answers my question. I did as it instructed by opening Local Area Connection Properties, selecting TCP/IPv4, and setting the sole preferred DNS server to the local host's static IP (10.10.1.1). Search for "Your problems should clear up" in the post I'm referencing:

        http://forums.techarena.in/active-directory/1032797.htm

    Have I misunderstood their instructions? I am hoping to reach the point where I can define users and user groups. Also, does TechNet have a single theoretical overview document I could read? I really don't like treating computers as magic. I will be watching this closely and will quickly answer any questions. If I've left anything out, it is because I did not know it was needed. PS: I am loath to ask obviously basic questions, but I am tired and wish to fix this before tomorrow. Also, this is my first server installation; thank you for your help.

    Read the article

  • Bind a key to a commandline command in Mac OS X?

    - by Stefan Lasiewski
    I have a Mac PowerBook running Leopard (10.5.8). Does Leopard provide an easy way to bind keys to commands which are typically run on the command line? For example, I can open up Terminal.app and run the command

        /System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine

    which will activate the screensaver and lock my screen. What if I want to bind 'Apple-key L' to this command and execute it globally, regardless of which application is in use at the moment? Can I do this, or can I only run ScreenSaverEngine from a Terminal window? I tried to set up global keyboard shortcuts, but it seems this won't allow me to bind a key to an arbitrary shell command:

        Note: You can create keyboard shortcuts only for existing menu commands. You cannot define keyboard shortcuts for general purpose tasks such as opening an application or switching between applications.

    I tried to set up an application keyboard shortcut, but commands like ScreenSaverEngine don't seem to count as an application. Note that this screensaver/lock-screen command is just one example; I have come across other nifty commands which I might want to bind to a key combination as well. I can do this in GNOME and Windows (with varying success). How about with Leopard? Should I be looking at doing this with AppleScript? (I haven't used that since the HyperCard days ...)

    Read the article

  • Create a mailbox in qmail, then forward all incoming messages to Gmail

    - by lorenzo-s
    I needed to let PHP send mail from my webserver to my web app users, so I installed qmail on my Debian server:

        sudo apt-get install qmail

    I also updated the files in /etc/qmail, specifying my domain name, and then ran sudo qmailctl reload and sudo qmailctl restart:

        /etc/qmail/defaultdomain   # Contains 'mydomain.com'
        /etc/qmail/defaulthost     # Contains 'mydomain.com'
        /etc/qmail/me              # Contains 'mail.mydomain.com'
        /etc/qmail/rcpthosts       # Contains 'mydomain.com'
        /etc/qmail/locals          # Contains 'mydomain.com'

    Emails are sent without any problem from my PHP script to any email address, using the standard mail PHP library. Now the problem: if I send mail from PHP using [email protected] as the sender address, I want customers to be able to reply to that address, and ideally I want all mail sent to that address forwarded to my personal Gmail address. At the moment qmail seems to reject any incoming mail with "invalid mailbox name". Here is a complete SMTP session I established with my server:

        me@MYPC:~$ nc mydomain.com 25
        220 ip-XX-XX-XXX-XXX.xxx.xxx.xxx ESMTP
        HELO [email protected]
        250 ip-XX-XX-XXX-XXX.xxx.xxx.xxx
        MAIL FROM:<[email protected]>
        250 ok
        RCPT TO:<[email protected]>
        250 ok
        DATA
        554 sorry, invalid mailbox name(s). (#5.1.1)
        QUIT

    I'm sure I'm missing something related to mailbox or alias creation; in fact I did nothing to define that mailbox anywhere. I tried searching the net and the numerous qmail man pages, but found nothing.
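
    One possible direction, sketched with a hypothetical alias name (noreply) and Gmail address, since the real addresses are redacted above: qmail only accepts mail for local recipients that resolve to a system user or an alias, and a ~alias/.qmail-<name> file both creates that mailbox name and forwards it (the path assumes a standard qmail layout).

        # as root on the qmail host
        echo '&myaccount@gmail.com' > /var/qmail/alias/.qmail-noreply
        chmod 644 /var/qmail/alias/.qmail-noreply

        # re-test the SMTP session; RCPT TO:<noreply@mydomain.com> should now return "250 ok"
        nc mydomain.com 25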

    Read the article

  • mysql command line not working

    - by Sandeepan Nath
    I have MySQL running on my Fedora system. I have XAMPP set up on the system, and the PHP projects present in the webspace are working fine. phpMyAdmin is working fine. Echoing phpinfo() in a PHP script also shows MySQL enabled. But running the MySQL client from the command line,

        mysql -u[username] -p[password]

    gives this:

        bash: mysql: command not found

    How do I fix that? Any pointers? I guess I need to do some pointing (define some path in some file) so that my system knows that MySQL is installed. What exactly do I have to do?

    Additional details: This system was someone else's and he is not available here. PHP/MySQL may have been set up on it already. I just freshly extracted XAMPP for Linux into /opt/lampp/ and put all the above mentioned things (PHP projects and phpMyAdmin) there. After doing that I had a socket problem (phpMyAdmin was not working and showing this):

        #2002 - The server is not responding (or the local MySQL server's socket is not correctly configured)

    I restarted lampp using ./lampp restart but the problem remained. Then after turning on the system today, I started lampp and everything worked just fine. No project issues anymore; only the command-line MySQL client is not working.
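
    A sketch, assuming the stock XAMPP-for-Linux layout: the client binary ships with XAMPP, but /opt/lampp/bin is not on the shell's PATH, so bash never finds it.

        # call it by full path once to confirm it is there
        /opt/lampp/bin/mysql -u username -p

        # then make it permanent for your shell
        echo 'export PATH=$PATH:/opt/lampp/bin' >> ~/.bashrc
        source ~/.bashrc
        mysql -u username -p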

    Read the article

  • OpenLDAP replication fails, "syncrepl_entry: rid=666 be_modify failed (20)"

    - by Pavel
    I've configured a second host to replicate the main LDAP server via syncrepl in slapd.conf:

        syncrepl rid=666
            provider=ldaps://my-main-server.com
            type=refreshAndPersist
            searchBase="dc=Staff,dc=my-main-server,dc=com"
            filter="(objectClass=*)"
            scope=sub
            schemachecking=off
            bindmethod=simple
            binddn="cn=repadmin,dc=my-main-server,dc=com"
            credentials=mypassword

    When I restart slapd, it writes to /var/log/debug:

        Jun 11 15:48:33 cluster-mn-04 slapd[29441]: @(#) $OpenLDAP: slapd 2.4.9 (Mar 31 2009 07:18:37) $ ^Ibuildd@yellow:/build/buildd/openldap2.3-2.4.9/debian/build/servers/slapd
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: slapd starting
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: null_callback : error code 0x14
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: syncrepl_entry: rid=666 be_modify failed (20)
        Jun 11 15:48:34 cluster-mn-04 slapd[29442]: do_syncrepl: rid=666 quitting

    I've looked into the sources for the return code and found only

        #define LDAP_TYPE_OR_VALUE_EXISTS 0x14

    in include/ldap.h. Anyway, I don't quite get what the error message means. Can you help me debug this problem and figure out why the LDAP replication doesn't work? I've managed to put a "manual" copy into the database via slapcat and slapadd, but I'd like it to sync automatically.

    UPDATE: "Solved" by removing /var/lib/ldap/* and re-importing the database with slapadd.
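
    For reference, the steps described in the UPDATE, written out as commands (paths and the openldap user assume a Debian-style slapd install; take a fresh slapcat dump on the provider first):

        /etc/init.d/slapd stop
        rm /var/lib/ldap/*                 # discard the stale consumer database
        slapadd -l provider-dump.ldif      # re-import the provider's slapcat export
        chown -R openldap:openldap /var/lib/ldap
        /etc/init.d/slapd start            # syncrepl then continues from the fresh copy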

    Read the article

  • Loading dependencies for custom puppet functions

    - by Ben Smith
    I have written a custom puppet function, which is working fine, that depends on the cloudservers gem (a Rackspace client library). This is fine if I have pre-installed the gem on a server before running puppet, but it totally breaks if I have not installed the gem, as the function seems to be run during the 'compilation' sweep, well before my package definition is realised. Here's what my .pp looks like, with get_hosts being the function that requires the cloudservers gem:

        package { "rubygems":
            ensure   => installed,
            provider => "gem";
        }

        package { "cloudservers":
            ensure   => installed,
            provider => "gem",
            require  => Package["rubygems"];
        }

        class hosts::us {
            $hosts = get_hosts("us")
            hostentry { $hosts: }
        }

        define hostentry() {
            $parts   = split($name, ',')
            $address = $parts[0]
            $ip      = $parts[1]
            $aliases = $parts[2]
            host { $address: ip => $ip, host_aliases => $aliases }
        }

    Is there a way to stop the function getting run so early, or at least to have its run depend on the library being installed? Alternatively, is there a way I can add dependencies somewhere in the functions folder that will be available to the function?
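
    Worth keeping in mind: custom functions execute on the puppet master during catalog compilation, not on the agent, so the package resources above never get a chance to run first. A minimal workaround sketch is simply to install the gem on the master out of band:

        # on the puppet master, before the next agent run
        sudo gem install cloudservers

    Alternatively, the function itself can require the library lazily inside its body, so compilation fails with a clearer message (or degrades gracefully) when the gem is absent.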

    Read the article

  • rc scripts dependencies

    - by chris
    On a Ubuntu 10.04.1 LTS server install, certain services fail to start properly after a reboot. I have a couple of virtual interfaces defined on eth0 in /etc/network/interfaces:

        # This file describes the network interfaces available on your system
        # and how to activate them. For more information, see interfaces(5).

        # The loopback network interface
        auto lo
        iface lo inet loopback

        # The primary network interface
        auto eth0
        iface eth0 inet static
            address 172.16.5.240
            netmask 255.255.255.0
            gateway 172.16.5.1

        auto eth0:1
        iface eth0:1 inet static
            address 172.16.5.241
            netmask 255.255.255.0
            gateway 172.16.5.1

        auto eth0:2
        iface eth0:2 inet static
            address 172.16.5.242
            netmask 255.255.255.0
            gateway 172.16.5.1

        auto eth0:3
        iface eth0:3 inet static
            address 172.16.5.243
            netmask 255.255.255.0
            gateway 172.16.5.1

    and so on... The services that try to bind to, for example, 172.16.5.243 fail during boot, complaining that there is no such IP address. My questions:

        1) Are the services started in parallel by default? Can I disable that so they run sequentially?
        2) Is there a way to define dependencies between rc scripts? I'm only familiar with defining the order of sequentially started scripts using the numbers in /etc/rc[0-6].d/.

    Any other fix or workaround appreciated.
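
    For context, Ubuntu 10.04 starts most services via Upstart events rather than strictly in sequence, so the S##-number ordering in /etc/rc2.d only governs the remaining SysV scripts. One workaround sketch (the service name is hypothetical): have ifupdown itself (re)start the dependent service once the alias address exists, via a post-up line in /etc/network/interfaces, instead of relying on init ordering.

        auto eth0:3
        iface eth0:3 inet static
            address 172.16.5.243
            netmask 255.255.255.0
            post-up /etc/init.d/my-daemon restart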

    Read the article

  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another. In each server room one of the physical servers will run Zabbix on RHEL and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in server room A goes down, the one in server room B becomes active (and vice versa). Ditto for the MySQL servers/instances. In either of those cases, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch the master node to the one (if available) that is in the same server room as the active Heartbeat/Corosync node? And, more challengingly, is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories: Most likely I can get Corosync to run something that logs in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that the Zabbix service and its MySQL service migrate as one where possible? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?

    Read the article

  • Puppet&Hiera: $variable is not an hash or array when accessing it

    - by txworking
    I wrote a puppet module and the content of init.pp was:

        class install(
            $common_instanceconfig = hiera_hash('common_instanceconfig'),
            $common_instances      = hiera('common_instances')
        ) {

            define instances {
                common { $title:
                    name       => $title,
                    path       => $common_instanceconfig[$title]['path'],
                    version    => $common_instanceconfig[$title]['version'],
                    files      => $common_instanceconfig[$title]['files'],
                    pre        => $common_instanceconfig[$title]['pre'],
                    after      => $common_instanceconfig[$title]['after'],
                    properties => $common_instanceconfig[$title]['properties'],
                    require    => $common_instanceconfig[$title]['require'],
                }
            }

            instances { $common_instances: }
        }

    And the hieradata file was:

        classes:
          - install

        common_instances:
          - common_instance_1
          - common_instance_2

        common_instanceconfig:
          common_instance_1
            path : '/opt/common_instance_1'
            version : 1.0
            files : software-1.bin
            pre : pre_install.sh
            after : after_install.sh
            properties: "properties"
          common_instance_2:
            path : '/opt/common_instance_2'
            version : 2.0
            files : software-2.bin
            pre : pre_install.sh
            after : after_install.sh
            properties: "properties"

    I always get this error message when the puppet agent runs:

        Error: common_instanceconfig String is not an hash or array when accessing it with common_instance_1 at /etc/puppet/modules/install/manifests/init.pp:16 on node puppet.agent1.tmp

    It seems $common_instances is retrieved correctly, but $common_instanceconfig is always treated as a string. I used YAML.load_file to load the hieradata file and got a correct hash object. Can anybody help?

    Read the article

  • Sending mail through local MTA while domain MX records point to Google Apps

    - by Assaf
    My domain's email is managed by Google Apps, so domain users get Gmail, Calendar, etc. But I also want to be able to send application notifications to users outside the domain via email (e.g. "someone commented on your post", and so on). However, if I try to send email through code I get blocked by Gmail after a few emails. I send marketing email through MailChimp to minimize the risk of appearing as spam to my users (one-click unsubscribe, etc.), but I can't send application messages that way. I want to install a local MTA (my server runs Ubuntu), but I'm not sure what anti-spam measures I need to implement so that receiving MTAs don't think it's a spam server. What's stopping anyone from setting up a mail server and sending emails using my domain name? AFAIK it's the DNS records that show the MTA's address actually belongs to the domain, but my understanding of this is rather superficial, so someone please correct me if I'm wrong. What sort of DNS configuration do I need to put in place so that I don't get blacklisted (assuming I don't actually spam anyone)? The MX records already point to Google, and I'd like to keep it that way. So do I just need to define an A record for my internal mail server? Should it show email as coming from a sub-domain, so as not to conflict with the bare domain being managed by Google?

    Edit: Does the following SPF record make sense if I want email from my domain name to be sent by either Google's servers or any server with a DNS name ending in mydomain.com?

        "v=spf1 ptr mx:google.com mx:googlemail.com ~all"

    How should I set up reverse DNS for my server? If I have an A record that points mailsender.mydomain.com to my MTA's IP address, does it mean that reverse lookup will only allow emails sent from [email protected]?
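
    On the SPF question specifically, a hedged sketch of the zone entries (hostname and IP are placeholders): Google publishes its sending ranges under the include:_spf.google.com mechanism, which is generally preferred over mx:google.com, and the box that sends directly just needs its own a: (or ip4:) mechanism plus matching forward and reverse DNS.

        ; zone file sketch
        mydomain.com.            IN TXT  "v=spf1 include:_spf.google.com a:mailsender.mydomain.com ~all"
        mailsender.mydomain.com. IN A    203.0.113.10

        # check what receiving MTAs will see
        dig +short TXT mydomain.com
        dig +short -x 203.0.113.10    # PTR should resolve back to mailsender.mydomain.com

    Note that the PTR record itself is usually delegated to whoever controls the IP block (the hosting provider) rather than set in your own zone.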

    Read the article

  • What steps should I take to debug this non-starting hvm virtual machine?

    - by Ophidian
    I have a dom0 machine running CentOS 5.4 with all the latest updates, using Xen as my hypervisor. I am using Xen in part because this machine was set up prior to KVM being included in RHEL, and in part because KVM's network bridging configuration is not nearly as simple as Xen's. The dom0 machine is headless and I do all of my VM management via virsh from the command line. I have two HVM domUs:

        - A web server running CentOS 5.4
        - A mail server running Gentoo

    Both VMs are backed by LVs on the dom0 but do not use LVM inside the domU. Both have virtually identical libvirt configurations (differing by expected things like name, UUID, NIC MAC, VNC port, etc.). The web server domU (WSdomU hereafter) does not start since applying the most recent kernel update (kernel-xen-2.6.18-164.15.1.el5.x86_64 and kernel-2.6.18-164.15.1.el5.x86_64 for the dom0 and WSdomU respectively). By 'not start' I mean it appears to be running but it does not use any CPU cycles, does not bring up a graphical console, and does not respond on the network. The WSdomU is listed as "no state" rather than the normal "running" or "blocked" in xentop. The mail server domU starts fine and functions normally. Here are the steps I have taken so far that did not solve the problem:

        - Reboot the dom0 to see if things come up on their own
        - Check xen dmesg on the dom0
        - Check the xend logs (a cursory viewing did not show anything blatant; specific suggestions of things to look for would be appreciated)
        - Attempt to connect to the WSdomU's graphical (VNC) console from the dom0
        - Shut down the mail server domU and attempt to start the WSdomU
        - Check the SELinux labels on the backing LVs (they're the same)
        - Set SELinux to permissive and attempt to start the WSdomU
        - Use virsh edit to try tweaking the WSdomU config
        - virsh undefine, reboot, virsh define the WSdomU config
        - dd the WSdomU LV to an .img file, copy it to my Fedora desktop and run it under KVM (works fine)

    What steps should I take next to debug this? I will edit in any additional configuration requested in the comments.
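
    A few additional places worth looking, sketched as commands (the domain name WSdomU is taken from the description above and the log paths assume the stock CentOS 5 Xen layout): HVM guests get a per-domain qemu-dm device-model log, and comparing the dumped XML against the working guest often narrows things down.

        xm list                                       # state as the hypervisor sees it
        xm dmesg | tail -n 50                         # hypervisor messages from the failed start
        tail -n 100 /var/log/xen/xend.log
        tail -n 100 /var/log/xen/qemu-dm-WSdomU.log   # device-model log for the HVM guest
        virsh dumpxml WSdomU > /tmp/WSdomU.xml        # diff against the working mail server domU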

    Read the article

  • Configure tomcat behind loadbalancer to respond on HTTP and HTTPS

    - by user253530
    I have two Tomcat machines behind a load balancer on Amazon EC2. Until now the load balancer was configured to respond only on HTTPS, so to access our services you would go to https://url. Tomcat was configured to listen on 8080, but the connector had additional parameters telling Tomcat that it is behind a proxy and that it should respond as HTTPS on 443. The connector looks like this:

        <Connector scheme="https" secure="true" proxyPort="443" proxyHost="my.domain.name"
                   port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"
                   useBodyEncodingForURI="true" URIEncoding="UTF-8" />

    What I would like to do is open port 80 on the load balancer and basically allow traffic on both HTTP and HTTPS. I've configured the load balancer to send all HTTP traffic to the Tomcat machines on port 8088. I was thinking that I could define a new connector so that all HTTPS traffic goes to 8080 and HTTP to 8088. Unfortunately I did not succeed. Here is my connector:

        <Connector port="8088" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443"
                   useBodyEncodingForURI="true" URIEncoding="UTF-8" />

    Am I missing something? Thanks

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish session sharing between the new servers by changing only three lines in the php.ini file (session.save_handler and session.save_path). I replaced:

        session.save_handler = files

    with:

        session.save_handler = memcache

    Then on the master webserver I set session.save_path to point to localhost:

        session.save_path="tcp://localhost:11211"

    and on the slave webserver I set session.save_path to point to the master:

        session.save_path="tcp://192.168.0.1:11211"

    Job done; I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their session will be fetched across the network from the master. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example:

        Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
        Slave:  session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"

    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
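
    For what it's worth, the pecl/memcache session handler treats a comma-separated save_path as one pool and hashes each session ID to a server in that pool; it does not do a "local first" lookup, so both machines should list the same servers in the same order. Redundancy, if wanted, comes from the client's own ini switches rather than from ordering. A sketch of the php.ini lines (the redundancy settings assume pecl/memcache 3.x):

        session.save_handler = memcache
        session.save_path    = "tcp://192.168.0.1:11211,tcp://192.168.0.2:11211"

        ; optional: write each session to 2 servers so one daemon can die
        memcache.allow_failover     = 1
        memcache.session_redundancy = 2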

    Read the article

  • Host Primary Domain from a subfolder

    - by TandemAdam
    I am having a problem making a subdirectory act as the public_html for my main domain, with a solution that also works for that domain's subdirectories. My hosting allows me to host multiple sites, which are all working great. I have set up a subfolder under my ~/public_html/ directory called /domains/, where I create a folder for each separate website, e.g.:

        public_html
            domains
                websiteone
                websitetwo
                websitethree
                ...

    This keeps my sites nice and tidy. The only issue was getting my "main domain" to fit into this system. It seems my main domain is somehow tied to my account (or to Apache, or something), so I can't change the "document root" of this domain. I can define the document roots for any other domains ("Addon Domains") that I add in cPanel, no problem. But the main domain is different. I was told to edit the .htaccess file to redirect the main domain to a subdirectory. This seemed to work great, and my site works fine on its home/index page. The problem I'm having is that if I navigate my browser to, say, the images folder (just for example) of my main site, like this:

        www.yourmaindomain.com/images/

    then it seems to ignore the redirect and shows the entire server directory in the URL, like this:

        www.yourmaindomain.com/domains/yourmaindomain/images/

    It still actually shows the correct "Index of /images" page and lists all my images. Here is an example of the .htaccess file that I am using:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteCond %{REQUEST_URI} !^/domains/yourmaindomain/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /domains/yourmaindomain/$1

        RewriteCond %{HTTP_HOST} ^(www.)?yourmaindomain.com$
        RewriteRule ^(/)?$ domains/yourmaindomain/index.html [L]

    Does this .htaccess file look correct? I just need to make it so my main domain behaves like an addon domain, and its subdirectories adhere to the redirect rules.

    Read the article

  • Prevent nginx from redirecting traffic from https to http when used as a reverse proxy

    - by Chris Pratt
    Here's my abbreviated nginx vhost conf: upstream gunicorn { server 127.0.0.1:8080 fail_timeout=0; } server { listen 80; listen 443 ssl; server_name domain.com ~^.+\.domain\.com$; location / { try_files $uri @proxy; } location @proxy { proxy_pass_header Server; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 10; proxy_read_timeout 120; proxy_pass http://gunicorn; } } The same server needs to serve both HTTP and HTTPS, however, when the upstream issues a redirect (for instance, after a form is processed), all HTTPS requests are redirected to HTTP. The only thing I have found that will correct this issue is changing proxy_redirect to the following: proxy_redirect http:// https://; That works wonderfully for requests coming from HTTPS, but if a redirect is issued over HTTP it also redirects that to HTTPS, which is a problem. Out of desperation, I tried: if ($scheme = 'https') { proxy_redirect http:// https://; } But nginx complains that proxy_redirect isn't allowed here. The only other option I can think of is to define the two servers separately and set proxy_redirect only on the SSL one, but then I would have duplicate the rest of the conf (there's a lot in the server directive that I omitted for simplicity sake). I know I could also use an include directive to factor out the redundancy, but I really want to keep just one conf file without any dependencies. So, first, is there something I'm missing that will negate the problem entirely? Or, second, if not, is there any other way (besides including an external file) to factor out the redundant config information so that I can separate out the HTTP and HTTPS versions of the server config?

    Read the article

  • Making one of the folders default in Apache

    - by OmerO
    Hello, The file & directory structure of my website is as follows: /Library/WebServer/mysite/joomla .. /Library/WebServer/mysite/wiki .. /Library/WebServer/mysite/forum .. /Library/WebServer/mysite/index.php As you see, there are various applications each residing in separate folders. Now, in order to define this structure, I have made this entry in Apache http-vhosts.config file: ServerName mysite.com DocumentRoot "/Library/WebServer/mysite" ** And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good but I want this specific functionality: When someone visits mysite, he/she should automatically directed to: /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php) I don't want to achieve that functionality by putting a redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm because that causes time delays (because of the redirection, of course) But in this case, the only proper way of achieving it seems to set DocumentRoot this way: DocumentRoot "/Library/WebServer/mysite/joomla" But when I set it that way, then the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around it, I put directives like: Alias /wiki /Library/WebServer/mysite/wiki .. Alias /forum /library/WebServer/mysite/forum and it did work actually the way I wanted. But... I still cannot use it that way because in this case I just couldn't manage to make the wiki use Short URLs (as described in link text) So, I have to set the DocumentRoot back to /Library/WebServer/mysite and shoud be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :) Can I do it in Apache? Is there any other way you might suggest? Thanks.

    Read the article

  • Mod_rewrite to eliminate query strings

    - by Greg Frommer
    Hi everyone, I have been working on this for a while but am not finding exactly what I am looking for. I am writing a web app to let my users create and publish pieces of HTML content in a domain and URL folder structure of their choosing. All of the content and requested URL structures are stored in a database. I have all of the code in my index.php (in the root folder) to access the database content, and based on the server name (and hopefully the folder structure) it will pick out the proper content from the DB and display it to the end user's browser. So my situation looks like this:

        www.test.com/index.php?id=123234345

    ...will display the proper page, but I want my users to be able to define a unique "page name" instead of using the numeric index (and I also want to hide the /index.php part), so what I would like the end user to see is:

        www.test.com/arbitrary-unique-keyword/keyword2/keyword3

    which will invoke the index.php page in the root folder. Then I will use the PHP $_SERVER['PATH_INFO'] variable to match the requested folder structure up with the proper content in my database and display that. All the material I have found so far expects me to hard-code parts of the folder structure into the rules... but I think I want something simpler (perhaps). So the question in a nutshell: how do I use mod_rewrite to pass all "non-existent" folder paths through to a main index.php residing in the root folder? (For all paths that DO exist, like calls to images... I want those to succeed and not be directed to index.php, obviously.) Thanks everyone; please let me know if I can clear anything up.
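
    A sketch of the usual front-controller rules for this, placed in the root .htaccess: anything that maps to a real file or directory is served as-is, and everything else is rewritten to index.php with the requested path appended so PHP sees it in $_SERVER['PATH_INFO'].

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /index.php/$1 [L]

    Depending on the Apache configuration, AcceptPathInfo On may also be needed for the index.php/... form to populate PATH_INFO.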

    Read the article

  • Backup script to FTP with timed subfolders

    - by Frederik Nielsen
    I want to make a backup script, that makes a .tar.gz of a folder I define, say fx /root/tekkit/world This .tar.gz file should then be uploaded to a FTP server, named by the time it was uploaded, for example: 07-10-2012-13-00.tar.gz How should such backup script be written? I already figured out the .tar.gz part - just need the naming and the uploading to FTP. I know that FTP is not the most secure way to do it, but as it is non-sensitive data, and FTP is the only option I have, it will do. Edit: I ended up with this script: #!/bin/bash # have some path predefined for backup unless one is provided as first argument BACKUP_DIR="/root/tekkit/world/" TMP_DIR="/tmp/tekkitbackup/" FINISH_DIR="/tmp/tekkitfinished/" # construct name for our archive TIME=$(date +%d-%m-%Y-%H-%M) if [ $1 ]; then BACKUP_DIR="$1" fi echo "Backing up dir ... $BACKUP_DIR" mkdir $TMP_DIR cp -R $BACKUP_DIR $TMP_DIR cd $FINISH_DIR tar czvfp tekkit-$TIME.tar.gz -C $TMP_DIR . # create upload script for lftp cat <<EOF> lftp.upload.script open server user user password lcd $FINISH_DIR mput tekkit-$TIME.tar.gz exit EOF # start backup using lftp and script we created; if all went well print simple message and clean up lftp -f lftp.upload.script && ( echo Upload successfull ; rm lftp.upload.script )

    Read the article

  • How to install Predis

    - by user782860
    I am trying to install Predis, but keep getting a 500 server error. Here is what I have done:

        1. I have Apache and PHP installed on Ubuntu Natty.
        2. I used the instructions at http://redis.io/download to download Redis.
        3. I ran the following example to confirm that Redis is working:

            $ src/redis-cli
            redis> set foo bar
            OK
            redis> get foo
            "bar"

        4. I have a local website at /home/user/Dropbox/documents/www/mywebsite.com/index.php and have confirmed that PHP is working.
        5. I downloaded the .zip version of Predis (https://github.com/nrk/predis, version v0.6.6-PHP5.2) and unzipped the contents to /home/user/Dropbox/documents/www/mywebsite.com/. So now Predis is here: /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/
        6. I opened the /home/user/Dropbox/documents/www/mywebsite.com/index.php page. Here is its contents:

            <?
            define("PREDIS_BASE_PATH", "nrk-predis-3bf1230/lib/");

            spl_autoload_register(function($class) {
                $file = PREDIS_BASE_PATH.strtr($class, '\\', '/').'.php';
                if (file_exists($file)) {
                    require $file;
                    return true;
                }
            });

            $redis = new Predis_Client();
            $redis->set('foo', 'bar');
            $value = $redis->get('foo');
            ?>

    I have tried changing $redis = new Predis_Client(); to $redis = new Predis\Client(); and have tried changing PREDIS_BASE_PATH to:

        /nrk-predis-3bf1230/lib
        /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/lib/
        /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/lib

    I have also done a chmod +x on both:

        /home/user/Dropbox/documents/www/mywebsite.com/nrk-predis-3bf1230/
        /home/user/Dropbox/documents/www/mywebsite.com

    Doing all of the above always results in a 500 server error. What am I doing wrong?
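
    Before changing paths, it is worth surfacing the real error behind the 500, since a bare 500 with display_errors off usually hides a parse or fatal error. One thing that stands out: the anonymous function passed to spl_autoload_register requires PHP 5.3, while the downloaded branch (v0.6.6-PHP5.2) suggests the server may be on 5.2, where that line is a parse error. A quick check (the log path assumes a stock Ubuntu Apache install):

        php -v                                         # closures need PHP >= 5.3
        php -l /home/user/Dropbox/documents/www/mywebsite.com/index.php   # lint the file
        tail -n 20 /var/log/apache2/error.log          # the actual error behind the 500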

    Read the article

  • Difference between CurrentClockSpeed and MaxClockSpeed

    - by Ben
    Rationale for why this belongs on Server Fault rather than Stack Overflow: I already have my program which gets the value; I am asking about the value returned and what it means. I have an in-house program which audits our company PCs, and one of the things it checks is the speed of the processor. To do this, it queries the Win32_Processor WMI class and gets the value of CurrentClockSpeed. We were playing with the data today and found an anomaly where some of the speeds were reported incorrectly (for example, CurrentClockSpeed said 1.0 GHz, whereas the CPU name said "Intel(R) Core(TM)2 CPU T5600 @ 1.83GHz" [confirmed it is in fact 1.83 GHz]). I did a bit of digging on the internet and found this blog post which might explain what is going on. My initial thought was to change the program to get the value of MaxClockSpeed instead of CurrentClockSpeed, but Microsoft's documentation doesn't clearly define what it will return. What I mean is: will it return the actual maximum speed (say, if the CPU were overclocked) at which it would not normally be running, or would it return what I expect, which is its maximum speed under normal (not overclocked) conditions?

    Read the article

  • Should I partition my main table with 2 million rows?

    - by domribaut
    Hi, I am a developer and would need some DBA-advices. We are starting to get performance problem with a MSSQL2005 database. The visible effects of the incidents is mainly CPU-hog on the server but operations reported that it was also draining resources from the SAN (not always). the main source of issues is for sure in some application but I am wondering if we should partition some of the main tables anyway in order to relax the I/O pressure. The base is about 60GB in one file. The main table (order) has 2.1 Million rows with a 215 colones (but none is huge). We have an integer as PK so it should be OK to define a partition function. Will we win something with partitioning? will partition indexes buy us something? Here are some more facts about the DB and the table database_name database_size unallocated space My_base 57173.06 MB 79.74 MB reserved data index_size unused 29 444 808 KB 26 577 320 KB 2 845 232 KB 22 256 KB name rows reserved data index_size unused Order 2 097 626 4 403 832 KB 2 756 064 KB 1 646 080 KB 1688 KB Thanks for any advice Dom

    Read the article

  • Applying ACLs to a Dovecot public namespace

    - by larsks
    I have a public namespace define in my dovecot (dovecot-2.0.9) configuration that looks like this: namespace { type = public separator = . prefix = news. location = maildir:/var/spool/news subscriptions = no } I would like to make all the mailboxes in this namespace read-only. I've got the following configuration for the ACL plugin: plugin { acl = vfile:/etc/dovecot/acls:cache_secs=300 } After perusing the documentation, it seemed as if I had a mailfolder /var/spool/news/.foo.bar that I could place the following into /var/spool/news/.foo.bar/dovecot-acl: anyone rl But that doesn't have any affect. I also tried creating a file /usr/local/etc/dovecot/acls/news.foo.bar with the same contents, but that didn't do anything, either. I've turned on mail debugging: mail_debug = yes But the log doesn't produce anything that appears to be relevant to ACL processing. I'm curious to know if anyone has gotten this to work correctly and if so if you could provide some configuration examples. Also, if there's any way to do this that doesn't involve per-mailbox configuration (.e.g, the ability to apply an ACL to news.* or something), that would be awesome. Getting the documented behavior for default ACLs working would be a step in the right direction.

    Read the article
