Search Results

Search found 705 results on 29 pages for 'cfg'.

Page 23/29 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Setting up SSL for phpMyAdmin

    - by Ubuntu User
    I would like to run phpMyAdmin using my SSL certificate. I read that adding the following line to /etc/phpmyadmin/config.inc.php would force it to use SSL, and now it does:

        $cfg['ForceSSL'] = true;

    However, since I did this, I get an error stating "cannot connect to server." A port scan shows my port 443 is closed, for one, but I am connecting via https:// to my secure web-based email admin panel, so that tells me this may not be the issue. Second, I have an SSL certificate I purchased, but I am not sure how to apply it. mydomain.com.crt is sitting on my desktop; how should I be utilizing it? I remember creating a self-signed cert for my web email access. Do I have to do the same for phpMyAdmin? At least that way, since I am the only one who will ever access the DB, it will never expire. Also, phpMyAdmin used to come up as http://mydomain.com/phpmyadmin/, and of course I am now trying to reach https://mydomain.com/phpmyadmin/. I do not currently have any pages on my website that require https://; I may add some in the future, but for now I only want to access phpMyAdmin via SSL. I can use my own self-signed cert, but if this would cause problems with future ecommerce apps under mydomain.com, I would rather use the SSL cert I already purchased. Thank you!
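
    A minimal sketch of how a purchased cert is typically wired into Apache, assuming Apache 2.x; the file locations below are assumptions, and the private key that matches the CSR is needed alongside the .crt:

        # /etc/apache2/sites-available/default-ssl (paths are assumptions)
        Listen 443
        <VirtualHost *:443>
            ServerName mydomain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/mydomain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
            # SSLCertificateChainFile /etc/ssl/certs/ca-bundle.crt  # if the CA supplied one
        </VirtualHost>

    Until something actually listens on 443 (a2enmod ssl, enable an SSL vhost, restart Apache), the port will scan as closed and ForceSSL will only redirect into a dead end.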

    Read the article

  • Getting Grub2 to recognize a Raid 10 boot/root

    - by xenoterracide
    I've been trying to get my RAID to boot from GRUB 2 for about two days now and I don't seem to be getting any closer. The problem appears to be that it doesn't recognize my RAID at all: it doesn't see (md0) etc. I'm not sure why, or how to change this. I'm using mdadm with a 2-device raid10,f2 (essentially a RAID 1), which is currently degraded. I have tried adding the raid and mdraid modules with grub-install, along with others. I've tried several variations on grub-install, such as:

        grub-install --debug --no-floppy --modules="biosdisk part_msdos chain raid mdraid ext2 linux search ata normal" /dev/md0

    I've been searching the net for an answer to whatever I haven't done yet, but no luck. On my other drive, which I plan on removing, the RAID is initialized and mounted fine on boot, but it's not the boot/root for that setup. My grub.cfg isn't recognized by GRUB since it can't read the RAID partition, so I'm not posting that. md0 is not listed in my /boot/grub/device.map.
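
    A diagnostic sketch typed at the GRUB console, assuming a GRUB 2 build whose mdraid module can parse the array's superblock version at all:

        grub> insmod raid
        grub> insmod mdraid
        grub> ls                  # (md0) should appear here if the array was assembled
        grub> set root=(md0)
        grub> ls /                # confirm the filesystem is readable

    If (md0) never shows up after loading the modules, the likely explanation is that the installed GRUB 2's mdraid support of that era simply does not understand raid10 (the far layout in particular); a small non-RAID /boot partition, or 0.90-metadata RAID 1 for /boot only, is the usual workaround.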

    Read the article

  • Fetch videos from sony handycam to linux

    - by bstpierre
    I've got a Sony Handycam DCR-DVD101. When I connect the USB cable to my laptop (Ubuntu 10) it doesn't mount any storage device. If I run usb-devices, I see:

        T:  Bus=02 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#=  6 Spd=480 MxCh= 0
        D:  Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs=  1
        P:  Vendor=054c ProdID=00c1 Rev=01.00
        S:  Manufacturer=SONY
        S:  Product=Storage Device
        C:  #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr=2mA
        I:  If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=05 Prot=50 Driver=usb-storage

    The driver says usb-storage, but I'm not sure how to get the device mounted. Is there a way to make this work? Update: checking dmesg, I see:

        [259072.576559] usb 2-1.1: new high speed USB device using ehci_hcd and address 6
        [259072.687200] usb 2-1.1: configuration #1 chosen from 1 choice
        [259072.836188] Initializing USB Mass Storage driver...
        [259072.836476] scsi5 : SCSI emulation for USB Mass Storage devices
        [259072.836632] usb-storage: device found at 6
        [259072.836636] usb-storage: waiting for device to settle before scanning
        [259072.836660] usbcore: registered new interface driver usb-storage
        [259072.836666] USB Mass Storage support registered.
        [259077.830410] usb-storage: device scan complete
        [259077.832343] scsi 5:0:0:0: CD-ROM    SONY    DDX-A1010    R1.0 PQ: 0 ANSI: 0
        [259077.888167] sr1: scsi3-mmc drive: 0x/0x pop-up
        [259077.888446] sr 5:0:0:0: Attached scsi CD-ROM sr1
        [259077.888593] sr 5:0:0:0: Attached scsi generic sg2 type 5
        [259080.002079] sr 5:0:0:0: [sr1] Unhandled sense code
        [259080.002085] sr 5:0:0:0: [sr1] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        [259080.002091] sr 5:0:0:0: [sr1] Sense Key : Blank Check [current]
        [259080.002097] sr 5:0:0:0: [sr1] Add. Sense: No additional sense information
        [259080.002104] sr 5:0:0:0: [sr1] CDB: Read(10): 28 00 00 00 00 00 00 00 02 00 00 00
        [259080.002117] end_request: I/O error, dev sr1, sector 0
        [259080.002123] Buffer I/O error on device sr1, logical block 0
        [259080.002128] Buffer I/O error on device sr1, logical block 1

    Those I/O errors don't look good. Is there any hope?
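
    One reading of that log, offered as an assumption: the DCR-DVD101 presents its mini-DVD as an optical drive (sr1), and "Sense Key : Blank Check" is the drive reporting blank or unfinalized media rather than a USB fault. If the disc is first finalized in the camera's own menu, something like the following should then work:

        # mount the camcorder's disc like any optical drive (device name taken from dmesg)
        sudo mkdir -p /mnt/handycam
        sudo mount -t udf,iso9660 /dev/sr1 /mnt/handycam
        ls /mnt/handycam          # the recordings usually sit under VIDEO_TS/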

    Read the article

  • Moodle serves on IP only - will not work with mod_proxy

    - by Jon H
    I'm trying to set up a Moodle server on an Ubuntu box which already serves Plone and Trac via Apache. In my Moodle config I have:

        $CFG->wwwroot = 'http://www.server-name.org/moodle';

    The configuration below works fine for the first two, but when I visit www.server-name.org/moodle I get:

        Incorrect access detected, this server may be accessed only through
        "http://xxx.xxx.xxx.xxx:8888/moodle" address, sorry

    It then forwards to the IP address, where Moodle functions fine. What am I missing to get the server-name approach working correctly? The Apache config follows:

        LoadModule transform_module /usr/lib/apache2/modules/mod_transform.so
        Listen 8080
        Listen 8888
        Include /etc/phpmyadmin/apache.conf

        <VirtualHost xxx.xxx.xxx.xxx:8080>
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            <Location />
                ProxyPass http://127.0.0.1:8082/
                ProxyPassReverse http://127.0.0.1:8082/
            </Location>
        </VirtualHost>

        <VirtualHost xxx.xxx.xxx.xxx:80>
            ServerName www.server-name.org
            ServerAlias server-name.org
            ProxyRequests Off
            FilterDeclare MyStyle RESOURCE
            FilterProvider MyStyle XSLT resp=Content-Type $text/html
            TransformOptions +ApacheFS +HTML
            TransformCache /theme.xsl /home/web/webapps/plone/theme.xsl
            TransformSet /theme.xsl
            FilterChain MyStyle
            ProxyPass /issue-tracker !
            ProxyPass /moodle !
            <Location /issue-tracker/login>
                AuthType Basic
                AuthName "Trac"
                AuthUserFile /home/web/webapps/plone/parts/trac/trac.htpasswd
                Require valid-user
            </Location>
            Alias /moodle /usr/share/moodle/
            <Directory /usr/share/moodle/>
                Options +FollowSymLinks
                AllowOverride None
                order allow,deny
                allow from all
                <IfModule mod_dir.c>
                    DirectoryIndex index.php
                </IfModule>
            </Directory>
        </VirtualHost>
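
    A guess worth checking: Moodle quotes its own idea of wwwroot back in that error message, and the value it quotes is the IP:8888 address, not the setting shown above. That reads as though the config.php the /usr/share/moodle alias actually serves still carries the old value (or a second install directory is in play):

        // in the config.php that Apache really serves: wwwroot must match the
        // public URL exactly; a stale copy saying http://xxx.xxx.xxx.xxx:8888/moodle
        // would produce precisely the observed redirect
        $CFG->wwwroot = 'http://www.server-name.org/moodle';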

    Read the article

  • CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

    - by ewwhite
    Hello, I'm having a difficult time with a remote CentOS 5.5 kickstart installation on an HP ProLiant DL360 G6. This is in an environment where I maintain an internal CentOS yum repository. The kickstart installation and post scripts have been tested and normally work. This hardware is also common in this environment, so I do not believe it is a factor. Unfortunately, I'm having problems with a specific server install. The system is remote to the yum repository, at a distance of 500 miles. They are connected over a private low-latency 100-megabit layer 2 connection (26 ms round trip). I'm mounting the 10 MB CentOS 5 netinstall ISO image via an HP iLO remote console. The initial boot parameters are:

        linux ks=http://yum.abctrading.com/prop.cfg ksdevice=eth0 ip=x.x.x.x dns=x.x.x.x netmask=255.255.255.0 gateway=x.x.x.x

    I'm using the url --url http://ks.abctrading.com/5.5/os/x86_64/ method of installation. This quickly boots into the anaconda installer, pulls the kickstart config and formats the drives. The process eventually halts at the screen below, reading "Starting install process." Switching to the other virtual consoles gives the second image below. The process stalls at this point and cannot proceed with the rest of the installation. Running the same kickstart config locally works just fine. I've tried mounting the boot ISO from the console as well as from the iLO 2 command line pointing to a locally-hosted boot ISO via http. How can I debug this? Are there any options I've overlooked?
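
    Anaconda's virtual consoles are usually the fastest route to the real error; a diagnostic sketch, assuming CentOS 5's standard console layout on the stalled installer:

        # via the iLO console:
        #   Alt+F2  shell prompt        Alt+F3  install log
        #   Alt+F4  kernel messages     Alt+F5  misc output
        # from the Alt+F2 shell, things worth checking:
        cat /tmp/anaconda.log          # the last step anaconda attempted
        ls /mnt/source                 # did the install tree actually get mounted?
        ping ks.abctrading.com         # basic reachability from inside the installer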

    Read the article

  • HAProxy "503 Service Unavailable" for webserver running on a KVM virtual machine

    - by Menda
    I'm setting up a server with KVM (IP 192.168.0.100) and I have created inside of it one virtual machine, using network bridging, at 192.168.0.194. This virtual machine has an nginx instance running, which I can access from the server or from any computer in the internal network just by typing http://192.168.0.194 into the browser. However, when I configure HAProxy on the same server that hosts KVM, its status page always shows the virtual machine as "DOWN". If I go to http://localhost on the server, it should behave the same as going to http://192.168.0.194. My goal is to build a reverse proxy, but I tried this little example and it won't work. What am I doing wrong? This is my config file on the server:

        # /etc/haproxy/haproxy.cfg
        global
            maxconn 4096
            user haproxy
            group haproxy
            daemon

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen ServerStatus *:8081
            mode http
            stats enable
            stats auth haproxy:haproxy

        listen Server *:80
            mode http
            balance roundrobin
            cookie JSESSIONID prefix
            option httpclose
            option forwardfor
            option httpchk HEAD /check.txt HTTP/1.0
            server mv1 192.168.0.194:80 cookie A check

    Thanks.
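
    One plausible culprit, stated as an assumption: with option httpchk HEAD /check.txt, HAProxy marks the backend DOWN unless nginx answers 2xx/3xx for that exact path, so a missing check.txt alone yields a permanent 503 even though the site itself works. A quick way to verify by hand:

        # from the KVM host: does the health-check URL actually succeed?
        curl -I http://192.168.0.194/check.txt
        # if that returns 404, either create the file in the nginx docroot
        # (docroot path below is an assumption)...
        echo ok > /usr/share/nginx/html/check.txt
        # ...or point the check at a path that certainly exists:
        option httpchk HEAD / HTTP/1.0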

    Read the article

  • PhpMyAdmin Missing parameter:

    - by Ali
    Everything was working fine until this moment. I want to create a new database and I'm receiving this error:

        db_create.php: Missing parameter: new_db (FAQ 2.8)

    Also, when I try to export my database, I receive the following errors:

        export.php: Missing parameter: what (FAQ 2.8)
        export.php: Missing parameter: export_type (FAQ 2.8)

    When I looked up FAQ 2.8 from the link suggested in phpMyAdmin 2.8 ("I get 'Missing parameters' errors, what can I do?"), it gives a few points to check:

        - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
        - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
        - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
        - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
        - If you are using Hardened-PHP, you might want to increase request limits.
        - The directory specified in the php.ini directive session.save_path does not exist or is read-only.

    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm using a Mac Xserve running MAMP. I've tried everything, including restarting my server, and nothing helps. Please, if anyone can help, give me some suggestions. Apologies if I posted in the wrong place.

    Read the article

  • Issue with SSL using HAProxy and Nginx

    - by Ben Chiappetta
    I'm building a highly available site using multiple HAProxy load balancers, Nginx web servers, and MySQL servers. The site needs to be able to survive load-balancer or web-server nodes going offline without any interruption of service to visitors. Currently, I have two boxes running HAProxy sharing a virtual IP using keepalived, which forward to two web servers running Nginx, which then tie into two MySQL boxes using MySQL replication and sharing a virtual IP using heartbeat. Everything is working correctly except for SSL traffic over HAProxy. I'm running version 1.5-dev12 with OpenSSL support compiled in. When I try to navigate to the virtual IP for HAProxy over https, I get the message:

        The plain HTTP request was sent to HTTPS port.

    Here's my haproxy.cfg so far, which was mainly assembled from other posts:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            # log 127.0.0.1 local0
            user haproxy
            group haproxy
            daemon
            maxconn 20000

        defaults
            log global
            option dontlognull
            balance leastconn
            clitimeout 60000
            srvtimeout 60000
            contimeout 5000
            retries 3
            option redispatch

        listen front
            bind :80
            bind :443 ssl crt /etc/pki/tls/certs/cert.pem
            mode http
            option http-server-close
            option forwardfor
            reqadd X-Forwarded-Proto:\ https if { is_ssl }
            reqadd X-Proto:\ SSL if { is_ssl }
            server web01 192.168.25.34 check inter 1s
            server web02 192.168.25.32 check inter 1s
            stats enable
            stats uri /stats
            stats realm HAProxy\ Statistics
            stats auth admin:*********

    Any idea why SSL traffic isn't being passed correctly? Also, are there any other changes you would recommend? I still need to configure logging, so don't worry about that section. Thanks in advance for your help.
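
    A reading of that error, offered as an assumption: the server lines carry no port, and in that case HAProxy connects to the backend on whatever port the client used. Traffic arriving on 443 is decrypted and then forwarded, as plain HTTP, to nginx's own 443, which is exactly the message nginx prints. Pinning the backend port should clear it:

        # send everything to the backends' plain-HTTP port regardless of how it arrived
        server web01 192.168.25.34:80 check inter 1s
        server web02 192.168.25.32:80 check inter 1s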

    Read the article

  • Can grub and BCD work simultaneously?

    - by wuputah
    Here's my setup. After I installed a new SSD, I have:

        - Original Windows 7 on sdc1 (to be retired)
        - Copy of Windows 7 on sdb2
        - A Windows system partition on sdb1
        - Ubuntu 12.04 on sda; /boot, and therefore grub, is on sda1

    Grub is in the MBR on sda, and the BIOS is set to boot from it. I prefer not to change this; grub is much preferable as a boot manager. I've run update-grub from Ubuntu and grub seems to be correctly configured, as all options are available: I can boot any of the three Windows partitions and Ubuntu. I also ran the repair tool to get Windows to add both installations to BCD. At present, choosing particular options seems to have no effect; the old version of Windows on sdc1 always boots. I don't understand what is causing this. How do grub and BCD play along? I can't find any docs on this. My thought was to boot Windows only off sdb1 and then let BCD do the rest (present a menu to choose between sdb2 and sdc1), but I can't seem to get BCD to boot sdb2, so this has been unsuccessful. My configuration files: BCDEdit output, grub.cfg.
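
    For the intended design (grub always hands off to one Windows boot manager, and BCD then offers both installs), a sketch of the two halves; partition numbers and the D: letter are assumptions from the layout above:

        # grub side: a single entry in /etc/grub.d/40_custom that chainloads the
        # boot manager on sdb1; re-run update-grub afterwards
        menuentry "Windows boot manager (sdb1)" {
            insmod ntfs
            set root=(hd1,1)
            chainloader +1
        }

    And on the BCD side, from an elevated prompt in the running Windows copy:

        :: create a second OS entry pointing at the sdb2 installation;
        :: {new-guid} stands for the GUID the /copy command prints
        bcdedit /copy {current} /d "Windows 7 (sdb2)"
        bcdedit /set {new-guid} device partition=D:
        bcdedit /set {new-guid} osdevice partition=D: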

    Read the article

  • Debian, Apache2, CGI: paths issue

    - by Bubnoff
    I have a Perl form email script in the server's cgi-bin directory (/usr/lib/cgi-bin). From /etc/apache2/sites-enabled/000-default:

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
            AllowOverride None
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AddHandler cgi-script cgi pl
            Order allow,deny
            Allow from all
        </Directory>

    The issue is with paths. The HTML calls the script here:

        <form name="Request" method="post" action="http://server-test.local/cgi-bin/formprocessorpro.pl" onsubmit="return checkWholeForm49874(this)">

    The directory with the templates and configs is passed here:

        <input type="hidden" name="base_path" value="../contact" />

    The path to this form is http://server-test.local/formstest/contact.htm. No matter what variation I try for base_path, I get an error from the formprocessor script that it can't find the directory:

        An error occurred when opening the Form Configuration File (../contact/form.cfg): No such file or directory.

    I need to move this script from an old server, configured by a previous sysadmin, to a new server. Since cgi-bin is automatically linked to /usr/lib/cgi-bin, and linked such that the script resides at http://server-test.local/cgi-bin/formprocessorpro.pl, I would imagine that, given that the templates are in a directory called contact in the webroot, the correct path would be ../contact. Any ideas? It's been a while since I've messed with CGI. Bubnoff
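
    A hypothesis worth testing: base_path is resolved on the filesystem, relative to the CGI process's working directory (normally /usr/lib/cgi-bin), not relative to the web root, so ../contact points at /usr/lib/contact. An absolute filesystem path removes the guesswork; the docroot below is an assumption:

        <!-- point base_path at where the templates really live on disk -->
        <input type="hidden" name="base_path" value="/var/www/contact" />

    Printing the script's working directory (for instance with a Cwd::getcwd debug line in the Perl) would confirm what ../ actually resolves against on the new server.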

    Read the article

  • How to set up GRUB2 chainloader to other Grub (Fedora, Debian) on GPT

    - by basic6
    I'm trying to set up a dedicated GRUB 2 which (chain-)loads another GRUB on a disk with a GPT partition table. Relevant partitions:

        /dev/sda1  BIOS_BOOT
        /dev/sda2  BOOT (ext2)
        /dev/sda3  FEDORA (ext4)
        /dev/sda6  DEBIAN (ext4)

    I installed Fedora first, using /dev/sda2 as the boot partition. Then I installed Debian. The Debian installer recognized the Fedora installation and added it as a boot entry, then installed its GRUB into the MBR. While this works for the moment, it's pretty messy, because every Debian update may change the boot config, removing the Fedora entry (tried it), and the other way around. That's why I want both systems to have their own boot loader, plus one main boot loader (which could reside on /dev/sda2) that loads one of them. This is what I've tried:

        - Moved everything from /dev/sda2 to /dev/sda3/boot
        - Removed the /boot mount point in Fedora (so /dev/sda2 isn't used anymore)
        - From a live Linux, installed GRUB 2 into the MBR (grub-install --boot-directory=sda2 /dev/sda)
        - Wrote a menu.lst:

            title Fedora
            root (hd0,2)
            chainloader +1

          (and again, likewise, for Debian)

        - Converted that to a grub.cfg script (grub-menu2cfg or something like that)
        - When booting, actually got a GRUB 2 menu with "Fedora" (and "Debian")
        - When selecting either one of them: error: invalid signature
        - Issued "grub-install /dev/sda6" (and ...sda3) from all kinds of live Linux systems, all of which failed with another error message (in the case of the Debian installer, without explanation at all)
        - Added --force to the chainloader line; now it says "loading", then reboots
        - Found dozens of howtos, none of which seem to work for me

    Since I get the self-made GRUB 2 menu on bootup, I've at least successfully installed the first stage of GRUB, right? When trying to chainload, some signature is checked and seems to be wrong; how do I fix it? The boot menus (Fedora with its different kernel versions, and Debian with Debian and Fedora as well) are now on the system partitions (/dev/sda3, /dev/sda6). Is there anything else to do on these partitions so they can be chainloaded? Any help is greatly appreciated.
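
    A possible explanation, hedged: chainloader +1 expects boot-sector code at the start of the target partition, and on GPT disks grub-install normally embeds its core image in the BIOS boot partition rather than a partition boot sector, so there is no valid signature to find. Loading the other install's config directly sidesteps boot sectors entirely; a sketch for the main GRUB's grub.cfg, with partition numbers taken from the layout above:

        menuentry "Fedora" {
            insmod part_gpt
            insmod ext2
            set root=(hd0,3)
            configfile /boot/grub/grub.cfg    # Fedora may keep this under /boot/grub2 instead
        }

        menuentry "Debian" {
            insmod part_gpt
            insmod ext2
            set root=(hd0,6)
            configfile /boot/grub/grub.cfg
        }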

    Read the article

  • NoMachine NX window closes after establishing connection

    - by blackicecube
    I am trying to use the NoMachine NX server and client, but somehow it doesn't work. What happens is the following:

        1. Client starts up
        2. Client authenticates with server
        3. The NoMachine window appears for 2-4 seconds
        4. The NoMachine window exits

    Somehow a "closeEvent" is sent. Here's what I see in the log file:

        [Thu Sep 24 11:20:37 2009]: Starting nxcomp with options: 'NX 299 Switch connection to: NX mode: unencrypted options: nx/nx,options=/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/options:1022'.
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor::readData
        [Thu Sep 24 11:20:38 2009]: NXFileMonitor: opened file: [/home/foo/.nx/S-adnws029-1022-7EEF1367361DB2A7F4D9F76B06F4B434/session]
        [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[246] str=[Initializing X protocol compression] error=[0]
        [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Initializing X protocol compression]
        [Thu Sep 24 11:20:38 2009]: LoginDialog::ShowConnectionStatus code=[247] str=[Established the display connection] error=[0]
        [Thu Sep 24 11:20:38 2009]: ProgressDialog::printNxStatus: [Established the display connection]
        [repeated NXFileMonitor::readData lines omitted]
        [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
        [Thu Sep 24 11:20:38 2009]: QClipboard: Unknown SelectionClear event received.
        [Thu Sep 24 11:20:38 2009]: LoginDialog: slotAgentTimer
        [Thu Sep 24 11:20:38 2009]: LoginDialog: Agent found closing windows...
        [Thu Sep 24 11:20:38 2009]: LoginDialog: setting automatic reconnection to true.
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: LoginDialog: closeEvent received!
        [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called begin
        [Thu Sep 24 11:20:38 2009]: LoginDialog: stopAllTimers
        [Thu Sep 24 11:20:38 2009]: LoginDialog: stopProgressTimer
        [Thu Sep 24 11:20:38 2009]: Utility::getPreferencesFile: 'nxclient' - '/home/foo/.nx/config/nxclient.cfg'
        [Thu Sep 24 11:20:38 2009]: Settings::flush
        [Thu Sep 24 11:20:38 2009]: Called destructor for protocol class
        [Thu Sep 24 11:20:38 2009]: LoginDialog::destructor called end

    Anyone with a helpful idea?
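
    The client log stops exactly where the session agent gives up ("Agent found closing windows..."), so the server side usually holds the actual reason. Places worth reading, with paths that are assumptions for a default NX server layout:

        # on the NX server:
        less /usr/NX/var/log/nxserver.log          # session negotiation errors
        ls ~/.nx/                                  # per-session directories for the user
        less ~/.nx/*adnws029-1022*/session         # the failed session's own log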

    Read the article

  • Tripwire help Required

    - by ramaperumal
    I have created the policy file in Tripwire, with the rules mentioned below:

        /opt/jboss/server/gis/conf -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M;
        /usr/local/gtech/eseries/  -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M;

    After running the integrity check, the output should cover each property in the mask: a (access timestamp), c (inode timestamp, create/modify), g (file owner's group ID), i (inode number), s (file size), t (timestamp), u (file owner's user ID), l (file is increasing in size, a "growing file"), and M (MD5 hash value). I am getting the output below:

        [root@xxsi1242 tripwire]# tripwire --check
        Parsing policy file: /etc/tripwire/tw.pol
        *** Processing Unix File System ***
        Performing integrity check...
        Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr

        Open Source Tripwire(R) 2.4.1 Integrity Check Report

        Report generated by:          root
        Report created on:            Wed 06 Nov 2013 05:38:12 AM EST
        Database last updated on:     Wed 06 Nov 2013 05:31:17 AM EST

        ===============================================================================
        Report Summary:
        ===============================================================================
        Host name:                    xxsi1242.gtk.gtech.com
        Host IP address:              156.24.65.171
        Host ID:                      None
        Policy file used:             /etc/tripwire/tw.pol
        Configuration file used:      /etc/tripwire/tw.cfg
        Database file used:           /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd
        Command line used:            tripwire --check

        ===============================================================================
        Rule Summary:
        ===============================================================================
        Section: Unix File System

        Rule Name                        Severity Level   Added   Removed   Modified
        ---------                        --------------   -----   -------   --------
        Invariant Directories            66               0       0         0
        Temporary directories            33               0       0         0
        * Tripwire Data Files            100              0       0         1
        Tech Stack                       100              0       0         0
        User binaries                    66               0       0         0
        Tripwire Binaries                100              0       0         0
        * CLPS bins                      100              0       0         2
        CLPS Configuration files         100              0       0         0
        ESCommon                         100              0       0         0
        Shell Binaries                   100              0       0         0
        OS executables and libraries     100              0       0         0
        Security Control                 100              0       0         0
        ESCommon Configuration           100              0       0         0
        (/etc/gtech/escommon)

        Total objects scanned:  12358
        Total violations found: 3

        ===============================================================================
        Object Summary:
        ===============================================================================
        # Section: Unix File System

        Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol)
        Severity Level: 100
        Modified:
          "/etc/tripwire/tw.pol"

        Rule Name: CLPS bins (/opt/jboss/server)
        Severity Level: 100
        Modified:
          "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck"
          "/opt/jboss/server/gis/data/hypersonic/localDB.lck"

        ===============================================================================
        Error Report:
        ===============================================================================
        No Errors

        *** End of report ***

    Note: in the output I am only getting the files which were modified. I need the detailed output, but unfortunately I am not getting what I expected. Please help me to proceed further.
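
    One thing worth knowing here: tripwire --check prints a summary by default, but the full per-property detail is stored in the .twr report file and can be dumped with twprint at a higher report level. A sketch using the report path from the run above:

        # report levels run 0 (terse) to 4 (full detail, including observed properties)
        twprint -m r --report-level 4 \
            --twrfile /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr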

    Read the article

  • Haproxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

        global
            maxconn 2000
            daemon

        defaults
            mode http
            clitimeout 60000
            srvtimeout 30000
            contimeout 4000
            option httpclose

        listen http_proxy
            bind [myip]:80
            mode http
            stats enable
            stats auth user:passwd
            stats uri /stats
            balance source
            option httpchk
            option forwardfor
            server host01 app1.aws.af.cm:80 maxconn 300 check
            server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs for app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open those IPs in a web browser. The problem is that AppFog doesn't have a public IP per application (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers? Example: this is a real app, http://freechat.eu01.aws.af.cm. HAProxy only resolves the IP for this domain, which is 46.51.204.8:80. Of course this IP will not show my application, only an error page. Sorry for my poor English.
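
    A likely reading, stated as an assumption: the AppFog endpoints are name-based virtual hosts, so the backend must receive Host: app1.aws.af.cm (or app2...), not whatever hostname the client typed. HAProxy can rewrite the Host header from the server's name; a sketch relying on http-send-name-header, which needs a reasonably recent 1.4/1.5 build:

        listen http_proxy
            bind [myip]:80
            mode http
            balance source
            option forwardfor
            # rewrite the Host header to the name of whichever server was picked
            http-send-name-header Host
            # naming each server after its vhost makes the rewritten header correct
            server app1.aws.af.cm app1.aws.af.cm:80 maxconn 300 check
            server app2.aws.af.cm app2.aws.af.cm:80 maxconn 300 check

    One caveat: health checks don't pass through http-send-name-header, so if the endpoints refuse requests without a Host header, the httpchk line needs a hard-coded Host of its own.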

    Read the article

  • NRPE: Unable to read output with check_connections plugin

    - by Wlodzimierz
    I'm using a plugin which warns or goes critical based on the number of established connections. If I run it on the local machine, it gives:

        root@graber:/usr/lib/nagios/plugins# ./check_connections -w 1 -c 5 -C sshd
        CRITICAL Established connections: 6

    I know, I ran it as root. But here are the rights on the file:

        root@graber:/usr/lib/nagios/plugins# ls -all check_connections
        -rwxr-xr-x 1 nagios nagios 5459 2012-07-06 10:19 check_connections

    /etc/sudoers:

        Defaults env_reset
        root   ALL=(ALL:ALL) ALL
        %admin ALL=(ALL) ALL
        nagios ALL=(ALL) NOPASSWD: /usr/bin/lsof
        nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

    /etc/nagios/nrpe.cfg:

        nrpe_user=nagios
        nrpe_group=nagios
        dont_blame_nrpe=1
        command_prefix=/usr/bin/sudo
        command[check_connections]=/usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd

    Log from the remote side:

        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Handling the connection...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host address is in allowed_hosts
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Host is asking for command 'check_connections' to be run...
        2012-07-06T11:12:49+02:00 graber nrpe[25928]: Running command: /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd
        2012-07-06T11:19:11+02:00 graber nrpe[26100]: Return Code: 2, Output: NRPE: Unable to read output

    Why is this happening? I'm out of ideas; I've searched Google for two days now :)
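
    Two hedged observations that fit these symptoms. First, the NRPE log shows the command running without the sudo prefix, so command_prefix may not be taking effect (was the nrpe daemon restarted after the config change?). Second, the long gap between "Running command" (11:12) and "Return Code" (11:19) looks like something hanging on a hidden password prompt, which is what sudo does when the sudoers entry fails to match. Settings worth trying:

        # /etc/sudoers: match the plugin with any arguments, and rule out requiretty,
        # which blocks non-tty daemons on some distros
        Defaults:nagios !requiretty
        nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/check_connections *

        # /etc/nagios/nrpe.cfg: alternatively, bake sudo into the command itself
        command[check_connections]=/usr/bin/sudo /usr/lib/nagios/plugins/check_connections -w 1 -c 5 -C sshd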

    Read the article

  • LVM and cloning HDs

    - by jcea
    Using Linux, I have several backup levels. One of them is a periodic sector-by-sector copy (using dd) of my laptop hard disk to an external USB disk. Yes, I have other backups too, like remote rsync. This approach (the disk dd) is OK when cloning a HDD with no LVM volumes, since I can plug in the external disk anytime and mount the partitions simply by mounting /dev/sdb* instead of /dev/sda*. Trivial and handy.

    Today I moved ALL of my hard disk (including /boot) to LVM. Everything works fine. I will stress it for a couple of days, and then I will do a sector-by-sector copy to my external hard disk. Now I have a problem, I guess. If in the future I plug in the external USB HDD to recover any file, the OS will detect a duplicate LVM configuration, with the same name and the same UUID. Even doing a vgrename (and which LVM would be renamed, the one on the internal HDD or the external?), the cloned UUID will not change. Is there any command to change the name and UUID? Ideally I would clone the HDD and then change the LVM group name and its UUID, but I don't know how to do it.

    Another related issue: in the past I have booted my laptop from the external disk, using the BIOS boot menu and changing GRUB entries manually to boot from /dev/sdb instead of /dev/sda. But now my current GRUB configuration boots directly from an LVM logical volume, with something like:

        set root='(LVM-root)'

    in my grub.cfg. So what is going to happen with duplicated volumes? Any suggestion? I guess I could repartition my external hard disk and change my backup strategy from dd to rsync, but this disk has Windows installed too, and I really would like to have a physical "real" copy.
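
    LVM ships a tool aimed at exactly this scenario: vgimportclone renames a cloned volume group and regenerates its PV/VG UUIDs so the clone can coexist with the original. A sketch, where the device name is an assumption for the USB disk's PV partition and the new VG name is arbitrary:

        # run against the clone's physical volume(s), while the original VG stays active
        vgimportclone --basevgname laptop_backup /dev/sdb2
        vgscan
        vgchange -ay laptop_backup    # the clone is now independently activatable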

    Read the article

  • How to send email from home ip when the email server isn't a designated outbound mail server allocated to BT Retail customers [on hold]

    - by Mr Shoubs
    (I am the sysadmin!) I can receive email, but when I try to send an email from my home office via our work email server, I get the following reply:

        Your message did not reach some or all of the intended recipients.
        Subject:  Test
        Sent:     19/08/2014 17:02
        The following recipient(s) cannot be reached:
        'Joe Blogs' on 19/08/2014 17:02
        Server error: '554 5.7.1 Service unavailable; Client host [my-ip-here] blocked
        using zen.spamhaus.org; http://www.spamhaus.org/query/bl?ip=my-ip-here'

    I went to that URL, and it says the following:

        Ref: PBL231588
        81.152.0.0/13 is listed on the Policy Block List (PBL)
        Outbound Email Policy of BT Retail for this IP range:
        It is the policy of BT Retail that unauthenticated email sent from this IP address
        should be sent out only via the designated outbound mail server allocated to BT
        Retail customers. Please consult the following URL for details on how to configure
        your email client appropriately.
        http://btybb.custhelp.com/cgi-bin/btybb.cfg/php/enduser/cci/bty_adp.php?p_sid=fPnV4zhj&p_faqid=6876

        Removal Procedure
        Removal of IP addresses within this range from the PBL is not allowed by the
        netblock owner's policy.

    Going to that URL just says "This site has been disabled for the time being." What should I do to allow me to send emails from my home IP? The Spamhaus page suggests I can configure my email client (note that I have already configured the client to use SMTP authentication).
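
    The PBL is meant to block only unauthenticated port-25 traffic, so the usual way around it is to submit through the work server on the authenticated submission port; the rejection suggests the client is still arriving on port 25, where the server applies the DNSBL check before SMTP AUTH is even considered. A sketch of client settings worth trying (the server name is a placeholder):

        Outgoing (SMTP) server:  mail.example-work.com
        Port:                    587 (submission), not 25
        Connection security:     STARTTLS
        Authentication:          normal password, same credentials as the mailbox

    The server-side equivalent, if you control the work mail server, is to exempt authenticated sessions from the DNSBL check (or enable the submission service), so roaming users on dynamic ranges are never scored against the PBL.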

    Read the article

  • Clarification for setting up SSH terminal access on Cisco IOS

    - by Matt Malesky
    I'm attempting to set up SSH on a Cisco 2811 and having some difficulties. The first step should be running:

        crypto key generate rsa

    I seem to be missing this, though:

        better#crypto key generate rsa
                          ^
        % Invalid input detected at '^' marker.
        better#

    Furthermore, the only available commands I have in the crypto key namespace are lock and unlock, which seem to indicate a locked keypair (for which I don't know the password):

        better#crypto key ?
          lock    Lock a keypair.
          unlock  Unlock a keypair.
        better#crypto key unlock ?
          rsa  RSA keys
        better#crypto key unlock rsa
        %% Please enter the passphrase:
        %% Unlocking failed.
        better#

    More or less, I'm asking what exactly this might mean, and whether I actually do have certificates already here (it's a used router). Otherwise, how can I solve this? It's my first time configuring this feature, but I definitely believe it's part of my IOS. Speaking of my IOS, I'm running the image c2800nm-advsecurityk9-mz.124-24.T6.bin. I'll also note that I have my hostname and ip domain-name configured. Here is dir flash: in case it's of any use:

        better#dir flash:
        Directory of flash:/

          2  -rw-      2748  Jul 27 2009 14:03:52 +00:00  sdmconfig-2811.cfg
          3  -rw-    931840  Jul 27 2009 14:04:10 +00:00  es.tar
          4  -rw-   1505280  Jul 27 2009 14:04:32 +00:00  common.tar
          5  -rw-      1038  Jul 27 2009 14:04:46 +00:00  home.shtml
          6  -rw-    112640  Jul 27 2009 14:05:00 +00:00  home.tar
          7  -rw-   1697952  Jul 27 2009 14:05:26 +00:00  securedesktop-ios-3.1.1.45-k9.pkg
          8  -rw-    415956  Jul 27 2009 14:05:46 +00:00  sslclient-win-1.1.4.176.pkg
          9  -rw-  38732900  Dec  8 2011 06:28:56 +00:00  c2800nm-advsecurityk9-mz.124-24.T6.bin

        64016384 bytes total (20598784 bytes free)
        better#
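
    One detail that would explain both symptoms: crypto key generate rsa is a global configuration command, while the transcript shows it typed at the privileged EXEC prompt (better#), where only the EXEC-mode lock/unlock verbs exist. A sketch of the usual sequence:

        better# configure terminal
        better(config)# crypto key generate rsa modulus 2048
        better(config)# ip ssh version 2
        better(config)# line vty 0 4
        better(config-line)# transport input ssh
        better(config-line)# login local
        better(config-line)# end

    The lock/unlock prompts by themselves don't prove a pre-existing keypair; show crypto key mypubkey rsa (from EXEC mode) is the direct way to check what, if anything, is already there.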

    Read the article

  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with Puppet. Error on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    Error on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
            include ntp
            class { 'nagios::host':    # this is line 6
                nodename => $clientcert,
                appname  => 'test',
            }
        }

    The nagios::host code in module/nagios/host.pp is here:

        class nagios::host($nodename, $hostgroup) {
            file { '/usr/lib/nagios/plugins':
                mode    => "755",
                require => Package["nagios-plugins"],
            }
            ...
            @@nagios_service { "${nodename}_check_ssh":
                ensure                 => present,
                use                    => 'generic-service',
                host_name              => "${nodename}",
                notification_interval  => 60,
                flap_detection_enabled => 0,
                service_description    => "SSH",
                check_command          => "check_ssh",
                target                 => "/etc/nagios3/services.d/${nodename}.cfg",
            }
        }

    And the file module/nagios/init.pp is blank. How can I fix it?
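
    Two hedged candidates for that parse error. The parameterized-class syntax class { 'nagios::host': ... } only exists from Puppet 2.6 onward, so an older master fails at exactly that '{'. And even on 2.6+, the call passes appname, which nagios::host does not declare (it declares $nodename and $hostgroup), so that mismatch would be the next error. A pre-2.6-friendly sketch of the node definition:

        # group-1.pp: avoid the parameterized-class form entirely
        node 'group1' {
            include ntp
            # hypothetical: hand data over via top-scope variables that
            # nagios::host reads as defaults, instead of class parameters
            $nodename  = $clientcert
            $hostgroup = 'test'
            include nagios::host
        }

    On 2.6 or later, keeping the parameterized form should also work once appname is renamed to hostgroup to match the class's declared parameters.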

    Read the article

  • Installing Debian 7.1 on FakeRAID/Intel Z77 results in boot with no grub menu

    - by user198982
    I'm trying to install Debian 7.1 from DVD onto 2x500GB drives which are set up as a FakeRAID mirror using the on-board FakeRAID provided by the Z77 chipset. I have followed the guide at https://wiki.debian.org/DebianInstaller/SataRaid. Namely, I booted into the expert install with the 'dmraid=true' option added, installed onto the RAID mirror (which the installer correctly detected), then installed grub2 onto the /dev/mapper/... raid volume. I chose to use LVM (so a boot partition + LVM volume). As per the guide, I uncommented the "GRUB_DISABLE_LINUX_UUID=true" line in /etc/default/grub and ran "update-grub", then "grub-install /dev/mapper/..." (with the right RAID device in the command). However, after I rebooted the system, all I got was a grub console; it did not load the menu. I checked, and it seems it never even generated a menu file. I have re-installed Debian a few times since, trying out different options and also a few workarounds people posted online, but to no avail. The best I am getting is a grub console. No menu. Sometimes it will generate the grub.cfg and sometimes it won't, depending on the workaround I try. I was wondering if anyone else has experienced this issue. There is no need to preach that I should not use FakeRAID; I have seen others trying to figure this out, so a resolution to this issue would be of interest to more than just me. Also, I first installed the system onto a small drive for testing something else. I made a backup with Acronis and was able to restore that onto the RAID mirror by using Universal Restore. When I installed it onto a 500GB drive without RAID, backed up using the same method, then restored onto a RAID volume of the same size, it would not boot and I got grub errors. Weird. I can post more details, just let me know what you want to see.
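
    When GRUB lands at a bare console, it can at least report what it sees; a diagnostic sketch from the grub> prompt, where the device names are guesses and tab-completion will list what actually exists:

        grub> ls                          # dmraid sets usually appear under a vendor-prefixed name
        grub> insmod lvm
        grub> ls (lvm/                    # press Tab: are the LVM logical volumes visible?
        grub> set prefix=(lvm/vg-root)/boot/grub
        grub> configfile (lvm/vg-root)/boot/grub/grub.cfg   # hand-load the menu if one exists

    If configfile brings up a working menu, the installed core image is fine and only its embedded prefix is wrong, which narrows the problem down to the grub-install step against the /dev/mapper device.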

    Read the article

  • How to boot XBMC 10.1 ISO on USB via grub?

    - by Shi
    I am trying to boot the XBMC Live image (http://xbmc.org/download/) as an ISO from USB via grub 1.98. I have a Kubuntu 11.04 image there as well already, and it works using the following configuration:

        menuentry "Kubuntu 11.04 64bit" {
            loopback loop /boot/iso/kubuntu-11.04-desktop-amd64.iso
            linux (loop)/casper/vmlinuz boot=casper iso-scan/filename=/boot/iso/kubuntu-11.04-desktop-amd64.iso noeject noprompt
            initrd (loop)/casper/initrd.gz
        }

    However, if I try to boot XBMC in an analogous way, I always get the error "Unable to find a medium containing a live file system". I found different approaches to installing XBMC, but they are all about installing the distribution onto the USB stick, or using grub4dos, or unetbootin. I already found out that XBMC 10.1 is based on Ubuntu 10.04.2 LTS, so I tried those settings, even though they are quite similar to Kubuntu 11.04's. Finally, the ISO contains a grub configuration of its own in boot/grub/grub.cfg, but even with those parameters I get the error above. My current configuration is the following:

        menuentry "xbmc 10.1" {
            loopback loop /boot/iso/xbmc-10.1-live.iso
            linux (loop)/live/vmlinuz video=vesafb boot=live iso-scan/filename=/boot/iso/xbmc-10.1-live.iso xbmc=autostart,nodiskmount splash quiet loglevel=0 persistent quickreboot quickusbmodules notimezone noaccessibility noapparmor noaptcdrom noautologin noxautologin noconsolekeyboard nofastboot nognomepanel nohosts nokpersonalizer nolanguageselector nolocales nonetworking nopowermanagement noprogramcrashes nojockey nosudo noupdatenotifier nouser nopolkitconf noxautoconfig noxscreensaver nopreseed union=aufs
            initrd (loop)/live/initrd.img
        }

    Any more ideas, or any more information I should supply?
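
    A hedged observation: iso-scan/filename= is a casper (Ubuntu) knob, while boot=live points at the Debian-live initramfs, which historically wanted findiso= for loopback ISO booting. Whether XBMC 10.1's initrd honors either is an open question, but it is a cheap experiment:

        menuentry "xbmc 10.1 (findiso test)" {
            loopback loop /boot/iso/xbmc-10.1-live.iso
            linux (loop)/live/vmlinuz boot=live findiso=/boot/iso/xbmc-10.1-live.iso xbmc=autostart,nodiskmount union=aufs quiet
            initrd (loop)/live/initrd.img
        }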

    Read the article

  • Setup ejabberd with SQL Server 2008

    - by wonster
    Here's what I have got so far:

        1. Windows 2008 Server, 64-bit. Installed the latest version of ejabberd, ejabberd-2.1.8-windows-installer.exe. The Windows service starts up fine but seems ineffective; using the start and stop scripts works, though.
        2. I am able to log in to the admin page, which so far doesn't seem that versatile.
        3. Opened up ports 5222, 5226 and 5280 for my workstation to talk to the server.
        4. Got the Spark and Jabbear Windows clients to register, log in and instant-message with multiple accounts using the server.
        5. After confirming that I've got the very basics working, I decided to use SQL Server 2008 as the database. Reason? Mainly, I am very comfortable with SQL Server: I can deal with redundancy, failover and data analysis easily, and I'm not sure ejabberd's built-in DB provides all that. Following the instructions from ejabberd's documentation, I set up a system DSN that points to another physical database; the DSN checks out fine (tried both Named Pipes and TCP/IP). I then modified ejabberd.cfg: commented out {auth_method, internal} and uncommented {auth_method, odbc}, and uncommented and modified {odbc_server, "DSN=ejabberd;UID=somelogin;PWD=somepassword"}.

    After making these changes, I restarted. No errors are found in the log files, but the Jabber clients are no longer able to register new accounts. I'm not sure where to look for errors besides the /logs/ folder, as I'm new to all this. I am basically stuck here on step 5. Has anyone got this setup to work recently? Some of the posts I've found around are years old and of no help. I can't be the only one setting up ejabberd with MS SQL. Any help would be appreciated!
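
    If the logs stay quiet at the default verbosity, turning ejabberd's logging up usually surfaces ODBC failures on the next registration attempt; a sketch for ejabberd.cfg (2.1.x syntax):

        %% 5 = debug; watch ejabberd.log / sasl.log while a client tries to register
        {loglevel, 5}.

    Two other things worth checking, stated as assumptions: that the MS SQL schema shipped with ejabberd was actually loaded into the target database (with auth_method switched to odbc but no users table, registration is typically the first thing to break), and that the modules meant to persist data are the _odbc variants (mod_offline_odbc, mod_roster_odbc, and so on, in the 2.1.x module list).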

    Read the article

  • Debian grub2 update removed Windows boot option.

    - by Wrikken
    Since I updated GRUB to GRUB 2, I no longer get the option to boot into Windows (which is unfortunately sometimes necessary for proprietary MSIE browser plugins I need to use for work). The relevant /boot/grub/menu.lst portion:

        ### END DEBIAN AUTOMAGIC KERNELS LIST

        # This is a divider, added to separate the menu items below from the Debian
        # ones.
        title       Other operating systems:
        root

        # This entry automatically added by the Debian installer for a non-linux OS
        # on /dev/hda1
        title       Windows NT/2000/XP
        root        (hd0,0)
        savedefault
        makeactive
        chainloader +1

    This entry, however, no longer appears. I do have some entries in /boot/grub/grub.cfg like these:

        menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
            insmod part_msdos
            insmod ext2
            set root='(hd1,msdos1)'
            search --no-floppy --fs-uuid --set e638c434-4884-412f-a141-2c194f881fae
            echo 'Loading Linux 2.6.32-5-amd64 ...'
            linux /boot/vmlinuz-2.6.32-5-amd64 root=UUID=e638c434-4884-412f-a141-2c194f881fae ro quiet
            echo 'Loading initial ramdisk ...'
            initrd /boot/initrd.img-2.6.32-5-amd64
        }

    Do I have to alter that file? If so, what is the correct syntax for a Windows boot? If not, what could be the problem?
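
    GRUB 2 never reads menu.lst, so the old Windows stanza has to reach grub.cfg by another route; and since grub.cfg is regenerated, hand edits there don't survive. Two common routes, sketched for Debian:

        # route 1: let os-prober find Windows, then regenerate the menu
        apt-get install os-prober
        update-grub

        # route 2: add a manual stanza to /etc/grub.d/40_custom, then run update-grub;
        # note grub2 numbers partitions from 1, so legacy (hd0,0) becomes (hd0,msdos1)
        menuentry "Windows NT/2000/XP" {
            insmod ntfs
            set root=(hd0,msdos1)
            chainloader +1
        }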

    Read the article

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load balance between multiple running servers with different host names, and I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers, and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5. I came up with the working haproxy.cfg below; as you can see, I had to set a different hostname for each health check, because the health check ignores http-send-name-header Host. I would have preferred to use variables or some other method and keep things more concise.

        global
            log 127.0.0.1 local0 notice
            maxconn 2000
            user haproxy
            group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            timeout connect 5000
            timeout client 10000
            timeout server 10000
            stats enable
            stats uri /haproxy?stats
            stats refresh 5s
            balance roundrobin
            option httpclose

        listen inbound :80
            option httpchk HEAD / HTTP/1.1\r\n
            server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
            server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

        listen instance1 127.0.0.101:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
            server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

        listen instance2 127.0.0.102:80
            option forwardfor
            http-send-name-header Host
            option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
            server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2

    Read the article

  • Install Debian stable linux ISO from USB to dual boot Windows

    - by tgkprog
    I want Debian as a dual boot with my Windows Vista. I freed up 50 GB on my D: drive and plan to use 40 GB for the Debian install and 6 GB for swap space. I have a 16 GB USB drive. I downloaded unetbootin (http://unetbootin.sourceforge.net/) and the DVD files of stable Debian: debian-7.0.0-amd64-DVD-1.iso, debian-7.0.0-amd64-DVD-2.iso and debian-7.0.0-amd64-DVD-3.iso. After I choose HD install, unetbootin says to place the ISO in the same place, but I have three. Do I need to merge them? If so, is there any freeware to do that? Can I do it with 7-Zip? When I extract them with 7-Zip there are clashes between the three ISO files. Should I just overwrite? What are the options (format, etc.) for merging with 7-Zip? Or must I keep the three files alongside the other unetbootin files? I tried that, but I get an error message. Files I have on my USB:

        06/30/2013  11:44 PM          2,835,648  ubnkern
        06/05/2013  12:14 AM      3,998,007,296  debian-7.0.0-amd64-DVD-1.iso
        06/04/2013  03:30 PM      4,696,872,960  debian-7.0.0-amd64-DVD-2.iso
        06/05/2013  01:25 AM      4,698,955,776  debian-7.0.0-amd64-DVD-3.iso
        06/30/2013  11:45 PM          6,530,278  ubninit
        06/30/2013  11:46 PM                155  syslinux.cfg
        06/30/2013  11:46 PM             60,928  menu.c32

    Also, I can only copy the above files if I format my USB as NTFS; on FAT32 it says the .iso is too large to copy. How do I get around that? My internet connection needs a login, so I cannot do a net install.

    Read the article
