Search Results

Search found 3567 results on 143 pages for 'instructions'.


  • Can't set up a 3-node MongoDB replica set

    - by Victor Lin
    I just followed the instructions in the MongoDB document Replica Sets - Basics to set up a 3-node replica set. Everything goes fine when I initiate the set and add the first node on the primary.

      [foo@host-a mongodb]$ bin/mongo localhost
      MongoDB shell version: 1.8.2
      connecting to: localhost
      > rs.initiate()
      {
          "info2" : "no configuration explicitly specified -- making one",
          "info" : "Config now saved locally. Should come online in about a minute.",
          "ok" : 1
      }
      > rs.add("host-b")
      { "ok" : 1 }

    So far so good, but when I try to add the third node:

      myset:PRIMARY> rs.addArb("host-c")
      Sun Aug 7 22:57:09 MessagingPort recv() errno:104 Connection reset by peer 127.0.0.1:27017
      Sun Aug 7 22:57:09 SocketException: remote: error: 9001 socket exception [1]
      Sun Aug 7 22:57:09 DBClientCursor::init call() failed
      Sun Aug 7 22:57:09 query failed : local.$cmd { count: "system.replset", query: {}, fields: {} } to: 127.0.0.1
      Sun Aug 7 22:57:09 Error: error doing query: failed shell/collection.js:150
      Sun Aug 7 22:57:09 trying reconnect to 127.0.0.1
      Sun Aug 7 22:57:09 reconnect 127.0.0.1 ok

    As a result, the current primary became a secondary, and host-b was marked as dead, even though it is actually still alive.

      myset:SECONDARY> rs.status()
      {
          "set" : "myset",
          "date" : ISODate("2011-08-08T04:03:23Z"),
          "myState" : 2,
          "members" : [
              {
                  "_id" : 0,
                  "name" : "host-a:27017",
                  "health" : 1,
                  "state" : 2,
                  "stateStr" : "SECONDARY",
                  "optime" : { "t" : 1312775799000, "i" : 1 },
                  "optimeDate" : ISODate("2011-08-08T03:56:39Z"),
                  "self" : true
              },
              {
                  "_id" : 1,
                  "name" : "host-b",
                  "health" : 0,
                  "state" : 6,
                  "stateStr" : "(not reachable/healthy)",
                  "uptime" : 0,
                  "optime" : { "t" : 0, "i" : 0 },
                  "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                  "lastHeartbeat" : ISODate("2011-08-08T04:03:22Z"),
                  "errmsg" : "still initializing"
              }
          ],
          "ok" : 1
      }

    How could this happen? I just followed the guide in the document; did I do something wrong? Moreover, I can't do anything on the current secondary server. It won't let me reconfigure on the secondary node, but the problem is that there is no primary node.

      myset:SECONDARY> rs.reconfig({})
      {
          "errmsg" : "replSetReconfig command must be sent to the current replica set primary.",
          "ok" : 0
      }

    Any ideas?
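
    One thing worth verifying before rs.addArb() is that every member, including the arbiter host, can reach the others on port 27017, and that the set was initiated with an explicit configuration whose hostnames resolve on all three machines. A minimal sketch, assuming the host names host-a, host-b and host-c used above (an illustration, not a confirmed fix for the error shown):

      # from host-a: confirm the other members answer on the MongoDB port
      bin/mongo --host host-b --port 27017 --eval 'db.runCommand({ping: 1})'
      bin/mongo --host host-c --port 27017 --eval 'db.runCommand({ping: 1})'

      # initiate with an explicit config instead of relying on the generated one
      bin/mongo host-a
      > rs.initiate({
            _id: "myset",
            members: [
                { _id: 0, host: "host-a:27017" },
                { _id: 1, host: "host-b:27017" },
                { _id: 2, host: "host-c:27017", arbiterOnly: true }
            ]
        })
      > rs.conf()    // verify all three members are listed
      > rs.status()  // states should settle to PRIMARY / SECONDARY / ARBITER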

    Read the article

  • Recent ImageMagick on CentOS 6.3

    - by organicveggie
    I'm having a terrible time trying to get a recent version of ImageMagick installed on a CentOS 6.3 x86_64 server. First, I downloaded the RPM from the ImageMagick site and tried to install it. That failed due to missing dependencies:

      error: Failed dependencies:
          libHalf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
          libIex.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
          libIlmImf.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
          libImath.so.4()(64bit) is needed by ImageMagick-6.8.0-4.x86_64
          libltdl.so.3()(64bit) is needed by ImageMagick-6.8.0-4.x86_64

    I have libtool-ltdl installed, but that provides libltdl.so.7, not the older version the RPM wants. I have a similar problem with libHalf, libIex, libIlmImf and libImath. Typically you can install OpenEXR to get those dependencies. Unfortunately, CentOS 6.3 ships OpenEXR 1.6.1 with ilmbase-devel 1.0.1, and that release provides newer versions of those libraries:

      libHalf.so.6
      libIex.so.6
      libIlmImf.so.6
      libImath.so.6

    I next tried following the instructions for installing ImageMagick from source. No luck there either; I get a build error:

      RPM build errors:
          File not found by glob: /home/sean/rpmbuild/BUILDROOT/ImageMagick-6.8.0-4.x86_64/usr/lib64/ImageMagick-6.8.0/modules-Q16/coders/djvu.*

    I even re-ran configure to explicitly exclude djvu and I still get the same error. At this point I'm pulling my hair out. What's the easiest way to get a relatively recent version of ImageMagick (>= 6.7) installed on CentOS 6.3? Does someone offer RPMs with the dependencies somewhere?
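
    One route that sidesteps both the RPM dependency chain and the rpmbuild djvu glob error is to build ImageMagick from the plain source tarball instead of rebuilding the SRPM, disabling the delegates that pull in the missing libraries. A hedged sketch only: the version number, prefix and configure flags below are assumptions, so check ./configure --help for the switches your release actually supports.

      # build from the source tarball, skipping the DjVu and OpenEXR delegates
      tar xzf ImageMagick-6.8.0-4.tar.gz
      cd ImageMagick-6.8.0-4
      ./configure --prefix=/usr/local --without-djvu --without-openexr
      make
      sudo make install

      # make the new shared libraries visible, then confirm the new binary is used
      echo /usr/local/lib | sudo tee /etc/ld.so.conf.d/imagemagick.conf
      sudo ldconfig
      /usr/local/bin/convert -version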

    Read the article

  • How to register an agent with launchd

    - by Konrad Rudolph
    I’m unable to schedule a periodic launch with launchctl/launchd on OS X (Leopard). Basically, I’m unable to find a step-by-step list of instructions on the web, and the intuitive approach doesn’t work. The sync.plist file:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
      <plist version="1.0">
      <dict>
          <key>Label</key>
          <string>net.madrat.utils.sync</string>
          <key>Program</key>
          <string>rsync</string>
          <key>ProgramArguments</key>
          <array>
              <string>-ar</string>
              <string>/path/to/folder/</string>
              <string>/path/to/backup/</string>
          </array>
          <key>StartInterval</key>
          <integer>7200</integer>
      </dict>
      </plist>

    I’ve put this file inside ~/Library/LaunchAgents. Next, I registered it using

      launchctl load ~/Library/LaunchAgents/sync.plist

    Finally, to test that it works, I started the job:

      launchctl start net.madrat.utils.sync

    Nothing happened. Manually executing the rsync command in the terminal yields the expected result. I’m fairly sure the job was registered correctly, because if I try to start a non-existent job I get an error message (which I didn’t get with the above command). What did I do wrong?
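
    Two things may be worth checking here; both are assumptions on my part rather than confirmed fixes. First, launchd hands the ProgramArguments array to the program as its complete argv, so with the Program key also present the first element above ("-ar") ends up as argv[0] instead of a real option; the usual arrangement is to drop Program and make the absolute path /usr/bin/rsync the first element of ProgramArguments. Second, the plist and the job state can be inspected directly:

      # validate the plist syntax
      plutil -lint ~/Library/LaunchAgents/sync.plist

      # reload the agent after editing it
      launchctl unload ~/Library/LaunchAgents/sync.plist
      launchctl load ~/Library/LaunchAgents/sync.plist

      # is the job known to launchd, and what was its last exit status?
      launchctl list | grep net.madrat.utils.sync

      # watch for launchd/rsync errors while starting the job
      tail -f /var/log/system.log &
      launchctl start net.madrat.utils.sync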

    Read the article

  • Setting Ubuntu Global PATH for Ruby Enterprise Edition

    - by Wally Glutton
    Context: I recently installed Ruby Enterprise Edition (REE) on an Ubuntu 8.04 server. I would like this new version of Ruby to globally supersede (for all users, crontabs, etc.) the older version in /usr/local/bin.

    Attempted solution #1: The REE documentation recommends placing the REE bin folder at the beginning of the global PATH in /etc/environment. I altered the PATH line in this file to read:

      PATH="/opt/ruby_ee/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games"

    This did not affect my PATH at all.

    Attempted solution #2: Next I followed these instructions and updated the PATH setting in the /etc/login.defs and /etc/crontab files. (I did not change /etc/sudoers.) This didn't affect my PATH either, even after logging out and rebooting the server.

    Other information: I seem to be having the same problem described here. I'm testing using the command echo $PATH. My shell is bash. My .bashrc does not alter my PATH. I'm ssh'ed into the system for all testing. /opt/ruby_ee/ is a symlink to /opt/ruby-enterprise-1.8.7-2011.03/.
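
    If the /etc/environment change is not being picked up (pam_env only reads that file at login, and cron jobs and daemons ignore it), one common alternative is a profile.d snippet for interactive shells plus explicit symlinks for everything else. A sketch under those assumptions; the paths match the question, but the approach itself is a suggestion rather than a confirmed fix:

      # put REE first in PATH for login shells
      echo 'export PATH=/opt/ruby_ee/bin:$PATH' | sudo tee /etc/profile.d/ree.sh

      # for crontabs and scripts that never read profile.d, point the usual names at REE
      sudo ln -sf /opt/ruby_ee/bin/ruby /usr/local/bin/ruby
      sudo ln -sf /opt/ruby_ee/bin/gem  /usr/local/bin/gem

      # verify from a fresh ssh session
      which ruby && ruby -v && echo $PATH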

    Read the article

  • Building new computer, turns on, but no post

    - by addybojangles
    Pardon my ignorance here; I finally decided to put together a computer and egads. I purchased a new motherboard, power supply, processor, video card and memory:

      ASUS M4A79XTD EVO AM3 AMD 790X ATX AMD Motherboard
      OCZ Fatal1ty OCZ550FTY 550W ATX12V v2.2 / EPS12V SLI Ready 80 PLUS Certified Modular Active PFC Power Supply
      AMD Phenom II X4 965 Black Edition Deneb 3.4GHz 4 x 512KB L2 Cache 6MB L3 Cache Socket AM3 125W Quad-Core Processor
      XFX HD-577A-ZNFC Radeon HD 5770 (Juniper XT) 1GB 128-bit GDDR5 PCI Express 2.0 x16 HDCP Ready CrossFireX Support Video Card
      G.SKILL 4GB (2 x 2GB) 240-Pin DDR3 SDRAM DDR3 1600 (PC3 12800) Dual Channel Kit Desktop Memory Model F3-12800CL9D-4GBNQ

    (I originally had links for you guys, but I lack the rep, sorry!) I've got it all in the tower: I put in the power supply, installed the processor on the motherboard, installed the heatsink, put in the RAM, and I am using an older IDE hard disk. When I start the computer, the monitor tells me "check signal cable." As far as I can tell, the heatsink fan on the processor is spinning, the power supply is on (obviously), and the green LED on the motherboard is on. I originally only had the bigger power connector plugged in to the motherboard (what I saw in a YouTube vid as well as the mobo instructions), but after doing some research it said to plug in the other ATX power connector too, which I did. Trying to power the computer still results in nothing: no beeps on startup, no POST. Does anyone have any ideas? Your ideas and help are greatly appreciated.

    Read the article

  • New 3TB HDD, can see full 2.7TB in Linux and Windows, but shows up as 801.6GB in BIOS

    - by Ben Lee
    I recently purchased a Seagate Barracuda 3TB drive (ST3000DM001). After installing it, my BIOS recognized it but reported the size as 801.6GB. I went ahead and booted into Linux anyway (Ubuntu 11.10 64-bit), which saw it as 2.7TB. Following some online instructions (I don't have the link handy, unfortunately), it looked like converting this drive to GPT was the recommended approach, so I used GParted to do that and then formatted it to NTFS, also using GParted. (I'm using NTFS because my machine is dual-boot and I want access to the drive in Windows too.) I rebooted into Windows (Windows 7 64-bit), and Windows also sees the drive with 2.7TB free. Everything seems to be working fine. The only issue is that my BIOS is still reporting the drive as 801.6GB. My motherboard is an ASRock 770 Extreme3 and the BIOS is the latest version. Since everything seems to be working with the new drive anyway, I'm hoping that the BIOS reporting the wrong size is not an actual problem, but honestly I don't really know. Is anyone out there more familiar with this? Could it potentially cause any problems in the future? And is there any way to get the BIOS to report the correct size?

    Read the article

  • Ubuntu Hardy : Testing for environment variables in udev rules doesn't seem to work

    - by Fred
    I have an Ubuntu 8.04 LTS (server edition) machine, and I need to write a udev rule for it to act upon plugging in a USB thumb drive. However, I need a different action depending on the filesystem of the drive. I know I can use the ID_FS_TYPE environment variable to check for the filesystem on the drive. Following instructions found here, I tried a dummy udev rule such as:

      KERNEL!="sd[a-z][0-9]", GOTO="my_udev_rule_end"
      ACTION=="add", RUN+="/usr/bin/touch /tmp/test_udev_%E{ID_FS_TYPE}"
      ACTION=="add", ENV{ID_FS_TYPE}=="vfat", RUN+="/usr/bin/touch /tmp/test_udev_it_works"
      LABEL="my_udev_rule_end"

    However, when I plug in a thumb drive with a vfat filesystem (which should trigger both rules), I end up with a file called /tmp/test_udev_vfat, meaning the first rule was triggered successfully and that the ID_FS_TYPE environment variable is "vfat". But I don't get the other file, meaning that although I know the ID_FS_TYPE env variable is "vfat", I can't seem to check against it for a match. I tried googling, but pretty much every result seems to assume ENV{ID_FS_TYPE}=="vfat" works. I also tested the exact same udev rule on Ubuntu 10.04 LTS server and got the same result. I'm probably missing something very simple, but I just don't get it. Does anyone see what is wrong with my udev rule that would prevent it from matching on ENV{ID_FS_TYPE}? Thanks.
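
    To narrow down whether the ENV match itself is failing, or whether ID_FS_TYPE simply is not set yet at the point in the rules where the comparison runs, it can help to dump the environment udev computed for the device and to replay the event through the rules. A rough sketch; the device name sdb1 is an example, and on Hardy the older udevinfo/udevtest tool names may apply instead of udevadm:

      # dump the environment udev computed for the partition
      udevadm info --query=env --name=/dev/sdb1

      # replay the add event through the rules and watch which rules match
      udevadm test /block/sdb/sdb1

      # equivalents on releases that predate udevadm:
      #   udevinfo -q env -n /dev/sdb1
      #   udevtest /block/sdb/sdb1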

    Read the article

  • Puppet Agent fails sporadically, with either timeout or "Could not find class" error

    - by smokris
    I have a puppet master running on a Xen dom0, and 3 domUs syncing to it via an hourly crontab puppet agent --test. About 80% of the time, the puppet agent --test completes successfully:

      info: Retrieving plugin
      info: Caching catalog for test3
      info: Applying configuration version '1333319732'
      notice: Finished catalog run in 5.08 seconds

    The other 20% of the time, it fails midway, with errors such as the following:

      err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not find class iptables for test1 at /etc/puppet/manifests/site.pp:1 on node test1
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run

    or

      info: Retrieving plugin
      info: Caching catalog for test2
      info: Applying configuration version '1333319732'
      notice: Finished catalog run in 24.73 seconds
      err: Could not send report: Error 500 on SERVER: Internal Server Error
      private method `gsub' called for WEBrick::HTTPStatus::RequestTimeout:Class
      WEBrick/1.3.1 (Ruby/1.8.5/2006-08-25) OpenSSL/0.9.8e-rhel5 at puppet:8140

    or

      info: Retrieving plugin
      err: Could not retrieve catalog from remote server: execution expired
      warning: Not using cache on failed catalog
      err: Could not retrieve catalog; skipping run

    or

      info: Retrieving plugin
      info: Caching catalog for test3
      info: Applying configuration version '1333319732'
      notice: Finished catalog run in 9.47 seconds
      err: Could not send report: Error 408 on SERVER: Request Timeout

    During this time, I've not made any changes to the Puppet configuration — it just sporadically fails. I'm running puppet-2.7.12 on CentOS, and followed the setup instructions described on http://docs.puppetlabs.com/learning/agent_master_basic.html. Any ideas about how I can troubleshoot this?
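
    The WEBrick errors are a hint worth pursuing: the built-in WEBrick server that puppet master uses by default is single-threaded, so several agents checking in during the same minute can easily produce timeouts like these. That diagnosis is an assumption rather than a certainty, but it is cheap to test by watching the master in the foreground and staggering the agent runs:

      # run the master in the foreground to watch what happens when agents collide
      puppet master --no-daemonize --verbose --debug

      # run one agent with extra detail while the master is in the foreground
      puppet agent --test --trace

      # stagger the hourly cron entries so the domUs don't all hit the master at once,
      # e.g. give each host a different random delay:
      #   0 * * * * root sleep $((RANDOM % 300)); puppet agent --test

    If staggering makes the failures disappear, moving the master off WEBrick (for example behind Apache/Passenger) is the usual longer-term fix.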

    Read the article

  • Getting AWStats to work in Ubuntu 12.04

    - by koogee
    I'm new to Apache and I'm trying to set up AWStats on my Ubuntu 12.04 server. I've followed the guide at the Ubuntu docs: https://help.ubuntu.com/community/AWStats I set it up according to the instructions, and AWStats is able to generate initial stats from the Apache log successfully. I placed the links to AWStats in the default virtual host file. However, when I try to open http://server-ip-address:8080/awstats/awstats.pl, I get:

      Error: SiteDomain parameter not defined in your config/domain file. You must edit it for using this version of AWStats.
      Setup ('/etc/awstats/awstats.conf' file, web server or permissions) may be wrong.
      Check config file, permissions and AWStats documentation (in 'docs' directory).

    Here is my /etc/apache2/sites-available/default file:

      <VirtualHost *:8080>
          ServerAdmin webmaster@localhost
          DocumentRoot /home/saad/www
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /home/saad/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride AuthConfig
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          Alias /awstatsclasses "/usr/share/awstats/lib/"
          Alias /awstats-icon "/usr/share/awstats/icon/"
          Alias /awstatscss "/usr/share/doc/awstats/examples/css"
          ScriptAlias /awstats/ /usr/lib/cgi-bin/
          Options ExecCGI -MultiViews +SymLinksIfOwnerMatch
          ErrorLog ${APACHE_LOG_DIR}/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog ${APACHE_LOG_DIR}/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
      </VirtualHost>

    The only three variables I edited in /etc/awstats/awstats.conf are:

      LogFile="/var/log/apache2/access.log"
      SiteDomain="server-name.noip.org"
      HostAliases="localhost 127.0.0.1 server-name.no-ip.org"

    The Apache server works fine and I'm able to access other pages stored on the server. Any guidance would be welcome.
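
    One detail that commonly produces exactly this SiteDomain error on Ubuntu: when awstats.pl is called without a config= parameter, it derives the config file name from the hostname in the URL, so edits to /etc/awstats/awstats.conf can be silently ignored. A sketch of the usual per-site arrangement; the file name and URL below are assumptions based on the SiteDomain value above:

      # give the site its own config file, named after the domain
      sudo cp /etc/awstats/awstats.conf /etc/awstats/awstats.server-name.noip.org.conf
      #   (set LogFile, SiteDomain and HostAliases in the copy, as before)

      # rebuild the stats against that config
      sudo perl /usr/lib/cgi-bin/awstats.pl -config=server-name.noip.org -update

      # then call the CGI with the matching config parameter:
      #   http://server-ip-address:8080/awstats/awstats.pl?config=server-name.noip.org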

    Read the article

  • Getting Perl DBD::mysql working on OS X 10.7?

    - by Bart B
    I can't seem to get Perl and MySQL to talk to each other on OS X 10.7 Lion. I did all the installs by the book: I used Oracle's PKG installer for the latest MySQL Community Server, and I installed DBI and DBD::mysql via CPAN. There were no problems at all during the install, but when I try to use DBD::mysql to connect to my local DB server I get the following error:

      install_driver(mysql) failed: Can't load '/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle' for module DBD::mysql:
      dlopen(/Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle, 1): Library not loaded: /usr/local/mysql/lib/libmysqlclient.16.dylib
        Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        Reason: image not found at /System/Library/Perl/5.12/darwin-thread-multi-2level/DynaLoader.pm line 204.
      at (eval 3) line 3
      Compilation failed in require at (eval 3) line 3.
      Perhaps a required shared library or dll isn't installed where expected

    After a lot of googling, all I could find were suggested hacks, so I gave this one a go: http://arkoftech.wordpress.com/2011/02/10/fixing-dbdmysql-for-mysql-5-5-89-under-macos-10-6-x/ I had to update some of the paths in the instructions since on Lion it's Perl 5.12, not 5.10. After doing that I got a new error:

      dyld: lazy symbol binding failed: Symbol not found: _mysql_init
        Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        Expected in: flat namespace
      dyld: Symbol not found: _mysql_init
        Referenced from: /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
        Expected in: flat namespace
      Trace/BPT trap: 5

    There must be a simple way to get MySQL and Perl working on OS X. HELP!
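
    The first error says the DBD::mysql bundle was linked against libmysqlclient.16.dylib, which the installed MySQL no longer provides, and the later _mysql_init failure suggests the patched bundle still is not linked against the right client library. Rather than patching paths by hand, it is often cleaner to rebuild DBD::mysql against the MySQL that is actually installed. A sketch under those assumptions (the DBD::mysql version number is an example):

      # check which client library the compiled bundle wants vs. what is installed
      otool -L /Library/Perl/5.12/darwin-thread-multi-2level/auto/DBD/mysql/mysql.bundle
      ls /usr/local/mysql/lib/libmysqlclient*.dylib

      # rebuild DBD::mysql by hand against the installed MySQL
      tar xzf DBD-mysql-4.020.tar.gz && cd DBD-mysql-4.020
      perl Makefile.PL --mysql_config=/usr/local/mysql/bin/mysql_config
      make && make test && sudo make install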

    Read the article

  • Mail won't start up after migration to Mountain Lion Server

    - by Meltemi
    I'm not sure where to start. Our hard disk died on a server running 10.6.8. We installed a new disk and followed Apple's migration instructions to 10.8.2. All services are operational except Mail. /var/log/mail.log shows postfix failing the same way roughly every ten seconds:

      Dec 6 17:52:17 [email protected] postfix/master[10370]: fatal: bind: private/smtp: Permission denied
      Dec 6 17:52:27 [email protected] postfix/master[10374]: fatal: bind: private/smtp: Permission denied
      Dec 6 17:52:37 [email protected] postfix/master[10376]: fatal: bind: private/smtp: Permission denied
      ...
      Dec 6 17:54:08 [email protected] postfix/master[10421]: fatal: bind: private/smtp: Permission denied
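
    "fatal: bind: private/smtp: Permission denied" means the postfix master process cannot bind the private/smtp socket inside its queue directory, which after a disk swap and migration usually comes down to ownership or permissions under /var/spool/postfix. A hedged sketch of the usual checks; treat the fix commands as an assumption to verify rather than a confirmed diagnosis:

      # who owns the private socket directory?
      ls -ld /var/spool/postfix /var/spool/postfix/private

      # let postfix verify and repair its own directory ownership/permissions
      sudo postfix check
      sudo postfix set-permissions
      sudo postfix stop && sudo postfix start

      # confirm the master process now stays up
      tail -f /var/log/mail.log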

    Read the article

  • How to verify PostgreSQL 9 has installed correctly on a CentOS server?

    - by A4J
    I'm trying to install the pg (postgres) gem on a CentOS server, but it keeps saying Postgres is too old, even though I have upgraded it to 9.1.3 (as per the instructions here: http://www.davidghedini.com/pg/entry/install_postgresql_9_on_centos). I am using CentOS 5.8 (and Ruby 1.9.3). Here is the error message:

      Building native extensions. This could take a while...
      ERROR: Error installing pg:
          ERROR: Failed to build gem native extension.
      /usr/local/bin/ruby extconf.rb
      checking for pg_config... yes
      Using config values from /usr/bin/pg_config
      checking for libpq-fe.h... yes
      checking for libpq/libpq-fs.h... yes
      checking for pg_config_manual.h... yes
      checking for PQconnectdb() in -lpq... yes
      checking for PQconnectionUsedPassword()... no
      Your PostgreSQL is too old. Either install an older version of this gem or upgrade your database.
      *** extconf.rb failed ***
      Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers.
      Check the mkmf.log file for more details. You may need configuration options.

    psql --version confirms my version:

      psql (PostgreSQL) 9.1.3

    I can confirm the packages installed:

      Setting up Install Process
      Package postgresql91-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
      Package postgresql91-devel-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
      Package postgresql91-server-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
      Package postgresql91-libs-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
      Package postgresql91-contrib-9.1.3-1PGDG.rhel5.x86_64 already installed and latest version
      Nothing to do

    Any ideas on how to troubleshoot this? Thanks in advance.
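
    The build log shows the extension reading /usr/bin/pg_config, and on CentOS 5 that binary belongs to the stock 8.x PostgreSQL packages rather than the PGDG 9.1 packages (which install under /usr/pgsql-9.1). Pointing the gem at the 9.1 pg_config is the usual fix; a sketch, with the PGDG path given as an assumption:

      # which postgres does the pg_config on the PATH belong to?
      /usr/bin/pg_config --version
      /usr/pgsql-9.1/bin/pg_config --version

      # point the gem build at the 9.1 tools explicitly
      gem install pg -- --with-pg-config=/usr/pgsql-9.1/bin/pg_config

      # or, for a bundler-based app:
      bundle config build.pg --with-pg-config=/usr/pgsql-9.1/bin/pg_config
      bundle install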

    Read the article

  • Error installing Sony Remote Play

    - by Iszi Rory or Isznti
    I'm trying to install the Remote Play software to connect my laptop to my PS3. I've found a guide with instructions which seem to be in fairly wide use (I found similar walk-throughs on numerous other sites) for running the software on a non-Vaio PC: Tech-Recipes: Playstation 3 – Use Remote Play on any Windows 7 PC. The setup essentially goes like this:

      1. Download the Remote Play software.
      2. Download the patch by NTAuthority.
      3. Install Remote Play as normal.
      4. Reboot.
      5. Extract the NTAuthority patch to the Remote Play program folder.
      6. Manually register the patched DLLs via the command line.
      7. Run the Remote Play software.

    Sadly, my problem starts early, at step 3. I had to use Google to find the software download, as the link from Tech-Recipes seems broken. I found the download on Sony's site here: Sony eSupport: Remote Play with PlayStation®3. After downloading and running the software, I hit "Next" at the welcome screen and "I Agree" at the EULA screen. After this, a popup informs me that Setup is checking my computer's information, and then Setup terminates with this error: [error dialog screenshot] I'm running Windows 7 Ultimate x64. Is anyone familiar with this error in this software? Is there a way to work around it? Did I perhaps pick the wrong download from Sony's site?

    Read the article

  • Folder Redirection Issues - Freezing, Strange Warnings

    - by JCardenas
    I have Folder Redirection set up in a test environment for a couple accounts. I have followed the instructions for setting up the folder security settings here, and I can confirm that folders are created automatically by the system with the correct security settings when a user logs in. The GPO has been configured to automatically move user files up to the redirected folders, and this is working properly. Problems start occurring when a Windows 7 PC is in use. It is rare, but Explorer will lock up when performing a file write operation (move/copy/save from application). This results in the entire system being unusable, with only a hard reset resolving it (Task Manager doesn't start, the "three finger salute" does nothing, apps stop working). The mouse functions, but clicks do nothing. The other issue is that occasionally when copying/creating/modifying files a dialog box will pop up with the message "You need permission to perform this action. You require permission from XYZ\cardenas to make changes to this folder." The folder that was created by copying an existing one has the correct security settings and lists me as the owner. My company will not be implementing Folder Redirection on XP, since we are making a "clean break" with implementing new technologies with the Windows 7 rollout, so this behavior has not been - nor will be - checked for in XP. Thanks in advance for your help!

    Read the article

  • Making the iPhone work with stripped down Windows XP

    - by Gabriel
    Hi, this is my first time posting here and I have a really specific question. I have an ASUS Eee 901 running Windows XP Home. I had everything working well, but then I decided to improve performance by moving Windows to the smaller but faster internal SSD. I used nLite to strip down Windows, following the instructions here: http://wiki.eeeuser.com/howto:nlitexp I now have a very lightweight installation of XP Home with SP3 and all the current updates. Almost everything is working really well. I have installed iTunes and I CAN sync with no problems. However, each time I plug in my iPhone 3GS (latest firmware), Windows tries and fails to install drivers. The Found New Hardware Wizard launches, but nothing I do will make it complete successfully, with the result that the iPhone does not show up in Windows as removable storage or as a camera. When I launch the Camera and Scanner Wizard, it shows only my webcam, not the iPhone. I have verified that I have the following files in place:

      Windows\System32\ptpusb.dll        (regsvr32 successful)
      Windows\System32\ptpusd.dll        (entry point not found, cannot be registered)
      Windows\System32\usbaaplrc.dll     (entry point not found, cannot be registered)
      Windows\System32\drivers\usbaapl.sys
      Windows\System32\drivers\usbscan.sys
      Windows\System32\drivers\usbstor.sys

    Does anyone know if some other file is required, or if there's some other element preventing this from working? Edit (from a posted answer): I did select Cameras & Camcorders, and my webcam is working fine for video and still capture.

    Read the article

  • Nagios Apache Config with PHP-FPM downloading cgi files

    - by tubaguy50035
    I'm trying to set up Nagios 3 under Apache 2.4 with PHP-FPM. I've run into a couple of problems I could use help with. The PHP side of things seems to be working; I can see the home page and the sidebar. But all of the CGI files are downloading instead of executing, and when I try to click on "Read What's New In Nagios Core 3", I get an error: /nagios3/docs/whatsnew.html was not found on this server. Below is my vhost config for Nagios:

      <VirtualHost *:300>
          # apache configuration for nagios 3.x
          ScriptAlias /cgi-bin/nagios3 /usr/lib/cgi-bin/nagios3
          ScriptAlias /nagios3/cgi-bin /usr/lib/cgi-bin/nagios3
          # Where the stylesheets (config files) reside
          Alias /nagios3/stylesheets /etc/nagios3/stylesheets
          # Where the HTML pages live
          Alias /nagios3 /usr/share/nagios3/htdocs
          ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9001/usr/share/nagios3/htdocs/$1
          <DirectoryMatch (/usr/share/nagios3/htdocs|/usr/lib/cgi-bin/nagios3|/etc/nagios3/stylesheets)>
              Options FollowSymLinks ExecCGI
              AllowOverride AuthConfig
              Order Allow,Deny
              Allow From All
              AuthName "Nagios Access"
              AuthType Basic
              AuthUserFile /etc/nagios3/htpasswd.users
              require valid-user
          </DirectoryMatch>
          <Directory /usr/share/nagios3/htdocs>
              Options +ExecCGI
          </Directory>
      </VirtualHost>

    I also added this to my global Apache config:

      AddHandler cgi-script .cgi

    Any help or instructions you can give me would be much appreciated. If more information is needed, let me know.
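
    CGI files being offered as downloads usually means no CGI handler is active for those requests under Apache 2.4; with PHP-FPM the MPM is typically event or worker, which needs mod_cgid rather than mod_cgi. A sketch of the checks, assuming a Debian/Ubuntu-style layout (which the paths above suggest):

      # enable CGI execution for threaded MPMs
      sudo a2enmod cgid

      # make sure .cgi is mapped to the CGI handler where the Nagios CGIs live,
      # e.g. add inside the <DirectoryMatch> block above:
      #   AddHandler cgi-script .cgi

      # verify the config and restart
      sudo apachectl configtest
      sudo service apache2 restart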

    Read the article

  • switchless Infiniband between two servers on RHEL 6.3

    - by exfizik
    I have 2 servers running RHEL 6.3 which have 2-port InfiniBand cards:

      >lspci | grep -i infini
      07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02)

    I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all the Red Hat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports on each card are down, which indicates that cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that, according to this, Red Hat doesn't come with the OFED packages, and I'm slightly hesitant to install them from source due to the lack of Red Hat support for them... So where am I going with this? The questions I have are:

      1. Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above?
      2. If it's possible, do I have to use the OFED packages, or can I configure everything with just the packages that come with RHEL?
      3. Why are the LEDs off on my servers even though the cable is connected?

    Any additional input/advice/pointers would be appreciated. P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running. Update: I have opensm installed. When I run it, it says:

      OpenSM 3.3.13
      Command Line Arguments:
      Log File: /var/log/opensm.log
      -------------------------------------------------
      OpenSM 3.3.13
      Entering DISCOVERING state
      Using default GUID 0x1175000076e4c8
      SM port is down

    and stays at that point.
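
    For what it's worth, a back-to-back (switchless) link between two HCAs does generally work, but two conditions have to hold: the physical link must come up, and one of the two nodes must run a subnet manager such as opensm, since there is no switch to provide one. "SM port is down" in the opensm output says the port never reached the INIT state, so the cable/port side is the first thing to rule out. A sketch of the usual checks on RHEL 6; treat the exact service names as assumptions:

      # physical and logical state of each port; look for "Physical state: LinkUp"
      ibstat
      ibv_devinfo | grep -E 'state|port'

      # make sure the stack and the subnet manager start on one of the nodes
      sudo chkconfig rdma on && sudo service rdma start
      sudo chkconfig opensm on && sudo service opensm start

      # once the SM sweeps the fabric, the ports should move from INIT to ACTIVE
      ibstat | grep -E 'State|Rate'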

    Read the article

  • BOOTMGR is compressed

    - by JavaAndCSharp
    So I've just finished messing up my computer beyond repair. I originally had a Windows 7 install on my 64GB SSD and all my data on my 1.5TB HDD. So I decided to install Windows 8 Consumer Preview. I tried to clone my 64GB SSD onto my HDD; but that didn't work either at first. I later learned that it wiped all my data off of it. Thank goodness for the cloud. So I retried the cloning after I realized that my data was gone, and I finished it. So I rebooted and got a cryptic error message: BOOTMGR is compressed. Press Ctrl+Alt+Del to restart. So I did. Several times. I followed the instructions to rebuild the bootmanager from many websites, including HowToGeek, smallvoid.com, and more. None worked. BTW, I've lost my Windows 7 Professional DVDs. All I have is my Windows 7 Home Premium DVD. It's the one I got with my laptop. How can I get back to a working computer? I'm willing to wipe anything, as my data is all in the cloud.

    Read the article

  • Starfield Wildcard SSL Certificate Not Trusted in All Browsers

    - by Austen Cameron
    I am at a loss as to what else I might try in order to debug this issue with a Starfield wildcard SSL certificate. The problem is that in certain browsers (Safari, or the most-updated Chrome you can get for OS X 10.5.8, for example) the certificate comes up as untrusted, even on the root domain. My server setup / background info:

      General LAMP setup - CentOS 6.3 - on a GoDaddy VPS
      Starfield Technologies wildcard SSL certificate
      Installed using the instructions from GoDaddy's support pages

    The ssl.conf lines are basically as follows:

      SSLCertificateFile /path/to/cert/mysite.com.cert
      SSLCertificateKeyFile /path/to/cert/mysite.key
      SSLCertificateChainFile /path/to/cert/sf_bundle.crt

    Everything seemingly worked fine until the other night, when I noticed the problem in OS X. I assume it's more browser-version related, but I have only been able to replicate it on that particular machine. What I have tried:

      • Updating sf_bundle.crt from GoDaddy's cert repository and Starfield's repository versions
      • Following this ServerFault answer from Jim Phares: changing the ChainFile line to sf_intermediate.crt from Starfield's repository
      • Using http://www.sslshopper.com/ssl-checker.html on my URL

    It says the domain is correctly listed on the certificate but comes up with an error that reads:

      The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate.

    What might I try next to remedy the untrusted-certificate issue? Let me know if there is any other information needed that might help debug this issue. Thanks in advance!
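
    When only older clients distrust the certificate, the served chain is the usual suspect: those clients may not have the newer Starfield root, so the full set of cross-signed intermediates has to be sent by the server itself. Checking what Apache actually presents is a quick way to confirm; a sketch, with the hostname as a placeholder:

      # show the certificate chain exactly as the server presents it
      openssl s_client -connect mysite.com:443 -showcerts < /dev/null

      # list what is inside the bundle file currently being served
      openssl crl2pkcs7 -nocrl -certfile /path/to/cert/sf_bundle.crt | openssl pkcs7 -print_certs -noout

      # after changing SSLCertificateChainFile, validate and reload Apache, then re-test
      apachectl configtest && sudo service httpd reload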

    Read the article

  • JAWStats statspath error on windows

    - by crosenblum
    I have AWStats, which works fine, and JAWStats, which I am trying to get working. I have tried back and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of the Program Files folder, in case it had problems with folder names containing spaces. Here is my config file:

      // core config parameters
      $sConfigDefaultView = "thismonth.all";
      $bConfigChangeSites = true;
      $bConfigUpdateSites = true;
      $sUpdateSiteFilename = "xml_update.php";

      // individual site configuration
      // awstats092012.noname.jumpingcrab.com.txt
      $aConfig["site1"] = array(
          "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
          "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
          "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
          "siteurl"    => "http://yourexample.com",
          "theme"      => "default",
          "fadespeed"  => 250,
          "password"   => "",
          "includes"   => ""
      );

    Domain names changed to protect the innocent... :P Here is the error message:

      An error has occured: No AWStats Log Files Found
      JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
      Is this the correct folder? Is your config name, site1, correct?
      Please refer to the installation instructions for more information.

    Read the article

  • Compiling Mono on Fedora 7

    - by Gary
    Trying to install Mono from source on Fedora 7. Running

      # ./configure --prefix=/opt/mono

    works fine, but doing the make

      # make ; make install

    ends up with the following:

      Makefile:93: warning: overriding commands for target `csproj-local'
      ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
      make install-local
      make[6]: Entering directory `/opt/mono-2.6.4/mcs/mcs'
      Makefile:93: warning: overriding commands for target `csproj-local'
      ../build/executable.make:131: warning: ignoring old commands for target `csproj-local'
      MCS [basic] mcs.exe
      typemanager.cs(2047,40): error CS0103: The name `CultureInfo' does not exist in the context of `Mono.CSharp.TypeManager'
      Compilation failed: 1 error(s), 0 warnings
      make[6]: *** [../class/lib/basic/mcs.exe] Error 1
      make[6]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
      make[5]: *** [do-install] Error 2
      make[5]: Leaving directory `/opt/mono-2.6.4/mcs/mcs'
      make[4]: *** [install-recursive] Error 1
      make[4]: Leaving directory `/opt/mono-2.6.4/mcs'
      make[3]: *** [profile-do--basic--install] Error 2
      make[3]: Leaving directory `/opt/mono-2.6.4/mcs'
      make[2]: *** [profiles-do--install] Error 2
      make[2]: Leaving directory `/opt/mono-2.6.4/mcs'
      make[1]: *** [install-exec] Error 2
      make[1]: Leaving directory `/opt/mono-2.6.4/runtime'
      make: *** [install-recursive] Error 1

    I've been following the instructions at http://ruakuu.blogspot.com/2008/06/installing-and-configuring-opensim-on.html. This is all in an effort to get OpenSimulator running.

    Read the article

  • 500 Internal Server Error after changing .NET Framework Version to 4.0 in IIS7

    - by René
    I just changed the .NET Framework version of the application pools in IIS7 Manager, following these instructions. Now when I try to re-upload my ASP.NET page, it shows me a 500 - Internal Server Error. I have tried uploading it built for .NET 2.0 (x86, x64, AnyCPU) and 4.0 (x86, x64, AnyCPU), and everything gives the same error. This is all the detail the error gives me: "There is a problem with the resource you are looking for, and it cannot be displayed." When keeping the .NET version at 2.0 on the server, it works just fine. Also, when uploading index.htm, it works fine as well; it just shows the HTML page. This is on Windows Server 2008 R2, by the way. EDIT: I have finally found out how to get the error details. Here they are:

      Handler "PageHandlerFactory-Integrated" has a bad module "ManagedPipelineHandler" in its module list.

      Most likely causes:
      • Managed handler is used; however, ASP.NET is not installed or is not installed completely.
      • There is a typographical error in the configuration for the handler module list.

      Things you can try:
      • Install ASP.NET if you are using managed handler.
      • Ensure that the handler module's name is specified correctly. Module names are case-sensitive and use the format modules="StaticFileModule,DefaultDocumentModule,DirectoryListingModule".

    I am sure that I have installed ASP.NET completely. Please help me, -René
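
    The "bad module ManagedPipelineHandler" message is the one IIS gives when ASP.NET 4.0 has not been registered with IIS, even though the framework itself is installed. On Windows Server 2008 R2 the usual check/fix is to re-register it from an elevated command prompt; the 64-bit framework path below is the stock location, so adjust it for 32-bit application pools:

      REM register ASP.NET 4.0 with IIS (run as Administrator)
      cd %windir%\Microsoft.NET\Framework64\v4.0.30319
      aspnet_regiis.exe -iru

      REM restart IIS and retry the page
      iisreset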

    Read the article

  • Using GitOAuthPlugin for Jenkins - not working as expected

    - by Blundell
    I need some clarity and maybe a fix. I'm using this plugin to authorise who views our Jenkins CI server: https://wiki.jenkins-ci.org/display/JENKINS/Github+OAuth+Plugin As I understand it, anyone who is auth'd to view one of our GitHub projects can also log in to our Jenkins box. This works. I thought it would also allow the person logging in to only view the projects that they have GitHub permission on. For instance, with three projects on GitHub (A, B, C) and three builds on Jenkins:

      User 1 has Git access to all 3 projects (A, B, C).
      User 2 has Git access to only 1 project (A).

    When logging into Jenkins, the expectation is:

      User 1 can see all 3 projects (this works).
      User 2 can only see project A.

    The problem is that User 2 can also see all 3 projects, when they should only see 1! Have I got this correct, and if so, is this a bug? I have the settings set in the Jenkins configuration under Github Authorization Settings. Here we have some admin users, one organization, and none of the 4 checkboxes ticked. (User 2 is not an admin and is not part of the org.) The plugin is open sourced here: https://github.com/mocleiri/github-oauth-plugin I was trying to get Jenkins to print me the logs from the plugin, but I also failed at viewing these (to see if there was an issue). I followed these instructions: https://wiki.jenkins-ci.org/display/JENKINS/Logging It's the same concept as outlined below but using GitHub rather than manually selecting users: https://wiki.jenkins-ci.org/display/JENKINS/2012/01/03/Allow+access+to+specific+projects+for+Users%28Assigning+security+for+projects+in+Jenkins%29 Have I got this right or wrong? Is it possible to auth a Jenkins user to only see one project?

    Read the article

  • "A disk read error occurred" when booting XP disk image in VirtualBox

    - by intuited
    I'm trying to boot an XP installation cloned into VirtualBox from a real drive. I'm getting the message

      A disk read error occurred
      Press Ctrl+Alt+Del to restart

    whenever* I try to boot the machine.

    * This is not strictly true: with AMD-V enabled, the boot process appears not to make it this far and instead hangs at a black screen with a cursor.

    I created the VirtualBox image from the original drive using the following method:

      $ sudo ddrescue -n /dev/sdd sdd.img logfile   # completed without errors
      $ VBoxManage convertfromraw sdd.img disk.vdi

    The original disk (and the image) contain a single NTFS partition with XP installed on it. The owner of the drive indicates that it did boot okay the last time the system made it that far. The (Pentium 4) system has a broken (enormous) heat sink, so at some point it failed to boot because it would quickly overheat and shut down. If I boot the VM from a live CD, I am able to mount its /dev/sda1 without any problems. I ran ntfsfix and didn't have any luck. I've read through the instructions on doing this. I didn't really follow them; for example, I didn't run MergeIDE before imaging because the machine was not bootable. However, the symptom of that problem seems to be quite different. The emitted message is contained in the volume boot record of the XP partition, which leads me to suspect that this is a problem with the core operating system bootstrap procedure, and not related to anything in the registry. I don't have an XP boot CD.
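
    "A disk read error occurred" comes from the XP boot code in the volume boot record when it cannot read the sectors it expects, which after imaging is often a controller or geometry mismatch rather than filesystem damage. Two things that are cheap to check from the VirtualBox side; the VM name "WinXP" is a placeholder and the commands are a sketch, not a confirmed fix:

      # make sure the cloned disk hangs off an IDE controller, which XP-era boot code expects
      VBoxManage storagectl "WinXP" --name "IDE" --add ide
      VBoxManage storageattach "WinXP" --storagectl "IDE" --port 0 --device 0 --type hdd --medium disk.vdi

      # from a live-CD boot inside the VM, sanity-check the partition table and the boot sector
      sudo fdisk -lu /dev/sda
      sudo dd if=/dev/sda bs=512 count=1 | hexdump -C | head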

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 32-bit guest, following the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at section 2.1.2:

      pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt
      pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo

    The first command fails because of a compile error while it tries to build PIL. So I ran aptitude install python-imaging, made a local copy of the first line's requirements.txt, and removed the line that was unsuccessfully trying to build PIL. The first command then completes without reported error, as does the second. The next step tells me to change directory to the /path/to/new/store and run:

      python clonesatchmo.py

    A little bit of trouble here: I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, created a symlink in /bin, and ran:

      python /bin/clonesatchmo.py

    This gives:

      jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py
      Creating the Satchmo Application
      Traceback (most recent call last):
        File "/bin/clonesatchmo.py", line 108, in <module>
          create_satchmo_site(opts.site_name)
        File "/bin/clonesatchmo.py", line 47, in create_satchmo_site
          import satchmo_skeleton
      ImportError: No module named satchmo_skeleton

    A find after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment, so I tried both:

      pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo
      pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo

    Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan

    Read the article
