Search Results

Search found 3039 results on 122 pages for 'centos'.


  • configuration required for HIVE to be installed on a node

    - by ????? ????????
    I went through the process of manually installing Ambari (not through SSH, because I couldn't get passwordless login to work) and everything installed OK, except for HIVE and GANGLIA. I got this message:

        stderr: None
        stdout:
        warning: Unrecognised escape sequence '\;' in file /var/lib/ambari-agent/puppet/modules/hdp-hive/manifests/hive/service_check.pp at line 32
        warning: Dynamic lookup of $configuration is deprecated. Support will be removed in Puppet 2.8. Use a fully-qualified variable name (e.g., $classname::variable) or parameterized classes.
        notice: /Stage[1]/Hdp::Snappy::Package/Hdp::Snappy::Package::Ln[32]/Hdp::Exec[hdp::snappy::package::ln 32]/Exec[hdp::snappy::package::ln 32]/returns: executed successfully
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2Smoke.sh]/ensure: defined content as '{md5}7f1d24221266a2330ec55ba620c015a9'
        notice: /Stage[2]/Hdp-hive::Hive::Service_check/File[/tmp/hiveserver2.sql]/ensure: defined content as '{md5}0c429dc9ae0867b5af74ef85b5530d84'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/File[/tmp/hcatSmoke.sh]/ensure: defined content as '{md5}bae7742f7083db968cb6b2bd208874cb'
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:11:56 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException org.apache.hadoop.hive.ql.parse.SemanticException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException [Error 10001]: Table not found hcatsmokeida8c07401_date102513
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: 13/06/25 03:12:15 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
        notice: /Stage[2]/Hdp-hcat::Hcat::Service_check/Exec[hcatSmoke.sh prepare]/returns: FAILED: SemanticException o

    When I go to the alerts and health checks I'm getting this:

        Hive Metastore status check CRIT for 42 minutes
        CRITICAL: Error accessing hive-metaserver status [13/06/25 03:44:06 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect.

    What am I doing wrong? I have already tried ambari-server reset on the database, without results.
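
    The deprecation warning in the log points at the metastore connection settings: newer Hive ignores hive.metastore.local entirely and decides local vs. remote purely from hive.metastore.uris. As a hedged sketch (the hostname is a placeholder; 9083 is the conventional metastore Thrift port), a remote-metastore hive-site.xml fragment would look like:

        <!-- hypothetical hive-site.xml fragment; replace the host with your metastore node -->
        <property>
          <name>hive.metastore.uris</name>
          <value>thrift://metastore-host.example.com:9083</value>
        </property>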

    Read the article

  • Exim: redirect all nonexistent accounts for local domains to a specific account

    - by tntu
    I want to route all incoming email for local domains to a single account if no account is set up for that user. I would also like each email to be written to its own file in the user's folder. I have a catchall user with the path /home/catchall/, where I have made a mail folder for this, but so far emails either fail to deliver (so my rule did not work) or they deliver to the file /etc/mail/catchall. I have been trying to put something together from the Exim documentation, but so far nothing seems to work. http://exim.org/exim-html-current/doc/html/spec_html/ch20.html
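
    A hedged sketch of the usual shape of this in Exim (router and transport names are made up; the router goes after the one handling real local users, so it only fires when nothing else accepted the address):

        # router: anything left over for a local domain is redirected to the catchall user
        catchall_redirect:
          driver = redirect
          domains = +local_domains
          data = catchall@$domain

        # transport: maildir-style delivery writes each message to its own file
        catchall_maildir:
          driver = appendfile
          directory = /home/catchall/mail
          maildir_format

    The catchall user's delivery still has to be pointed at that transport by whichever router handles it; this is a sketch of the pieces, not a drop-in config.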

    Read the article

  • Encrypt remote linux server

    - by Margaret Thorpe
    One of my customers has requested that their web server be encrypted to prevent offline attacks on highly sensitive data contained in a MySQL database and in /var/log. I have full root access to the dedicated server at a popular host. I am considering 3 options:

        FDE - this would be ideal, but with only remote access (no console) I imagine it would be very complex.
        Xen - installing Xen, moving their server into a Xen virtual machine and encrypting the VM, which seems easier to do remotely.
        Partition - encrypt the non-static partitions where the sensitive data resides, e.g. /var, /home, etc.

    What would be the simplest approach that satisfies the requirements?
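
    For the partition-level option, a hedged sketch using a LUKS container file (paths and sizes are placeholders; very old cryptsetup versions may need an explicit losetup step, and note that someone must enter the passphrase after every reboot, which is itself awkward without console access):

        dd if=/dev/zero of=/crypt.img bs=1M count=10240    # 10 GB container file
        cryptsetup luksFormat /crypt.img                   # prompts for a passphrase
        cryptsetup luksOpen /crypt.img cryptdata
        mkfs.ext3 /dev/mapper/cryptdata
        mount /dev/mapper/cryptdata /var/lib/mysql         # stop mysqld and copy the data in first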

    Read the article

  • SSH Private Key Not Working in Some Directories

    - by uesp
    I have a strange issue where SSH won't properly connect with a private key if the key file is in certain directories. I've set up the keys on a set of servers, and the following command

        ssh -i /root/privatekey [email protected]

    works fine and I log in to the given host without getting prompted for a password, but this command:

        ssh -i /etc/keyfiles/privatekey [email protected]

    gives me a password prompt. I've narrowed it down to this behavior occurring only in some sub-directories of /etc/. For example /etc/httpd1/ gives me a password prompt but /etc/httpd/ does not. What I've checked so far:

        All private key files used are identical (copied from the original file).
        The private key file and directories used have identical permissions.
        No relevant error messages in the server/client logs.
        No interesting debug messages from ssh -v (it just seems to skip the key file).
        It happens when connecting to different hosts.

    After more testing, it is not the actual directory name. For example:

        mkdir /etc/test
        cp /root/privatekey /etc/test
        ssh -i /etc/test/privatekey [email protected]    # Results in password prompt
        cp /root/privatekey /etc/httpd                  # Existing directory
        ls -ald test httpd
        # drwxr-xr-x 4 root root 4096 Mar 5 18:25 httpd
        # drwxr-xr-x 2 root root 4096 Mar 5 18:43 test
        ssh -i /etc/httpd/privatekey [email protected]   # Results in *no* prompt
        rm -r test
        cp -R /etc/httpd /etc/test
        ssh -i /etc/test/privatekey [email protected]    # Results in *no* prompt

    I'm sure it's just something simple I've overlooked, but I'm at a loss.
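
    One hedged thing worth ruling out on systems with SELinux enforcing, since "identical permissions but different directories" is exactly the signature of file contexts rather than permissions, and cp vs. cp -R can leave different contexts behind:

        ls -Z /etc/httpd/privatekey /etc/test/privatekey   # compare SELinux contexts
        restorecon -v /etc/test/privatekey                 # reset the context to the policy default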

    Read the article

  • VPS host can't send email to Google and Yahoo Mail

    - by mandeler
    Hi, I got a new VPS set up and I'm wondering why I can't send emails to Yahoo and Gmail. Here's the error in /var/log/maillog:

        00:43:00 mylamp sendmail[32507]: o45Gh0nc032505: to=, ctladdr= (48/48), delay=00:00:00, xdelay=00:00:00, mailer=esmtp, pri=120405, relay=alt4.gmail-smtp-in.l.google.com. [74.125.79.27], dsn=4.0.0, stat=Deferred: Connection refused by alt4.gmail-smtp-in.l.google.com

    What seems to be the problem?
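
    "Connection refused" on the way to Google's MX is usually not a sendmail configuration problem. A quick hedged check is whether outbound port 25 works at all, since many VPS providers block it by default:

        telnet alt4.gmail-smtp-in.l.google.com 25
        # a banner starting with "220" means outbound port 25 is open;
        # refused/timeout points at a provider- or firewall-level block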

    Read the article

  • Apache - The name

    - by Josh
    I am working on a migration to a newer virtualized server. The old one has Apache 2.2.4 according to the old server's phpinfo(). The new one, fully up to date, has 2.2.3. How can this be, assuming no trickery is involved? The old one is years old. A lot of the guides I reference use apache2 in folder names and many of the conventions. The newest version, as I understand it, is called httpd. Did Apache change the name from what it originally was? (i.e. break the web server component into its own project called httpd; I realize the original daemon was probably still called httpd)

    Read the article

  • Configuring two DNS zones with named.conf

    - by tike
    I am trying to configure DNS to serve two domain names on the same machine, for example test.com and test1.com. I am able to do one domain but am not sure how to configure a second. How do I configure the zone files and named.conf to achieve this?
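
    In BIND, each domain simply gets its own zone stanza in named.conf pointing at its own zone file. A minimal sketch (file names, hostnames and IPs are placeholders; 192.0.2.x is the documentation address range):

        // named.conf fragment: one zone block per domain
        zone "test.com" IN {
            type master;
            file "test.com.zone";
        };

        zone "test1.com" IN {
            type master;
            file "test1.com.zone";
        };

    and a correspondingly minimal zone file for each domain:

        $TTL 86400
        @    IN SOA ns1.test.com. admin.test.com. (
                 2012010101 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 86400 )    ; negative-caching TTL
             IN NS  ns1.test.com.
             IN A   192.0.2.10
        ns1  IN A   192.0.2.10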

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste on terminal):

        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
        wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar
        ........ and so on..

    What can I do to download all these files one by one?
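
    A hedged sketch: put the URL/output-name pairs in a list file and loop over it, so each download still runs sequentially but without retyping the command (downloads.txt and its format are assumptions):

        # downloads.txt format: URL, space, output name, e.g.
        # http://www.filesonic.com/file/812720774/PS11.rar part11.rar
        while read -r url name; do
            wget -c --load-cookies cookies.txt "$url" -O "$name"
        done < downloads.txt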

    Read the article

  • configuring DNS zone file and named.conf

    - by tike
    Hi, I was trying to configure DNS so that I can run two domain names. I am able to do one, as it's pretty simple and easy, e.g. test.com. Now I want to run test1.com on the same machine. How do I configure the zone file and named.conf? I have no real background in this, so any clue is appreciated. Thanks, Tikejhya.

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold other ASP code as well. Bonus points for a solution that works in more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box - commit into the trunk - auto-deploy on the staging system - test on the staging system - merge into /branches/live/ - deploy manually on the live system.

    This works very well for one-way changes; however, we run into trouble on every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. In the end, we did the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following: the htdocs of each website is an svn export; each website has an svn working copy beside the htdocs directory; and a script "replays" the changes from htdocs into the working copy after a WP update (rsync'ing the changed files to the working copy, rsync'ing new files and svn add'ing them, and finally svn delete'ing the removed files) - see the sketch below. The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.).

    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
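
    A hedged sketch of that replay script (paths are placeholders; it assumes file names without spaces, and it excludes the svn metadata so rsync --delete cannot destroy it):

        #!/bin/bash
        HTDOCS=/var/www/site/htdocs    # the svn export served by Apache (assumed layout)
        WC=/var/www/site/wc            # the working copy kept beside it (assumed layout)

        # mirror the updated tree into the working copy, keeping svn metadata and local-only files
        rsync -a --delete --exclude '.svn' \
              --exclude wp-config.php --exclude 'wp-content/uploads/' \
              "$HTDOCS/" "$WC/"

        cd "$WC" || exit 1
        svn add --force . > /dev/null                          # schedule files WP added
        svn status | awk '/^!/ {print $2}' | xargs -r svn rm   # schedule files WP removed
        svn commit -m "Replay WordPress update from htdocs"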

    Read the article

  • How to manage a home-grown YUM package repo?

    - by TomOnTime
    There are plenty of websites that explain how to manage a mirror of YUM repos. I want to run a repo for my home-grown packages. Is there a good way to manage such repos? What I need to do:

        Manage 3 repos: unstable, testing, stable.
        Self-service functions that let users add/remove/promote packages (promote means moving a package unstable -> testing or testing -> stable).
        ACLs that control which users/groups may add/remove/promote packages.
        Automatically re-sign packages as they move from repo to repo (since the GPG key for "stable" should be different than for "unstable").
        Automatically run "createrepo" to update the repodata when needed.

    Suggestions?
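
    Whatever tooling wraps it (the ACL/self-service layer would have to come from sudo rules or a small frontend), the core of a "promote" step is small; a hedged sketch with made-up paths and package name:

        # move the package up, re-sign it with the target repo's key, re-index both repos
        mv /repo/unstable/Packages/foo-1.0-1.x86_64.rpm /repo/testing/Packages/
        rpm --resign /repo/testing/Packages/foo-1.0-1.x86_64.rpm   # key selected via %_gpg_name in ~/.rpmmacros
        createrepo --update /repo/testing
        createrepo --update /repo/unstable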

    Read the article

  • Shrink a mounted LVM partition

    - by javanix
    I fear I already know the answer to this question, but here goes. I need to carve out a new partition on a running system. /var is mounted from an LVM volume (hdd1_vg-var) and has only 3% used disk space. / is mounted separately (hdd1_vg-root) and has about 80% used disk space.

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/**/hdd1_vg-root  2.0G  1.4G  481M  75% /
        /dev/**/hdd1_vg-var    33G  699M   31G   3% /var

    Unfortunately I don't have any free extents to grow this partition organically - vgdisplay shows:

        Total PE          10000
        Alloc PE / Size   10000 / 39.06 GB
        Free  PE / Size   0 / 0

    So seeing that I have all this free disk space on /var, can I shrink /var without un-mounting it, or is this just a pipe dream? I am really hoping to be able to do this work on a running system - un-mounting would of course not be difficult, but it would interfere with system functionality.
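
    For what it's worth, the pessimism is justified: ext3 cannot be shrunk while mounted (only grown online), so the offline sequence is the realistic path. A hedged sketch (device names assume VG hdd1_vg / LV var; common practice is to shrink the filesystem a little below the target, reduce the LV, then grow the filesystem to fill it, because getting the order or the units wrong destroys data):

        umount /var
        e2fsck -f /dev/hdd1_vg/var        # resize2fs insists on a clean check first
        resize2fs /dev/hdd1_vg/var 19G    # shrink the fs below the target LV size
        lvreduce -L 20G /dev/hdd1_vg/var  # shrink the LV to the target
        resize2fs /dev/hdd1_vg/var        # grow the fs back to exactly fill the LV
        mount /var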

    Read the article

  • Getting Pango-WARNING: Invalid UTF-8 string passed to pango_layout_set_text()

    - by geerlingguy
    About three days ago, I noticed the exim mail queue started filling up on one of my servers, and upon inspecting some of the emails using # exim -Mvb $ID, I noticed they were being sent to some system email address (which is not a real address), and the body of the messages was as follows:

        (process:8259): Pango-WARNING **: Invalid UTF-8 string passed to pango_layout_set_text()

    I'm wondering what could be causing this strange issue, as I've never heard of 'pango' at all... I've never seen that function used in my lifetime! It seems the process id (PID) is for an apache process, though, as the pids are always gone by the time I use # ps -aux to look them up. Edit: whoops! Forgot to include the subject - it looks like it's actually munin-cron that's bringing up the issue:

        Subject: Cron /usr/bin/munin-cron --force-root

    Read the article

  • ntpdate cannot receive data

    - by Hengjie
    I have a problem where running ntpdate on my server doesn't return any data, therefore I get the following error:

        [root@server etc]# ntpdate -d -u -v time.nist.gov
        12 Apr 01:10:09 ntpdate[32072]: ntpdate [email protected] Fri Nov 18 13:21:21 UTC 2011 (1)
        Looking for host time.nist.gov and service ntp
        host found : 24-56-178-141.co.warpdriveonline.com
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        transmit(24.56.178.141)
        24.56.178.141: Server dropped: no data
        server 24.56.178.141, port 123
        stratum 0, precision 0, leap 00, trust 000
        refid [24.56.178.141], delay 0.00000, dispersion 64.00000
        transmitted 4, in filter 4
        reference time:    00000000.00000000  Thu, Feb  7 2036 14:28:16.000
        originate timestamp: 00000000.00000000  Thu, Feb  7 2036 14:28:16.000
        transmit timestamp:  d3303975.1311947c  Thu, Apr 12 2012  1:10:13.074
        filter delay:  0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
        filter offset: 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000 0.000000
        delay 0.00000, dispersion 64.00000
        offset 0.000000
        12 Apr 01:10:14 ntpdate[32072]: no server suitable for synchronization found

    I have tried Googling the 'no server suitable for synchronization found' error online, and I have tried disabling my firewall (running iptables -L returns no rules). I have also confirmed with my DC that there are no rules blocking NTP (port 123). Does anyone have any ideas on how I may fix this? Btw, this is what the output should look like on a working server in another DC:

        11 Apr 19:01:24 ntpdate[725]: ntpdate [email protected] Fri Nov 18 13:21:17 UTC 2011 (1)
        Looking for host 184.105.192.247 and service ntp
        host found : 247.conarusp.net
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        receive(184.105.192.247)
        transmit(184.105.192.247)
        server 184.105.192.247, port 123
        stratum 2, precision -20, leap 00, trust 000
        refid [184.105.192.247], delay 0.18044, dispersion 0.00006
        transmitted 4, in filter 5
        reference time:    d330364e.e956694f  Wed, Apr 11 2012 18:56:46.911
        originate timestamp: d3303765.8702d025  Wed, Apr 11 2012 19:01:25.527
        transmit timestamp:  d3303765.73b213e3  Wed, Apr 11 2012 19:01:25.451
        filter delay:  0.18069 0.18044 0.18045 0.18048 0.18048 0.00000 0.00000 0.00000
        filter offset: -0.00195 -0.00197 -0.00211 -0.00202 -0.00202 0.000000 0.000000 0.000000
        delay 0.18044, dispersion 0.00006
        offset -0.001970

    Read the article

  • How to split registration and media?

    - by Stackfan
    I have a SIP project where I will have a SIP server running. The server will do the following: routing only, and receiving incoming calls. The audio/video itself will be peer-to-peer. Can this be done with Asterisk? Only the media has to be split off; registration stays with the server.

    Tools:

        A) server with SIP
        B) one PC with a SIP client
        C) another PC with a SIP client

    My goal: B and C get connected via A, and the audio/video packets do not go via A.
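
    Asterisk can do this in principle; the knob is its "direct media" behaviour, where Asterisk keeps the SIP signalling but re-INVITEs the endpoints so RTP flows directly between them. A hedged sip.conf sketch (the option name varies by release, and both endpoints must be mutually reachable, i.e. no NAT in between):

        [general]
        directmedia=yes    ; called canreinvite=yes on older Asterisk releases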

    Read the article

  • my server suddenly crashes every 2 days or so. Programmer has no idea, please help find the cause, here is the top

    - by Alex
    Every couple of days my server suddenly crashes and I must request a hardware reset at the data center to get it running again. Today I came back to my shell, saw the server was dead with "top" still running on it, and captured the "top" below from right before the crash. I opened /var/log/messages and scrolled to the reboot time and see nothing - no errors prior to the hard reboot. (I checked /etc/syslog.conf and I see "*.info;mail.none;authpriv.none;cron.none /var/log/messages"; isn't this good enough to log all problems?)

    Usually when I look at the top, the swap is never used up like this! I also don't know why mysqld is at 323% CPU (the server only runs Drupal and it's never slow or overloaded). Solver is my application. I don't know what that 'sh' is doing or what 'dovecot' is doing. This has been driving me crazy over the last month; please help me solve this mystery and stop my downtimes.

        top - 01:10:06 up 6 days, 5 min, 3 users, load average: 34.87, 18.68, 9.03
        Tasks: 500 total, 19 running, 481 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 96.6%sy, 0.0%ni, 1.7%id, 1.8%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  8165600k total, 8139764k used,   25836k free,    428k buffers
        Swap: 2104496k total, 2104496k used,       0k free,   8236k cached

          PID USER     PR  NI  VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
         4421 mysql    15   0  571m 105m  976 S 323.5  1.3 9:08.00 mysqld
          564 root     20  -5     0    0    0 R  99.5  0.0 2:49.16 kswapd1
        25767 apache   19   0  399m 8060  888 D  79.3  0.1 0:06.64 httpd
        25781 apache   19   0  398m 5648  492 R  79.0  0.1 0:08.21 httpd
        25961 apache   25   0  398m 5700  560 R  76.7  0.1 0:17.81 httpd
        25980 apache   25   0 10816  668  520 R  75.0  0.0 0:46.95 sh
          563 root     20  -5     0    0    0 D  71.4  0.0 3:12.37 kswapd0
        25766 apache   25   0  399m 7256  756 R  69.7  0.1 0:39.83 httpd
        25911 apache   25   0  398m 5612  480 R  58.8  0.1 0:17.63 httpd
        25782 apache   25   0  440m  38m  648 R  55.2  0.5 0:18.94 httpd
        25966 apache   25   0  398m 5640  556 R  55.2  0.1 0:48.84 httpd
         4588 root     25   0 74860  596  476 R  53.9  0.0 0:37.90 crond
        25939 apache   25   0  2776  172   84 R  48.9  0.0 0:59.46 solver
         4575 root     25   0  397m 6004 1144 R  48.6  0.1 1:00.43 httpd
        25962 apache   25   0  398m 5628  492 R  47.9  0.1 0:14.58 httpd
        25824 apache   25   0  440m  39m  680 D  47.3  0.5 0:57.85 httpd
        25968 apache   25   0  398m 5612  528 R  46.6  0.1 0:42.73 httpd
         4477 root     25   0  6084  396  280 R  46.3  0.0 0:59.53 dovecot
        25982 root     25   0  397m 5108  240 R  45.9  0.1 0:18.01 httpd
        25943 apache   25   0  2916  172    8 R  44.0  0.0 0:53.54 solver
        30687 apache   25   0  468m  63m 1124 D  42.3  0.8 0:45.02 httpd
        25978 apache   25   0  398m 5688  600 R  23.8  0.1 0:40.99 httpd
        25983 root     25   0  397m 5272  384 D  14.9  0.1 0:18.99 httpd
          935 root     10  -5     0    0    0 D  14.2  0.0 1:54.60 kjournald
        25986 root     25   0  397m 5308  420 D   8.9  0.1 0:04.75 httpd
         4011 haldaemo 25   0 31568 1476  716 S   5.6  0.0 0:24.36 hald
        25956 apache   23   0  398m 5872  644 S   5.6  0.1 0:13.85 httpd
        18336 root     18   0 13004 1332  724 R   0.3  0.0 1:46.66 top
            1 root     18   0 10372  212  180 S   0.0  0.0 0:05.99 init
            2 root     RT  -5     0    0    0 S   0.0  0.0 0:00.95 migration/0
            3 root     34  19     0    0    0 S   0.0  0.0 0:00.01 ksoftirqd/0
            4 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/0
            5 root     RT  -5     0    0    0 S   0.0  0.0 0:00.15 migration/1
            6 root     34  19     0    0    0 S   0.0  0.0 0:00.06 ksoftirqd/1

    Here is a normal top, when the server is working fine:

        top - 01:50:41 up 21 min, 1 user, load average: 2.98, 2.70, 1.68
        Tasks: 271 total, 2 running, 269 sleeping, 0 stopped, 0 zombie
        Cpu(s): 15.0%us, 1.1%sy, 0.0%ni, 81.4%id, 2.4%wa, 0.1%hi, 0.0%si, 0.0%st
        Mem:  8165600k total, 2035856k used, 6129744k free,  60840k buffers
        Swap: 2104496k total,       0k used, 2104496k free, 283744k cached

          PID USER     PR  NI  VIRT  RES  SHR S  %CPU %MEM   TIME+ COMMAND
         2204 apache   17   0  466m  83m  19m S  25.9  1.0 0:22.16 httpd
        11347 apache   15   0  466m  83m  19m S  25.9  1.0 0:26.10 httpd
        18204 apache   18   0  481m  97m  19m D  25.2  1.2 0:13.99 httpd
         4644 apache   18   0  481m 100m  19m D  24.6  1.3 1:17.12 httpd
         4727 apache   17   0  481m  99m  19m S  24.3  1.2 1:10.77 httpd
         4777 apache   17   0  482m 102m  21m S  23.6  1.3 1:38.27 httpd
         8924 apache   15   0  483m  99m  19m S  22.3  1.3 1:13.41 httpd
         9390 apache   18   0  483m  99m  19m S  18.9  1.2 1:05.35 httpd
         4728 apache   16   0  481m 101m  19m S  14.3  1.3 1:12.50 httpd
         4648 apache   15   0  481m 107m  27m S  12.6  1.4 1:18.62 httpd
        24955 apache   15   0  467m  82m  19m S   3.3  1.0 0:21.80 httpd
         4722 apache   15   0  503m 118m  19m R   1.7  1.5 1:17.79 httpd
         4647 apache   15   0  484m 105m  20m S   1.3  1.3 1:40.73 httpd
         4643 apache   16   0  481m 100m  20m S   0.7  1.3 1:11.80 httpd
         1561 root     15   0 12900 1264  828 R   0.3  0.0 0:00.54 top
         4434 mysql    15   0  496m  55m 4812 S   0.3  0.7 0:06.69 mysqld
         4646 apache   15   0  481m 100m  19m S   0.3  1.3 1:25.51 httpd
            1 root     18   0 10372  692  580 S   0.0  0.0 0:02.09 init
            2 root     RT  -5     0    0    0 S   0.0  0.0 0:00.03 migration/0
            3 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/0
            4 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/0
            5 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 migration/1
            6 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/1
            7 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/1
            8 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 migration/2
            9 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/2
           10 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/2
           11 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 migration/3
           12 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/3
           13 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/3
           14 root     RT  -5     0    0    0 S   0.0  0.0 0:00.03 migration/4
           15 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/4
           16 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/4
           17 root     RT  -5     0    0    0 S   0.0  0.0 0:00.02 migration/5
           18 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/5
           19 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/5
           20 root     RT  -5     0    0    0 S   0.0  0.0 0:00.01 migration/6
           21 root     34  19     0    0    0 S   0.0  0.0 0:00.00 ksoftirqd/6
           22 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 watchdog/6
           23 root     RT  -5     0    0    0 S   0.0  0.0 0:00.00 migration/7
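
    One observation plus a hedged suggestion: the dying top shows swap completely exhausted (0k free) with both kswapd threads pinned, which looks like memory exhaustion rather than a mysqld bug, and a box thrashing that hard often dies faster than syslog can record anything. A rolling memory log would at least leave evidence for the next crash (sketch; the log path is arbitrary):

        # record memory/swap pressure every 10 seconds so the next crash leaves a trail
        nohup vmstat 10 >> /var/log/vmstat.log &
        # or install the sysstat package and read "sar -r" / "sar -W" after the reboot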

    Read the article

  • UCARP: prevent the original master from taking over the VIP when it comes back after failure?

    - by quanta
    Keepalived can do this by combining the nopreempt option and the BACKUP state on both nodes:

        Prevent VRRP Master from becoming Master once it has failed
        Prevent master to fall back to master after failure

    How about UCARP?

        Name       : ucarp
        Arch       : x86_64
        Version    : 1.5.2
        Release    : 1.el5.rf
        Size       : 81 k
        Repo       : installed
        Summary    : Common Address Redundancy Protocol (CARP) for Unix
        URL        : http://www.ucarp.org/
        License    : BSD
        Description: UCARP allows a couple of hosts to share common virtual IP addresses in order
                   : to provide automatic failover. It is a portable userland implementation of the
                   : secure and patent-free Common Address Redundancy Protocol (CARP, OpenBSD's
                   : alternative to the patents-bloated VRRP).
                   : Strong points of the CARP protocol are: very low overhead, cryptographically
                   : signed messages, interoperability between different operating systems and no
                   : need for any dedicated extra network link between redundant hosts.

    If I don't use the --preempt option and set the --advskew to the same value, both nodes become master.

    /etc/sysconfig/carp/vip-010.conf:

        # Virtual IP configuration file for UCARP
        # The number (from 001 to 255) in the name of the file is the identifier
        # $Id: vip-001.conf.example 1527 2004-07-09 15:23:54Z dude $

        # Set the same password on all machines sharing the same virtual IP
        PASSWORD="pa$$w0rd"

        # You are required to have an IPADDR= line in the configuration file for
        # this interface (so no DHCP allowed)
        BIND_INTERFACE="eth0"

        # Do *NOT* use a main interface for the virtual IP, use an ethX:Y alias
        # with the corresponding /etc/sysconfig/network-scripts/ifcfg-ethX:Y file
        # already configured and with ONBOOT=no
        VIP_INTERFACE="eth0:0"

        # If you have extra options to add, see "ucarp --help" output
        # (the lower the "-k <val>" the higher priority and "-P" to become master ASAP)
        OPTIONS="-z -k 255"

    /etc/sysconfig/network-scripts/ifcfg-eth0:0:

        DEVICE=eth0:0
        ONBOOT=no
        BOOTPROTO=
        IPADDR=192.168.6.8
        NETMASK=255.255.255.0
        USERCTL=yes
        IPV6INIT=no

    node 1:

        eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether c6:9b:8e:af:a7:69 brd ff:ff:ff:ff:ff:ff
            inet 192.168.6.192/24 brd 192.168.6.255 scope global eth0
            inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth0:0
            inet6 fe80::c49b:8eff:feaf:a769/64 scope link
               valid_lft forever preferred_lft forever

    node 2:

        eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
            link/ether 00:30:48:f7:0f:81 brd ff:ff:ff:ff:ff:ff
            inet 192.168.6.38/24 brd 192.168.6.255 scope global eth1
            inet 192.168.6.8/24 brd 192.168.6.255 scope global secondary eth1:0
            inet6 fe80::230:48ff:fef7:f81/64 scope link
               valid_lft forever preferred_lft forever

    Read the article

  • ClearOS always showing "Operation too slow. Less than 1 bytes/sec"

    - by Blue Gene
    I have been trying to install a ClearOS add-on, but nothing is working; I get this error on every mirror in the .repo file when running yum install squid:

        http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on http://mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-houston.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-houston.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dallas.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror2-dc.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror1.timburgess.net/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: [Errno 12] Timeout on mirror3-toronto.clearsdn.com/clearos/core/6/x86_64/repodata/primary.sqlite.bz2: (28, 'Operation too slow. Less than 1 bytes/sec transfered the last 30 seconds')
        Trying other mirror.
        Error: failure: repodata/primary.sqlite.bz2 from clearos-core: [Errno 256] No more mirrors to try.

    How can I fix this? I am able to access the repo through the web, and it seems nothing is wrong with the repo itself, so where can the problem be? I tried yum clean all but it didn't help. Is there a way to fix it? I am not able to install any package.
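
    If the mirrors load fine in a browser from the same box, one hedged knob to try: the "1 bytes/sec for 30 seconds" check is the downloader's low-speed timeout, and raising yum's timeout sometimes rides out a throttled or slow-starting connection (behaviour varies across yum/urlgrabber versions, and a transparent proxy mangling HTTP is another common culprit):

        # /etc/yum.conf
        [main]
        timeout=300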

    Read the article

  • Testing UDP port connectivity

    - by Lock
    I am trying to test whether I can reach a particular UDP port on a remote server (I have access to both machines, and both servers are internet-facing). I am using netcat to listen on the port, then nmap from the other side to check whether the port is open, but it doesn't appear to be. iptables is turned off. Any suggestions why this could be? I am eventually going to set up a VPN tunnel, but because I'm very new to tunnels, I want to make sure I have connectivity on UDP port 1194 before advancing.
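
    A hedged note: because UDP is connectionless, nmap often reports UDP ports only as "open|filtered" even when the listener is fine, so an end-to-end echo test with netcat is usually more telling (hostname is a placeholder; flag syntax varies between netcat builds):

        # on the server:
        nc -u -l 1194          # traditional netcat builds want: nc -u -l -p 1194
        # on the client - type a line; it should appear on the server:
        nc -u server.example.com 1194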

    Read the article

  • Is there software that will help me convert my PST files into a searchable web archive?

    - by chronoz
    I have used POP3 for many, many years and have always used PST files for back-up purposes. I'd like to be able to create a searchable mail archive from this 12GB worth of e-mail. I used Horde + Qmail for a while for searching e-mail, but it was truly horrible - extremely slow even when searching a few tens of thousands of e-mails, let alone more than a million. I would prefer a free solution that provides fast searching through historical e-mails, preferably hosted on a server, so I don't have to worry about backing up yet more crucial data on my desktop.

    Read the article

  • php.ini date.timezone usefulness?

    - by Buttle Butkus
    I'm not sure if this is a question for Server Fault or Stack Overflow, but it seems to have a lot to do with server config. We have a server in Chicago, and the server's clock is on Chicago time. But since the business is located in California, it would seem to make sense to use Pacific time. What happens when server time is Chicago and the php.ini directive date.timezone is set to "America/Los_Angeles"? How will that affect logs written to MySQL, error logs, etc.? I've looked at the Apache error log and, as I expected, the PHP directive does not affect it: times are all server time. Thanks.
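
    For what it's worth, that split is expected: date.timezone only governs what PHP itself produces (date(), DateTime, and therefore any timestamps your PHP code formats and writes into MySQL), while Apache's logs and MySQL's own NOW() keep following the system clock. The directive itself is just:

        ; php.ini - affects PHP's date functions only, not Apache or MySQL server time
        date.timezone = "America/Los_Angeles"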

    Read the article

  • How to execute a shell script on startup?

    - by vijay.shad
    I have created a script to start a server (my first question). Now I want it to run at system boot and start the defined server. What should I do to get this done? My findings tell me to put this file in /etc/init.d, and it will execute when the system boots. But I am not able to understand how the first argument at startup comes to be start - is this predefined somewhere, so that start is used as $1? If I want to have a case startall that will start all the servers in the script, what are my options? My script is like this (start and stop would be functions defined earlier in the script):

        #!/bin/bash
        case "$1" in
          start)
            start
            ;;
          stop)
            stop
            ;;
          restart)
            $0 stop
            $0 start
            ;;
          *)
            echo "usage: $0 (start|stop|restart)"
            ;;
        esac
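
    On the $1 question: yes, it is effectively predefined - at boot, init (via the rc mechanism) invokes each enabled script literally as /etc/init.d/name start, so $1 arrives as the word start; at shutdown it is called with stop. A hedged registration sketch for CentOS/Red Hat-style systems (the script name is a placeholder):

        cp myserver /etc/init.d/myserver
        chmod +x /etc/init.d/myserver
        chkconfig --add myserver    # requires a "# chkconfig: 2345 90 10" header comment in the script
        chkconfig myserver on

    A startall case would then simply be one more pattern in the case statement that calls each server's start function in turn.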

    Read the article

  • Apache on CentOS opening default page on HTTPS

    - by Asghar
    I am new to Apache and SSL configuration. I got a VeriSign certificate to secure my site; I have the public, private and ca_intermediate cert files. I have configured ssl.conf as below:

        <VirtualHost _default_:443>
            DocumentRoot /var/www/mydomain.com/web/
            ServerName mydomain.com:443
            ServerAlias www.mydomain.com

            # Use separate log files for the SSL virtual host; note that LogLevel
            # is not inherited from httpd.conf.
            ErrorLog logs/ssl_error_log
            TransferLog logs/ssl_access_log
            LogLevel warn

            # SSL Engine Switch:
            # Enable/Disable SSL for this virtual host.
            SSLEngine on

    The problem is that when I access www.mydomain.com over HTTP it works fine, but when I access it over HTTPS it just opens the Apache default page - although with a green HTTPS, meaning my certificates are installed correctly. How can I get rid of this situation? Thanks.

    EDIT: output of apachectl -S:

        -bash-3.2# apachectl -S
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:80 has no VirtualHosts
        [Mon Aug 27 10:20:19 2012] [warn] NameVirtualHost 82.56.29.189:443 has no VirtualHosts
        VirtualHost configuration:
        wildcard NameVirtualHosts and _default_ servers:
        _default_:8081  localhost.localdomain (/etc/httpd/conf/sites-enabled/000-apps.vhost:10)
        *:8080          is a NameVirtualHost
                 default server localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
                 port 8080 namevhost localhost.localdomain (/etc/httpd/conf/sites-enabled/000-ispconfig.vhost:10)
        *:443           is a NameVirtualHost
                 default server mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
                 port 443 namevhost mydomain.com (/etc/httpd/conf.d/ssl.conf:81)
        *:80            is a NameVirtualHost
                 default server app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                 port 80 namevhost app.mydomain.com (/etc/httpd/conf/sites-enabled/100-app.mydomain.com.vhost:7)
                 port 80 namevhost mydomain.com (/etc/httpd/conf/sites-enabled/100-mydomain.com.vhost:7)
        Syntax OK

    Read the article

  • Including a PHP file that can be used with multiple sites

    - by Roland
    I have a web server that we use: Apache, CentOS 5, PHP. I have a file called include.php that I need to include in multiple sites. E.g. I have a site called testsite.co.za, and in its index.php I want to include include.php, which is not in the web root of testsite.co.za. I created another folder, includes, in the web root directory, which contains include.php. My code in testsite.co.za/index.php looks as follows:

        require_once '../includes/include.php';

    If I run testsite.co.za it can't find include.php. Is there a certain server setting I need to change in order to include this file? My directory structure is:

        /var/www/html
            testsite.co.za
                index.php
            includes
                include.php

    Hope this makes sense.
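
    One hedged thing to check before server settings: a path starting with ../ is resolved against the current working directory, not against the location of index.php, so it often breaks exactly like this. Anchoring the path usually fixes it (the absolute path assumes the layout shown above):

        <?php
        // absolute path
        require_once '/var/www/html/includes/include.php';

        // or relative to the calling file itself
        require_once dirname(__FILE__) . '/../includes/include.php';

    If an open_basedir restriction is in effect for the vhost, it would also need to cover /var/www/html/includes.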

    Read the article

  • Fast (non-blocking) way to transfer many files to another server

    - by Nyxynyx
    I am currently attempting to transfer over 1 million files from one server to another. Using wget, it seems extremely slow, probably because a new transfer starts only after the previous one has completed. Question: is there a faster, non-blocking (asynchronous) way to do the transfer? I do not have enough space on the first server to compress the files into a tar.gz and transfer that over. Thanks!
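
    The usual fix here is less about asynchrony than about avoiding a fresh connection and request per file. Two hedged sketches (paths are placeholders): rsync compresses in transit without needing temporary space on the source, and tar streams over a single SSH connection without ever writing an archive to disk:

        rsync -az --partial /path/to/files/ user@remote:/path/to/dest/

        tar cf - -C /path/to/files . | ssh user@remote 'tar xf - -C /path/to/dest'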

    Read the article
