Search Results

Search found 3039 results on 122 pages for 'centos'.

Page 109/122 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our shared hosting providers (both shared and dedicated hosting) and have decided to host the websites at our office's datacentre. We're running 7 websites, wherein the average unique hits per day at the moment is about 900. We have 2 servers set aside for this - one is a Dell PowerEdge 1850 (Intel Xeon 3 GHz x2, 4 GB RAM, 73 GB HDD) and the other is an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD). a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS. b) Do you think I should bring virtualization (KVM/Xen) into the equation? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go. c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (something that I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office. About the websites - of the 7 websites, 4 are basic static websites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, wherein the end user can view products and order them online. I am trying to take this as a learning experience wherein I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess it gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.
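    If virtualization does go into the mix, a quick sanity check of KVM support on the CentOS host, plus a rough sketch of carving out separate DB and frontend guests, might look like the following - guest names, sizes, bridge and install URL are placeholders, not recommendations:

        # Confirm hardware virtualization support and that the KVM modules are loaded
        egrep -c '(vmx|svm)' /proc/cpuinfo    # should be greater than 0
        lsmod | grep kvm

        # Illustrative guest for the database tier; repeat with different sizing for the frontend
        virt-install --name db01 --ram 2048 --vcpus 2 \
          --disk path=/var/lib/libvirt/images/db01.img,size=30 \
          --network bridge=br0 \
          --location http://mirror.centos.org/centos/6/os/x86_64/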

    Read the article

  • Cannot login to ISCSI Target - hangs after sending login details

    - by Frank
    I have an iSCSI target volume, to which I am trying to connect using a CentOS Linux server. Everything works fine, except that it cannot log in - it gets stuck at login. Here are the steps I am performing: [root@neon ~]# iscsiadm -m node -l iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session20 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session21 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session22 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session23 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session30 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session31 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session78 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session79 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session80 iscsiadm: could not read session targetname: 5 iscsiadm: could not find session info for session81 Logging in to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple) After this step, it gets stuck, waits for some time and then gives this output: Logging in to [iface: iface1, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260] (multiple) iscsiadm: Could not login to [iface: eql.eth2, target: iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105, portal: 10.10.1.1,3260]. My iscsi.conf is this: node.startup = automatic node.session.timeo.replacement_timeout = 15 # default 120; RedHat recommended node.conn[0].timeo.login_timeout = 15 node.conn[0].timeo.logout_timeout = 15 node.conn[0].timeo.noop_out_interval = 5 node.conn[0].timeo.noop_out_timeout = 5 node.session.err_timeo.abort_timeout = 15 node.session.err_timeo.lu_reset_timeout = 20 node.session.initial_login_retry_max = 8 # default 8; Dell recommended node.session.cmds_max = 1024 # default 128; Equallogic recommended node.session.queue_depth = 32 # default 32; Equallogic recommended node.session.iscsi.InitialR2T = No node.session.iscsi.ImmediateData = Yes node.session.iscsi.FirstBurstLength = 262144 node.session.iscsi.MaxBurstLength = 16776192 node.conn[0].iscsi.MaxRecvDataSegmentLength = 262144 discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768 node.conn[0].iscsi.HeaderDigest = None node.session.iscsi.FastAbort = Yes Also, in access control, I have given full access to any IP, any CHAP user and a fixed iSCSI initiator name. With the same access level, all other volumes on the rest of the servers are working, except this one.
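    The "could not find session info" lines suggest stale session/node records from earlier attempts; one hedged clean-up to try before digging into the array side (target and portal taken from the output above):

        # Remove the stale record for this target, re-discover it, then retry the login
        iscsiadm -m node -o delete -T iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105 -p 10.10.1.1:3260
        iscsiadm -m discovery -t sendtargets -p 10.10.1.1:3260
        iscsiadm -m node -T iqn.2001-05.com.equallogic:0-8a0906-ab4764e0b-55ed2ef5cf350a66-neon105 -p 10.10.1.1:3260 -l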

    Read the article

  • High disk I/O - jbd2/sda2-8 process

    - by Evan Hamlet
    I run a file server on a CentOS 5.8 (Final) server. My only concern at the moment is what appears to be intermittent but continuous high disk I/O activity causing a general slowdown because of the jbd2/sda2-8 process. jbd2/sda2-8 is making use of /dev/sda2, which is the 2nd partition of the first hard drive (i.e. the root partition). More info: using "iotop" the culprit appears to be "jbd2/sda1-8" making writes every second, which appears to be a kernel process associated with journaling on the ext4 filesystem, if my googling around is correct. I see "jbd2/sda2-8" appearing here every now and then, but certainly not every 3 seconds; when idle, it appears about 1 or 2 times per minute. When I'm using the system, it appears more frequently. ATOP results: http://grabilla.com/02b14-8022db2e-4eb9-4f10-8e10-d65c49ad7530.png IOTOP results: http://grabilla.com/02b14-cf74b25d-4063-4447-9210-7d1b9b70e25b.png HTOP results: grabilla.com/02b14-ad8cad0e-89b0-46d3-849d-4fd515c1e690.png jbd2/sda2-8 is the process I see with iotop making writes to disk even though the machine is not in use at all. Does anyone have any idea how I could solve the high disk usage caused by the jbd2/sda2-8 process?
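    jbd2 itself only flushes the journal; something else is dirtying the filesystem. A hedged way to see what that is for a short window (the sysctl is noisy, so switch it back off), plus a mount tweak that sometimes softens the symptom:

        # Log block writes to dmesg for 30 seconds, then summarise by process
        echo 1 > /proc/sys/vm/block_dump
        sleep 30
        dmesg | grep "WRITE block" | awk '{print $1}' | sort | uniq -c | sort -rn | head
        echo 0 > /proc/sys/vm/block_dump

        # Fewer journal commits and no atime updates (illustrative; make permanent in /etc/fstab)
        mount -o remount,noatime,commit=60 /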

    Read the article

  • SELinux blocking Samba directory listing

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share: [seanm] path = /home/seanm writeable = yes valid users = seanm, root read only = No Here's what I see on the server side: [seanm@server ~]$ ls -l -rw-r--r-- 1 seanm seanm 40 Jan 4 13:45 pangram.txt And yet: [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP Enter seanm's password: Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1] smb: \> ls . D 0 Fri Jan 7 10:08:55 2011 .. D 0 Fri Jan 7 07:58:31 2011 58994 blocks of size 262144. 50356 blocks available This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure have any complaints about Samba. I doubt that SELinux is a problem: just in case, here are the relevant settings. [root@server ~]# getsebool -a | grep samba samba_domain_controller --> off samba_enable_home_dirs --> on samba_export_all_ro --> off samba_export_all_rw --> off samba_share_fusefs --> off samba_share_nfs --> off use_samba_home_dirs --> on virt_use_samba --> off What am I doing wrong here, and what can I do to fix it? Edit: SELinux probably is the problem, judging by the fact that the issue goes away when I set SELinux to "permissive" or issue setsebool -P samba_export_all_rw on - both of which are unacceptable for production environments. What the heck kind of context does a directory need to have on it for Samba users to actually get files from it? I consider rolling your own rules and/or context to be deeply sub-optimal.
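    For what it's worth, the usual answer to "what context does the directory need" is the samba_share_t type on the share path; a hedged sketch of labelling just that directory persistently, rather than flipping the export-all booleans:

        # semanage lives in policycoreutils (CentOS 5) or policycoreutils-python (CentOS 6)
        semanage fcontext -a -t samba_share_t "/home/seanm(/.*)?"
        restorecon -Rv /home/seanm
        ls -Zd /home/seanm    # verify the new label took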

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sys admin. With that out of the way, my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server) and it is frequently running out of memory, despite the site only currently being open to 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this: Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7) Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. 4 days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
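    Those DHCPREQUEST lines are usually just a short lease being renewed and are an unlikely explanation for the memory use; a quicker way to see where the RAM is actually going (a diagnostic sketch, nothing more):

        # Top memory consumers by resident size - MySQL and Apache/PHP are the usual suspects on a Magento box
        ps aux --sort=-rss | head -n 15
        free -m                                       # real usage vs. cache/buffers
        grep -i "out of memory" /var/log/messages*    # did the OOM killer fire during the lock-up?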

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful p2v migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would give a reinstall a try, based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I've been getting errors which I believe are related to kernel modules not being loaded on the VM from the /vz/XXX.conf template. I've been testing with a few posts I've found but I was stuck with this error: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I read about the template not loading all iptables modules so I added modules to the XXX.conf of the VZ virtual machine like this: IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc" As the error remained I read that I should build dependencies again on the virtual machine: depmod -a but this returned an error: WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory So I read about creating the directory empty and redoing "depmod -a". I now don't get the dependencies error, but I get this and don't have a clue how to proceed: WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/. FATAL: Module ip_tables not found. iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw' Error occurred at line: 2 Try `iptables-restore -h' or 'iptables-restore --help' for more information. I understand that iptables rules have to be different on the VM, and perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to be able to have available. I've heard that on the CentOS template there are no issues with this, so I understand it is to do with the VM config. Any help would be greatly appreciated.
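    Inside an OpenVZ container the netfilter modules can never be loaded with depmod/modprobe - they have to be enabled for the container from the hardware node. A hedged sketch (container ID 101 and the rules file path are placeholders):

        # On the OpenVZ/Proxmox hardware node, not inside the container
        vzctl set 101 --iptables "ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_REJECT ipt_LOG ipt_state ip_conntrack iptable_nat" --save
        vzctl restart 101

        # Back inside the container the tables should now initialise
        iptables-restore < /etc/iptables.rules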

    Read the article

  • PHP memory_limit local value does not match php.ini value

    - by Buttle Butkus
    CentOS system. Summary: changed memory_limit in the master and local php.ini, and yet no change in the local value for a particular virtual host. Trying to improve performance, I set the memory_limit to 1024M in /etc/php.ini. phpinfo() shows Master and Local values for other virtual hosts on the server as 1024M. Changing the value in /etc/php.ini changes all values, except one. One site is stuck with a local value of 256M. I thought I found the problem: there is a php.ini file (which I didn't know about) in that site's root, and it had memory_limit = 256M. I changed it to 1024M. Problem solved? No. And now I don't know where to look. Obviously, I've restarted Apache (/etc/init.d/httpd restart), and that usually does the trick. I also turned off the APC cache, though I don't think it would cache ini files. And finally, I tried adding this to the virtual host in httpd.conf: php_value memory_limit 536870912 (that is, 512 MB), and that had no effect whatsoever. What else could be the problem? Thanks.
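    When a single vhost holds a stale local value, some other ini source or per-directory directive is usually still setting it; a hedged checklist for hunting it down (the site path below is a placeholder):

        # Which ini files does PHP actually load? (compare against phpinfo() for the web SAPI)
        php -i | grep -iE "loaded configuration|additional .ini"

        # Look for a forgotten override anywhere Apache or PHP might pick one up
        grep -R "memory_limit" /etc/php.ini /etc/php.d /etc/httpd/conf /etc/httpd/conf.d \
            /var/www/thatsite/.htaccess /var/www/thatsite/php.ini 2>/dev/null

    If PHP runs as CGI/FastCGI or suPHP for that vhost, a per-directory php.ini (or suPHP_ConfigPath) can override both the system php.ini and php_value, which only applies under mod_php.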

    Read the article

  • Can't install MailParse on cpanel server

    - by Tom
    Hi, I've got a Linux VPS running CentOS 5.5 (cPanel/WHM). I've installed MailParse via the Module Installers section in WHM, and it did install; here is the end of the setup log: running: make INSTALL_ROOT="/root/tmp/pear-build-root/install-mailparse-2.1.5" install Installing shared extensions: /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/ running: find "/root/tmp/pear-build-root/install-mailparse-2.1.5" | xargs ls -dils 508718 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5 508745 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr 508746 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib 508747 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php 508748 4 drwxr-xr-x 3 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions 508749 4 drwxr-xr-x 2 root root 4096 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626 508744 196 -rwxr-xr-x 1 root root 193502 Feb 6 21:08 /root/tmp/pear-build-root/install-mailparse-2.1.5/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so Build process completed successfully Installing '/usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' install ok: channel://pecl.php.net/mailparse-2.1.5 Extension mailparse enabled in php.ini The mailparse.so object is not in /usr/local/lib/php/extensions/no-debug-non-zts-20090626 Now, when I try to use mailparse functions in PHP I get the following error: PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so: cannot open shared object file: No such file or directory in Unknown on line 0 What should I do?
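    The build log already shows the mismatch: the module was installed under /usr/lib/php/extensions/..., while this PHP build looks in /usr/local/lib/php/extensions/.... A hedged way to reconcile the two (verify the ini path with php --ini; /usr/local/lib/php.ini is the usual cPanel location):

        # Where does PHP actually look for extensions?
        php -i | grep extension_dir

        # Copy the freshly built module into that directory (paths taken from the log and the warning above)
        cp /usr/lib/php/extensions/no-debug-non-zts-20090626/mailparse.so \
           /usr/local/lib/php/extensions/no-debug-non-zts-20090626/

        # Make sure php.ini loads it, then restart Apache
        grep -q "^extension=mailparse.so" /usr/local/lib/php.ini || echo "extension=mailparse.so" >> /usr/local/lib/php.ini
        service httpd restart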

    Read the article

  • Problems installing MySQL-python via yum / missing dependency / incompatibility problem?

    - by bs0
    I have come up against problems installing MySQL-python via yum. Our server is running Centos 5.5 and MySQL Version 5.1.45, Python-dev is installed. Yum complains about the missing dependency libmysqlclient_r.so.15: Missing Dependency: libmysqlclient_r.so.15()(64bit) is needed by package MySQL-python-1.2.1-1.x86_64 (base) The server is up to date and the packages mysql mysql-devel python-devel are installed. The missing dependency is nowhere on the system: locate libmysqlclient /usr/lib64/libmysqlclient.so /usr/lib64/libmysqlclient.so.15 /usr/lib64/libmysqlclient.so.16 /usr/lib64/libmysqlclient.so.16.0.0 /usr/lib64/libmysqlclient_r.so /usr/lib64/libmysqlclient_r.so.16 /usr/lib64/libmysqlclient_r.so.16.0.0 /usr/lib64/mysql/libmysqlclient.a /usr/lib64/mysql/libmysqlclient.la /usr/lib64/mysql/libmysqlclient.so /usr/lib64/mysql/libmysqlclient_r.a /usr/lib64/mysql/libmysqlclient_r.la /usr/lib64/mysql/libmysqlclient_r.so /usr/local/cpanel/lib64/libmysqlclient.so.14 rpm -qa | grep -i mysql MySQL-devel-5.1.45-0.glibc23 MySQL-bench-5.0.89-0.glibc23 MySQL-shared-5.1.45-0.glibc23 MySQL-server-5.1.45-0.glibc23 MySQL-test-5.1.45-0.glibc23 MySQL-client-5.1.45-0.glibc23 The Python version is python-2.4.3-27.el5.x86_64: Python 2.4.3 (#1, Sep 3 2009, 15:37:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2 Any suggestions would be greatly appreciated.
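    The rpm -qa output shows the upstream MySQL 5.1 packages, which ship libmysqlclient.so.16 only, while the MySQL-python RPM in base was linked against .so.15. Two hedged ways out (package availability is an assumption - check what the repos actually offer):

        # Option 1: pull in the compatibility client libraries that still provide the old soname
        yum install MySQL-shared-compat

        # Option 2: skip the RPM and build the Python bindings against the installed 5.1 client
        yum install gcc python-setuptools
        easy_install MySQL-python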

    Read the article

  • Compiled git from source, cannot access subversion repositories using git-svn

    - by haydenmuhl
    I'm setting up a CentOS dev box, and need git. At first I tried to install git using yum, but I could not connect to a yum repository that had git. Next I downloaded the git source (version 1.7.1) and compiled it. When I run the following command git svn clone svn+ssh://... I get the following error. Initialized empty Git repository in /root/main_ec/.git/ Can't locate SVN/Core.pm in @INC (@INC contains: /usr/local/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/local/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/site_perl/5.8.8 /usr/lib/perl5/site_perl /usr/lib/perl5/vendor_perl/5.8.8/i386-linux-thread-multi /usr/lib/perl5/vendor_perl/5.8.8 /usr/lib/perl5/vendor_perl /usr/lib/perl5/5.8.8/i386-linux-thread-multi /usr/lib/perl5/5.8.8 .) at /usr/local/libexec/git-core/git-svn line 41. It looks like git-svn uses Perl in some manner, but I don't know which packages I'm missing. Can anyone help?
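    git-svn is a Perl script that needs the Subversion Perl bindings (SVN::Core), which are packaged separately from git; a hedged fix on CentOS:

        # Install the bindings that provide SVN/Core.pm
        yum install subversion-perl

        # Verify Perl can now see the module before retrying the clone
        perl -MSVN::Core -e 'print "SVN::Core ", $SVN::Core::VERSION, "\n"'
        git svn clone svn+ssh://...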

    Read the article

  • Website memory problem

    - by Toktik
    I have CentOS 5 installed on my server; it's a VPS. I have a site with roughly 150 users constantly online. At first glance the site looks OK, but when I go through links, I sometimes receive an Out of memory PHP error. It looks like this: Fatal error: Out of memory (allocated 36962304) (tried to allocate 7680 bytes) in /home/host/public_html/sites/all/modules/cck/modules/fieldgroup/fieldgroup.install on line 100 And the amount it fails to allocate is always very small. On average I have 30% CPU load and 25% RAM load, so I don't think this is a physical memory problem. My PHP memory limit was set to 1500MB. My Apache error log looks like this: [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402 [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402 [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402 [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402 [Thu Sep 30 17:49:00 2010] [error] [client 91.204.190.5] File does not exist: /home/host/public_html/favicon.ico I have not had this on my server before; the problem appeared by itself. Besides this I'm receiving some server errors by mail: cpsrvd failed @ Fri Sep 24 16:45:20 2010. A restart was attempted automagically. Service Check Method: [tcp connect] Failure Reason: Unable to connect to port 2086 Same for tailwatchd. Support has tried, and can't help me...
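    An "Out of memory" at ~36 MB allocated, with low apparent RAM use, usually means a container-level limit rather than physical memory; if the VPS is OpenVZ/Virtuozzo-based (an assumption), the beancounters will show whether the container is hitting its ceiling:

        # Non-zero failcnt on privvmpages/oomguarpages means the container limit, not the box, is the problem
        egrep "uid|privvmpages|oomguarpages|failcnt" /proc/user_beancounters

        # For comparison, what the kernel reports as free inside the container
        free -m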

    Read the article

  • Simultaneous process mysteriously ending

    - by Matt
    I'm trying to run a large air quality model, written in FORTRAN, set up with bash scripts, and run in a work queue (Slurm). The first part of the modeling is to run an "entry" model; this runs with MPI in the work queue, but only on one process. At one point in the logs, there's a mysterious FORTRAN STOP, and then later the model fails because something wasn't set up properly. This FORTRAN STOP isn't from the main process, which continues running. This is a huge model, but as far as I know there should not be any other processes running at the same time. It consistently fails at the exact same spot. (I can move it by adding debug, but the debug is in the main process.) How can I determine what this process is? I've tried adding a call to strace -feprocess $SHELL in the run script, but I'm new to this, so if it has offered any info, I haven't been able to use it yet. There is no trace output around the FORTRAN STOP. The whole process occurs so fast that I can't seem to observe it by using ps. Is there a way I can somehow monitor all the processes being initiated from the time the work queue starts? Or some other way I can figure out what is failing? This is running on CentOS 6.4, with Slurm, compiled with PGI 13.
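    One hedged way to catch a short-lived child is to let strace follow every fork and write one trace file per PID, instead of tracing the shell itself (the wrapper placement and script name are assumptions):

        # In the run script, wrap the step that precedes the mysterious FORTRAN STOP
        strace -ff -tt -o /tmp/aqm_trace ./run_entry_step.sh

        # Each process gets its own file, /tmp/aqm_trace.<pid>; see what was exec'd and when each file ends
        grep -H execve /tmp/aqm_trace.* | head
        ls -lt /tmp/aqm_trace.*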

    Read the article

  • Using modem for sending voice recording

    - by ircmaxell
    I've got an interesting one for you. I've been going over my server monitoring and notification systems (Nagios based), and realized that if our internet connection goes down, there's no way for it to notify me. I already have a modem listening (Via CentOS 5) on a spare POTS line so that I can dial-in in case our internet goes down. I was wondering if I could come up with a script (Shell, Python, etc) that can dial out and play a recorded message (wave file I'm guessing) when it's picked up. I know Windows supports voice calls over a voice modem, I was wondering if a solution existed for Linux... I know asterisk can probably do it, but isn't that overkill (A full blown VOIP system just for a notification mechanism that will hopefully never be used)? And wouldn't it interfere with the modem's primary function as a backup network interface (PPP spawned via mgetty)? I've done some searching, and haven't really come up with much. I know how to dial out from the command line, but only as a modem (not as voice). Worst case, I could set it up to dial out as a modem, and then just realize that if I get a call with modem sounds from that number that it's the notification... Any insight would be appreciated...

    Read the article

  • Fully FOSS EMail solution

    - by Ravi
    I am looking at various FOSS options to build a robust email solution for a government-funded university. Commercial options are to be chosen only in the worst-case scenario. Here are the requirements: approx 1000-1500 users - Postfix or Exim? (Sendmail is out ;-)) Mailing lists for different groups, with a web-based archive - Mailman? Sympa? Centralised identity store - OpenLDAP? Fedora 389 DS? Secure IMAP only - no POP3 required - Courier? Dovecot? Cyrus? Anti-spam - SpamAssassin? What else? Calendaring - ?? Webmail - good to have, not mandatory - needs to be very secure... so SquirrelMail is out ;-)? Other questions: What mailbox storage format to use? Where to store it - database or filesystem? Simple and effective HA options? Is there a web proxy equivalent to Squid in the mail server world? Software load balancers? CARP? Monitoring and alerting? Backup? The government wants to stimulate the local economy by buying hardware locally from whitebox vendors. Also, local consultants and university students will do the integration. We looked at out-of-the-box integrated solutions like Axigen, Zimbra and GMail, but each was ruled out in favour of a DIY approach in the hope of full control over the data and avoiding vendor lock-in - which I thought was a smart thing to do. I wish more provincial governments in the developing world would consider this sort of initiative. As for OS - Debian or FreeBSD would be the first preference. Commercial OSes need not apply. CentOS is a second-tier option...

    Read the article

  • Shrink a Volume Group in LVM / Linux in order to install Windows on the freed space

    - by Stephan Kristyn
    I have a volume group with unused space. This 40 GB should become its own entity in order to install Microsoft Windows 7 on it. I do not have extra space on the drive - that is why I want to shrink the VG! The VG berta resides on sda2 and consists of lv_root, lv_swap and unused space. I want it to become lv_root and lv_swap, and have a separate entity made out of the unused space. Microsoft Windows 7 has to get installed on this entity. I do not understand why Linux made simple things complicated. I utterly hate LVM and think it's absolute bollocks. Useful sources: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-system-config-lvm.html Edit: I found the answer. The necessary steps show how complicated LVM really is. In my opinion it is best to avoid LVM until pvresize matures as promised in its man pages. Answer: http://fedorasolved.org/Members/zcat/shrink-lvm-for-new-partition If you run into problems when you want to remove lv_swap even in rescue mode, then try: swapoff /dev/vg_1/lv_swap; lvchange -an /dev/vg_1/lv_swap
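    For reference, the overall shrink order in the linked answer runs filesystem, then LV, then PV, then partition; a heavily hedged outline (names and sizes are placeholders, everything is done from rescue media, and a backup comes first):

        # 1. Shrink the filesystem below the target LV size
        e2fsck -f /dev/VG/lv_root
        resize2fs /dev/VG/lv_root 80G
        # 2. Shrink the LV, then the PV, to free space at the end of sda2
        lvreduce -L 80G /dev/VG/lv_root
        pvresize --setphysicalvolumesize 120G /dev/sda2
        # 3. Shrink the sda2 partition itself (fdisk/parted) and create the new 40 GB partition for Windows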

    Read the article

  • Missing libcurl.so.3 on updating to PHP 5.2.13

    - by exentric
    Hi, I am trying to update my PHP to 5.2.13; however, when I run yum update, it gives me this dependency error: php-5.2.13-jason.1.i386 from utterramblings has depsolving problems --> Missing Dependency: libcurl.so.3 is needed by package php-5.2.13-jason.1.i386 (utterramblings) Error: Missing Dependency: libcurl.so.3 is needed by package php-cli-5.2.13-jason.1.i386 (utterramblings) Error: Missing Dependency: libcurl.so.3 is needed by package php-5.2.13-jason.1.i386 (utterramblings) I believe this problem has been caused by my updating libcurl some time ago (to version 7.16.4-8.el5), but I have no idea how to solve this dependency issue. Some time ago my friend asked me about a missing libcurl.so.3 as well, when running some script. Can't say I remember what, but he did say he managed to solve it (at least on his end), so I paid no further attention to the libcurl.so.3 issue. But now when I try to update my PHP, this problem arises again. This file, however, does indeed exist (and is presumably what solved my friend's issue): /usr/lib/libcurl.so.3 Any thoughts on this matter? I'm using CentOS 5.3, PHP 5.2.11 and LightTPD. -Regards
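    Note that yum resolves "libcurl.so.3" against what installed packages provide, not against files on disk - so the copy in /usr/lib only helps if an RPM actually owns and provides it (a curl 7.16.x build would normally provide libcurl.so.4 instead). A hedged way to see where things stand:

        # Does any installed package own or provide the old soname, or is the file an orphan?
        rpm -qf /usr/lib/libcurl.so.3
        rpm -q --whatprovides libcurl.so.3
        # What, if anything, could supply it from the configured repos?
        yum provides libcurl.so.3

    If nothing provides it, the usual choices are reverting curl to the stock CentOS 5 build (which does provide libcurl.so.3) or finding a compat libcurl package in the third-party repo being used.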

    Read the article

  • linux hardware raid 10 / lvm / virtual machine partition alignment and filesystem optimization

    - by Jason Ward
    I've been reading everything I can find about partition alignment and filesystem optimization (ext4 and xfs) but still don't know enough to be confident in setting up my current configuration. My remaining confusion comes from the LVM layer and if I should use raid parameters on the filesystem in guest os'es. My main questions are: When I use 'pvcreate --dataalignment' do I use the stripe-width as calculated for a filesystem on RAID (128kB for ext4 in my situation), the Stripe size of the RAID set (256kB), something else altogether, or do I not need this? When I create ext2/3/4 or xfs filesystems in guests on the Logical Volumes, should I add the settings for the underlying RAID (e.g. mkfs.ext4 -b 4096 -E stride=64,stripe-width=128)? Does anyone see any glaring errors in my set up below? I'm running some benchmarks now but haven't done enough to start comparing results. I have four drives in RAID 10 on a 3ware 9750-4i controller (more details on the settings below) giving me a 6.0TB device at /dev/sda. Here is my partition table: Model: LSI 9750-4i DISK (scsi) Disk /dev/sda: 5722024MiB Sector size (logical/physical): 512B/512B Partition Table: gpt Number Start End Size File system Name Flags 1 1.00MiB 257MiB 256MiB ext4 BOOTPART boot 2 257MiB 4353MiB 4096MiB linux-swap(v1) 3 4353MiB 266497MiB 262144MiB ext4 4 266497MiB 4460801MiB 4194304MiB Partition 1 is to be the /boot partition for my xen host. Partition 2 is swap. Partition 3 is to be the root (/) for my xen host. Partition 4 is to be (the only) physical volume to be used by LVM (for those who are counting, I left about 1.2TB unallocated for now) For my Xen guests, I usually create a Logical Volume of the needed size and present it to the guests for them to partition as needed. I know there are other ways of handling that but this method works best for my situation. Here's the hardware of interest on my CentOS 6.3 Xen Host: 4x Seagate Barracuda 3TB ST3000DM001 Drives (sector size: 512 logical/4096 physical) 3ware 9750-4i w/BBU (sector size reported: 512 logical/512 physical) All four drives make up a RAID 10 array. Stripe: 256kB Write Cache enabled Read Cache: intelligent StoreSave: Balance Thanks!
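    For reference, one common reading of those numbers (offered as a sketch of the arithmetic, not an authoritative answer to the question): with a 256 kB chunk and two data-bearing disks in RAID 10, ext4 stride = 256 kB / 4 kB = 64 and stripe-width = 64 x 2 = 128, and the PV is usually aligned to the chunk or the full stripe:

        # Align the PV data area to the RAID geometry (256k vs. 512k is exactly the open question -
        # check the controller documentation before committing)
        pvcreate --dataalignment 256k /dev/sda4

        # Passing the same geometry down to a filesystem created inside a guest LV (names illustrative)
        mkfs.ext4 -b 4096 -E stride=64,stripe-width=128 /dev/vg_guests/lv_example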

    Read the article

  • Starting my own server - basic recommendations and questions [closed]

    - by Ilia Rostovtsev
    Possible Duplicate: Can you help me with my capacity planning? I'm planning to start my own high-performance server and then use colocation services to keep it up and running. I'm planning to use it for processing videos and keeping a big video site up (using FFmpeg, MEncoder, etc.). I just need recommendations on whether the listed hardware is good enough and will work well and fast together. Do I need anything else (have I missed something)? I remember about CPU coolers though! ;) I'm planning to use SSD drives, so please tell me whether they will work just like regular HDDs (but much faster). Can they be used in RAID (is this possible for SSDs)? Here is what I would like to get: Intel® Server System SR1600URHSR (Urbanna) or Intel® Server System SR1695WBAC 2 x Intel Xeon X5650 4 x 16Gb DDR-III 1333MHz Kingston ECC Reg (KVR13R9D4/16) 3 x (or maybe 4x) 480Gb SSD Intel 520 Series (SSDSC2CW480A3K5) Which server system would be better? Is the listed hardware new/good enough and worth buying at the moment? Should I take a look at something slightly more expensive but more up to date and powerful, maybe? As for software, I would like to use CentOS 6 64-bit + WHM/cPanel. Any other suggestions for a cheaper but equally or more powerful server management system than WHM? What are the most important points to keep in mind when starting/maintaining your own server?

    Read the article

  • Domain names timing out after VPS IP change

    - by Fourjays
    I rent a CentOS 5 VPS from a UK-based provider, with DirectAdmin also installed. On Thursday night, they carried out planned maintenance that changed the two IPs I had been assigned to two new ones. On Friday, after the change had taken place, I updated my domain name records to reflect the IP change. Since then, all of the domains pointing to the VPS are timing out. Additionally, DirectAdmin was also not responding, but this was resolved by running the ipswap scripts as found in the DirectAdmin knowledgebase. It did not fix my domains though. I have contacted the VPS provider but I have been waiting for a response for some time now. I have checked again, and again, and all the IPs referenced in DirectAdmin are correct. If I go to the server IP in my browser it responds with "Apache is functioning normally." Email accounts on the server are also functioning correctly. But if I access a domain itself, it times out. Running a ping and a DNS look-up, I can confirm the nameserver IPs are correct. If I run a trace route it reaches an IP that is similar to my VPS IPs (last 2 blocks are different) before timing out (it never shows my server IP). I am relatively new to VPS management so don't have a vast wealth of experience with troubleshooting problems on them. I have checked all of the httpd configuration files, which don't seem to have any IP references in them at all. Looking in the Apache error logs, what errors there are do not coincide with times I have tried to access the site. Is this issue at my provider's end? Is there anything else I can check or test, to rule out post-IP-change problems with my server configuration?
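    A couple of hedged checks that separate "DNS still pointing at the old address" from "server not answering on the new one" (substitute a real domain and the new IP):

        # What do the authoritative nameserver and the outside world return for the domain?
        dig +short yourdomain.com A
        dig +short yourdomain.com A @ns1.yourdomain.com

        # Is Apache bound to the new address, and does it answer for the right vhost?
        netstat -ntlp | grep ':80'
        curl -s -o /dev/null -w "%{http_code}\n" -H "Host: yourdomain.com" http://NEW.VPS.IP/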

    Read the article

  • APC uptime 0 because of Fast

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated at Oct 31, 2012 03:33 AM, CentOS 6.3 (Final) x86_64). I have Apache (CGI/FastCGI) installed and nginx as a reverse proxy. Everything is working just fine. I installed APC for caching, but the issue is that the uptime is always 0: it restarts every 15 seconds or so. I checked everywhere and can't find a solution. Searching Google, I found that this could be related to Apache running with mod_fcgid instead of FastCGI. Plesk/Apache is using this config file: /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php whose content is: <IfModule mod_fcgid.c> <Files ~ (\.php)> SetHandler fcgid-script FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$ Options +ExecCGI allow from all </Files> Is the issue here or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I forgot to specify that even though the uptime is below one minute, APC is doing a pretty good job of caching (92% are hits).
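    Under mod_fcgid every PHP worker carries its own APC cache, and "uptime" resets whenever that worker is recycled, so the numbers above are expected rather than broken. The usual mitigation is simply to let workers live longer; a hedged sketch using mod_fcgid 2.3-style directive names (values are illustrative, and Plesk may want them placed in its own template/include rather than httpd.conf):

        <IfModule mod_fcgid.c>
            FcgidMaxRequestsPerProcess 0    # don't retire a worker after N requests
            FcgidIdleTimeout 3600           # keep idle workers for an hour
            FcgidProcessLifeTime 0          # don't kill workers purely on age
        </IfModule>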

    Read the article

  • ISC DHCPD IPv6 for multiple interfaces

    - by Seoman
    I want to assign multiple IPv6 addresses to a server with multiple NICs. As the IPv6 RFC defines, each server has a unique DUID that can have one of 3 formats (LL, LLT or enterprise), and each NIC has an IAID. So a request from NIC1 carries the DUID and the IAID of NIC1, and the request from NIC2 carries the same DUID but a different IAID. The problem is that from a CentOS box, when I ask for an IP on 2 different interfaces, I get the same IP. I can't find how to specify a host entry based on both the DUID and the IAID. I see some people generating a unique DUID based on the MAC of the NIC, but this is not what the IPv6 RFC says. What I tried is: host entry1 { host-identifier option dhcp6.client-id 00:01:00:01:19:fc:f8:1c:52:54:00:7e:c9:ec; option dhcp6.ia-na "00:09:40:5d"; fixed-address6 2001:db8:0:1::202; } host entry2 { host-identifier option dhcp6.client-id 00:01:00:01:19:fc:f8:1c:52:54:00:7e:c9:ec; option dhcp6.ia-na "00:7e:c9:ec"; fixed-address6 2001:db8:0:1::201; } This causes a Segmentation Fault in the client (which is scary...). I guess this is not the right use of the ia-na option, but I don't see any other option.

    Read the article

  • dovecot/postfix: can send & receive via Webmin, however SquirrelMail and Outlook fail to connect

    - by Jonathan
    I have just finished setting up dovecot and postfix on my server (CentOS 5.5/Apache) earlier today. So far I've been able to get email working through Webmin (can send/receive to and from external domains). However, attempting to telnet xxx.xxx.xx.xxx 110 returns the following errors: Connected to xxx.xxx.xx.xxx. Escape character is '^]'. +OK Dovecot ready. USER mailtest +OK PASS ********* +OK Logged in. -ERR [IN-USE] Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 22:55:48] Connection closed by foreign host. Which further logs the following errors: dovecot: Feb 11 21:32:48 Info: pop3-login: Login: user=, method=PLAIN, rip=::ffff:xxx.xxx.xx.xxx, lip=::ffff:xxx.xxx.xx.xxx, TLS dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied dovecot: Feb 11 21:32:48 Error: POP3(mailtest): stat(/home/mailtest/MailDir/cur) failed: Permission denied dovecot: Feb 11 21:32:48 Error: POP3(mailtest): Couldn't open INBOX: Internal error occurred. Refer to server log for more information. [2011-02-11 21:32:48] dovecot: Feb 11 21:32:48 Info: POP3(mailtest): Couldn't open INBOX top=0/0, retr=0/0, del=0/0, size=0 Also, when attempting to log in to SquirrelMail or access the account via Thunderbird/Live Mail etc., it obviously fails with a similar issue. Any suggestions or outside thinking on this would be a massive help! I've pretty much exhausted every resource, and tried every suggestion for my dovecot.conf file, but so far nothing seems to work :( I feel like it may be a permissions/ownership issue, but I'm lost as to specifics.
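    The log points at a plain permission problem on the user's maildir; a hedged first pass (this assumes the mailbox really is meant to live at ~/MailDir - worth double-checking mail_location in dovecot.conf, since the conventional spelling is "Maildir"):

        # Dovecot accesses the maildir as the mail user, so that user must own it and be able to traverse it
        ls -ld /home/mailtest /home/mailtest/MailDir /home/mailtest/MailDir/cur
        chown -R mailtest:mailtest /home/mailtest/MailDir
        chmod -R u+rwX /home/mailtest/MailDir

        # dovecot.conf (illustrative): make sure the location matches the directory that actually exists
        # mail_location = maildir:~/MailDir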

    Read the article

  • How does Windows 7 DNS client work?

    - by Mark Allison
    I am using a local DHCP and DNS server on my home network on a linux machine. It is running CentOS 6.3 with dnsmasq 2.48. It's all working fine except for local DNS lookups for Windows machines only. I have a mix of Ubuntu, CentOS and Windows machines on the network, some virtual, some physical. I have a machine called boron and the domain is called localdomain If I ping boron from any linux machine, I get [root@lithium lists]# ping -c3 boron PING boron.localdomain (10.0.0.5) 56(84) bytes of data. 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=1 ttl=64 time=0.740 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=2 ttl=64 time=0.478 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=3 ttl=64 time=0.458 ms --- boron.localdomain ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.458/0.558/0.740/0.131 ms If I do it from my Windows 7 machine, I get: Ping request could not find host boron. Please check the name and try again. If I try ping boron.localdomain I get: Pinging boron.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=16ms TTL=57 Reply from 67.215.65.132: bytes=32 time=188ms TTL=57 Reply from 67.215.65.132: bytes=32 time=15ms TTL=57 Reply from 67.215.65.132: bytes=32 time=14ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 14ms, Maximum = 188ms, Average = 58ms which is clearly wrong. Why is it going out to the internet? Why can't my windows machine resolve the boron hostname to a FQDN? My Windows machines and linux machines get their network config from DHCP. UPDATE If I do ipconfig /all in Windows, it looks as I would expect: Windows IP Configuration Host Name . . . . . . . . . . . . : lanthanum Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : .localdomain Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : .localdomain Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller Physical Address. . . . . . . . . : 50-E5-49-38-FC-A2 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 10.0.0.57(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : 23 August 2012 13:58:45 Lease Expires . . . . . . . . . . : 24 August 2012 07:58:48 Default Gateway . . . . . . . . . : 10.0.0.6 DHCP Server . . . . . . . . . . . : 10.0.0.6 DNS Servers . . . . . . . . . . . : 10.0.0.6 208.67.222.222 208.67.220.220 NetBIOS over Tcpip. . . . . . . . 
: Enabled When I do an nslookup I get: Server: carbon.localdomain Address: 10.0.0.6 *** carbon.localdomain can't find boron: Unspecified error However if I do ifconfig -a in Linux I get: [root@nitrogen ~]# ifconfig -a eth0 Link encap:Ethernet HWaddr 00:0C:29:AF:EC:2A inet addr:10.0.0.7 Bcast:10.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:187687 errors:0 dropped:0 overruns:0 frame:0 TX packets:5857 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:23910700 (22.8 MiB) TX bytes:712964 (696.2 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:329894 errors:0 dropped:0 overruns:0 frame:0 TX packets:329894 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:67153143 (64.0 MiB) TX bytes:67153143 (64.0 MiB) and nslookup: [root@nitrogen ~]# nslookup boron Server: 10.0.0.6 Address: 10.0.0.6#53 Name: boron Address: 10.0.0.5 Both machines are on the same network using the same DHCP server. UPDATE 2 I thought the issue was resolved but I am getting intermittent DNS resolving issues but only on my Windows 7 machine. All my linux boxes are fine. This is what happens when I ping and nslookup from Windows to a Windows 2008 Server: C:\Users\mark>nslookup magnesium Server: carbon.localdomain Address: 10.0.0.6 Name: magnesium.localdomain Address: 10.0.0.12 C:\Users\mark>ping magnesium Pinging magnesium.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=267ms TTL=57 Reply from 67.215.65.132: bytes=32 time=162ms TTL=57 Reply from 67.215.65.132: bytes=32 time=510ms TTL=57 Reply from 67.215.65.132: bytes=32 time=146ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 146ms, Maximum = 510ms, Average = 271ms And from Linux: [root@beryllium ~]# ping -c4 magnesium PING magnesium.localdomain (10.0.0.12) 56(84) bytes of data. 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=1 ttl=128 time=0.176 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=2 ttl=128 time=0.634 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=3 ttl=128 time=0.685 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=4 ttl=128 time=0.263 ms --- magnesium.localdomain ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3002ms rtt min/avg/max/mdev = 0.176/0.439/0.685/0.223 ms [root@beryllium ~]# nslookup magnesium Server: 10.0.0.6 Address: 10.0.0.6#53 Name: magnesium.localdomain Address: 10.0.0.12 UPDATE 3 I stopped the Windows DNS client on my Windows 7 machine with net stop dnscache and it is now working fine. It would be nice to get DNS working with the DNS client on, but I might be OK without it, what do you think?
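    Two hedged observations here: the 67.215.65.132 answers are OpenDNS's NXDOMAIN redirection (the public resolvers handed out as secondaries), and the suffix list shows ".localdomain" with a leading dot, which Windows treats differently from a bare suffix. Making dnsmasq authoritative for the domain and not advertising the public resolvers over DHCP is the usual cure - the option names below are standard dnsmasq, and the values are taken from this network:

        # /etc/dnsmasq.conf on carbon (sketch)
        domain=localdomain          # hand out "localdomain" (no leading dot) as the suffix
        expand-hosts                # qualify plain hostnames from /etc/hosts with that domain
        local=/localdomain/         # never forward localdomain queries upstream

        # and via DHCP, point clients only at 10.0.0.6 so Windows can't fall through to OpenDNS
        dhcp-option=option:dns-server,10.0.0.6
        # then on the Windows box: ipconfig /flushdns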

    Read the article

  • Rsync push files from Linux to Windows. SSH issue - connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command: rsync -e "ssh -l "piyush"" -Wgovz --timeout 120 --delay-updates --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp" where 10.0.0.60 is my Windows machine, and I am running the above command on Linux - CentOS 5.5. After running the command I get the following error message: ssh: connect to host 10.0.0.60 port 22: Connection refused rsync: connection unexpectedly closed (0 bytes received so far) [sender] rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8] [root@localhost sync]# ssh [email protected] ssh: connect to host 10.0.0.60 port 22: Connection refused I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine. So I tried installing OpenSSH on my machine and running ssh-agent, but it didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine. For some reason I want the command to run from the Linux machine so that I can embed it in a shell script. Can you tell me if I am missing anything?
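    ssh-agent only manages keys; for a push over SSH the Windows box needs an SSH server actually listening on port 22 (the cwRsync Server / OpenSSH sshd service), which is what the "Connection refused" is complaining about. Alternatively, rsync can run in daemon mode on port 873 and skip SSH entirely - a hedged sketch, with authentication left out for brevity:

        # rsyncd.conf on the Windows (cwRsync) side, exporting D:\temp as a module
        [temp]
            path = /cygdrive/d/temp
            read only = no

        # start it there with:  rsync --daemon --config=rsyncd.conf
        # then push from the CentOS box over port 873 instead of ssh:
        rsync -avz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/ rsync://10.0.0.60/temp/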

    Read the article

  • KVM and JBoss Java Application Server

    - by Jason
    We have a large Xen deployment running on both RHEL and CentOS and have recently started looking at KVM, since that looks like the future of VMs on Linux. We can load the server and get everything running without an issue. However, when we load up a new one with JBoss (4.2 Community Edition, Sun JDK 6) and deploy a large EAR, the server goes a little crazy. The %sy will jump to 80-99% and just hang there for long periods of time; we see a similar jump in %us on the host machine. We thought at first this might be I/O, as it seems to happen at the start of JBoss and then "cool down" after everything got loaded. We did some tests by extracting some large tar.gz files and using jar -xvf on the EAR but could not re-create it. Then we started thinking this might be some type of memory access issue. We loaded a C program that would generate a lot of memory activity and sure enough we saw the spikes again. Not as high, mind you, but we did see it jump. We then wrote a small Java program to do the same thing and sure enough we saw it jump again. Any thoughts on what might be causing this? Is this just the way KVM works? As a side note, we do NOT see this behavior on any other setup: Xen, VMware and/or native iron. The system does seem a bit slower than our Xen / VMware ones.
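    High %sy during memory-heavy Java startup on KVM often points at page-fault/paging overhead rather than JBoss itself; two hedged things to measure and try (whether they apply here is an assumption, and the sizes are illustrative):

        # On the host: watch what the guest is exiting to the hypervisor for while the EAR deploys
        kvm_stat

        # Back the guest memory with huge pages and see whether %sy drops
        echo 2048 > /proc/sys/vm/nr_hugepages       # host side; size to the guest's RAM
        # guest JVM flags (Sun JDK 6): -XX:+UseLargePages -Xms2g -Xmx2g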

    Read the article

< Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >