Search Results


  • How to choose optimal RAID settings on a PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4x 15k 150GB Cheetah SAS drives in them. They are going to be VM hosts running ESXi, with CentOS and Windows Server 2k8 guests. Some guests will be hosting IIS servers, and others MSSQL servers. I am trying to set the RAID virtual disk settings and can't decide which choices are optimal for this situation.

    Read Policy: out of Read-Ahead, No-Read-Ahead and Adaptive Read-Ahead, the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30 GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Adaptive Read-Ahead then seems like a good compromise, but I don't know much about this option. How does it compare in performance to the others?

    Write Policy: the choices are write-back caching and write-through caching, and the default is write-back caching. Write-through caching is safer than the default write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so should I favour write-through?

    I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.
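
    As a concrete illustration of the initial workload mentioned above, pre-allocating a 30 GB zero-filled disk image might look like the following (a minimal sketch; the output path and file name are hypothetical):

        # sequential write: 30 GB of zeros in 1 MB blocks
        dd if=/dev/zero of=/path/to/datastore/guest-disk.img bs=1M count=30720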


  • ddclient to update namecheap subdomain?

    - by LF4
    I have a subdomain that I want to update with ddclient. I configured ddclient to get the IP from DynDNS, but it's not updating the subdomain on Namecheap. They said to use yourdomain.com as the login instead of my actual domain. Has anyone been able to get Namecheap DNS updated with ddclient? I'm running CentOS 6.2 with ddclient 3.7.3. When I run ddclient I get the following:

        CONNECT:  checkip.dyndns.org
        CONNECTED:  using HTTP
        SENDING:  GET / HTTP/1.0
        SENDING:   Host: checkip.dyndns.org
        SENDING:   User-Agent: ddclient/3.7.3
        SENDING:   Connection: close
        SENDING:
        RECEIVE:  HTTP/1.1 200 OK
        RECEIVE:  Content-Type: text/html
        RECEIVE:  Server: DynDNS-CheckIP/1.0
        RECEIVE:  Connection: close
        RECEIVE:  Cache-Control: no-cache
        RECEIVE:  Pragma: no-cache
        RECEIVE:  Content-Length: 106
        RECEIVE:
        RECEIVE:  <html><head><title>Current IP Check</title></head><body>Current IP Address: IPADD</body></html>
        Use of uninitialized value in string ne at /usr/sbin/ddclient line 1998.
        WARNING:  skipping update of lf4bot from <nothing> to IPADD
        WARNING:  last updated <never> but last attempt on Fri Jun 15 22:46:21 2012 failed.
        WARNING:  Wait at least 5 minutes between update attempts.

    ddclient.conf file:

        daemon=300                          # check every 300 seconds
        syslog=yes                          # log update msgs to syslog
        mail=root                           # mail all msgs to root
        mail-failure=root                   # mail failed update msgs to root
        pid=/var/run/ddclient.pid           # record PID in file
        ssl=yes                             # use ssl-support; works with
        use=web, web=checkip.dyndns.org/, web-skip='IP Address'   # found after IP Address
        protocol=namecheap \
        server=dynamicdns.park-your-domain.com \
        login=yourdomain.com \
        password=PASSWORD \
        lf4bot
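
    For comparison, here is a hedged sketch of the stanza form that ddclient examples commonly show for Namecheap, without the backslash continuations and with the bare host label on its own line. Note that Namecheap's dynamic DNS uses a separate Dynamic DNS password from the domain's control panel, not the account password (worth verifying for your account):

        protocol=namecheap
        server=dynamicdns.park-your-domain.com
        login=yourdomain.com
        password=YOUR_DYNAMIC_DNS_PASSWORD
        lf4bot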


  • Linux - real-world hardware RAID controller tuning (scsi and cciss)

    - by ewwhite
    Most of the Linux systems I manage feature hardware RAID controllers (mostly HP Smart Array). They're all running RHEL or CentOS. I'm looking for real-world tunables to help optimize performance for setups that incorporate hardware RAID controllers with SAS disks (Smart Array, Perc, LSI, etc.) and battery-backed or flash-backed cache. Assume RAID 1+0 and multiple spindles (4+ disks).

    I spend a considerable amount of time tuning Linux network settings for low-latency and financial trading applications, but many of those options are well-documented (changing send/receive buffers, modifying TCP window settings, etc.). What are engineers doing on the storage side?

    Historically, I've made changes to the I/O scheduling elevator, recently opting for the deadline and noop schedulers to improve performance within my applications. As RHEL versions have progressed, I've also noticed that the compiled-in defaults for SCSI and CCISS block devices have changed as well. This has had an impact on the recommended storage subsystem settings over time. However, it's been a while since I've seen any clear recommendations, and I know that the OS defaults aren't optimal. For example, it seems that the default read-ahead buffer of 128 KB is extremely small for a deployment on server-class hardware.

    The following articles explore the performance impact of changing read-ahead cache and nr_requests values on the block queues:

        http://zackreed.me/articles/54-hp-smart-array-p410-controller-tuning
        http://www.overclock.net/t/515068/tuning-a-hp-smart-array-p400-with-linux-why-tuning-really-matters
        http://yoshinorimatsunobu.blogspot.com/2009/04/linux-io-scheduler-queue-size-and.html

    For example, these are suggested changes for an HP Smart Array RAID controller:

        echo "noop" > /sys/block/cciss\!c0d0/queue/scheduler
        blockdev --setra 65536 /dev/cciss/c0d0
        echo 512 > /sys/block/cciss\!c0d0/queue/nr_requests
        echo 2048 > /sys/block/cciss\!c0d0/queue/read_ahead_kb

    What else can be reliably tuned to improve storage performance? I'm specifically looking for sysctl and sysfs options in production scenarios.
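
    Settings echoed into sysfs are lost on reboot; one way to make them persistent is a udev rule. A minimal sketch, assuming plain /dev/sdX device names rather than cciss ones (the file name is hypothetical):

        # /etc/udev/rules.d/60-io-scheduler.rules
        # apply the deadline elevator and a deeper request queue to SCSI/SAS disks
        ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="deadline", ATTR{queue/nr_requests}="512"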


  • Linux authentication via ADS -- allowing only specific groups in PAM

    - by Kenaniah
    I'm taking the samba / winbind / PAM route to authenticate users on our Linux servers against our Active Directory domain. Everything works, but I want to limit which AD groups are allowed to authenticate. Winbind / PAM currently allows any enabled user account in the Active Directory, and pam_winbind.so doesn't seem to heed the require_membership_of=MYDOMAIN\\mygroup parameter. It doesn't matter whether I set it in /etc/pam.d/system-auth or /etc/security/pam_winbind.conf. How can I force winbind to honor the require_membership_of setting? Using CentOS 5.5 with up-to-date packages.

    Update: it turns out that PAM always allows root to pass through auth, by virtue of the fact that it's root. So as long as the account exists, root will pass auth. Any other account is subjected to the auth constraints.

    Update 2: require_membership_of seems to be working, except when the requesting user has the root uid. In that case, the login succeeds regardless of the require_membership_of setting. This is not an issue for any other account. How can I configure PAM to force the require_membership_of check even when the current user is root? The current PAM config is below:

        auth       sufficient    pam_winbind.so
        auth       sufficient    pam_unix.so nullok try_first_pass
        auth       requisite     pam_succeed_if.so uid >= 500 quiet
        auth       required      pam_deny.so

        account    sufficient    pam_winbind.so
        account    sufficient    pam_localuser.so
        account    required      pam_unix.so broken_shadow

        password   .....         (excluded for brevity)

        session    required      pam_winbind.so
        session    required      pam_mkhomedir.so skel=/etc/skel umask=0077
        session    required      pam_limits.so
        session    required      pam_unix.so

    require_membership_of is currently set in the /etc/security/pam_winbind.conf file, and is working (except for the root case outlined above).
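
    For reference, a minimal sketch of how require_membership_of is typically set in /etc/security/pam_winbind.conf (the domain and group names here are placeholders):

        [global]
        # only members of this AD group may authenticate via winbind
        require_membership_of = MYDOMAIN\mygroup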


  • Warning messages while building Apache server

    - by GoinOff
    I am building Apache server 2.4.6 from source and am not sure about a few warning messages I received during the rpm build process. The build completes OK and everything seems fine. BTW, this is on CentOS 5.5.

    During the make process:

        /home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr/libtool --silent --mode=install install mod_authn_file.la /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/modules/
        libtool: install: warning: remember to run `libtool --finish /usr/local/apache2/modules'

    What is this warning message about? Do I really need to remember to run libtool --finish?

    I also see this:

        libtool: install: warning: `/home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr-util/libaprutil-1.la' has not been installed in `/usr/local/apache2/lib'

    I am building Apache in a temp directory, but libtool seems to be looking in the wrong place (/usr/local/apache2/lib instead of /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/lib). This seems like something I can blow off? In my spec file I set DESTDIR to /home/johnm/dev/project1/install/linux/tmp, where the install files are placed:

        %install
        export DESTDIR=%{buildroot}
        make install

    Both messages appear numerous times during the make process. When I install the rpm on the system, everything appears to work without problems. Can I ignore these messages, or am I missing something important?


  • ospfd over an OpenVPN link - strange error in logs

    - by Alex
    I am trying to set up Quagga ospfd on two hosts connected by an OpenVPN link. These hosts have VPN IPs 10.31.0.1 and 10.31.0.13. The ospfd config is pretty simple:

        hostname bizon
        password xxxxxxxxx
        enable password xxxxxxxxx
        !
        log file /var/log/quagga/ospfd.log
        !
        interface lo
        !
        interface tun0
         ip ospf network point-to-point
         ip ospf mtu-ignore
         ip ospf cost 10
        interface tun1
         ip ospf network point-to-point
         ip ospf mtu-ignore
         ip ospf cost 10
        interface tun2
         ip ospf network point-to-point
         ip ospf mtu-ignore
         ip ospf cost 10
        !
        router ospf
         ospf router-id 10.31.0.1
         network 10.31.0.0/16 area 0.0.0.0
         network 10.119.2.0/24 area 0.0.0.0
         redistribute connected
         area 0.0.0.0 range 10.0.0.0/8
        !
        line vty
        !
        debug ospf event
        debug ospf packet all

    I am getting the following error in ospfd.log (the log is from 10.31.0.13):

        2012/10/05 01:25:28 OSPF: ip_v 4
        2012/10/05 01:25:28 OSPF: ip_hl 5
        2012/10/05 01:25:28 OSPF: ip_tos 192
        2012/10/05 01:25:28 OSPF: ip_len 64
        2012/10/05 01:25:28 OSPF: ip_id 64666
        2012/10/05 01:25:28 OSPF: ip_off 0
        2012/10/05 01:25:28 OSPF: ip_ttl 1
        2012/10/05 01:25:28 OSPF: ip_p 89
        2012/10/05 01:25:28 OSPF: ip_sum 0xe5d1
        2012/10/05 01:25:28 OSPF: ip_src 10.31.0.1
        2012/10/05 01:25:28 OSPF: ip_dst 224.0.0.5
        2012/10/05 01:25:28 OSPF: Packet from [10.31.0.1] received on link tun1 but no ospf_interface

    I'm not sure what to do next. I have set up ospfd over OpenVPN several times before, but I used Debian then and am on CentOS 6 now. The Quagga version is 0.99.15. Should I try a more recent version?
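
    One way to confirm whether ospfd has actually attached to the tunnel interface the error names is to query the daemons directly (a debugging sketch; vtysh ships with Quagga):

        # show what ospfd thinks about the interface from the error message
        vtysh -c 'show ip ospf interface tun1'
        # and check that the interface and its address are visible to zebra
        vtysh -c 'show interface tun1'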


  • How to unblacklist an IP at Google?

    - by DJRayon
    I own a small business with two servers for webhosting. When setting up the primary server (CentOS 5.5 + WHM; the secondary is WHM DNS Only) I kinda messed up the firewall, so hackers could send stuff from my server. My primary IP is x.y.29.218.

    Anyway, I got blacklisted in several places, but those blacklistings have been gone for a week or so. Google, however, still has my IP blacklisted, and I'm suffering serious damage because of it. Many clients want to switch away from my hosting, etc.

    I've fixed the hole with the CSF firewall SMTP_BLOCK option and also enabled the WHM SMTP tweak. Currently all I see in the Main >> Email >> View Mail Statistics (Errors section) in WHM is rows and rows of the following message:

        removed-the-email-address-for-security R=lookuphost T=remote_smtp: SMTP error from remote mail server after end of data:
        host aspmx.l.google.com [a.b.39.27]: 550-5.7.1 [x.y.29.218 1] Our system has detected an unusual rate of\n
        550-5.7.1 unsolicited mail originating from your IP address. To protect our\n
        550-5.7.1 users from spam, mail sent from your IP address has been blocked.\n
        550-5.7.1 Please visit http://www.google.com/mail/help/bulk_mail.html to review\n
        550 5.7.1 our Bulk Email Senders Guidelines. h24si3868764fas.171

    What are my options? I have one IP free. How can I configure Exim to send mail from that IP? My brain is constantly blowing up because of this problem. Please, if anyone has knowledge of how to deal with this situation, give me some kind of help: any help, suggestions, etc. I've tried everything I know, and I still don't know much, because this is the first time I've dealt with real physical servers rather than some kind of pre-set-up VPS solution (I just started to webhost). Many, many thanks to whoever has time to offer some help.
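
    On the "send from the spare IP" part of the question: Exim's smtp transport has an interface option that pins the source address of outgoing connections. A hedged sketch (the IP is a placeholder, and on a cPanel/WHM box exim.conf is normally edited through WHM's Exim Configuration Editor rather than by hand):

        # in the transports section of exim.conf
        remote_smtp:
          driver = smtp
          interface = x.y.29.219    # hypothetical spare IP to send from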


  • Cannot properly read files on the local server

    - by Andrew Bestic
    I'm running a RedHat 6.2 Amazon EC2 instance using stock Apache and IUS PHP53u + MySQL (+mbstring, +mysqli, +mcrypt), and phpMyAdmin from git. All configuration is near-vanilla, assuming the described installation procedure.

    I've been trying to import SQL files into the database using phpMyAdmin, reading them from a directory on my server. phpMyAdmin lists the files fine in the drop-down, but returns a "File could not be read" error when actually trying to import. Furthermore, trying file_get_contents(); on a file also returns a "failed to open stream: Permission denied" error. In fact, when my brother attempted to import the SQL files using MySQL's SOURCE command, as an authenticated MySQL user with ALL PRIVILEGES, he also got an error reading the file. It seems that we are unable to read/import these files by ANY method other than root over SSH (although I can't say I've tried every possible method). I have never had this issue on regular CentOS (5, 6, 6.2) installations with the same LAMP stack configuration.

    Some things I've tried after searching Google and StackExchange:

      - chmod 0777 on both the directory and the files
      - chown root, then apache (the only two users I can think of that PHP would use)
      - importing SQL files with total size under both upload_max_filesize and post_max_size
      - PHP open_basedir commented out, or set to "/var/www" (my sites are Apache VirtualHosts within that directory, and all the SQL files are deep within it)
      - PHP safe mode OFF (it was never ON)

    At the moment I have worked around the issue for the smaller files by using the file upload method directly in phpMyAdmin, but this will not be suitable for uploading my 200+ MiB SQL files, as I don't have a stable Internet connection. Any light you could shed on this situation would be greatly appreciated. I'm fair with Linux, and for the things that do stump me, Google usually has an answer. Not this time, though!
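
    A quick way to reproduce the failure outside the web stack is to run the same call from the PHP CLI as the Apache user (a sketch; the path is hypothetical):

        # run the failing call as the same user PHP-under-Apache would use
        sudo -u apache php -r 'var_dump(file_get_contents("/var/www/site/dumps/test.sql"));'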


  • IPv6 Routing / Subnetting

    - by nappo
    Recently I installed Citrix XenServer 6.2 on a machine. My provider (Hetzner) gave me the IPv6 subnet 2a01:4f8:200:xxxx::/64. Following an article in the provider's wiki (1), I got it working and can assign IPs to my guests (CentOS). However, I can't assign a second IP to a single guest; it results in a timeout. I'm not very familiar with IPv6 routing / subnetting, so any help or tips for further troubleshooting are welcome!

    My setup:

    XenServer 6.2

        IPv6: 2a01:4f8:200:xxxx::2/112

        ip -6 route:
        2a01:4f8:200:xxxx::/112 dev xenbr0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
        fe80::1 dev xenbr0  metric 1024  mtu 1500 advmss 1440 hoplimit 0
        default via fe80::1 dev xenbr0  metric 1024  mtu 1500 advmss 1440 hoplimit 0

    Guest 1

        IPv6: 2a01:4f8:200:xxxx::3/64
        IPv6: 2a01:4f8:200:xxxx::4/64

        ip -6 route:
        2a01:4f8:200:xxxx::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
        default via fe80::1 dev eth0  metric 1  mtu 1500 advmss 1440 hoplimit 4294967295

    Guest 2

        IPv6: 2a01:4f8:200:xxxx::5/64

    Guest 1's first IPv6 address is working fine, and so is Guest 2's. As suggested by the wiki article (1), I split my /64 network into a /112. Is it right to set the host to a /112 and the guests to a /64? Why is that?
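
    For reference, adding the second address to a guest and testing from it would look something like this (a sketch of the commands being described, not a fix):

        # add the second global address on the guest
        ip -6 addr add 2a01:4f8:200:xxxx::4/64 dev eth0
        # test reachability using that address as the source
        ping6 -c 3 -I 2a01:4f8:200:xxxx::4 2a01:4f8:200:xxxx::2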


  • Website has become slower on a VPS, was much faster on a shared host. What's wrong?

    - by Arpit Tambi
    My shared host suspended my website citing system overload, so I moved the website to a VPS with 4 GB RAM. But for some reason the website has become very slow. This is the vmstat output:

        procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
         r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
         1  0      0 3050500      0      0    0    0     0     1    0    0  0  0 100  0  0

    Here's the Apache Benchmark output for a STATIC html page, run on the server itself:

        Benchmarking www.ask-oracle.com (be patient)...
        apr_poll: The timeout specified has expired (70007)
        Total of 20 requests completed

    Update: server config:

      - CentOS 5.6
      - 4 CPU cores
      - 4 GB RAM
      - LAMP stack with APC
      - WordPress
      - only one website

    It takes almost double the time to load now; the same website was much faster on shared hosting. I know I need to tweak some settings but have no clue where to start. I have already tried to optimize Apache, MySQL, etc.

    Update 2: CPU usage is low, see the uptime output:

        11:09:02 up 7 days, 21:26,  1 user,  load average: 0.09, 0.11, 0.09

    Update 3: When I load any webpage, the browser shows "Waiting" for a long time and then the page loads quickly. So I suspect the server can accept only a limited number of connections and holds extra connections in a waiting state. How do I check this?

    Update 4: Following is the output of netperf:

        TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to localhost.localdomain (127.0.0.1) port 0 AF_INET
        Recv   Send    Send
        Socket Socket  Message  Elapsed
        Size   Size    Size     Time     Throughput
        bytes  bytes   bytes    secs.    10^6bits/sec
        87380  16384   16384    10.00    9615.40

    Here are the Apache MPM settings from httpd.conf; do they look okay?

        <IfModule worker.c>
            StartServers          5
            MaxClients          100
            MinSpareThreads      50
            MaxSpareThreads     250
            ThreadsPerChild     125
            MaxRequestsPerChild 10000
            ServerLimit         100
        </IfModule>
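
    On the "connections held in a waiting state" suspicion from Update 3, the TCP state counts give a quick read (a sketch):

        # count connections to port 80 by TCP state; many SYN_RECV or FIN_WAIT
        # entries would support the limited-connections theory
        netstat -ant | awk '$4 ~ /:80$/ {print $6}' | sort | uniq -c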


  • Truncated content with Apache on Vagrant VM

    - by Nev Stokes
    I'm using Vagrant to run a CentOS VM in order to try and achieve local development parity with our live servers. I've symlinked /var/www/html to the /vagrant shared directory and am forwarding port 80 for viewing at http://localhost:4567. I'm developing with Sublime Text 2 on OS X Mountain Lion.

    Once I figured out that iptables was tripping me up, all was well and good, until I noticed something strange. I have a sample HTML page consisting of several paragraphs of lorem copy. I can view this fine in a browser on OS X. But when I make an edit, for example removing a paragraph, and refresh, the content is truncated, with the paragraph I deleted still visible. When I cat the files on the server I can see the changes I made, but they aren't reflected even when I curl localhost.

    I strongly suspect it's a problem with my Apache settings (which I didn't really tinker with), as the issue doesn't arise when I stop Apache and run sudo python -m SimpleHTTPServer 80 in the directory to view pages instead. What gives?
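
    One commonly suggested culprit for stale or truncated files served from VirtualBox shared folders is Apache's sendfile support. A hedged sketch of the usual workaround, set in httpd.conf on the guest:

        # VirtualBox shared folders and sendfile() don't cooperate well; make Apache
        # read files through the normal I/O path instead of the kernel fast path
        EnableSendfile Off
        EnableMMAP Off    # sometimes suggested alongside it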


  • EngineX ignores Auth Basic?

    - by Miko
    I have configured nginx to password-protect a directory using auth_basic. The password prompt comes up and the login works fine. However, if I refuse to type in my credentials and instead hit Escape multiple times in a row, the page will eventually load without CSS and images. In other words, continuously telling the login prompt to go away will at some point allow the page to load anyway. Is this an issue with nginx, or with my configuration? Here is my virtual host:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com/;

            location / {
                index index.php index.html;
                root /www/sub.domain.com;
                auth_basic "Restricted";
                auth_basic_user_file /www/auth/sub.domain.com;
                error_page 404 = /www/404.php;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }

    My server runs CentOS + nginx + php-fpm + xcache + MySQL.
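
    Worth noting when reading the config above: auth_basic applies per location, and requests matching the \.php$ location never pass through the protected location / block. A hedged sketch of moving the protection up to server scope, where every location inherits it:

        server {
            server_name sub.domain.com;
            root /www/sub.domain.com;

            # set at server level, inherited by all locations below
            auth_basic "Restricted";
            auth_basic_user_file /www/auth/sub.domain.com;

            location / {
                index index.php index.html;
            }

            location ~ \.php$ {
                include /usr/local/nginx/conf/fastcgi_params;
            }
        }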


  • Is there a way to change the string format for an existing CSR "Country Code" field from UTF8 to Printable String?

    - by Mike B
    CentOS 5.x

    The short version: is there a way to change the encoding for an existing CSR's "Country Code" field from UTF8String to PrintableString?

    The long version: I've got a CSR generated from a product using the standard Java security providers (JSSE/JCE). Some of the information in the CSR uses UTF8 strings, which I understand is the preferred encoding requirement as of December 31, 2003 (RFC 3280). The certificate authority I'm submitting the CSR to explicitly requires the country code to be specified as a PrintableString; my CSR has it as a UTF8String.

    I went back to the latest RFC, http://www.ietf.org/rfc/rfc5280.txt, and it seems to conflict with itself specifically on countryName. Here's where it gets a little messy: countryName is part of the relative DN. The relative DN is defined to be of type DirectoryString, which is defined as a choice of TeletexString, PrintableString, UniversalString, UTF8String, or BMPString. It also more specifically defines countryName as being either alpha (upper bound 2 bytes) or numeric (upper bound 3 bytes). Furthermore, in the appendix it refers to X520countryName, which is limited to a PrintableString of size 2.

    So it is clear why it doesn't work: the certificate authority and Sun/Java do not agree in their interpretation of the requirements for countryName. Is there anything I can do to modify the CSR to be compatible with the CA?
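
    To see exactly which string types a CSR uses, OpenSSL's ASN.1 dumper works on the request directly (a sketch; the file name is hypothetical):

        # dump the ASN.1 structure; the C= RDN shows as PRINTABLESTRING or UTF8STRING
        openssl asn1parse -in request.csr
        # human-readable view of the subject for comparison
        openssl req -in request.csr -noout -subject -text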


  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400 MB/sec, and a basic dd write test shows about the same.

    When I run the two simultaneously, I see the read speed drop to ~5 MB/sec while the write speed stays at more or less the same 400 MB/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes.

    If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement: somewhere around 100 MB/sec reads and 300 MB/sec writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal), I can end up with 150 MB/sec reads and 150 MB/sec writes, i.e. a reduction in total throughput, but certainly more suitable for my workload.

    Is this a real phenomenon, or is my synthetic test too simplistic?

    The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests, because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes are occurring when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
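
    A sketch of the kind of concurrent test and queue tweak described above (device names, paths and sizes are hypothetical; the write test overwrites the target file):

        # concurrent sequential read and write, as in the experiment described
        dd if=/dev/sda of=/dev/null bs=1M count=4096 &
        dd if=/dev/zero of=/data/ddtest bs=1M count=4096 conv=fdatasync &
        wait
        # shrink the block queue as mentioned (8 was found to be the sweet spot)
        echo 8 > /sys/block/sda/queue/nr_requests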


  • How to cache authentication in Linux using PAM/Kerberos authentication (for CVS)?

    - by Calonthar
    We have several Linux servers that authenticate Linux user passwords against our Windows Active Directory server using PAM and Kerberos 5. The Linux distro we use is CentOS 6. On one system, we have several version control systems, including CVS and Subversion, both of which authenticate users through PAM, so that users can use their normal Unix resp. Windows AD accounts.

    Since we started using Kerberos for password authentication, we have noticed that CVS on a client machine is often much slower in establishing a connection. CVS authenticates the user on every request (e.g. cvs diff, log, update, ...). Is it possible to cache the credentials that Kerberos uses, such that it does not need to ask the Windows AD server every time a user executes a cvs action?

    Our PAM config /etc/pam.d/system-auth looks like the following:

        auth        required      pam_env.so
        auth        sufficient    pam_unix.so nullok try_first_pass
        auth        requisite     pam_succeed_if.so uid >= 500 quiet
        auth        sufficient    pam_krb5.so use_first_pass
        auth        required      pam_deny.so

        account     required      pam_unix.so broken_shadow
        account     sufficient    pam_succeed_if.so uid < 500 quiet
        account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
        account     required      pam_permit.so

        password    requisite     pam_cracklib.so try_first_pass retry=3
        password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
        password    sufficient    pam_krb5.so use_authtok
        password    required      pam_deny.so

        session     optional      pam_keyinit.so revoke
        session     required      pam_limits.so
        session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
        session     required      pam_unix.so
        session     optional      pam_krb5.so
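
    For background, Kerberos itself caches tickets per user session. A quick sketch to see whether a ticket cache exists for the account in question (the principal name is a placeholder):

        kinit user@EXAMPLE.COM    # obtains and caches a TGT (prompts for a password)
        klist                     # lists the cached tickets and their lifetimes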


  • Requesting better explanation for expires headers

    - by syn4k
    I have successfully implemented expires headers; however, for several days I have been stumped by one thing. This article, http://www.tipsandtricks-hq.com/how-to-add-far-future-expires-headers-to-your-wordpress-site-1533, states:

        Keep in mind that when you use expires header the files are cached in the
        browser until it expires so do not use this on files that changes frequently.

    Other sites say the same in my reading. But this doesn't seem to be true: I have updated an image, using the same name, several times. Each time I update and refresh my browser, the new image (with the same name) displays. I understand from the article that the old image should display unless I use a new name. Do you happen to know where the misunderstanding is?

    I have verified that the image in question has expires headers set on it.

    Request headers:

        Host                domain.com
        User-Agent          Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.28) Gecko/20120306 Firefox/3.6.28 FirePHP/0.5
        Accept              image/png,image/*;q=0.8,*/*;q=0.5
        Accept-Language     en-us,en;q=0.5
        Accept-Encoding     gzip,deflate
        Accept-Charset      ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive          115
        Connection          keep-alive
        Referer             http://domain.com/index.php
        Cookie              __utma=1.61479883.1332439113.1332783348.1332796726.4; __utmz=1.1332439113.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); PHPSESSID=lv2hun9klt2nhrdkdbqt8abug7; __utmb=1.33.10.1332796726; __utmc=1; ck_authorized=true
        x-insight           activate
        If-Modified-Since   Mon, 26 Mar 2012 21:55:33 GMT
        Cache-Control       max-age=0

    Response headers:

        Date                Mon, 26 Mar 2012 22:06:50 GMT
        Server              Apache/2.2.3 (CentOS)
        Connection          close
        Expires             Wed, 25 Apr 2012 22:06:50 GMT
        Cache-Control       max-age=2592000
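
    For context, a minimal sketch of how far-future expires headers like the ones above are typically configured in Apache 2.2 (assumes mod_expires is loaded; the values are illustrative):

        <IfModule mod_expires.c>
            ExpiresActive On
            # serve PNG images with a 30-day freshness lifetime
            ExpiresByType image/png "access plus 1 month"
        </IfModule>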


  • IPtables: DNAT not working

    - by GetFree
    On a CentOS server I have, I want to forward port 8080 to a third-party webserver, so I added this rule:

        iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination thirdparty_server_ip:80

    But it doesn't seem to work. In an effort to debug the process, I added these two LOG rules:

        iptables -t mangle -A PREROUTING -p tcp --src my_laptop_ip --dport ! 22 -j LOG --log-level warning --log-prefix "[_REQUEST_COMING_FROM_CLIENT_] "
        iptables -t nat -A POSTROUTING -p tcp --dst thirdparty_server_ip -j LOG --log-level warning --log-prefix "[_REQUEST_BEING_FORWARDED_] "

    (The --dport ! 22 part is there just to filter out the SSH traffic so that my log file doesn't get flooded.)

    According to this page, the mangle/PREROUTING chain is the first one to process incoming packets and the nat/POSTROUTING chain is the last one to process outgoing packets. And since the nat/PREROUTING chain comes in the middle of the other two, the three rules should do this:

      1. the rule in mangle/PREROUTING logs the incoming packets
      2. the rule in nat/PREROUTING modifies the packets (changing the destination IP and port)
      3. the rule in nat/POSTROUTING logs the modified packets about to be forwarded

    Although the first rule does log incoming packets coming from my laptop, the third rule doesn't log the packets that are supposed to be modified by the second rule. It does log, however, packets that are produced on the server itself, so I know the two LOG rules are working properly. Why are the packets not being forwarded, or at least why are they not being logged by the third rule?

    PS: there are no more rules than those three. All other chains in all tables are empty, with policy ACCEPT.
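
    Two quick checks that usually narrow this kind of problem down (a debugging sketch, not a fix):

        # per-rule packet/byte counters: does the DNAT rule ever match?
        iptables -t nat -L PREROUTING -n -v
        # DNATed packets traverse the FORWARD path, which requires routing enabled
        sysctl net.ipv4.ip_forward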


  • Creating Custom ISO Images

    - by ericl42
    I am working on creating some custom ISO images, primarily using Fedora and CentOS. I want the image to be a bootable live CD with some specific files on it, and to offer the option of being installed to the hard drive. I've read various articles but want a few more opinions, since I've never done this before. Currently I'm trying two different methods:

      1. Install Fedora with the configuration exactly how I want it, then run the livecd-tools program to pull everything I currently have into an ISO. I haven't got this to work yet, and I do see a few issues with it, such as the default passwords I had to put in.

      2. Run a Fedora live CD, install a few things I want on it, and then copy the image of it. I believe this would work better since it has more of a live CD feel. However, I'm not 100% sure how I should go about pulling the current image into my own ISO.

    I know some people have said to use mkisofs and a few other programs, but any advice would be greatly appreciated.
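
    For the livecd-tools route, the tool builds an ISO from a kickstart description rather than from a running system. A minimal sketch (the kickstart path is the one the Fedora package shipped at the time, so treat it as an assumption):

        # build a live ISO from a kickstart description
        livecd-creator \
            --config=/usr/share/livecd-tools/livecd-fedora-minimal.ks \
            --fslabel=Custom-LiveCD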


  • Can't authorize a server for Amazon RDS

    - by Parris
    We are attempting to slowly migrate a website over to AWS, among other things, and we decided the first thing to move was the database. We have a dedicated server with a different hosting provider, and we only have one IP.

    I am having trouble authorizing that IP so the old server can connect to RDS. Using the mysql CLI, the connection simply hangs for a while, then responds:

        ERROR 2003 (HY000): Can't connect to MySQL server on 'db.address.us-east-1.rds.amazonaws.com' (110)

    It did work from my laptop, though. I am not quite sure what is wrong. I have a feeling I don't quite understand CIDR/IP. I simply took the IP address and tacked /32 onto the end. Then I gleaned that it also has to do with the subnet mask; ifconfig reports 255.255.255.0. I found a calculator, and the IP changed a bit and had /24 at the end. That still didn't work.

    One other note: perhaps I don't know enough about the differences between OSes. The hosting provider is using CentOS, while our development machines are all Ubuntu. Any insight would be extremely helpful! THANKS :)
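
    On the CIDR point: /32 denotes exactly one address and is the usual suffix for authorizing a single server, while /24 covers 256 addresses. A worked example with a placeholder IP:

        203.0.113.45/32   ->  only 203.0.113.45
        203.0.113.0/24    ->  203.0.113.0 through 203.0.113.255

    Note also that the address to authorize is the server's public IP as seen from outside (for example, what curl ifconfig.me reports), which is not necessarily what ifconfig shows on the machine.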


  • Compiling the Linux kernel, how much disk space is needed?

    - by ant2009
    Hello, I have downloaded the newest stable Linux kernel, 2.6.33.2, and thought I would test it using VirtualBox. So I created a dynamically sized hard disk of 4 GB and installed CentOS 5.3 with just the minimum packages. I set up make menuconfig with just the default settings.

    After that I ran make and got the following error:

        net/bluetooth/hci_sysfs.o: final close failed: No space left on device
        make[2]: *** [net/bluetooth/hci_sysfs.o] Error 1
        make[1]: *** [net/bluetooth] Error 2
        make: *** [net] Error 2

    The amount of space I have left:

        # df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00
                              3.3G  3.3G     0 100% /
        /dev/hda1              99M   12M   82M  13% /boot
        tmpfs                 125M     0  125M   0% /dev/shm

    My virtual disk's size is 4 GB, but the actual size is 3.5 GB:

        $ ls -hl
        total 7.5G
        -rw-------. 1 root root 3.5G 2010-04-13 14:08 LFS.vdi

    How much space should I allow for compiling and installing a Linux kernel? Are there any guidelines to follow when doing this? This is my first time, so I'm just experimenting. Many thanks.
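
    If growing the virtual disk is awkward, kernel builds can also place all build output on a different filesystem with the O= option (a sketch; the target directory is hypothetical, and the source tree must be clean, e.g. after make mrproper):

        # keep the source tree untouched and put all objects elsewhere
        mkdir -p /mnt/space/kbuild
        make O=/mnt/space/kbuild defconfig
        make O=/mnt/space/kbuild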


  • Better logging for cronjob output using /usr/bin/logger

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which:

      - doesn't spam the messages to email or the console,
      - doesn't create yet another krufty logfile which requires cleanup months or years later,
      - captures the log information somewhere, so it can be viewed later if desired,
      - works on most Unixes,
      - fits into an existing log infrastructure,
      - uses common syslog conventions like 'facility'.

    Some of these are third-party scripts, and don't always do logging internally.

    UPDATE 2010-04-30: In the process of writing this question, I think I have answered myself. So I'll answer myself, Jeopardy-style. Is there any problem with this method? The following will send any cron output to /usr/bin/logger, which will send it to syslog with a 'tag' of nsca_check_disk. Syslog handles it from there. My systems (CentOS and FreeBSD) already handle log rotation.

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk

    /var/log/messages now has one additional message, which says this:

        Apr 29 17:40:00 192.168.6.19 nsca_check_disk: 1 data packet(s) sent to host successfully.

    I like /usr/bin/logger because it works well with an existing syslog configuration and infrastructure, and is included with most Unix distros. Most *nix distributions already do log rotation, and do it well.
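
    Since the wishlist above mentions syslog facilities: logger can also tag messages with an explicit facility and priority, so syslog can route them to a file of your choosing (a sketch; local3 is an arbitrary choice):

        */5 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | /usr/bin/logger -t nsca_check_disk -p local3.info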


  • Varnish returning 503, FetchError (could not get storage)

    - by Archan
    On the current setup we're running into a problem with Varnish. We're on a CentOS 5.7 x86_64 xenpv VPS with cPanel WHM, hosted at VPS.net. Sometimes we receive a Guru Meditation from Varnish, and when we look in the varnishlog with the following command

        varnishlog -d -c -m TxStatus:503

    it returns output similar to the following:

           15 VCL_call     c recv
           15 VCL_acl      c NO_MATCH devs
           15 VCL_return   c pass
           15 VCL_call     c hash
           15 Hash         c ****
           15 Hash         c *************
           15 VCL_return   c hash
           15 VCL_call     c pass pass
           15 Backend      c 12 default default
           15 TTL          c 1835862523 RFC 0 -1 -1 1332454056 0 1332454055 375007920 0
           15 VCL_call     c fetch hit_for_pass
           15 ObjProtocol  c HTTP/1.1
           15 ObjResponse  c OK
           15 ObjHeader    c Date: Thu, 22 Mar 2012 22:07:35 GMT
           15 ObjHeader    c Server: Apache/2.2.21 (Unix) mod_ssl/2.2.21 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 mod_fcgid/2.3.6
           15 ObjHeader    c X-Powered-By: PHP/5.3.9
           15 ObjHeader    c Expires: Thu, 19 Nov 1981 08:52:00 GMT
           15 ObjHeader    c Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
           15 ObjHeader    c Pragma: no-cache
           15 ObjHeader    c Content-Type: text/html; charset=utf-8
           15 ObjHeader    c X-Cacheable: NO:Cache-Control=private
           15 FetchError   c chunked read_error: 12 (Could not get storage)
           15 VCL_call     c error deliver
           15 VCL_call     c deliver deliver

    As far as I could gather, we could try increasing the nuke_limit, but we currently have a nuke_limit of 500, and when running

        varnishstat -1 -f n_lru_nuked

    we "only" get a total of 1031, even though we have seen the error happen on several pages. When we then run top to see how much memory Varnish is using, it only shows 763m, although we've allowed it to use 1200m. Any ideas of what the problem can be?
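
    For reference, the storage size and nuke_limit are set on the varnishd command line; a hedged sketch of the relevant knobs in a sysconfig-style init file (variable names and paths vary by package and version, so treat them as placeholders):

        # /etc/sysconfig/varnish (sketch)
        VARNISH_STORAGE="malloc,1200M"      # or: file,/var/lib/varnish/storage.bin,1200M
        DAEMON_OPTS="-a :80 -T localhost:6082 -s ${VARNISH_STORAGE} -p nuke_limit=500"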


  • Tomcat Solr times out

    - by user568458
    (Plesk 10.4, CentOS 5.8 Linux, Apache 2 server, with Tomcat 5 on port 8080 and Apache Solr.)

    I get "The connection has timed out" when requesting domain.com:8080, www.domain.com:8080 or ip.ad.dr.ess:8080. Every reason I can find why this might happen seems not to be the case:

      - Plesk thinks Tomcat is running fine and lists it as an active service.
      - The firewall currently has an accept-all rule on port 8080.
      - There's nothing relevant in the Catalina Tomcat logs (/var/log/tomcat5), just some output from the last time Tomcat was started. There's no record at all of the requests that fail.
      - netstat -lnp | grep 8080 gives the following, which I believe means Tomcat is listening for requests to port 8080 on all IP addresses, from any IP and any port (please correct me if I'm wrong):

            tcp   0   0 0.0.0.0:8080   0.0.0.0:*   LISTEN   4018/java

    This covers every cause of this timeout that I can find, so I must be missing something fundamental. It seems Tomcat is running, listening on the right port, getting an appropriate IP address, not obstructed by a firewall, and not failing after receiving a request in a way that would be recorded in the logs (so I believe it can't be out of memory, or anything like that). I'm all out of ideas on how to continue debugging this. I must have overlooked something obvious. Can anyone help?
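
    One more data point worth collecting: whether the timeout also happens from the server itself, which separates Tomcat problems from network-path problems (a sketch):

        # from the server itself: bypasses external firewalls and routing entirely
        curl -v http://localhost:8080/
        # from outside: does the TCP handshake even complete?
        curl -v --connect-timeout 10 http://ip.ad.dr.ess:8080/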


  • Blocking an IP in Webmin

    - by Dan J
    I've been checking my /var/log/secure log recently and have seen the same bot trying to brute force onto my CentOS server running Webmin. I created a chain + rule in Networking -> Linux Firewall:

        Drop    If source is 113.106.88.146

    But I'm still seeing the attempted logins in the log:

        Jun 6 10:52:18 CentOS5 sshd[9711]: pam_unix(sshd:auth): check pass; user unknown
        Jun 6 10:52:18 CentOS5 sshd[9711]: pam_succeed_if(sshd:auth): error retrieving information about user larry
        Jun 6 10:52:19 CentOS5 sshd[9711]: Failed password for invalid user larry from 113.106.88.146 port 49328 ssh2

    Here is the contents of /etc/sysconfig/iptables:

        # Generated by webmin
        *filter
        :banned-ips - [0:0]
        -A INPUT -p udp -m udp --dport ftp-data -j ACCEPT
        -A INPUT -p udp -m udp --dport ftp -j ACCEPT
        -A INPUT -p udp -m udp --dport domain -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 20000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 10000 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport https -j ACCEPT
        -A INPUT -p tcp -m tcp --dport http -j ACCEPT
        -A INPUT -p tcp -m tcp --dport imaps -j ACCEPT
        -A INPUT -p tcp -m tcp --dport imap -j ACCEPT
        -A INPUT -p tcp -m tcp --dport pop3s -j ACCEPT
        -A INPUT -p tcp -m tcp --dport pop3 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ftp-data -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ftp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport domain -j ACCEPT
        -A INPUT -p tcp -m tcp --dport smtp -j ACCEPT
        -A INPUT -p tcp -m tcp --dport ssh -j ACCEPT
        -A banned-ips -s 113.106.88.146 -j DROP
        COMMIT
        # Completed

        # Generated by webmin
        *mangle
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed

        # Generated by webmin
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed
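
    A detail visible in the dump above: the banned-ips chain contains a DROP rule, but no rule in INPUT sends traffic into that chain. A hedged sketch of the jump rule that would wire it in, ahead of the service ACCEPTs:

        # make INPUT consult the custom chain first
        iptables -I INPUT 1 -j banned-ips
        service iptables save    # persist on CentOS, assuming the iptables init script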


  • Running python script in incrontab in Debian

    - by WilliamMayor
    I have a user, dropbox, that runs the Dropbox daemon. I want to monitor the directories in the Dropbox directory for new files and run a Python script when they appear. I have the Python script, which I know works:

        $ /home/dropbox/monitor.py
        Trying to get lock
        Got lock, waiting for Dropbox to be idle
        Dropbox idle
        Finding instructions
        Done, releasing lock

    I have an incrontab entry:

        $ incrontab -l
        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor.py | logger
        /home/dropbox/test IN_CREATE logger "$$ $@ $# $% $&"

    When I add a file to the test directory, I see the output in /var/log/syslog:

        $ touch /home/dropbox/test/a
        $ tail /var/log/syslog
        ...
        Nov 9 10:18:27 vps incrond[1354]: (dropbox) CMD (logger "$ /home/dropbox/test a IN_CREATE 256")
        Nov 9 10:18:27 vps logger: "$ /home/dropbox/test a IN_CREATE 256"
        ...

    However, when I add a file to the Dropbox directory, the command doesn't seem to run:

        $ touch /home/dropbox/Dropbox/a
        $ tail /var/log/syslog
        ...
        Nov 9 10:24:16 vps incrond[1354]: (dropbox) CMD (/home/dropbox/monitor.py | logger)
        ...

    So the incron daemon notices the new file and the correct command is found to be executed, but it never actually gets executed. Nor are there any error messages. It kind of seems like incrontab can only be used to run the simplest of commands.

    This might be a similar question to "Incrond running but not executing commands CentOS 6.4", but I think I don't have env problems; every path is absolute. I tried changing .../monitor.py to /usr/bin/python2.7 .../monitor.py just in case, but it didn't make any difference.
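
    One thing worth knowing when reading the table entries above: incrond executes the command directly rather than through a shell, so shell syntax like | is passed to the program as arguments instead of creating a pipeline. A hedged workaround sketch is to move the pipeline into a wrapper script (the script name is hypothetical; remember to chmod +x it):

        #!/bin/sh
        # /home/dropbox/monitor-wrapper.sh
        # run the monitor and send its output to syslog via logger
        /home/dropbox/monitor.py 2>&1 | /usr/bin/logger -t monitor

    with the incrontab entry then pointing at the script:

        /home/dropbox/Dropbox IN_CREATE /home/dropbox/monitor-wrapper.sh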

