Search Results

Search found 3039 results on 122 pages for 'centos 5'.

Page 115/122

  • How much RAM should I need on my VPS package? Am I being ripped off?

    - by Tamerax
    Hello! So, I'm currently on a VPSVille Cpanel3 account that has 768 MB guaranteed RAM and 2048 MB burst RAM (full details here: http://www.vpsville.ca/cpanel-vps). It's running CentOS, cPanel, Apache and FastCGI. On the server itself I have a Joomla community site with a forum that generally has about 20 people on it at most at any point, and during the evening often no one. It's a pretty small site but has a number of modules running on it, and it gets about 6,000 visits a month. Also on the server is a WordPress site that gets about 80-150 visits a day, 2 other WordPress sites that aren't developed yet so they get no traffic at all, and 2 static HTML websites that also only get about 500 hits a month. All in all, no huge sites. The issue is that I get "out of memory" errors fairly frequently; they kill my server and I need to reboot it to get all my sites up and running again. It seems to me that I shouldn't have these issues with that much RAM allotted to my account, and every time I send in a support ticket they just tell me to upgrade the RAM. Now, I'm still pretty new to all this, so I'm not a good judge of how much I really need for my sites to run. I don't know whether my sites really do need this much, or whether VPSVille has oversold their servers, doesn't actually have those resources available, and I'm getting ripped off. So, how much RAM should I be using with my current setup? Thanks!
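
    Before deciding whether the package is undersized, it helps to measure what the VPS actually uses. A minimal check with the standard procps tools, run once at a quiet moment and again when an out-of-memory error hits:

        free -m                         # total vs. used memory, in MB
        ps aux --sort=-rss | head -15   # the 15 fattest processes by resident memory

    If a handful of Apache/FastCGI children account for most of the resident memory, capping their count and recycling them is usually cheaper than buying more RAM.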

    Read the article

  • Auto-restart mysql when it dies

    - by Los Frijoles
    I have a Rackspace server that I have been renting to run my personal projects on. Since I am cheap, it has 256 MB of RAM and honestly can't handle a lot. Every once in a while, when there is a sharp uptick in traffic, the server decides to start killing processes, and mysqld seems to be a popular one for it to kill. I try to visit my site and am greeted with the message that there was an error establishing the database connection. Inspection of the logs reveals that mysqld was killed due to lack of memory. Since I am still as poor as I was yesterday and don't want to upgrade my Rackspace VM's RAM, is there a way I can tell it to automagically restart mysqld when it dies? I have a thought to use something like crontab, but alas, I don't know exactly what to do there either. I guess I am a product of the "Linux on your desktop" generation, since I can do most things on my desktop and laptop (which run Linux almost exclusively) but still lack a lot of server administration skills for Linux. The server runs CentOS 6.3.
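
    A cron-driven watchdog is usually enough for this. A minimal sketch - the script path and log file are invented names, and it assumes the stock CentOS init script:

        #!/bin/bash
        # /usr/local/bin/mysqld-watchdog.sh - restart mysqld if it has died
        if ! /sbin/service mysqld status >/dev/null 2>&1; then
            /sbin/service mysqld start
            echo "$(date): mysqld was down, restarted" >> /var/log/mysqld-watchdog.log
        fi

    Run it every minute from root's crontab:

        * * * * * /usr/local/bin/mysqld-watchdog.sh

    Shrinking MySQL's footprint in /etc/my.cnf (smaller key_buffer_size, fewer max_connections) also makes the OOM killer pick on mysqld less often in the first place.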

    Read the article

  • Variable host IP address in iptables rule

    - by DrakeES
    I am running CentOS 6.4 with OpenVZ on my laptop. In order to provide Internet access for the VEs I have to apply the following rule on the laptop:

        iptables -t nat -A POSTROUTING -j SNAT --to-source <LAPTOP_IP>

    It works fine. However, I work in different places - office, home, partner's office, etc. The IP of my laptop is different in each of those places, so I have to alter the rule above each time I change location. I have created a workaround which determines the IP and applies the rule:

        #!/bin/bash
        IP=$(ifconfig | awk -F':' '/inet addr/&&!/127.0.0.1/{split($2,_," ");print _[1]}')
        iptables -t nat -A POSTROUTING -j SNAT --to-source $IP

    The workaround above works; I only still have to execute it manually. Perhaps I could make it a hook that executes whenever my laptop obtains an IP address from DHCP - how can I do that? Also, I am wondering if there is a more elegant way of getting this done in the first place with iptables. Maybe there is a syntax that lets the rule refer to the interface's current IP address?
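
    On the last point, there is a target built for exactly this: MASQUERADE looks up the outgoing interface's current address for every packet, so the rule never needs rewriting. A sketch, assuming eth0 is the interface that receives the DHCP lease:

        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    If SNAT must be kept for some reason, the existing script can be run from a dhclient exit hook (a script dhclient executes after each lease change; the exact path varies by release, e.g. /etc/dhcp/dhclient-exit-hooks), which covers the "run it automatically" half of the question.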

    Read the article

  • Some websites hosted on my server can't be reached from some places.

    - by valter
    Hello. I have a problem that is causing me headaches. I have a web server at 100tb.com, running CentOS. I also have these nameservers set up: 67.213.220.170 ns1.maisturismo.net and 67.213.220.171 ns2.maisturismo.net. My domain is at GoDaddy. I added two host records pointing to the nameserver IPs (NS1 to the first IP, and NS2 to the second), then changed the nameservers of maisturismo.net to ns1.maisturismo.net and ns2.maisturismo.net: http://img20.imageshack.us/i/dnswm.jpg/ Below is an image showing my DNS records for maisturismo.net: http://img137.imageshack.us/i/nameservers.jpg/ It's strange - everything looks fine, but the website is not reachable from the zend2.com proxy, or from some other places, like a friend's house that doesn't use the same web provider I use. I have another nameserver set up on the same server that has the same problem: all websites that use it can't be reached from zend2.com or from my friend's house, except one ".com.br" (Brazilian) domain. Do you have any idea what is causing this? I really can't imagine what the problem is... Thanks.
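
    Symptoms of the form "resolves for some ISPs but not others" usually mean the registry delegation and the zone itself disagree. Comparing the two directly narrows it down quickly (a diagnostic sketch using the names and IPs from the question):

        dig +short NS maisturismo.net @a.gtld-servers.net   # what the .net registry delegates
        dig +short maisturismo.net @67.213.220.170          # what your own nameserver answers
        dig +short maisturismo.net @8.8.8.8                 # what an outside resolver sees

    If the first command returns the expected ns1/ns2 records but the third returns nothing, the nameserver on the 100tb box is probably not answering authoritatively for the zone (or is blocked for some networks), which points at a server-side rather than registrar-side problem.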

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all. I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this:

        rsync -avz -e ssh [email protected]:/ /mybackup/

    It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails: when I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout to a file, but I don't really want to know about every file - I just want to know whether it all worked or not. Thanks
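
    rsync's exit code already answers "worked or not", so a thin wrapper that records only the verdict avoids logging every file. A sketch - the log path is arbitrary and the remote spec is a placeholder for the redacted one above:

        #!/bin/bash
        # /etc/cron.daily/backup - rsync with a one-line result log
        LOG=/var/log/rsync-backup.log
        rsync -az -e ssh user@remotehost:/ /mybackup/ >/dev/null 2>>"$LOG"
        status=$?
        if [ "$status" -eq 0 ]; then
            echo "$(date): backup OK" >> "$LOG"
        else
            echo "$(date): backup FAILED, rsync exit code $status" >> "$LOG"
        fi

    Dropping -v keeps stdout quiet; stderr (the "client reset" style failures) still lands in the log next to the verdict, so a failed night is one grep away.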

    Read the article

  • How do you get Linux to honor setuid directories?

    - by Takigama
    Some time ago, during a conversation in IRC, one user in a channel I was in suggested setting the setuid bit on a directory so that files would inherit its owner, to solve a problem someone else was having. At the time I spoke up and said "Linux doesn't support setuid directories." After that, the person giving the advice showed me a pastebin (http://codepad.org/4In62f13) of his system honoring the setuid permission set on a directory. Just to explain: when I say "Linux doesn't support setuid directories", what I mean is that you can run "chmod u+s directory" and it will set the bit on the directory, but Linux (as I understood it) ignores this bit on directories. Try as I might, I just can't quite replicate that pastebin. Someone suggested to me once that it might be possible to emulate the behaviour with SELinux - and playing around with rules, it's possible to force a uid on a file, but not from a setuid directory permission (that I can see). Reading around on the internet has been fairly uninformative - most places claim "no, setuid on directories does not work with Linux", with the occasional "it can be done under specific circumstances" (such as this: http://arstechnica.com/etc/linux/2003/linux.ars-12032003.html). I don't remember who the original person was, but the original system was a Debian 6 system, and the filesystem it was running was XFS mounted with "default,acl". I've tried replicating that, but no luck so far (I've tried various versions of Debian, Ubuntu, Fedora and CentOS). Can anyone clue me in on what or how you get a system to honor setuid on a directory?
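
    For contrast, the directory inheritance Linux does implement is the setgid bit, which makes new files inherit the directory's group - while the setuid bit is silently ignored, exactly as described above. A quick demo (path and group name invented):

        mkdir /tmp/inherit-demo
        chgrp somegroup /tmp/inherit-demo
        chmod u+s,g+s /tmp/inherit-demo
        touch /tmp/inherit-demo/newfile
        ls -l /tmp/inherit-demo/newfile   # group is somegroup (setgid honored);
                                          # owner is the creating user (setuid ignored)

    Owner inheritance, where it appears, tends to come from filesystem or OS-specific semantics (e.g. BSD-style group rules, or NFS/ACL tricks), not from chmod u+s on a mainline Linux kernel.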

    Read the article

  • Using iptables to make a VPN router

    - by lost_in_the_sauce
    I am attempting to make a VPN connection to a third-party VPN site, then forward traffic from my internal computers (ssh and ping for now) out to the VPN site using iptables.

        3rd Party <- (tun0/eth0) Linux VPN Box (eth1) <- Windows7TestBox

    I am running CentOS 6.3 and have two network connections: eth0 (public) and eth1 (private). I am running vpnc-0.5.3-4, which currently connects to my destination. When I connect, I am able to ping the destination IP addresses, but that is as far as I can get:

        ping -I tun0 10.1.33.26   # success
        ping -I eth0 10.1.33.26   # fail
        ping -I eth1 10.1.33.26   # fail

    My private-network Windows 7 test box is set up to use the eth1 (private) address of my VPN server as its gateway and can ping him fine. I need iptables to send the Windows 7 traffic out the VPN tunnel. Over the past few days I have tried many different iptables configurations from this site and others; the other examples are either too simple or overly complicated. The only thing this server is doing is connecting to the VPN and forwarding all traffic, so we can "flush" everything and start from scratch here. It is a blank slate.

        #!/bin/bash
        echo "Define variables"
        ipt="/sbin/iptables"
        echo "Zero out all counters"
        $ipt -Z
        $ipt -t nat -Z
        $ipt -t mangle -Z
        echo "Flush all active rules, delete all chains"
        $ipt -F
        $ipt -X
        $ipt -t nat -F
        $ipt -t nat -X
        $ipt -t mangle -F
        $ipt -t mangle -X
        $ipt -P INPUT ACCEPT
        $ipt -P FORWARD ACCEPT
        $ipt -P OUTPUT ACCEPT
        $ipt -t nat -A POSTROUTING -o tun0 -j MASQUERADE
        $ipt -A FORWARD -i eth1 -o eth0 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o eth1 -j ACCEPT
        $ipt -A FORWARD -i eth0 -o tun0 -j ACCEPT
        $ipt -A FORWARD -i tun0 -o eth0 -j ACCEPT

    Again, I have tried many variations of the above and many other rules from other posts, but haven't been able to move forward. It seems like such a simple task, and yet...
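
    Two things worth checking that the script above never touches - offered as a hedged suggestion, not a diagnosis. Kernel forwarding must be enabled, and the FORWARD chain needs a path between the private interface and the tunnel itself (the script only pairs eth1 with eth0):

        sysctl -w net.ipv4.ip_forward=1                  # off by default on CentOS
        iptables -A FORWARD -i eth1 -o tun0 -j ACCEPT
        iptables -A FORWARD -i tun0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT

    With all policies at ACCEPT the two FORWARD rules are technically redundant, but the ip_forward sysctl is the usual missing piece in exactly this symptom pattern: the box itself can reach the VPN while clients behind it cannot.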

    Read the article

  • Print over the internet from a remote linux session locally (on a Windows 7 machine) to the shared printers?

    - by obeliksz
    I'm trying to use a Linux virtual machine as a file server for Windows clients. I have successfully implemented remote file sharing (Samba + SSH), with which I am able to print locally using a little program that I made for this purpose (jetforms style)... but I would like to hear about a somewhat more direct approach. How can I attach the printers to the server, so that I can, for example, open a file in the remote session and see my local printers (on the machine from which I established the session) in the print dialog box? I guess there should be some kind of PuTTY tunneling, but I don't know how. I have a Windows 7 machine locally; the CentOS 6 VM is across the internet and has ssh, CUPS and Samba. I found a question which asks the opposite - connecting from Linux to a Windows-based server - but that Windows machine is on a domain, while mine is just a simple Windows workstation that is behind NAT and has a dynamic IP. That question is: Print from Linux to Windows networked printer.
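
    Because the Windows box is behind NAT with a dynamic IP, the tunnel has to be opened from the Windows side toward the server - a reverse tunnel. A sketch of the idea (port numbers arbitrary; assumes the printer is shared via standard Windows file-and-print sharing on port 445):

        # run on the Windows machine; plink accepts the same -R syntax
        ssh -R 4450:localhost:445 user@centos-server

    While that session is up, the CentOS box can reach the Windows printer share at localhost:4450, and a CUPS queue can be pointed at it through the Samba backend. Whether the smb:// device URI on this CUPS version accepts a nonstandard port is an assumption that needs verifying; if it does not, tunneling to an IPP-capable print service on the Windows side is the fallback.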

    Read the article

  • Iptables and system-config-firewall

    - by nivde92
    I had a set of netfilter rules set with iptables, but someone else told me to use system-config-firewall to add a rule for sharing files with Windows (Samba). This rewrote the iptables rules file and I lost my own custom rules. I have a backup copy, but am having trouble restoring it. Edit: The server is CentOS. I already tried to restore the rules with

        iptables-restore < /root/working.iptables.rules

    but for some reason the rules don't change. What are you trying to do? Trying to restore the iptables rules that I have in a backup file. What have you tried in order to make it happen? I've tried to modify the iptables file with vim, since the iptables-restore command was no help. What results did you expect? To get the old rules back. What actually happened? Nothing - when I run the command or edit the file by hand, the rules don't change at all. Maybe something else is overwriting them.
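
    One plausible explanation, offered as an assumption rather than a certainty: on CentOS the iptables init script reloads /etc/sysconfig/iptables whenever the service restarts, so rules restored only into the running kernel get wiped again. Restoring and then saving back to that file keeps them:

        iptables-restore < /root/working.iptables.rules
        service iptables save    # writes the now-running rules to /etc/sysconfig/iptables
        iptables -L -n -v        # confirm the running rule set actually changed

    If iptables -L shows the old rules immediately after iptables-restore, also check the backup file's format: iptables-restore expects iptables-save output, not a shell script of iptables commands.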

    Read the article

  • Kernel oops on Linux running in VirtualBox breaks some IO-related functionality on the server

    - by Kristoffer E
    We are having problems with CentOS release 6.3 running in VirtualBox on Windows 7 machines. The symptoms are the following: everything works as normal for several hours, even days, then something happens which breaks the system. What we still can do after this something happens:

    - Access the web server
    - Use existing SSH sessions to run top and free

    What does not work:

    - Starting new SSH sessions (hangs after username and password are entered)
    - Running ls in existing SSH sessions (hangs)
    - SSI includes from our web servers that fetch data from remote machines
    - probably more

    What we see on the server when this something happens:

    - Load average goes from basically nothing to around 3
    - CPU usage is still low (5%)
    - Disk activity is low (running iostat)
    - Plenty of memory available
    - Plenty of disk space available

    In /var/log/messages we get the following:

        Jun 14 01:10:48 devvm kernel: e1000 0000:00:03.0: eth0: Detected Tx Unit Hang
        Jun 14 01:10:48 devvm kernel: Tx Queue <0>
        Jun 14 01:10:48 devvm kernel: TDH <2e>
        Jun 14 01:10:48 devvm kernel: TDT <30>
        Jun 14 01:10:48 devvm kernel: next_to_use <30>
        Jun 14 01:10:48 devvm kernel: next_to_clean <2e>
        Jun 14 01:10:48 devvm kernel: buffer_info[next_to_clean]
        Jun 14 01:10:48 devvm kernel: time_stamp <1038284db>
        Jun 14 01:10:48 devvm kernel: next_to_watch <2f>
        Jun 14 01:10:48 devvm kernel: jiffies <103828b42>
        Jun 14 01:10:48 devvm kernel: next_to_watch.status <0>
        Jun 14 01:10:50 devvm kernel: e1000 0000:00:03.0: eth0: Detected Tx Unit Hang
        Jun 14 01:10:50 devvm kernel: Tx Queue <0>
        Jun 14 01:10:50 devvm kernel: TDH <2e>
        Jun 14 01:10:50 devvm kernel: TDT <30>
        Jun 14 01:10:50 devvm kernel: next_to_use <30>
        Jun 14 01:10:50 devvm kernel: next_to_clean <2e>
        Jun 14 01:10:50 devvm kernel: buffer_info[next_to_clean]
        Jun 14 01:10:50 devvm kernel: time_stamp <1038284db>
        Jun 14 01:10:50 devvm kernel: next_to_watch <2f>
        Jun 14 01:10:50 devvm kernel: jiffies <103829312>
        Jun 14 01:10:50 devvm kernel: next_to_watch.status <0>
        Jun 14 01:10:52 devvm kernel: ------------[ cut here ]------------
        Jun 14 01:10:52 devvm kernel: WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x26d/0x280() (Not tainted)
        Jun 14 01:10:52 devvm kernel: Hardware name: VirtualBox
        Jun 14 01:10:52 devvm kernel: NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out
        Jun 14 01:10:52 devvm kernel: Modules linked in: vboxsf(U) ipv6 ppdev parport_pc parport microcode sg vboxguest(U) i2c_piix4 i2c_core e1000 snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc pcnet32 mii ext4 mbcache jbd2 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
        Jun 14 01:10:52 devvm kernel: Pid: 0, comm: swapper Not tainted 2.6.32-279.el6.x86_64 #1
        Jun 14 01:10:52 devvm kernel: Call Trace:
        Jun 14 01:10:52 devvm kernel: <IRQ> [<ffffffff8106b747>] ? warn_slowpath_common+0x87/0xc0
        Jun 14 01:10:52 devvm kernel: [<ffffffff8106b836>] ? warn_slowpath_fmt+0x46/0x50
        Jun 14 01:10:52 devvm kernel: [<ffffffff814595fd>] ? dev_watchdog+0x26d/0x280
        Jun 14 01:10:52 devvm kernel: [<ffffffff81099138>] ? sched_clock_cpu+0xb8/0x110
        Jun 14 01:10:52 devvm kernel: [<ffffffff81459390>] ? dev_watchdog+0x0/0x280
        Jun 14 01:10:52 devvm kernel: [<ffffffff8107e897>] ? run_timer_softirq+0x197/0x340
        Jun 14 01:10:52 devvm kernel: [<ffffffff810a21c0>] ? tick_sched_timer+0x0/0xc0
        Jun 14 01:10:52 devvm kernel: [<ffffffff8102b40d>] ? lapic_next_event+0x1d/0x30
        Jun 14 01:10:52 devvm kernel: [<ffffffff81073ec1>] ? __do_softirq+0xc1/0x1e0
        Jun 14 01:10:52 devvm kernel: [<ffffffff81096c50>] ? hrtimer_interrupt+0x140/0x250
        Jun 14 01:10:52 devvm kernel: [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
        Jun 14 01:10:52 devvm kernel: [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
        Jun 14 01:10:52 devvm kernel: [<ffffffff81073ca5>] ? irq_exit+0x85/0x90
        Jun 14 01:10:52 devvm kernel: [<ffffffff81505be0>] ? smp_apic_timer_interrupt+0x70/0x9b
        Jun 14 01:10:52 devvm kernel: [<ffffffff8100bc13>] ? apic_timer_interrupt+0x13/0x20
        Jun 14 01:10:52 devvm kernel: <EOI> [<ffffffff810387cb>] ? native_safe_halt+0xb/0x10
        Jun 14 01:10:52 devvm kernel: [<ffffffff810149cd>] ? default_idle+0x4d/0xb0
        Jun 14 01:10:52 devvm kernel: [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110
        Jun 14 01:10:52 devvm kernel: [<ffffffff814e433a>] ? rest_init+0x7a/0x80
        Jun 14 01:10:52 devvm kernel: [<ffffffff81c21f7b>] ? start_kernel+0x424/0x430
        Jun 14 01:10:52 devvm kernel: [<ffffffff81c2133a>] ? x86_64_start_reservations+0x125/0x129
        Jun 14 01:10:52 devvm kernel: [<ffffffff81c21438>] ? x86_64_start_kernel+0xfa/0x109
        Jun 14 01:10:52 devvm kernel: ---[ end trace 2c7bb984812cf120 ]---
        Jun 14 01:10:52 devvm kernel: e1000 0000:00:03.0: eth0: Reset adapter
        Jun 14 01:10:53 devvm abrtd: Directory 'oops-2013-06-14-01:10:53-1537-0' creation detected
        Jun 14 01:10:53 devvm abrt-dump-oops: Reported 1 kernel oopses to Abrt
        Jun 14 01:10:53 devvm abrtd: Can't open file '/var/spool/abrt/oops-2013-06-14-01:10:53-1537-0/uid': No such file or directory
        Jun 14 01:10:55 devvm kernel: Bridge firewalling registered

    After this we see the following for a while, every two minutes:

        Jun 14 01:14:22 devvm kernel: INFO: task events/0:19 blocked for more than 120 seconds.
        Jun 14 01:14:22 devvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Jun 14 01:14:22 devvm kernel: events/0 D 0000000000000000 0 19 2 0x00000000
        Jun 14 01:14:22 devvm kernel: ffff880116c4fb90 0000000000000046 00000000ffffffff 0000000000000008
        Jun 14 01:14:22 devvm kernel: 0000000000016680 0000000000016680 ffff880028210400 0000000000016680
        Jun 14 01:14:22 devvm kernel: ffff880116c4daf8 ffff880116c4ffd8 000000000000fb88 ffff880116c4daf8
        Jun 14 01:14:22 devvm kernel: Call Trace:
        Jun 14 01:14:22 devvm kernel: [<ffffffff8105b483>] ? perf_event_task_sched_out+0x33/0x80
        Jun 14 01:14:22 devvm kernel: [<ffffffff814fe6a5>] schedule_timeout+0x215/0x2e0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8100975d>] ? __switch_to+0x13d/0x320
        Jun 14 01:14:22 devvm kernel: [<ffffffff814fe323>] wait_for_common+0x123/0x180
        Jun 14 01:14:22 devvm kernel: [<ffffffff81060250>] ? default_wake_function+0x0/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff814fe43d>] wait_for_completion+0x1d/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108d093>] __cancel_work_timer+0x1b3/0x1e0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108cbe0>] ? wq_barrier_func+0x0/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108d0f0>] cancel_work_sync+0x10/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffffa01c5ca5>] e1000_down_and_stop+0x25/0x50 [e1000]
        Jun 14 01:14:22 devvm kernel: [<ffffffffa01cb695>] e1000_down+0x155/0x200 [e1000]
        Jun 14 01:14:22 devvm kernel: [<ffffffffa01cbcb0>] ? e1000_reset_task+0x0/0xe0 [e1000]
        Jun 14 01:14:22 devvm kernel: [<ffffffffa01cbd1e>] e1000_reset_task+0x6e/0xe0 [e1000]
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108c760>] worker_thread+0x170/0x2a0
        Jun 14 01:14:22 devvm kernel: [<ffffffff810920d0>] ? autoremove_wake_function+0x0/0x40
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108c5f0>] ? worker_thread+0x0/0x2a0
        Jun 14 01:14:22 devvm kernel: [<ffffffff81091d66>] kthread+0x96/0xa0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8100c14a>] child_rip+0xa/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff81091cd0>] ? kthread+0x0/0xa0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20
        Jun 14 01:14:22 devvm kernel: INFO: task parted:8069 blocked for more than 120 seconds.
        Jun 14 01:14:22 devvm kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Jun 14 01:14:22 devvm kernel: parted D 0000000000000003 0 8069 7994 0x00000080
        Jun 14 01:14:22 devvm kernel: ffff8800908b3bb8 0000000000000082 0000000000000000 ffff88010ab50080
        Jun 14 01:14:22 devvm kernel: ffff880116c7d500 0000000000000001 0000000000000000 0000000000000000
        Jun 14 01:14:22 devvm kernel: ffff88010ab50638 ffff8800908b3fd8 000000000000fb88 ffff88010ab50638
        Jun 14 01:14:22 devvm kernel: Call Trace:
        Jun 14 01:14:22 devvm kernel: [<ffffffff814fe6a5>] schedule_timeout+0x215/0x2e0
        Jun 14 01:14:22 devvm kernel: [<ffffffff814fe323>] wait_for_common+0x123/0x180
        Jun 14 01:14:22 devvm kernel: [<ffffffff81060250>] ? default_wake_function+0x0/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff8112b6d0>] ? lru_add_drain_per_cpu+0x0/0x10
        Jun 14 01:14:22 devvm kernel: [<ffffffff814fe43d>] wait_for_completion+0x1d/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108d177>] flush_work+0x77/0xc0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108cbe0>] ? wq_barrier_func+0x0/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff8108d2f3>] schedule_on_each_cpu+0x133/0x180
        Jun 14 01:14:22 devvm kernel: [<ffffffff811ad440>] ? invalidate_bh_lru+0x0/0x50
        Jun 14 01:14:22 devvm kernel: [<ffffffff8112ae35>] lru_add_drain_all+0x15/0x20
        Jun 14 01:14:22 devvm kernel: [<ffffffff811adf6a>] invalidate_bdev+0x2a/0x50
        Jun 14 01:14:22 devvm kernel: [<ffffffff8125e9a4>] blkdev_ioctl+0x3b4/0x6e0
        Jun 14 01:14:22 devvm kernel: [<ffffffff811b381c>] block_ioctl+0x3c/0x40
        Jun 14 01:14:22 devvm kernel: [<ffffffff8118dec2>] vfs_ioctl+0x22/0xa0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8118e064>] do_vfs_ioctl+0x84/0x580
        Jun 14 01:14:22 devvm kernel: [<ffffffff8118e5e1>] sys_ioctl+0x81/0xa0
        Jun 14 01:14:22 devvm kernel: [<ffffffff8100b0f2>] system_call_fastpath+0x16/0x1b

    In /var/spool/abrt/oops-2013-06-14-01:10:53-1537-0 we can see the following information. In backtrace:

        WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0x26d/0x280() (Not tainted)
        Hardware name: VirtualBox
        NETDEV WATCHDOG: eth0 (e1000): transmit queue 0 timed out
        Modules linked in: vboxsf(U) ipv6 ppdev parport_pc parport microcode sg vboxguest(U) i2c_piix4 i2c_core e1000 snd_intel8x0 snd_ac97_codec ac97_bus snd_seq snd_seq_device snd_pcm snd_timer snd soundcore snd_page_alloc pcnet32 mii ext4 mbcache jbd2 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
        Pid: 0, comm: swapper Not tainted 2.6.32-279.el6.x86_64 #1
        Call Trace:
        <IRQ> [<ffffffff8106b747>] ? warn_slowpath_common+0x87/0xc0
        [<ffffffff8106b836>] ? warn_slowpath_fmt+0x46/0x50
        [<ffffffff814595fd>] ? dev_watchdog+0x26d/0x280
        [<ffffffff81099138>] ? sched_clock_cpu+0xb8/0x110
        [<ffffffff81459390>] ? dev_watchdog+0x0/0x280
        [<ffffffff8107e897>] ? run_timer_softirq+0x197/0x340
        [<ffffffff810a21c0>] ? tick_sched_timer+0x0/0xc0
        [<ffffffff8102b40d>] ? lapic_next_event+0x1d/0x30
        [<ffffffff81073ec1>] ? __do_softirq+0xc1/0x1e0
        [<ffffffff81096c50>] ? hrtimer_interrupt+0x140/0x250
        [<ffffffff8100c24c>] ? call_softirq+0x1c/0x30
        [<ffffffff8100de85>] ? do_softirq+0x65/0xa0
        [<ffffffff81073ca5>] ? irq_exit+0x85/0x90
        [<ffffffff81505be0>] ? smp_apic_timer_interrupt+0x70/0x9b
        [<ffffffff8100bc13>] ? apic_timer_interrupt+0x13/0x20
        <EOI> [<ffffffff810387cb>] ? native_safe_halt+0xb/0x10
        [<ffffffff810149cd>] ? default_idle+0x4d/0xb0
        [<ffffffff81009e06>] ? cpu_idle+0xb6/0x110
        [<ffffffff814e433a>] ? rest_init+0x7a/0x80
        [<ffffffff81c21f7b>] ? start_kernel+0x424/0x430
        [<ffffffff81c2133a>] ? x86_64_start_reservations+0x125/0x129
        [<ffffffff81c21438>] ? x86_64_start_kernel+0xfa/0x109

    In cmdline:

        ro root=/dev/mapper/vg_01-lv_root rd_NO_LUKS LANG=en_US.UTF-8 KEYBOARDTYPE=pc KEYTABLE=sv-latin1 rd_NO_MD SYSFONT=latarcyrheb-sun16 rd_LVM_LV=vg_01/lv_root crashkernel=129M@0M rhgb quiet rd_LVM_LV=vg_01/lv_swap rd_NO_DM rhgb quie

    Additional information:

        # uname -a
        Linux devvm 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        # cat /etc/redhat-release
        CentOS release 6.3 (Final)

    VirtualBox version 4.2.6. Any insight into how we can proceed with troubleshooting this is appreciated. If you need more information, just let me know.
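
    For what it's worth, this signature (e1000 "Detected Tx Unit Hang", then NETDEV WATCHDOG, then a wedged e1000_reset_task blocking other tasks) is widely reported with VirtualBox's emulated Intel NIC. Two frequently suggested workarounds, offered as things to try rather than a confirmed fix:

        # on the Windows host: switch the VM's NIC emulation away from e1000
        # ("devvm" here is the guest's hostname; substitute the actual VM name)
        VBoxManage modifyvm "devvm" --nictype1 virtio

        # or, inside the guest: disable segmentation offloads on the e1000
        ethtool -K eth0 tso off gso off

    The virtio route relies on the virtio drivers already present in the CentOS 6 kernel; the ethtool change is lost at reboot unless persisted via ETHTOOL_OPTS in ifcfg-eth0.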

    Read the article

  • ArchBeat Link-o-Rama Top 10 - September 16-22, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of September 16-22, 2012.

    - The Real Architects of LA: OTN Architect Day in Los Angeles - Oct 25. No gossip. No drama. No hair pulling. Just a full day of technical sessions and peer interaction focused on using Oracle technologies in today's cloud and SOA architectures. The event is free, but seating is limited, so register now. Thursday October 25, 2012. 8:00 a.m. - 5:00 p.m. Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048.
    - OIM-OAM-OAAM integration using TAP - Request Flow you must understand!! | Atul Kumar. Atul Kumar's post addresses "key points and request flow that you must understand" when integrating three Oracle Identity Management products: Oracle Identity Management, Oracle Access Management, and Oracle Adaptive Access Manager.
    - Cloud, automation drive new growth in SOA governance market | ZDNet. "SOA governance tools and processes learned over the past decade are now underpinning cloud projects as they scale across enterprises," reports Joe McKendrick. But there remains a lack of understanding about SOA governance.
    - DevOps Basics: Track Down High CPU Thread with ps, top and the new JDK7 jcmd Tool | Frank Munz. "The approach is very generic and works for WebLogic, Glassfish or any other Java application," says Frank Munz. "UNIX commands in the example are run on CentOS, so they will work without changes for Oracle Enterprise Linux or RedHat. Creating the thread dump at the end of the video is done with the jcmd tool from JDK7." Frank has captured the process in the posted video.
    - Oracle OpenWorld 2012 Hands-on Lab: "Leading Your Everyday Application Integration Projects with Enterprise SOA". Yet another session to squeeze into your already-jammed Oracle OpenWorld schedule. This hands-on lab focuses on how "Oracle Enterprise Repository, Oracle Application Integration Architecture (AIA) Foundation Pack, and Oracle SOA Suite work together to help you drive your enterprisewide integration projects."
    - Loving VirtualBox 4.2… | The ORACLE-BASE Blog. Is it wrong for a man to love a technology? Oracle ACE Director Tim Hall has several very good reasons for his feelings…
    - ADF Create and CreateInsert Operations for ADF Table | Andrejus Baranovskis. Oracle ACE Director Andrejus Baranovskis answers the question, "What operation is best to use to insert a new row into an ADF table, Create or CreateInsert?"
    - Fault Handling Slides and Q&A | Ronald van Luttikhuizen. Oracle ACE Director Ronald van Luttikhuizen shares the slides and a Q&A transcript from a presentation he and fellow ACE Director Guido Schmutz gave at the recent Oracle OpenWorld and JavaOne preview event organized by AMIS Technology.
    - Why IT is a profession in 'flux' | ZDNet. I usually don't post two items from the same person in one day, but this post from ZDNet blogger Joe McKendrick deals with some critical issues affecting those in IT. As McKendrick puts it: "IT professionals are under considerable pressure to deliver more value to the business, versus being good at coding and testing and deploying and integrating."
    - Running RichFaces on WebLogic 12c | Markus Eisele. "With all the JMS magic and the different provider checks in the showcase this has become some kind of a challenge to simply build and deploy it," says Oracle ACE Director Markus Eisele. His detailed post will help you meet that challenge.

    Thought for the Day: "Less is more." - Ludwig Mies van der Rohe (March 27, 1886 - August 17, 1969). Source: BrainyQuote.com

    Read the article

  • fresh installation of PGF/TikZ crashes, why?

    - by Vincenzo
    I have a clean CentOS 5.5 machine with tetex installed. Next, I installed PGF/TikZ:

        wget http://media.texample.net/pgf/builds/pgfCVS2010-06-02_TDS.zip
        unzip pgfCVS2010-06-02_TDS.zip
        \cp -r tex /usr/share/texmf
        texhash

    I'm trying to compile a simple document and this is what I'm getting:

        $ latex test.tex
        This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
        entering extended mode
        (./test.tex
        LaTeX2e <2003/12/01>
        .. skipped ..
        (/usr/share/texmf/tex/latex/pgf/frontendlayer/tikz.sty
        (/usr/share/texmf/tex/latex/pgf/pgf.sty
        (/usr/share/texmf/tex/latex/graphics/graphicx.sty
        (/usr/share/texmf/tex/latex/graphics/graphics.sty
        (/usr/share/texmf/tex/latex/graphics/trig.sty)
        (/usr/share/texmf/tex/latex/graphics/graphics.cfg))))
        (/usr/share/texmf/tex/latex/pgf/utilities/pgffor.sty
        (/usr/share/texmf/tex/latex/pgf/utilities/pgfrcs.sty
        (/usr/share/texmf/tex/generic/pgf/utilities/pgfutil-common.tex)
        (/usr/share/texmf/tex/generic/pgf/utilities/pgfutil-latex.def)
        (/usr/share/texmf/tex/generic/pgf/utilities/pgfrcs.code.tex))
        (/usr/share/texmf/tex/latex/pgf/utilities/pgfkeys.sty
        (/usr/share/texmf/tex/generic/pgf/utilities/pgfkeys.code.tex
        (/usr/share/texmf/tex/generic/pgf/utilities/pgfkeysfiltered.code.tex)))
        (/usr/share/texmf/tex/generic/pgf/utilities/pgffor.code.tex))
        (/usr/share/texmf/tex/generic/pgf/frontendlayer/tikz/tikz.code.tex
        (/usr/share/texmf/tex/generic/pgf/libraries/pgflibraryplothandlers.code.tex
        ! Undefined control sequence.
        \pgfsetplottension ...ttension {\pgf@sys@tonumber \pgf@x }
        l.104 \pgfsetplottension{0.5}
        ?

    I failed to find any clues on the net about this problem, and on other servers I don't have it. Could anyone help please? Thanks! ps. Btw, I tried another build of PGF/TikZ, the older one - no luck :(
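
    The undefined \pgf@sys@tonumber suggests a version mix: the latex-layer .sty files resolving to one PGF tree while the generic-layer .code.tex files come from another (for instance, a pgf copy that shipped with tetex). A hedged way to check which copies actually win after texhash:

        kpsewhich tikz.sty
        kpsewhich pgflibraryplothandlers.code.tex
        find /usr/share/texmf* -name 'pgfsys*' | head   # stale system-layer files from an older pgf?

    If the kpsewhich answers point into different pgf installations, removing the stale tree and re-running texhash is the usual cure; this is an assumption to verify, not a confirmed diagnosis.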

    Read the article

  • Java Runtime.getRuntime().exec() alternatives

    - by twilbrand
    I have a collection of webapps that are running under Tomcat. Tomcat is configured to have as much as 2 GB of memory using the -Xmx argument. Many of the webapps need to perform a task that ends up making use of the following code:

        Runtime runtime = Runtime.getRuntime();
        Process process = runtime.exec(command);
        process.waitFor();
        ...

    The issue we are having is related to the way this "child process" gets created on Linux (RedHat 4.4 and CentOS 5.4). It's my understanding that an amount of memory equal to the amount Tomcat is using needs to be free in the pool of physical (non-swap) system memory initially for this child process to be created. When we don't have enough free physical memory, we get this:

        java.io.IOException: error=12, Cannot allocate memory
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
            at java.lang.ProcessImpl.start(ProcessImpl.java:65)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
            ... 28 more

    My questions are:

    1) Is it possible to remove the requirement that an amount of memory equal to the parent process be free in physical memory? I'm looking for an answer that allows me to specify how much memory the child process gets, or that allows Java on Linux to use swap for this.

    2) What are the alternatives to Runtime.getRuntime().exec() if no solution to #1 exists? I could only think of two, neither of which is very desirable: JNI (very undesirable), or rewriting the program we are calling in Java and making it its own process that the webapp communicates with somehow. There have to be others.

    3) Is there another side to this problem that I'm not seeing that could potentially fix it? Lowering the amount of memory used by Tomcat is not an option. Increasing the memory on the server is always an option, but seems more like a band-aid. The servers are running Java 6.
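
    The "that much free memory" behaviour comes from fork() momentarily duplicating the JVM's address space before exec() replaces it. A commonly cited workaround - an assumption to test on a non-production box, not a guaranteed fix - is to let the kernel overcommit, so the fork doesn't need the full parent size reserved up front:

        sysctl -w vm.overcommit_memory=1                      # take effect now
        echo 'vm.overcommit_memory = 1' >> /etc/sysctl.conf   # survive reboots

    The other standard pattern answers question 2: a small helper daemon, started before Tomcat grows large, that receives commands over a socket or pipe and does the exec'ing on the webapp's behalf, so no large address space ever has to be forked.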

    Read the article

  • Apache Mina Server Restart java.net.BindException: Address already in use

    - by Kosaki
    Hello, I have a rather annoying problem in my server application. I bind Apache MINA with the following code, where acceptor is an NioSocketAcceptor:

        acceptor.bind(new InetSocketAddress(PORT));

    Over an HTTP interface I can shut down the server so I can restart it. This is the code I use to stop the MINA server:

        Server.ioAcceptor.unbind(new InetSocketAddress(Server.PORT));
        for(IoSession session: Server.ioAcceptor.getManagedSessions().values()){
            if(session.isConnected() && !session.isClosing()){
                session.close(false);
            }
        }
        Server.ioAcceptor.dispose();
        Main.transport.stop();
        Logger.getRootLogger().warn("System going down. Request from "+context.getRemoteAddress());
        System.exit(10);

    However, if I try to start the server again within the next couple of minutes (somewhere between 5 and 15 minutes), I get the following exception on startup:

        java.net.BindException: Address already in use

    I also tried a simple ioAcceptor.unbind(), but there was no difference. The server runs on CentOS 5 with OpenJDK; the Apache MINA version is 2.0 RC1. Thank you in advance for any ideas on how to resolve this.
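
    A 5-15 minute dead window is the classic smell of sockets parked in TIME_WAIT on the port, which blocks a fresh bind unless address reuse is requested before binding - in MINA 2.x that would be acceptor.setReuseAddress(true) ahead of the bind() call, assuming the RC1 API already exposes it. A quick way to confirm the theory while the restart is failing:

        netstat -anp | grep 9999    # TIME_WAIT entries still holding the port?

    If TIME_WAIT entries show up there, enabling reuse on the acceptor should make the immediate rebind succeed without waiting out the timer.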

    Read the article

  • Ruby through RVM fails

    - by TheLQ
    In a constant battle to install Ruby 1.9.2 on an RPM-based system (the OS is based on CentOS), I'm trying again with RVM. Once I install it, I try to use it:

        [root@quackwall ~]# rvm use 1.9.2
        Using /usr/local/rvm/gems/ruby-1.9.2-p136
        [root@quackwall ~]# ruby
        bash: ruby: command not found
        [root@quackwall ~]# which ruby
        /usr/bin/which: no ruby in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)

    Now that's interesting; rvm info says something completely different:

        [root@quackwall bin]# rvm info

        ruby-1.9.2-p136:

          system:
            uname:        "Linux quackwall.highwow.lan 2.6.18-194.8.1.v5 #1 SMP Thu Jul 15 01:14:04 EDT 2010 i686 i686 i386 GNU/Linux"
            bash:         "/bin/bash => GNU bash, version 3.2.25(1)-release (i686-redhat-linux-gnu)"
            zsh:          " => not installed"

          rvm:
            version:      "rvm 1.2.2 by Wayne E. Seguin ([email protected]) [http://rvm.beginrescueend.com/]"

          ruby:
            interpreter:  "ruby"
            version:      "1.9.2p136"
            date:         "2010-12-25"
            platform:     "i686-linux"
            patchlevel:   "2010-12-25 revision 30365"
            full_version: "ruby 1.9.2p136 (2010-12-25 revision 30365) [i686-linux]"

          homes:
            gem:          "/usr/local/rvm/gems/ruby-1.9.2-p136"
            ruby:         "/usr/local/rvm/rubies/ruby-1.9.2-p136"

          binaries:
            ruby:         "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/ruby"
            irb:          "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/irb"
            gem:          "/usr/local/rvm/rubies/ruby-1.9.2-p136/bin/gem"
            rake:         "/usr/local/rvm/gems/ruby-1.9.2-p136/bin/rake"

          environment:
            PATH:         "/usr/local/rvm/gems/ruby-1.9.2-p136/bin:/usr/local/rvm/gems/ruby-1.9.2-p136@global/bin:/usr/local/rvm/rubies/ruby-1.9.2-p136/bin:bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/rvm/bin"
            GEM_HOME:     "/usr/local/rvm/gems/ruby-1.9.2-p136"
            GEM_PATH:     "/usr/local/rvm/gems/ruby-1.9.2-p136:/usr/local/rvm/gems/ruby-1.9.2-p136@global"
            MY_RUBY_HOME: "/usr/local/rvm/rubies/ruby-1.9.2-p136"
            IRBRC:        "/usr/local/rvm/rubies/ruby-1.9.2-p136/.irbrc"
            RUBYOPT:      ""
            gemset:       ""

    So I have RVM saying one thing and bash saying another. Any suggestions on how to get this working?
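
    The mismatch (rvm info happy, which ruby empty) is typically what happens when rvm is invoked as a plain binary rather than loaded as a shell function, so rvm use cannot modify the calling shell's PATH. A hedged check-and-fix:

        source /usr/local/rvm/scripts/rvm   # load rvm as a function into this shell
        type rvm | head -1                  # should say "rvm is a function"
        rvm use 1.9.2
        which ruby                          # now expected under /usr/local/rvm/rubies/...

    The stray relative "bin" entry in the PATH that rvm info prints is another hint that the environment was assembled without the sourced function.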

    Read the article

  • apache: cgi links lead to a "you have chosen to open foo.cgi", although scriptalias is set

    - by Xiong Chiamiov
    Following this guide on CentOS 5.2, I'm just getting Nagios set up for the first time. The main page shows up just fine, but when I try to view any of the pages that should be generated by a CGI process, Firefox prompts me to save the .cgi instead, so Apache is obviously not understanding that it needs to run the CGI and get back some HTML from it. The odd thing is that, as far as I can tell, Apache should be running these files as CGI. nagios.conf:

        # SAMPLE CONFIG SNIPPETS FOR APACHE WEB SERVER
        # Last Modified: 11-26-2005
        #
        # This file contains examples of entries that need
        # to be incorporated into your Apache web server
        # configuration file. Customize the paths, etc. as
        # needed to fit your system.

        ScriptAlias /nagios/cgi-bin/ "/usr/lib/nagios/cgi/"
        # SSLRequireSSL
        Options +ExecCGI
        AddHandler cgi-script .cgi
        AllowOverride None
        Order allow,deny
        Allow from all
        # Order deny,allow
        # Deny from all
        # Allow from 127.0.0.1
        AuthName "Nagios Access"
        AuthType Basic
        AuthUserFile /etc/nagios/htpasswd.users
        Require valid-user

        Alias /nagios "/usr/share/nagios/"
        # SSLRequireSSL
        DirectoryIndex index.php
        Options None
        AllowOverride None
        Order allow,deny
        Allow from all
        # Order deny,allow
        # Deny from all
        # Allow from 127.0.0.1
        AuthName "Nagios Access"
        AuthType Basic
        AuthUserFile /etc/nagios/htpasswd.users
        Require valid-use

    Either the ScriptAlias directive or the ExecCGI option should be triggering this, but neither of them seems to have any effect. This config file is being parsed by Apache, because if I move it out of conf.d, /nagios gives a 404. The .cgi files are indeed in the /nagios/cgi-bin/ directory, so I didn't specify the wrong directory. Searching seemed to only turn up people who had difficulty with permissions, which is not the issue here. This seems to me to be a pretty basic thing, but even with the excellent Apache documentation, I'm at a bit of a loss (I've been using Cherokee too much lately :) ).
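
    Two quick checks that usually pinpoint "CGI served as a download" (hedged suggestions; they assume the stock CentOS httpd layout):

        httpd -M 2>/dev/null | grep -i cgi   # is mod_cgi actually loaded?
        httpd -t                             # does the merged config parse cleanly?

    It is also worth confirming in the file on disk that the Options/AddHandler lines really sit inside a <Directory "/usr/lib/nagios/cgi/"> container (the container tags appear to have been stripped in the listing above); outside a container they can be overridden by a later Options directive, such as the "Options None" under the /nagios alias.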

    Read the article

  • Spikes in Socket Performance

    - by Harun Prasad
    We are facing random spikes in a high-throughput transaction processing system using sockets for IPC. Below is the setup used for the run:

    - The client opens and closes a new connection for every transaction, and there are 4 exchanges between the server and the client.
    - We have disabled TIME_WAIT by setting the socket linger (SO_LINGER) option via setsockopt, as we thought the spikes were caused by sockets waiting in TIME_WAIT.
    - There is no processing done for the transaction; only messages are passed.
    - OS used: CentOS 5.4

    The average round-trip time is around 3 milliseconds, but sometimes the round-trip time ranges from 100 milliseconds to a couple of seconds. Steps used for execution and measurement: start the server,

        $ python sockServerLinger.py > /dev/null &

    then start the client to post 1 million transactions to the server, logging the time for each transaction in the client.log file:

        $ python sockClient.py 1000000 client.log

    Once the execution finishes, the following command shows the executions that took longer than 100 milliseconds, in the format <line_number>:<execution_time>:

        $ grep -n "0.[1-9]" client.log | less

    Below is the example code for the server and client. Server (sockServerLinger.py):

        import socket, traceback, time
        import struct

        host = ''
        port = 9999
        l_onoff = 1
        l_linger = 0
        lingeropt = struct.pack('ii', l_onoff, l_linger)

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, lingeropt)
        s.bind((host, port))
        s.listen(1)

        while 1:
            try:
                clientsock, clientaddr = s.accept()
                print "Got connection from", clientsock.getpeername()
                data = clientsock.recv(1024*1024*10)
                #print "asdasd",data
                numsent = clientsock.send(data)
                data1 = clientsock.recv(1024*1024*10)
                numsent = clientsock.send(data)
                ret = 1
                while(ret > 0):
                    data1 = clientsock.recv(1024*1024*10)
                    ret = len(data)
                clientsock.close()
            except KeyboardInterrupt:
                raise
            except:
                print traceback.print_exc()
                continue

    Client (sockClient.py):

        import socket, traceback, sys
        import time

        i = 0
        while 1:
            try:
                st = time.time()
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                while (s.connect_ex(('127.0.0.1', 9999)) != 0):
                    continue
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                numsent = s.send("asd"*1000)
                response = s.recv(6000)
                i += 1
                if i == int(sys.argv[1]):
                    break
            except KeyboardInterrupt:
                raise
            except:
                print "in exec:::::::::::::", traceback.print_exc()
                continue
        print time.time() - st
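
    One theory that fits the numbers, offered as an assumption to verify rather than a conclusion: the server's backlog is listen(1), so bursts of connection attempts overflow the SYN/accept queue, the dropped SYNs are retransmitted after the kernel's retry timeout, and those retries are the 100 ms to multi-second outliers. A quick check on the server between runs:

        netstat -s | egrep -i 'listen|SYN'   # listen-queue overflows and SYN drops

    If the overflow counters climb during a run, raising the backlog (s.listen(128)) and pacing the client are the first things to try; note also that the SO_LINGER trick replaces TIME_WAIT with RST-on-close, which can itself abort transactions under load.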

    Read the article

  • MediaWiki installed on virtual server accessed through Apache ProxyPass

    - by Eugen Mihailescu
    Note: wherever you see "xttp" below, read "http" - stackoverflow rules do not allow me to use more than 1 hyperlink in one post because I do not have enough "credit" to do that :)

    INTRODUCTION: Hi, I have installed MediaWiki 1.15.3 on a private LAN on a Linux box (CentOS 5), with Apache 2.2.3, PHP 5.1.6 and MySQL 5.0.45. Let's name this Linux box the "wiki box". Public users can't access this wiki, as it is hosted on a private LAN. For external users (Internet users) we have a Linux router (with Apache 2.0.52) where we host our website (e.g. xttp://www.cubique.ro). Let's name this Linux box the "router".

    WHAT I WANT: to create a virtual domain (e.g. xttp://wiki.cubique.ro) on the router, and set up the virtual domain to forward all http requests to my private wiki box (e.g. xttp://192.168.0.200/wiki_root/).

    WHAT I'VE DONE ALREADY: On the router's Apache (httpd.conf) I created a VirtualHost:

        <VirtualHost 0.0.0.0:80>
        ServerName wiki.cubique.ro
        DocumentRoot /someinternalpath/html
        ScriptAlias /cgi-bin /someinternalpath/cgi-bin
        ...

    Well, after navigating to wiki.cubique.ro I saw a blank web page, as /someinternalpath/html has an empty index.htm page. No problem - I know I have to "teach" the router to pass all access to the virtual domain (wiki.cubique.ro) on to the wiki box, where the real pages are stored. So I told Apache to ProxyPass access to the virtual domain root to the wiki box root like this (the following lines live in the same virtual domain definition, see above):

        ProxyPass / xttp://192.168.0.200/wiki/
        ProxyPassReverse / xttp://192.168.0.200/wiki/
        </VirtualHost>

    WHAT IS THE ISSUE: If I access the wiki using the internal address (such as xttp://192.168.0.200/wiki/), it looks splendid (style sheets, everything). When I access the wiki using the virtual domain name (xttp://wiki.cubique.ro), it shows the content but no style sheet. Worse than that, no internal wiki links work at all. Make a try: http://wiki.cubique.ro

    FINALLY, THE QUESTION: Does anyone have a clue how to deal with this? Thanks.
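
    The "content but no style sheet, broken links" pattern usually means MediaWiki is emitting URLs under /wiki/... (its path on the internal box), which do not exist at the root of the proxied virtual host. A quick way to see what the proxied pages actually reference (hedged; the exact paths depend on the wiki's $wgScriptPath):

        curl -s http://wiki.cubique.ro/ | grep -o 'href="[^"]*"' | head
        curl -sI http://wiki.cubique.ro/       # also watch for redirects back to /wiki/

    If the hrefs start with /wiki/, the usual fixes are either proxying /wiki/ to /wiki/ (same path on both sides) or setting $wgServer and $wgScriptPath in LocalSettings.php to match the public URL.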

    Read the article

  • Which key:value store to use with Python?

    - by Kurt
    So I'm looking at various key:value stores (where the value is either strictly a single value or possibly an object) for use with Python, and have found a few promising ones. I have no specific requirements as of yet because I am in the evaluation phase. I'm looking for what's good, what's bad, what corner cases these things handle well or badly, etc. I'm sure some of you have already tried them out, so I'd love to hear your findings/problems/etc. with the various key:value stores usable from Python. I'm looking primarily at:

    - memcached - http://www.danga.com/memcached/ ; Python clients: http://pypi.python.org/pypi/python-memcached/1.40 and http://www.tummy.com/Community/software/python-memcached/
    - CouchDB - http://couchdb.apache.org/ ; Python client: http://code.google.com/p/couchdb-python/
    - Tokyo Tyrant - http://1978th.net/tokyotyrant/ ; Python client: http://code.google.com/p/pytyrant/
    - Lightcloud - http://opensource.plurk.com/LightCloud/ ; based on Tokyo Tyrant, written in Python
    - Redis - http://code.google.com/p/redis/ ; Python client: http://pypi.python.org/pypi/txredis/0.1.1
    - MemcacheDB - http://memcachedb.org/

    So I started benchmarking (simply inserting keys and reading them back), using a simple counter to generate numeric keys and a value of "A short string of text":

    - memcached: CentOS 5.3/python-2.4.3-24.el5_3.6, libevent 1.4.12-stable, memcached 1.4.2 with default settings, 1 GB memory: 14,000 inserts per second, 16,000 reads per second. No real optimization - nice.
    - MemcacheDB claims on the order of 17,000 to 23,000 inserts per second, and 44,000 to 64,000 reads per second.

    I'm also wondering how the others stack up speed-wise.

    Read the article

  • Ideal Multi-Developer Lamp Stack?

    - by devians
    I would like to build an 'ideal' LAMP development stack:

    - Dual server (virtualised, ESX): Apache/PHP on one, databases (MySQL, PgSQL, etc.) on the other.
    - User (developer) manageable mini-environments, or instances.
    - Each developer instance shares the top-level config (available modules, default config, etc.).
    - A developer should have control over their Apache and PHP version for each project.
    - A developer might be able to change minor settings, e.g. magic quotes on for legacy code.
    - Each project would determine its database provider in its code.

    The idea is that it is one administrable server that I can control, providing globally configured things like APC, memcached, Xdebug, etc. Then, by moving into subsets for each project, I can allow my users to quickly control their environments for various projects. Essentially I'm proposing the typical setup where each developer runs their own stack on their own machine, but centralised. In this way I'd hope to avoid problems like cross-OS code issues, database inconsistencies, slightly different installs producing bugs, and so on. I'm happy to manage this with custom builds from source, but if at all possible it would be great to have a large portion of it managed with some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?

    Read the article

  • Activate a python virtual environment using activate_this.py in a fabfile on Windows

    - by Rudy Lattae
    I have a Fabric task that needs to access the settings of my Django project. On Windows, I'm unable to install Fabric into the project's virtualenv (issues with the Paramiko and PyCrypto dependencies). However, I am able to install Fabric in my system-wide site-packages, no problem. I have installed Django into the project's virtualenv, and I am able to use all the "python manage.py" commands easily when I activate the virtualenv with the "VIRTUALENV\Scripts\activate.bat" script. I have a Fabric tasks file (fabfile.py) in my project that provides tasks for setup, test, deploy, etc. Some of the tasks in my fabfile need to access the settings of my Django project through "from django.conf import settings". Since the only usable Fabric install I have is in my system-wide site-packages, I need to activate the virtualenv within my fabfile so Django becomes available. To do this, I use the "activate_this" module of the project's virtualenv in order to have access to the project settings and such. Using "print sys.path" before and after I execute activate_this.py, I can tell the Python path changes to point to the virtualenv for the project. However, I still cannot import django.conf.settings. I have been able to do this successfully on *nix (Ubuntu and CentOS) and in Cygwin. Do you use this setup/workflow on Windows? If so, can you help me figure out why this won't work on Windows, or provide any tips and tricks to get around this issue? Thanks and cheers. REF: http://virtualenv.openplans.org/#id9 | Using Virtualenv without bin/python. Local development environment: Python 2.5.4, Virtualenv 1.4.6, Fabric 0.9.0, Pip 0.6.1, Django 1.1.1, Windows XP (SP3).

    Read the article

  • Query optimization (OR based)

    - by john194
    I have googled but I can't find answers for these questions; your advice is appreciated. The setup: CentOS on a VPS with 512 MB RAM, nginx, PHP 5 (FastCGI), MySQL 5 (MyISAM, not InnoDB). I need to optimize this app created by some ex-employee. The app works, but it's slow. Table:

        t1(id [bigint(20)], c1 [mediumtext], c2 [mediumtext], c3 [mediumtext], c4 [mediumtext])

    id is some random big number, and is the PK. The mediumtext columns look like this:

        c1 = "|box-002877|"
        c2 = "|ct-2348|rd-11124854|hw-3949|wd-8872|hw-119037736|...etc.."
        c3 = "|fg-2448|wd-11172|hw-1656|...etc.."
        c4 = "|hg-2448|qd-16667|...etc."

    (Some columns contain a lot of data, around 900 KiB; the database is around 300 MiB.) Yes, mediumtext "is bad", and (20) is too big... but I didn't create this. A code can be found in any of those 4 mediumtexts. He needs all the columns of the row containing $code, so he wrote this:

        function f1($code) {
            SELECT * FROM t1
             WHERE c1 LIKE '%$code%'
                OR c2 LIKE '%$code%'
                OR c3 LIKE '%$code%'
                OR c4 LIKE '%$code%';

    Questions: Q1. If $code is found in c1, does MySQL automatically stop checking and return the row (id+c1+c2+c3+c4), or will it continue (wasting time) checking c2, c3 and c4? Q2. MySQL is working with this table on disk (not in RAM) because of the mediumtext columns, right? Is this the primary cause of the slowness? Q3. Can that query be cached by MySQL (given a big query_cache_size=128M value in my.cnf), or is it uncacheable due to the mediumtexts or due to the "OR LIKE"? Q4. Do you recommend rewriting this with MySQL's INSTR() / LOCATE() / MATCH...AGAINST [FULLTEXT]?
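
    On Q4: MyISAM allows a FULLTEXT index across TEXT columns, which replaces the four leading-wildcard LIKEs (unindexable by design) with an index lookup. A hedged sketch via the mysql client - the database and index names here are invented:

        mysql -u root -p thedb <<'SQL'
        ALTER TABLE t1 ADD FULLTEXT ft_codes (c1, c2, c3, c4);
        SELECT * FROM t1
         WHERE MATCH(c1, c2, c3, c4) AGAINST('"box-002877"' IN BOOLEAN MODE);
        SQL

    Two caveats to test before committing: the FULLTEXT tokenizer splits on '-', so a code like box-002877 is indexed as two words (the phrase quoting above compensates), and the default ft_min_word_len of 4 means a 3-letter token like "box" is not indexed at all without lowering that setting and rebuilding the index.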

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components: Apache 2.2.3-31 on CentOS 5.4, PHP 5.2.10, Xdebug 2.0.5 with remote debugging enabled, APC 3.0.19, Doctrine ORM for PHP 1.2.1 using query caching and result caching via APC, and MySQL 5.0.77 using query caching. I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until each one approaches 10% of available memory, which begins to slow the server to a crawl, since together they grow to take up 100% of memory. Here is a snapshot of my top output:

        PID   USER    PR  NI  VIRT   RES  SHR  S %CPU %MEM   TIME+   COMMAND
        1471  apache  16   0  626m  201m  18m  S  0.0 10.2  1:11.02  httpd
        1470  apache  16   0  622m  198m  18m  S  0.0 10.1  1:14.49  httpd
        1469  apache  16   0  619m  197m  18m  S  0.0 10.0  1:11.98  httpd
        1462  apache  18   0  622m  197m  18m  S  0.0 10.0  1:11.27  httpd
        1460  apache  15   0  622m  195m  18m  S  0.0 10.0  1:12.73  httpd
        1459  apache  16   0  618m  191m  18m  S  0.0  9.7  1:13.00  httpd
        1461  apache  18   0  616m  190m  18m  S  0.0  9.7  1:14.09  httpd
        1468  apache  18   0  613m  190m  18m  S  0.0  9.7  1:12.67  httpd
        7919  apache  18   0  116m   75m  15m  S  0.0  3.8  0:19.86  httpd
        9486  apache  16   0  97.7m  56m  14m  S  0.0  2.9  0:13.51  httpd

    I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated (maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside each httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?
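
    Two starting points that don't require touching the app (hedged; the PID is taken from the top snapshot above):

        pmap -d 1471 | tail -1   # writable/private vs. shared breakdown for one child
        # stopgap while investigating, in httpd.conf:
        #   MaxRequestsPerChild 500    (recycle children before they balloon)

    One caveat when reading top: APC's shared memory segment is counted in every child's RES figure even though it is a single mapping, so ten processes at ~10% each can be partly double counting; pmap's private number is the honest per-process cost. If private memory really does climb per request, logging memory_get_usage() at script shutdown helps localise the leaking code path.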

    Read the article

  • Multicore solr on Ubuntu 10.04 working for anyone?

    - by coleifer
    Following instructions from the two sites below, I've installed Tomcat 6 and Solr 1.4: http://gist.github.com/204638 and https://wiki.fourkitchens.com/display/TECH/Solr+1.4+on+Ubuntu+9.10+and+CentOS+5 I have successfully got it up and running with multicore support on a server running 9.04, but on the 10.04 box I can't seem to get it to work. I am able to reach localhost:xxxx/solr/ on the 10.04 box and see a single link to the Solr Admin, but following the link takes me to a 404 page with the following output:

        /solr/admin/
        HTTP Status 404 - missing core name in path
        The requested resource (missing core name in path) is not available

    I am also unable to access /solr/site1/ as I would expect - it similarly returns a 404.

        <!-- from /var/solr/solr.xml, site dirs exist -->
        <cores adminPath="/admin/cores">
            <core name="site1" instanceDir="site1" />
            <core name="site2" instanceDir="site2" />
        </cores>

        <!-- from /etc/tomcat6/Catalina/localhost/solr.xml -->
        <Context docBase="/var/solr/solr.war" debug="0" privileged="true" allowLinking="true" crossContext="true">
            <Environment name="solr/home" type="java.lang.String" value="/var/solr" override="true" />
        </Context>
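
    Worth noting before digging deeper: with multicore, "missing core name in path" at /solr/admin/ is arguably expected - the global admin lives at the cores handler, and each core gets its own admin UI. Hedged checks (the port is a placeholder, matching the xxxx above):

        curl 'http://localhost:8080/solr/admin/cores?action=STATUS'   # should list site1 and site2
        curl -I 'http://localhost:8080/solr/site1/admin/'             # per-core admin page

    If the STATUS call lists no cores, Solr is probably not finding /var/solr/solr.xml at all (solr/home mis-resolved or unreadable by the tomcat6 user), which would also explain the 404 on /solr/site1/.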

    Read the article

  • Getting svn: E170000: Unrecognized URL scheme for my custom Svn Gradle plugin

    - by Ip Doh
    I wrote a custom Gradle plugin in Groovy to do basic svn tasks like checkout, clean, tag, etc. The Groovy class calls the svn command-line client to do these operations. It works fine when I run it on my Windows system, but the same plugin gives the following error when I run it on a Linux (CentOS) system:

        svn: E170000: Unrecognized URL scheme for '%22https://source.mycompany.net/svn/MyProject/trunk%22'

    I am able to make the same calls to the command-line client through the command prompt or a shell script without any issues, so what is the difference? Here is my code sample:

        String command = String.format(
            "svn co -r %d --non-interactive --trust-server-cert --username %s --password %s --depth infinity \"%s\" \"%s\"",
            getRevision(), getUserName(), getUserPassword(), getSrcUrl(), getDir());
        Process svnProcess = Runtime.getRuntime().exec(command);
        BufferedReader stdInput = new BufferedReader(new InputStreamReader(svnProcess.getInputStream()));
        BufferedReader stdError = new BufferedReader(new InputStreamReader(svnProcess.getErrorStream()));
        String statusOutputLine = ""
        while ((statusOutputLine = stdInput.readLine()) != null) {
            logger.quiet(" " + statusOutputLine);
        }
        while ((statusOutputLine = stdError.readLine()) != null) {
            logger.error(statusOutputLine)
            throw new Exception(statusOutputLine)
        }
        logger.quiet("Successfully Checked out the work space")

    I do have neon installed on the system:

        -bash-4.1$ svn --version
        svn, version 1.6.11 (r934486)
           compiled Jun 25 2011, 11:30:15

        Copyright (C) 2000-2009 CollabNet.
        Subversion is open source software, see http://subversion.tigris.org/
        This product includes software developed by CollabNet (http://www.Collab.Net/).

        The following repository access (RA) modules are available:

        * ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
          - handles 'http' scheme
          - handles 'https' scheme
        * ra_svn : Module for accessing a repository using the svn network protocol.
          - with Cyrus SASL authentication
          - handles 'svn' scheme
        * ra_local : Module for accessing a repository on local disk.
          - handles 'file' scheme
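
    A hedged reading of that E170000: %22 is a percent-encoded double quote, meaning the quote characters from the format string have become part of the URL. Runtime.exec(String) tokenizes on whitespace only and, unlike a shell, never strips quotes - so on Linux svn receives "https://... literally, while on Windows the receiving program's own command-line parsing happens to absorb them. The failure can be reproduced from bash by forcing literal quotes into the argument:

        svn co --non-interactive '"https://source.mycompany.net/svn/MyProject/trunk"' wc
        # fails the same way: the quotes are now part of the URL

    The usual fix is the array form - build the arguments as a String[] ("svn", "co", "-r", ..., url, dir) and pass that to Runtime.exec or ProcessBuilder - so no quoting is needed and arguments containing spaces still arrive intact.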

    Read the article
