Search Results

Search found 4272 results on 171 pages for 'processes'.

Page 37/171

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to Server Fault, so please pardon me if my details are too much. A Linux box acting as a virtual host for domain hosting; it runs CentOS and Parallels Plesk 9.x. Regardless of all of the following, the SPAM keeps flowing in at 1-3 per second. An explanation of the problem: "The xinetd service listens for SMTP connections and forwards them to qmail-smtpd. The qmail service only processes the queue, but does not control messages coming into the queue; that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES. The problem is that qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So it accepts bad mail with a forged reply-to address, which gets processed in the queue by qmail. Qmail cannot deliver locally, so it bounces to the forged reply-to address." We believe the fix is to patch qmail-smtpd to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is that when we try to compile the chkuser patch, we run into failures due to the Plesk control panel. Is anyone aware of something we could do differently or better? Other things that have NOT worked thus far:
    - Turning off any and all mail processes (as a check on whether an individual account had been compromised; this has been verified as NOT the case).
    - Turning off mail AND http server processes (in case of a compromised formmail).
    - Running Exim in lieu of Qmail (an easy/quick install, but xinetd forces Exim to close and restarts qmail on its own).
    - Turning on SPF protection via the Plesk GUI. Does not help.
    - Turning on greylisting via the Plesk GUI. Does not help.
    - Disabling bounce notifications via the command line.
    Things that MIGHT work but have complications:
    - Using Postfix instead of Qmail (I have no knowledge of Postfix and don't want to bother with it unless someone knows it has the potential to handle backscatter WELL, before investing the time).
    - As mentioned above, compiling the chkuser patch, which we believe will STOP this problem (but because of Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up MONEY).
    If I don't clear the SPAM out of the outgoing mail queue nightly, it clogs up with millions of SPAMs and brings down the OUTGOING email services. Any and all help welcome and appreciated!


  • c - fork() and wait()

    - by Joe
    Hi there, I need to use the fork() and wait() functions to complete an assignment. We are modelling non-deterministic behaviour and need the program to fork() if there is more than one possible transition. To work out how fork and wait behave, I made a simple test program. I think I now understand how the calls work, and I would be fine if the program only branched once, because the parent process could use the exit status from the single child process to determine whether the child reached the accept state or not. As the code below shows, though, I want to be able to handle situations where there must be more than one child process. My problem is that you seem to be able to set the status using an _exit function only once. So, as in my example, the exit status that the parent process tests shows that the first child process issued 0 as its exit status, but carries no information about the second child process. I tried simply not calling _exit() on a reject, but then that child process would carry on, and in effect there would seem to be two parent processes. Sorry for the waffle, but I would be grateful if someone could tell me how my parent process could obtain the status information for more than one child process; alternatively, I would be happy for the parent process to only notice accept statuses from the children, but in that case I would need to successfully exit from the child processes that have a reject status. My test code is as follows:

        #include <stdio.h>
        #include <unistd.h>
        #include <stdlib.h>
        #include <errno.h>
        #include <sys/wait.h>

        int main(void)
        {
            pid_t child_pid, wpid, pid;
            int status = 0;
            int i;
            int a[3] = {1, 2, 1};

            for (i = 1; i < 3; i++) {
                printf("i = %d\n", i);
                pid = getpid();
                printf("pid after i = %d\n", pid);
                if ((child_pid = fork()) == 0) {
                    printf("In child process\n");
                    pid = getpid();
                    printf("pid in child process is %d\n", pid);
                    /* Is a child process */
                    if (a[i] < 2) {
                        printf("Should be accept\n");
                        _exit(1);
                    } else {
                        printf("Should be reject\n");
                        _exit(0);
                    }
                }
            }

            if (child_pid > 0) {
                /* Is the parent process */
                pid = getpid();
                printf("parent_pid = %d\n", pid);
                wpid = wait(&status);
                if (wpid != -1) {
                    printf("Child's exit status was %d\n", status);
                    if (status > 0) {
                        printf("Accept\n");
                    } else {
                        printf("Complete parent process\n");
                        if (a[0] < 2) {
                            printf("Accept\n");
                        } else {
                            printf("Reject\n");
                        }
                    }
                }
            }
            return 0;
        }

    Many thanks, Joe
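
    For reference, a minimal sketch (not from the original assignment) of the usual pattern for reaping any number of children: each call to wait() returns one terminated child and returns -1 once none remain, and WIFEXITED/WEXITSTATUS decode the raw status word that the code above compares directly.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            int i;

            /* Spawn three children; each exits with its own status code. */
            for (i = 0; i < 3; i++) {
                pid_t pid = fork();
                if (pid < 0) {
                    perror("fork");
                    exit(EXIT_FAILURE);
                }
                if (pid == 0)
                    _exit(i % 2); /* pretend 1 = accept, 0 = reject */
            }

            /* Each wait() reaps exactly one child; -1 means none are left. */
            for (;;) {
                int status;
                pid_t done = wait(&status);
                if (done == -1)
                    break;
                if (WIFEXITED(status))
                    printf("child %ld exited with status %d\n",
                           (long)done, WEXITSTATUS(status));
            }
            return 0;
        }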


  • OS X Lion - Installing Oracle 10g Standard Edition

    - by Cellze
    I'm trying to install Oracle 10g on OS X Lion. I previously achieved this on Snow Leopard by following http://blog.rayapps.com/2009/09/14/how-to-install-oracle-database-10g-on-mac-os-x-snow-leopard/ The issue I'm having is that the ulimit settings in oracle/.bash_profile cannot be modified. I have the following in the .bash_profile:

        export DISPLAY=:0.0
        export ORACLE_BASE=$HOME
        umask 022
        # must match `sysctl kern.maxprocperuid`
        ulimit -Hu 512
        ulimit -Su 512
        # must match `sysctl kern.maxfilesperproc`
        ulimit -Hn 10240
        ulimit -Sn 10240

    Upon applying the .bash_profile settings with . ~/.bash_profile, I get the following error:

        -bash: ulimit: max user processes: cannot modify limit: Invalid argument

    This then results in sqlplus / as sysdba not functioning correctly, failing with Segmentation fault: 11. The output of ulimit -a is:

        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        file size               (blocks, -f) unlimited
        max locked memory       (kbytes, -l) unlimited
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 10240
        pipe size            (512 bytes, -p) 1
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 512
        virtual memory          (kbytes, -v) unlimited

    If anyone knows how I can apply these ulimit settings to the oracle user I have created, so that I can run sqlplus and create a DB, that would be great.
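
    As an aside, "Invalid argument" from ulimit usually means the request conflicts with a limit the kernel enforces (an unprivileged shell cannot raise a hard limit, and on OS X the cap is tied to kern.maxprocperuid). A small illustrative helper, not from the original post, that prints the soft and hard caps the oracle user actually has:

        #include <stdio.h>
        #include <sys/resource.h>

        int main(void)
        {
            struct rlimit rl;

            /* Query the current soft and hard caps on user processes. */
            if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
                perror("getrlimit");
                return 1;
            }
            printf("max user processes: soft=%llu hard=%llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);
            return 0;
        }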


  • 1000 HZ linux kernel necessary if I have tickless and high resolution timer?

    - by Bob
    I am trying to improve performance on my server. I have a few processes that need low jitter (less than 10 ms variance). The load average is at most 4 on an i7-920 (4 physical cores, 8 with HT). There are about 10 processes, each using between 40% and 90% of a core in user mode; system usage is 3% in total, and total CPU usage is 80% at most. Will changing the kernel from 100 Hz to 1000 Hz improve the jitter if tickless operation and high-resolution timers are already enabled? This page seems to indicate it still does something: https://lkml.org/lkml/2009/4/28/401 How about changing from voluntary preemption (PREEMPT_VOLUNTARY) to full preemption (PREEMPT)?


  • rundll32.exe constantly running taking up resources slowing down my Win 7 computer

    - by Joe Fletcher
    Over the past week, my Windows 7 Home Premium computer (8 GB RAM, 64-bit) has been running slowly. When I look at my processes, there are always two instances of rundll32.exe running, taking 3% and 25% of the CPU, their memory slowly creeping upwards from around 115 MB to 160 MB each in the time it has taken me to write this message, sometimes popping up to 300 MB and back down. Svchost.exe is at 260 MB. When I end those processes, everything returns to snappiness. I recently did some Windows Updates, and I think it was around that time my computer started acting slowly, but I can't remember whether things started running slowly before or after the updates. Last night I ran CCleaner and defragmented. How can I diagnose what's causing the slowness?
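
    As a first diagnostic step, it can help to see what each rundll32.exe instance is actually hosting, since rundll32 itself is only a shim around some DLL. A quick sketch using the stock wmic tool (illustrative, not from the original post):

        wmic process where "name='rundll32.exe'" get ProcessId,CommandLine

    The CommandLine column shows which DLL and entry point each instance was started with, which usually points at the real culprit.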


  • Redirect output of Python program to /dev/null

    - by STM
    I have a Python executable, written and compiled by somebody else, that I simply need to run once halfway through my own bash script. The program uses a text-based UI and therefore waits for input before proceeding, but the key operations it performs when starting are required in my bash script. A messy (and strange) procedure, I know, but unfortunately I haven't got any other options. I've gotten around the need to close the program by forcing it to quit with a kill signal, but the program's TUI insists on writing its output to wherever it's run. I've tried redirecting both stdout and stderr to /dev/null and running the program in the background by suffixing an ampersand, but I simply can't get it to play ball. I believe the cause is that the program spawns other processes, and the output redirection of the parent process doesn't affect them. Is there any trick I can use to redirect all output from the child processes too?
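
    For what it's worth, child processes normally inherit the parent's file descriptors, so output that survives a plain >/dev/null 2>&1 is often being written straight to the terminal via /dev/tty. A minimal launcher sketch (illustrative; the target program is whatever you pass as arguments) that redirects fds 1 and 2 and then detaches the controlling terminal with setsid(), so neither the program nor its descendants can reopen /dev/tty:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
                return 1;
            }

            int devnull = open("/dev/null", O_RDWR);
            if (devnull < 0) {
                perror("open /dev/null");
                return 1;
            }

            /* fds 1 and 2 are inherited by everything the program spawns */
            dup2(devnull, STDOUT_FILENO);
            dup2(devnull, STDERR_FILENO);

            pid_t pid = fork();
            if (pid < 0)
                return 1;
            if (pid > 0)
                return 0; /* parent exits so the calling script continues */

            /* new session: no controlling terminal, so /dev/tty can't be reopened */
            setsid();

            execvp(argv[1], &argv[1]);
            return 127; /* exec failed */
        }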


  • opening adobe reader results in infinite explorer.exe process creation loop

    - by irrational John
    First, apologies if the answer to this is only a Google away. I tried, honest I did, but I wasn't able to find anything posted elsewhere about this problem. I'm using Adobe Reader v9.3.2 on Windows 7 Home Premium 64-bit. If you want more system details, just ask. What happens is that when I attempt to open a PDF by clicking "Open" on it, (1) Adobe Reader never opens, and (2) the explorer.exe program is (apparently) recursively launched. I base this on opening Task Manager and seeing a long list of explorer.exe processes under the "Processes" tab. Usually there is only one; when I recreate this problem, the list of explorer.exe processes is at least a page or two long (too many to bother counting). I "correct" this problem by logging off and then logging back on, which kills all the explorer.exe tasks. Unfortunately I don't know another way to terminate them all. Now here's the curious part. This only happens when I attempt to "Open" a PDF file. If instead I use the context menu (right-click on the PDF) and select "Open with" and "Adobe Reader 9.3", then Adobe Reader opens the file with no problem. It seems that something is wrong with the setting for the default open action for PDF files. However, I have been unable to fix this by changing the Windows settings. Here is what I have tried. When I open Control Panel > All Control Panel Items > Default Programs > Set Associations, I do not find an entry for the file type .pdf; there are only entries for .pdfxml and .pdx. When I use "Open with" on a PDF file and select "Choose default program", the checkbox for "Always use the selected program to open this kind of file" is disabled (greyed out). I have uninstalled and reinstalled Adobe Reader, but the problem persists. While obviously no lives are at stake here, this problem is annoying the frickin' heck out of me. If I forget and trigger this bug, I have to stop everything I'm doing to clean it up. Any suggestions on how I might go about fixing this?
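
    Given that Set Associations shows no .pdf entry at all, one thing worth checking from an elevated command prompt is the association chain itself, via the built-in assoc and ftype commands. A sketch, assuming Reader's usual ProgID (AcroExch.Document) and a default install path; verify both on your machine before applying:

        assoc .pdf
        ftype AcroExch.Document

        rem if the association is missing, point .pdf back at Reader:
        assoc .pdf=AcroExch.Document
        ftype AcroExch.Document="C:\Program Files (x86)\Adobe\Reader 9.0\Reader\AcroRd32.exe" "%1"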


  • Should I Use PHP as FastCGI?

    - by Synetech inc.
    Hi, I am running an Apache webserver on my Windows machine. It is not generally a public server (most of the little bit of traffic comes from the machine itself, and most of the public traffic comes from crawlers); basically, it is mostly just a test-bed/development system. I have read that running PHP as FastCGI is better (i.e. faster and more stable) than running it as an Apache module. However, I really don't like the idea of multiple php.exe processes (I don't like that Apache has two processes, and I'm not even too thrilled with Chromium's multi-process model). So I'm wondering whether it would be worthwhile to switch PHP to FastCGI for this scenario, and if it is, how I would configure it. Pretty much all of the information I have seen has been either for non-Windows or for IIS, and as I said, I'm running Windows+Apache. Thanks a lot.
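
    To make the question concrete, a minimal mod_fcgid setup on Windows+Apache might look like the sketch below, assuming PHP lives in C:/php (the paths and pool sizes are illustrative guesses, not a tested configuration):

        LoadModule fcgid_module modules/mod_fcgid.so

        # hand .php requests to the FastCGI wrapper (ExecCGI must be allowed)
        AddHandler fcgid-script .php
        FCGIWrapper "C:/php/php-cgi.exe" .php

        # keep the process pool tiny for a single-user test bed
        MaxProcessCount 4
        IdleTimeout 300

        # let each php-cgi.exe serve many requests before being recycled
        DefaultInitEnv PHP_FCGI_MAX_REQUESTS 1000
        MaxRequestsPerProcess 1000

    With MaxProcessCount this low, the worry about swarms of php-cgi.exe processes largely goes away; on a mostly idle development box, one or two workers are usually all that ever get spawned.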


  • Coldfusion multiserver instance hangs

    - by David Sedeño
    I have a ColdFusion 8 multiserver setup with IIS on Windows 2008 Standard SP2, and when one instance "hangs" (I can't connect to the instance from FusionReactor) the web server throws a "503 Service Unavailable". The remaining instances seem to work OK in FusionReactor, but the website serves only the 503. I have to restart the JVM processes and IIS to get the website working again. The JVM processes have the option -Xmx2048m, and the instances have 2.5 GB allocated. Maybe the JVM process reaches the 2 GB limit and stops working? Could it be a problem between IIS and the CF instances? I'm new to the CF debugging process; how can I find out why the instance hangs? Thanks
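
    One way to confirm or rule out heap exhaustion before touching anything else is to enable garbage-collection logging for the instance. A sketch of the kind of flags that could be appended to the instance's jvm.config java.args line (the log path is a placeholder; adjust for your layout):

        -Xmx2048m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:C:/JRun4/logs/cfusion-gc.log

    If the log shows back-to-back full GCs that reclaim almost nothing right before a hang, the instance is running out of heap; if the log simply stops, the JVM is blocked on something else, and a thread dump from FusionReactor would be the next step.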


  • Munin fills server memory

    - by danilo
    In the last few weeks, it has happened several times that my vserver (Debian Lenny) ran out of RAM (500 MB) and therefore wasn't able to run Apache anymore. When looking at the processes with top, I saw many open munin-limits and munin-cron processes consuming most of the memory. My guess would be that Apache sometimes temporarily needs more memory, which prevents munin-cron from running; and if munin-cron isn't able to stop itself, it fills the memory until nothing is left. I don't know whether this guess is true, but maybe someone knows what the problem is and how to prevent it? If necessary I'll remove Munin, but I'd prefer to keep it running.


  • Why does explorer restart automatically when I kill it with Process.Kill?

    - by Thomas Levesque
    If I kill explorer.exe like this:

        private static void KillExplorer()
        {
            var processes = Process.GetProcessesByName("explorer");
            Console.Write("Killing Explorer... ");
            foreach (var process in processes)
            {
                process.Kill();
                process.WaitForExit();
            }
            Console.WriteLine("Done");
        }

    it restarts immediately. But if I use taskkill /F /IM explorer.exe, or kill it from Task Manager, it doesn't restart. Why is that? What's the difference? How can I close explorer.exe from code without restarting it? Sure, I could call taskkill from my code, but I was hoping for a cleaner solution...


  • How do MaxSpareServers work in Apache?

    - by John Hunt
    I've scoured the web, but I can't find out what MaxSpareServers does in Apache's prefork MPM. The documentation says: "The MaxSpareServers directive sets the desired maximum number of idle child server processes. An idle process is one which is not handling a request. If there are more than MaxSpareServers idle, then the parent process will kill off the excess processes." Great, but what causes a spare server to be created in the first place? More importantly, when does a spare server go away? I understand that MinSpareServers processes are created gradually after the server is started. How does MaxSpareServers relate to MaxClients? Basically I'm at a bit of a loss on how best to configure Apache; there's a lot of documentation out there, but it isn't that clear. Thanks, John.
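
    To make the lifecycle concrete: a "spare" is simply an already-forked child that is currently idle. Apache forks new children whenever fewer than MinSpareServers are idle (never exceeding MaxClients in total, counting busy children), and once load drops it kills idle children down to MaxSpareServers. A sketch of a typical prefork block (the numbers are illustrative, not recommendations):

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            MaxClients          150
            MaxRequestsPerChild   0
        </IfModule>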


  • Logging with Resource Monitor?

    - by Jay White
    I am having sudden spikes in disk read activity, which can tie up my system for a few seconds at a time. I would like to figure out the cause of this before I set my machine to go live. With Performance Monitor I know I can log activity, but this does not show me individual processes that cause a spike. Resource Monitor allows me to see processes, but I have no way to keep logs. It seems unless I have Resource Monitor open at the time of a spike, I will not be able to identify the process causing the spike. Can someone suggest a way to log with Resource Monitor, or an alternative tool that can?
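
    One possibility worth a try: Performance Monitor's counter logs can be recorded per process, even though Resource Monitor has no logging. A sketch using the built-in logman tool (the collector name, 5-second interval, and output path are arbitrary choices, and note that the Process counters cover all I/O, not only disk):

        logman create counter ProcIO -c "\Process(*)\IO Read Bytes/sec" "\Process(*)\IO Write Bytes/sec" -si 5 -f csv -o C:\PerfLogs\ProcIO
        logman start ProcIO

    After a spike, the CSV shows which process's I/O counters jumped at that timestamp.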


  • Kill Leaking Connections on SQL Server 2005

    - by Thierry Brunet
    We have a legacy ASP application that leaks SQL connections somewhere. In Activity Monitor, I can see a bunch of idle processes with Last Batch times over an hour old. When I look at the T-SQL command batch, these are always FETCH API_CURSORXXX, which as I understand it is caused by improperly closed ASP ADO recordsets. While we try to pinpoint the offending code, is there a way for me to monitor which requests open which cursors? I'm assuming Profiler, but I'm not sure what exactly I should be monitoring. I can see a bunch of calls to sp_cursoropen, but I don't see the API_CURSORXXX name anywhere. Second, would anyone be able to suggest a script we could run to kill these processes, based on the Last Batch time being more than 10 minutes old and the Last Batch command being FETCH API_CURSORXXX? For various reasons, we unfortunately don't have any SQL Server DBAs.
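
    Not a substitute for fixing the leak, but as a sketch of the kind of cleanup script that could be run on a schedule: sys.dm_exec_cursors(0) lists open cursors (API cursors show up with names like API_CURSOR...), and joining it to sysprocesses gives the Last Batch time. This exact filter is an assumption based on the description above, so test it read-only before wrapping the output in KILL:

        SELECT 'KILL ' + CAST(p.spid AS varchar(10)) AS kill_cmd
        FROM master.dbo.sysprocesses AS p
        JOIN sys.dm_exec_cursors(0) AS c
          ON c.session_id = p.spid
        WHERE c.name LIKE 'API_CURSOR%'
          AND p.status = 'sleeping'
          AND p.last_batch < DATEADD(minute, -10, GETDATE())
          AND p.spid > 50; -- leave system sessions alone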


  • MBP becomes very hot after using Xcode

    - by Globalhawk
    Hardware: MBP early 2011 version. OS: Mountain Lion. App: Xcode 4.5.2. Problem: every time I start Xcode, 2 or 3 processes called "git" start running. But when I quit Xcode, the "git" processes don't quit and keep using a lot of CPU. The computer then becomes quite hot and the battery drains very quickly. If I manually kill these processes, the problem goes away. I have reinstalled Xcode several times, but the problem comes back every time. It's driving me crazy. Any help will be appreciated!


  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running Linux. I have a calculation package called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation to get maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (8 threads in total across 8 processes) or keep it at 8 (64 threads in total across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically distribute each process's threads to different cores?
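
    For what it's worth, the Linux scheduler does spread runnable threads across idle cores on its own; explicit pinning only matters if you want to stop threads migrating between cores. If you go the 8-single-threaded-jobs route, a sketch using the standard taskset utility (the job command and input file names are placeholders):

        # pin each single-threaded job to its own core (cores 0-7)
        for i in 0 1 2 3 4 5 6 7; do
            taskset -c $i ./run_gaussian_job input$i.com &
        done
        wait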


  • Windows 7 problem, explorer.exe, dwm.exe

    - by Nitinzz
    I'm using Windows 7 Ultimate. I have a problem with two processes, namely explorer.exe and sometimes dwm.exe: the two tend to use between 20% and 30% of the CPU, and it only occurs after I play a game on my PC; my PC works perfectly until then. Another important observation: they consume no CPU as such, but only when I try to refresh my desktop, I mean when I right-click on the desktop; it then takes seconds to refresh. I have no virus problems. I have already tried the following things:
    - Kill explorer.exe and relaunch it from Task Manager (problem still persists).
    - Kill dwm.exe; well, it relaunches again (problem still persists).
    - Log off and log on (problem still persists).
    - Restart. Problem solved, but I need an alternative.
    Can anyone kindly suggest some quick fixes to the problem?


  • How to interpret IOZone results?

    - by homer5439
    Here are the results of running IOZone on an ext3 filesystem on an LVM volume residing on a SAN LUN (it was run with 5 parallel processes):

        "Throughput report Y-axis is type of test X-axis is number of processes"
        "Record size = 4 Kbytes"
        "Output is in Kbytes/sec"

        " Initial write "     81628.55
        " Rewrite "           83354.72
        " Read "             115595.02
        " Re-read "          119306.09
        " Reverse Read "      47684.20
        " Stride read "       10011.09
        " Random read "       16751.27
        " Mixed workload "     5659.77
        " Random write "       1661.85
        " Pwrite "            36030.83

    Now this is all nice and dandy, but my question is: how do I know whether these values are as good as they could be, or whether there is something to tweak (and if so, what)? The actual use I will have for that logical volume is to act as a virtual disk for a VM.


  • what does it mean for MalwareBytes to find malicious registry keys but nothing else?

    - by EndangeringSpecies
    I have a machine that is obviously infected, and when I ran MalwareBytes it told me that it had found some "malicious" registry keys (interestingly, these contained file paths to currently non-existent JavaScript files). But that's it: a full scan did not uncover any malicious files, or malicious hidden processes in memory. Like, say, the (hidden?) process that for whatever reason periodically injects keystrokes (hotkeys?) into whatever window is currently open. Then, on another machine that is not obviously infected, it found a "malware.trace" registry key, but again no files, processes, etc. How does this fit with people's experience of MalwareBytes? Does it usually find the registry-key symptoms of an infection but nothing else? Or is it common to have no infection, but some malicious registry keys in place anyway?


  • Server monitoring for medium scale UNIX network

    - by nbartolomeo
    I'm looking for suggestions for a good monitoring tool, or tools, to handle a mixed Linux (RedHat 4-5) and HP-UX environment. Currently we are using Hobbit, which works reasonably well, but it is becoming harder to keep track of which alerts are sent out for which servers. Features I'd like to see:
    - Easy configuration of servers.
    - The ability to monitor CPU, network, memory, and specific processes.
    I've looked into Nagios, but from what I have seen it won't be easy to set up the configuration for all ~200 of our servers, and without installing a plugin into each agent I won't be able to monitor processes.


  • SQL Server plus small files

    - by user1467163
    I have an MSSQL server with 3 volumes that runs some processes that seem to take way too long. One of these processes reads in a zip file, then writes to a database based on what's in the zip file, for each record. I have 2 volumes in use and am creating the third, so I am trying to plan how to lay things out. The OS has to remain on vol. 1; the TLogs should probably go on the new volume, and the MDFs on the existing vol. 2. Do I put the file store on the volume with the MDFs so it doesn't interfere with the TLog writes, or with the TLogs so it doesn't interfere with the TLog flush to the MDFs? I know it's best to have more servers/volumes, but I have to make do with what's on hand for now. I appreciate any suggestions.


  • Poor CPU usage under Ubuntu

    - by remek
    Hello, I just upgraded from Ubuntu 9.10 to Ubuntu 10.04 and am now experiencing a strange problem. My computer has a quad-core processor, and when I run several processes simultaneously, none of the cores is fully used. Before the upgrade, when I ran 4 processes, each of them used 100% of a CPU (I could see this with the 'top' command). But now CPU usage oscillates and is always pretty low. Does anybody have an idea about this problem? Is it due to Ubuntu or to the program I am running? Thank you very much in advance for your help! Best, remek


  • How to have a soft-real-time process in presense of heavily swapping IO-intensive background load?

    - by Vi
    schedtool reports for the player:

        PID 32301: PRIO 4, POLICY R: SCHED_RR, NICE -20, AFFINITY 0xf

    and ionice reports:

        realtime: prio 4

    But the music is stuttering anyway. The background load is low priority (SCHED_IDLEPRIO, idle ionice), but it uses a lot of memory (more than is physically available) and does a lot of IO and calculation. Latencytop shows about 1500 ms, both for the background load and for unrelated processes, on:

        Following symlink
        Writing buffer to disk (sync)
        Page fault
        Writing a page to disk

    Load average is 10 and counting. Why can't the system allocate, for example, 200 MHz of one of the cores, 32 MB of memory, and a guaranteed IO opportunity at least once per second to keep mplayer happy, while continuing the calculations in the background? Or: why can't it leave the background task and the swap to love each other, while keeping the rest of the system as if there were no background load? How can I have RT processes AND a heavy background load simultaneously (without virtual machines)?
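
    One approach not mentioned above is to wall off the background job's memory with the cgroup v1 memory controller, so its pages and cache cannot evict the player's working set. A sketch, assuming a kernel built with the memory resource controller; the mount point, group name, and 4 GB cap are illustrative:

        # mount the memory controller and create a group for the batch job
        mount -t cgroup -o memory none /sys/fs/cgroup/memory
        mkdir /sys/fs/cgroup/memory/bgload
        echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/bgload/memory.limit_in_bytes

        # run the background job inside the group (from its launcher shell)
        echo $$ > /sys/fs/cgroup/memory/bgload/tasks
        exec ./background_job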


  • optimizing mod_fcgid for a dedicated site

    - by Mike Williams
    I'm using mod_fcgid and I'm trying to find resources on how to optimize it for running a dedicated website, but I have had no luck so far. I've got Apache 2 running, and I'm trying to have PHP processes spawned and always running so Apache does not have to keep spawning them. What I have so far:

        # FastCGI configuration for PHP5
        LoadModule fcgid_module modules/mod_fcgid.so
        MaxRequestsPerProcess 5000
        # Maximum number of PHP processes.
        MaxProcessCount 8
        # Number of seconds of idle time before a process is terminated
        IPCCommTimeout 1800
        IdleTimeout 1800
        AddHandler fcgid-script .php5 .php4 .php .php3 .php2 .phtml
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php5
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php4
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php3
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .php2
        FCGIWrapper /usr/local/cpanel/cgi-sys/php5 .phtml
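
    Since the stated goal is keeping PHP processes alive between requests, two directives worth investigating are sketched below, using the old-style names that match the config above (newer mod_fcgid releases renamed everything with an Fcgid prefix, so verify against your version):

        # keep at least 4 processes per class (per wrapper) alive even when idle
        DefaultMinClassProcessCount 4
        # cap how many processes any one class may spawn
        DefaultMaxClassProcessCount 8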


  • Intuitive view of what's using the hard drive so much on Windows 7?

    - by Aren Cambre
    Sometimes my hard drive usage is near 100%, and I have no idea what is causing it. Are there any utilities that can help diagnose excessive hard drive usage and have an interface as intuitive as Task Manager's Processes tab, which I can sort by CPU usage? I am aware of using procmon, of adding columns such as I/O Read Bytes and I/O Write Bytes to Task Manager's Processes tab, and of Resource Monitor's Disk tab. Too often, these don't give me useful information or clearly identify a single process that is hogging the disk.

