Search Results

Search found 4272 results on 171 pages for 'processes'.

Page 92/171 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • What does this error mean (Can't create TCP/IP socket (24))?

    - by user105196
    I have a web server running RHEL 6.2, with MySQL 5.5.23 on another server. The web server can read from the MySQL server without problems, but sometimes I get this error:

        [Sun Sep 23 06:13:07 2012] [error] [client XXXXX] DBI connect('XXXX:192.168.1.2:3306','XXX',...) failed: Can't create TCP/IP socket (24) at /var/www/html/file.pm line 199.

    My question: what does this error mean (Can't create TCP/IP socket (24))? Is it an OS error or a MySQL error?

        perl -v
        This is perl, v5.10.1 (*) built for x86_64-linux-thread-multi

        mysql -V
        mysql Ver 14.14 Distrib 5.5.23, for Linux (x86_64) using readline 5.1

        su - mysql -s /bin/bash -c 'ulimit -a'
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 127220
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
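
    For context: on Linux, the number in parentheses is errno, and errno 24 is EMFILE ("Too many open files"), an OS-level limit hit by the connecting process rather than a MySQL error. A quick sketch for confirming this and checking how close a process is to its descriptor limit (the PID is a placeholder):

        # decode errno 24 on this host
        perl -e '$! = 24; print "$!\n"'     # prints: Too many open files

        # count descriptors held by the suspect process (e.g. an Apache worker)
        PID=1234
        ls /proc/$PID/fd | wc -l
        grep 'open files' /proc/$PID/limits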

    Read the article

  • Launch script after SFTP disconnect

    - by Mates
    I'm currently using Caja (basically the same as Nautilus) to connect to my server over SSH and work with files. What I'm looking for is a way to launch a simple script when I disconnect. I can launch a script after disconnecting from the TTY by putting it into the ~/.bash_logout file, but that is not executed when disconnecting from a file manager. The only idea I have is to set up a cron job that would periodically check for running sftp-server or sshd processes and launch the script when no such process is left. Is there any easier way to do this?
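
    Lacking a disconnect hook in the file manager, the cron idea from the question can be sketched like this (names and paths are placeholders; it assumes the session shows up as an sftp-server process owned by your user):

        #!/bin/bash
        # sftp-watch.sh -- run from cron every minute; fires the cleanup
        # script once, after the last sftp-server process exits.
        FLAG=/tmp/sftp-session-active
        if pgrep -u "$LOGNAME" -x sftp-server >/dev/null; then
            touch "$FLAG"                   # a session is (still) active
        elif [ -f "$FLAG" ]; then
            rm -f "$FLAG"                   # session gone: run cleanup once
            /path/to/cleanup-script.sh
        fi

    with a crontab entry such as:

        * * * * * /home/user/sftp-watch.sh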

    Read the article

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked on here before but was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given time, and the data would be most useful presented in a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. My problem with Performance Monitor is that I'm not sure which performance counters to use, since there are so many: I've been looking at the Process, Processor, and Memory categories, but I'm not sure which counters within them will be of interest to me. My OS is Windows 7.
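
    The same counters Performance Monitor shows can also be sampled from the command line and graphed later. A minimal sketch with typeperf, assuming the target process is named myapp (a placeholder) and that the standard Process counters are what's wanted:

        typeperf "\Process(myapp)\% Processor Time" ^
                 "\Process(myapp)\Working Set - Private" ^
                 -si 1 -sc 300 -o myapp_usage.csv

    This takes one sample per second for five minutes and writes a CSV that charts easily in Excel. Note that Windows 7 has no built-in per-process GPU counter; GPU monitoring needs a vendor tool.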

    Read the article

  • Comparison in Monit Permissions Testing

    - by beanland
    I'm trying to use Monit to check the permissions of a particular directory, but I only care that it's readable by all users. I don't care about any other permissions (write, execute) for the owner, group, or others, and I don't care about any special permissions. Knowing that I can't change the permissions of this directory, and that another administrator may change them without affecting my processes that rely on this directory (e.g., granting or revoking write access for the group), is it possible to check for a minimum permission in Monit? I have this, which currently works:

        check directory archive path /var/home/archive/
            if failed perm 0755 then alert

    But I would like to have something like this:

        check directory archive path /var/home/archive/
            if failed perm > 444 then alert

    This is failing for me. Is it possible to use comparison operators in Monit's permissions testing? If not, are there any workarounds?
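
    One possible workaround (a sketch, untested against this setup; it assumes a Monit 5.x with the check program test) is to move the comparison into a script that exits non-zero when the minimum permission is missing:

        #!/bin/sh
        # world-readable.sh -- exit 0 iff all three read bits are set
        find /var/home/archive -maxdepth 0 -perm -444 | grep -q .

    and then in monitrc:

        check program archive_readable with path "/usr/local/bin/world-readable.sh"
            if status != 0 then alert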

    Read the article

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with the Windows Encrypted File System:

        1. I have a machine that is in workgroup mode (not joined to a domain).
        2. I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application).
        3. My application writes and reads files from the encrypted file hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine.
        4. I then join the PC to a domain (let's call it 'TEST').
        5. Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are listed as "read only" now, and I cannot unset this flag via Windows Explorer or the command line, even though it looks like it succeeds.)
        6. I then 'un-join' the PC from the domain and everything works again.

    Is there something about changing domain membership on a PC that changes the behavior of EFS so that previously encrypted files cannot be read, even by the originating user? Thanks in advance
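
    One way to narrow this down (a sketch; cipher's /c switch exists on Vista and later, and the file path is a placeholder) is to compare the certificate thumbprint EFS recorded on the files against the certificates SecureUser actually holds, before and after the join:

        rem list the users/certificate thumbprints that can decrypt the file
        cipher /c "C:\data\secure\somefile.dat"

        rem list the EFS certificates in SecureUser's personal store
        certutil -user -store My

    If the thumbprint on the file stops matching anything in the store after joining the domain, the join is switching SecureUser to a different profile (and thus a different EFS key), which would produce exactly this access-denied behavior.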

    Read the article

  • Disabling certain JBoss ports

    - by Rich
    We are trying to configure JBoss 5.1.0 to be as lightweight and as secure as possible. One part of this process is to identify and close any ports we do not need. Three ports are outstanding that we don't believe we need (but we could be wrong):

        4457 - bisocket
        4712 - JBossTS Recovery Manager
        4713 - JBossTS Transaction Status Manager

    Bisocket seems to be a way for JMS clients behind a firewall to communicate with JBoss. We hardly use JMS now, and when we do it is very unlikely that we will need this firewall-traversing ability. I am less sure about whether we need the two JBossTS ports; I am guessing these are used in a clustered environment, and we aren't clustered. So my question is: how do we disable these ports (and associated processes where possible), or, if we need these ports, why do we need to keep them open?
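
    For what it's worth, in a stock JBoss 5.1 profile the bisocket connector belongs to the JBoss Messaging deployment, so if JMS is truly unused the whole deployment can go (a sketch; paths are from memory and worth verifying against your install, and config backups are advised):

        # bisocket (4457): drop the JMS deployment if nothing uses it
        mv "$JBOSS_HOME/server/$PROFILE/deploy/messaging" /tmp/messaging.removed

        # confirm nothing is listening afterwards
        netstat -tlnp | grep -E ':(4457|4712|4713)'

    The 4712/4713 listeners come from the JBossTS recovery subsystem, configured in conf/jbossts-properties.xml (property names along the lines of com.arjuna.ats.arjuna.recovery.recoveryPort; check the file). Since the transaction manager itself is generally not optional, firewalling those two ports or binding them to localhost is usually safer than removing the service.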

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I'm noticing a drop in performance after subsequent load tests. Although our CPU and RAM numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, response times get back to about 1,000 ms, but if we apply load every 3 minutes or so, they degrade to the point where a request takes 12,000 ms. None of the application servers show lingering Apache processes, and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • No Cure for a Slow Computer?

    - by Marv
    I have a laptop with the following specs: a 2.2 GHz dual-core processor, 4 GB of DDR2 RAM, and 180 GB of HDD space. I have tried everything: I have reinstalled the OS; installed Ubuntu with Lubuntu, LXDE, GNOME Classic, and the Unity 2D desktop; I have even tried downgrading to XP with all non-critical processes and services turned off. Even with the most stripped-down version of Ubuntu it heats up and the fan starts churning. I'm out of ideas. If you have any tips, please help. :'(

    Read the article

  • How to run a command in a process that is not a child of the current process?

    - by amicitas
    I am having a library-conflict issue when calling an external program from within an interpreted programming environment (IDL). The issue seems to be that since the program I am calling ends up as a child of IDL, libraries are not being reloaded. From within IDL I can launch sub-processes either directly or through a shell. Is there a good way to cause my program to run without ending up as a child process? The only solution I have found so far is to use ssh localhost my_program. This works perfectly, but I would like a more direct solution.
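
    Two directions that avoid the ssh round-trip (sketches; which one helps depends on what inherited state is actually causing the conflict): hand the command to the at daemon, so its parent is atd rather than IDL, or keep the child relationship but scrub the inherited library paths:

        # run under atd instead of as a child of IDL (requires atd to be running)
        echo "/path/to/my_program" | at now

        # or drop the environment variables that commonly cause library conflicts
        env -u LD_LIBRARY_PATH -u LD_PRELOAD /path/to/my_program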

    Read the article

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache webserver running a mod_perl application is showing abnormal memory usage: after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, memory usage normalizes, probably because the Apache workers get recycled periodically if a sufficient number of hits is generated (the memory graph and the correlating graph of Apache hits per second are not reproduced here). The remaining 2 hits per second throughout the night are induced by HAProxy checks: it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half a second, with "running" being a static file (i.e., not invoking any Perl code). It also seems that disabling these checks remedies the memory usage problem, but that obviously cannot be a solution. All three similarly configured servers (behind HAProxy) show this behavior. The OS is Ubuntu 10.10, Apache version 2.2.16. This seems to be a memory leak, but I have no idea how to start debugging it. Any hints?
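
    While hunting the leak, a common stop-gap is to recycle workers by request count or by size, so a slowly leaking child is reaped even through the quiet hours. A sketch for Apache 2.2 with mod_perl 2 (the numbers are arbitrary placeholders):

        # httpd.conf -- retire each child after N requests
        MaxRequestsPerChild 1000

        # or cap per-process unshared memory with Apache2::SizeLimit
        PerlModule Apache2::SizeLimit
        PerlCleanupHandler Apache2::SizeLimit
        <Perl>
            # size in KB; 150 MB unshared is only an example
            Apache2::SizeLimit->set_max_unshared_size(150_000);
        </Perl>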

    Read the article

  • CPU Configuration Issue for 2 Servers (Server 2008 R2)

    - by Bill Moreland
    I have 2 servers running the exact same Classic ASP code with Access DBs (yes, not ideal, but it is what it is, for now):

        1) Xeon 5520 @ 2.27 GHz (6 GB memory)
        2) Xeon E5-2620 @ 2.00 GHz (2 processors, 32 GB memory)

    For most pages the newer E5-2620 serves pages 10-15% faster. On pages requiring heavy and/or multiple complicated Access stored procedures (queries), the older 5520 does a much better job. I believe the servers are configured nearly identically. My question: is it possible that the newer, multi-processor server is not as good at handling Classic ASP as the older single-processor one? Is there a configuration difference I'm missing, since I'm shooting for identical implementations?

    Read the article

  • How to monitor RAM usage for Hyper-V VMs?

    - by Mac
    A bit of context first: on Windows 2008 Standard x64 with 8 GB RAM, I have 5 VMs running which should take up 1664 MB of RAM (3*256 MB + 384 MB + 512 MB). There is nothing else running on this server except the basic OS components (this is not a Core installation). I know that each VM will use more RAM on the host than what has been configured in Hyper-V, but when I run Task Manager it says 6.7 GB used! If I sum up the RAM used by each process in Task Manager (showing all users' processes), I get to something around 1 GB... So: how can I check how much RAM each VM is really using on the host (it does not seem to be available via Task Manager)? Note that I am aware my problem could be unrelated to VM RAM usage, but I would still very much like to know how to do this.
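
    VM memory is allocated by Hyper-V's kernel-side components rather than charged to a visible user-mode process, which is why summing Task Manager rows comes up short. The per-VM figures are exposed as performance counters instead; a sketch using typeperf (the counter-set name may vary between Hyper-V versions):

        typeperf "\Hyper-V VM Vid Partition(*)\Physical Pages Allocated" -sc 1

    Multiplying the page count by 4 KB gives the bytes backing each VM.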

    Read the article

  • Problems getting auditd set up on my server

    - by Tola Odejayi
    I'm trying to figure out which processes are deleting files from a specific directory, so I want to set up and run auditd on my system. I've set up the following rules in audit.rules:

        -S unlink -S truncate -S ftruncate -a exit,always -k cache_deletion
        -w /home/myfolder/cache

    Then I type this to start the audit daemon:

        auditctl -R /etc/audit/audit.rules -e 1

    But I get this error message:

        Error - nested rule files not supported

    Does anyone know what I am doing wrong here, and how I can resolve this? Also, what do I have to do to get the daemon running at startup?
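
    For comparison, a minimal setup that reaches the same goal on a stock RHEL/CentOS system (a sketch; on these systems the auditd init script, not a manual auditctl -R, is what normally loads /etc/audit/audit.rules):

        # /etc/audit/audit.rules -- watch the directory for writes/attribute changes
        -w /home/myfolder/cache -p wa -k cache_deletion

        # load the rules now and start auditd at boot
        service auditd restart
        chkconfig auditd on

        # afterwards, report which process deleted what
        ausearch -k cache_deletion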

    Read the article

  • How can I speed up my macro in Excel 2003?

    - by user144872
    I have a macro that copies data from one cell to another and uses a VLOOKUP formula, among other things. My spreadsheet contains nearly 2000 rows. When I run it in Excel 2003, Excel starts to slow down as the macro processes rows 500 and above. It gets even worse when it reaches the 1000th row. It takes more than 5 hours to complete. In Excel 2007, however, the macro runs for only half an hour. Can anyone help me find a good solution?

    Read the article

  • How do I remove Lenovo Veriface from the login screen?

    - by Xenorose
    I have a Lenovo laptop which came preloaded with Windows 7. Every time I start the computer and get to the Windows log-in screen (where you enter the user password), I get a message about the VeriFace software giving me the option to use it. I'd like to disable this. I went over the program's settings and there is nothing that allows you to stop it loading with Windows. I also thought that it might be a service to disable, but I don't see it in the list of services, nor is it in the list of start-up processes (either in msconfig or in the registry). I'm considering uninstalling it completely, but since it's part of the Lenovo software pack that came with the computer and I do use some of that software, I'm not sure whether uninstalling it might also remove things I want (and uninstalling and reinstalling as needed seems like a mess). Does anybody know an easy way to achieve this?
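
    VeriFace most likely hooks the log-in screen as a credential provider rather than as a service or start-up entry, which would explain why msconfig shows nothing. A possible way to find and disable it (a sketch; the GUID below is a placeholder, and editing these keys carries the usual registry risks):

        rem list credential providers; look for one whose name mentions VeriFace
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers" /s

        rem disable that provider (GUID is hypothetical)
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\{00000000-0000-0000-0000-000000000000}" /v Disabled /t REG_DWORD /d 1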

    Read the article

  • How to force "Windows Explorer" to open new folders in the same window

    - by yoshiserry
    I have been searching for an answer to this question for a very long time. I have checked the "Open folders in the same window" radio button on the General tab of Folder Options. I have also been told to uncheck "Launch folder windows in a separate process" on the View tab of Folder Options. I'm thinking this must somehow be a registry issue. Does anyone know a registry hack that will fix this problem and force Windows Explorer to open folders in the same window? I'm sick to death of having so many windows open. I'm running Windows 7 Ultimate Beta, build 7100.

    Read the article

  • What is this PHP process? It is crippling my server

    - by user1019588
    This process has been using 65% of my server's CPU and has lasted for about 10 minutes now (aren't processes only supposed to run for a couple of seconds?). It is obviously something to do with MySQL, which makes sense because I have a lot of queries going, but something still seems a bit odd... This could have something to do with the bad PDO connection that I mentioned in the previous question; perhaps I am opening too many connections or something like that? Here are the stats on it:

        Owner:    mysql
        Priority: 0
        CPU %:    61.1
        Memory %: 0.4
        Command:  /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/cvps54834319.myhost.com.err --pid-file=/var/lib/mysql/cvps54834319.myhost.com.pid

    Thanks for any help on this. I have over 10 GHz on my server, so this is very concerning to me.
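
    The command line shows this is mysqld itself, the MySQL server daemon, which is long-running by design; the real question is what it is busy with. A quick diagnostic sketch (standard MySQL client tools; credentials are placeholders):

        mysql -u root -p -e "SHOW FULL PROCESSLIST"            # what each connection runs
        mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_%'"
        mysqladmin -u root -p extended-status | grep -i -E 'connections|threads'

    A processlist full of identical or never-finishing queries would point back at the suspected PDO connection handling.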

    Read the article

  • Logging in over ssh in a different session?

    - by Jordan Reiter
    I don't know exactly what the correct term is, but I notice that if I log in to a remote SSH server, then close the window, open a new one, and log in to that server again, my bash history and user processes appear to be different. For instance, if I started a background process I can't get back into it, or something I typed won't show up in my bash history. The problem is that occasionally something happens to my remote session and, instead of being disconnected, the session just hangs; I have to close the window and open a new one to reconnect. As a result, a long-running process is sometimes simply "lost", since I can't get back into it. Is there any way to set things up so that when I log back in, I log back in to the same "session"? This is using OS X Terminal.
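
    Each SSH login is a separate session by design; what makes a session re-attachable is a terminal multiplexer running on the server. A minimal sketch with tmux (screen works the same way; neither is implied by the original question):

        tmux new -s work        # on the server: start a named session
        # ... long-running jobs survive a dropped connection ...
        tmux attach -t work     # from any later SSH login: pick up where you left off

    The screen equivalents are screen -S work to create and screen -r work to reattach.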

    Read the article

  • How to know or change the size of the Windows Event Log from a program under Windows XP? [closed]

    - by ahmd1
    I ran into a weird problem on a Windows XP system. My local service app logs its diagnostic messages to the Windows Event Log, and at some point those messages stopped being logged. I thought the issue was in my code, but then I discovered that other processes can't log messages either. So I was wondering: is there a limit on the Windows Event Log size? PS: I guess I need to write this specifically -- I need to know/change the size from a command line or an API.
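
    There is indeed a per-log size cap; on XP it lives in the registry (a sketch using the Application log; the key name differs for System, Security, and custom logs):

        rem read the current maximum size, in bytes
        reg query "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v MaxSize

        rem raise it to 16 MB
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Eventlog\Application" /v MaxSize /t REG_DWORD /d 16777216

    The sibling Retention value controls whether old events get overwritten; a full log that is not allowed to overwrite produces exactly the "messages silently stop" symptom described here.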

    Read the article

  • Windows XP - Website inaccessible on a single PC in LAN

    - by DorentuZ
    For several days now, a website has been inaccessible from a single PC on the LAN; from the other PCs it works just fine, and as far as I know it's just this single website that is affected. The website generates a timeout in every web browser I've tried (IE8, Firefox, and Chrome), yet traceroute, nmap, and telnet all work just fine. I've even tried multiple user accounts and safe mode, but that didn't help either. As a side note: using a Linux live CD did work, and I could access the website without any problems. The hosts file is the Windows default, and the IP and DNS settings on the network adapter are normal as well. No strange processes are running, and no viruses were found. According to TCPView and netstat there are connections to the domain, but every request in the browser results in a timeout. Any idea what's happening?
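
    Since TCP connects but the browsers time out, one way to isolate the HTTP layer (a sketch; the hostname is a placeholder) is to issue a request by hand over the telnet connection that already works:

        telnet www.example.com 80
        GET / HTTP/1.1
        Host: www.example.com
        (press Enter twice after the Host line)

    If a response comes back, DNS and TCP are fine and the fault sits in the Windows networking stack the browsers use; resetting it is a common next step on XP:

        netsh winsock reset
        netsh int ip reset resetlog.txt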

    Read the article

  • Mystery process crashing machine by using all of the RAM - how to identify?

    - by wd40
    I have a Linux machine which runs about 10 in-house-written processes. Every other day(!) the machine completely runs out of RAM, goes into swap, and becomes unresponsive. This happens quickly, over a period of a couple of seconds, so it's not feasible to sit watching the machine until it dies. It's a sudden leak, not a gradual one, so top(1) gives no indication that something bad is about to happen. What is the best way of identifying which process(es) are causing the trouble?
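
    Since the blow-up is too fast to catch interactively, continuous lightweight sampling to disk works better. A minimal sketch (log path and interval are arbitrary):

        #!/bin/bash
        # memlog.sh -- append a top-10-by-RSS snapshot every 5 seconds
        while true; do
            { date; ps -eo pid,rss,vsz,comm --sort=-rss | head -11; } >> /var/log/memlog.txt
            sleep 5
        done

    After the next crash, the tail of the log names the process whose resident size exploded; tools such as atop can record the same history with more detail.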

    Read the article

  • How can I monitor network usage by process on Mac OS X?

    - by psmith
    Is there any way to find out which process is using how much Internet bandwidth on Mac OS X Lion? I'm on mobile Internet now, which is not very fast, so it would be nice to see, for example, that Chrome is using 10 kB/s and Skype 2 kB/s. I can see the total amount of traffic in Activity Monitor, but that is not enough for me. I'd like to use an existing application; I'm not interested in writing an app like this. And I'm not interested in the actual traffic, only the number of bytes transferred and received by each process.
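
    For the record, Lion ships a command-line tool with exactly this per-process breakdown (a sketch; the flag set may differ between OS X releases):

        nettop -m tcp       # live per-process TCP byte counts
        nettop -m udp       # the same for UDP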

    Read the article

  • Strange memory usage pattern on Windows Server 2008 on login through Remote Desktop

    - by headsling
    I'm running Windows Server 2008 Datacenter Service Pack 2 on a VMware instance with 10 GB of RAM allocated. I'm not running IIS or SQL Server. Under 'normal' conditions, the machine uses about 5.5 GB of memory. However, when I log in to the server through Remote Desktop, memory usage slowly climbs to 9.8 GB in use. After several minutes, it slowly creeps back down to the 5.5 GB mark. I've tried killing all the processes associated with my login (barring Task Manager) as soon as I log in, without success, and I can't see any process that grows in memory usage while the total is increasing. I'm assuming this is some system-level cache that is growing and shrinking... but why is it doing this?
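
    When no process accounts for the growth, the usual suspects are the system file cache and the kernel pools, which Task Manager's process list never shows. They can be watched from the counters while reproducing the RDP login (a sketch):

        typeperf "\Memory\Cache Bytes" "\Memory\Pool Paged Bytes" "\Memory\Pool Nonpaged Bytes" -si 5

    If Cache Bytes tracks the climb, something touched at logon (profile loading, an antivirus rescan, prefetching) is pulling files through the cache, which Windows then trims back over the following minutes.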

    Read the article

  • Too many files open issue (in CentOS)

    - by Ram
    Recently I ran into this issue on one of our production machines. The actual error from PHP looked like this:

        fopen(dberror_20110308.txt): failed to open stream: Too many open files

    I am running a LAMP stack along with memcache on this machine, and I also run a couple of Java applications on it. While I did increase the limit on the number of files that can be opened to 10000 (from 1024), I would really like to know if there is an easy way to track this (the number of files open at any moment) as a metric. I know lsof is a command which will list the file descriptors opened by processes. Is there any other, better (in terms of reporting) way of tracking this, using, say, Nagios?
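
    The kernel already keeps these counters, so no lsof parsing is required for a metric (a sketch; the process name is a placeholder):

        # system-wide: allocated handles, free handles, and the maximum
        cat /proc/sys/fs/file-nr

        # per-process descriptor counts, e.g. for each Apache worker
        for pid in $(pgrep httpd); do
            echo "$pid $(ls /proc/$pid/fd | wc -l)"
        done

    Either number is easy to wrap in a Nagios check that warns above, say, 80% of the configured limit.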

    Read the article

  • Change Google Chrome's Process model?

    - by mobius42
    See here: http://imgur.com/lKffI.png Does anyone know how to stop Chrome doing this? Chrome seems to group all the tabs I open from the same page into one process. If I copy and paste the links individually into separate tabs, it creates new processes, but when I just middle-click links it groups them into one. I want to force Chrome to create a new process for every tab, because when one page locks up it freezes pretty much all the tabs I have open, and if one of the tabs crashes it takes the rest with it. You can apparently alter Chrome's process model to one called --process-per-tab, which seems to be what I'm looking for, but when I try to open Chrome with this argument via the terminal, it doesn't work. It's likely I'm not using the correct command; what I tried was:

        /Applications/"Google Chrome.app"/Contents/MacOS/"Google Chrome" --process-per-tab

    I'm on OS X and using the latest dev build, 5.0.396.0.
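
    On OS X, the conventional way to hand switches to a .app is through open; a sketch (Chrome must be fully quit first, since command-line switches only apply to a freshly started browser process):

        open -a "Google Chrome" --args --process-per-tab

    Be aware that even under --process-per-tab, Chrome may keep tabs that were opened from one another in a single process so the pages can script each other, which matches the middle-click grouping described.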

    Read the article

< Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >