Search Results

Search found 5237 results on 210 pages for 'lightweight processes'.

Page 110/210 | < Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >

  • How to use psexec to start installation or other task that requires UAC interaction?

    - by Miguel Ventura
    I'm trying to remotely start installations and I'd like not to disable UAC. If I start the processes remotely using psexec, the installer just stalls waiting for the UAC prompt. Other tasks such as temporary file cleanup, service restarts, etc., get me Access Denied errors. Is there any way psexec can work around UAC, such as logging in as Administrator but with TrustedInstaller privileges or something like that? By the way, I'm targeting Windows 2008 R2, but I think this question applies to Vista, 2008 and Windows 7 as well.
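
    For what it's worth, the direction I was planning to test (the host name and installer path below are just placeholders): psexec's -h switch requests an elevated token on Vista and later, and -s runs the command as LocalSystem, which never sees a UAC prompt.

        rem Run the installer with an elevated token (-h) on the remote machine
        psexec \\REMOTEPC -u Administrator -h msiexec /i C:\installers\app.msi /qn

        rem Or run it as LocalSystem (-s), which UAC never prompts
        psexec \\REMOTEPC -s msiexec /i C:\installers\app.msi /qn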

    Read the article

  • Secure PHP environments with PHP-FPM and SFTP

    - by pdd
    I'd like to set up secure environments for a small number of untrusted PHP websites on a Debian server. Right now everything runs on the same Apache2 with mod_php5 and vsftpd for administrative file access, so there is room for improvement. The idea is to use nginx instead of Apache, SFTP through OpenSSH (chrooted via sshd_config) instead of vsftpd, and individual users for each website, each with their own pool of PHP processes. All these users and nginx are part of the same group. Now in theory I can set 700 permissions on all PHP scripts and 750 on static files that nginx has to serve up. Theoretically, if a website is compromised all the other users' data is safe, right? Are there better solutions that require less setup time and memory per website? Cheers
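
    The per-site piece I have in mind looks roughly like this (pool name, user name and socket path are just examples, not a tested setup):

        ; /etc/php5/fpm/pool.d/site1.conf -- one pool per site, running as that site's user
        [site1]
        user = site1
        group = sites
        listen = /var/run/php5-fpm-site1.sock
        listen.owner = site1
        listen.group = sites
        listen.mode = 0660
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 1
        pm.min_spare_servers = 1
        pm.max_spare_servers = 2

        # matching nginx location block (sketch)
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/var/run/php5-fpm-site1.sock;
        }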

    Read the article

  • Nginx's speed, and how to replicate it [migrated]

    - by Mediocre Gopher
    I'm interested in this more from an academic standpoint than a practical one; I don't plan on creating a production webserver to compete with nginx. What I'm wondering is how exactly nginx is so fast. The top Google result for this is this thread, but it merely links to a cryptic slideshow and a general overview of different I/O strategies. All other results seem to simply describe how fast nginx is, rather than the reason. I tried building a simple Erlang server to try to compete with nginx, but to no avail; nginx won out. All my server does is spawn a new process for each request, use that process to read the file to a socket, then close the file and kill the process. It's not complicated, but given Erlang's lightweight processes and underlying AIO structure I thought it would compete, but nginx still wins out by a consistent 300 ms average under a heavy stress test. What is nginx doing that my simple server isn't? My first thought would be keeping files in main memory instead of tossing them between requests, but the filesystem cache does this already so I didn't think it would make that great a difference. Am I wrong? Or is there something else that I'm missing?
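
    The picture I've pieced together so far (which is exactly what I'd like confirmed or corrected): nginx never pays a per-request process cost — a couple of long-lived workers each multiplex thousands of sockets through epoll/kqueue, and sendfile() pushes file contents kernel-to-socket without a userspace copy. The relevant directives look something like:

        # nginx.conf fragment -- event-driven workers, zero-copy file serving
        worker_processes   2;         # a handful of long-lived workers, never one per request
        events {
            use epoll;                # readiness notification for thousands of sockets
            worker_connections 10240;
        }
        http {
            sendfile        on;       # kernel copies file -> socket, no userspace buffer
            tcp_nopush      on;       # send full packets
            open_file_cache max=1000 inactive=20s;   # cache open fds and stat() results
        }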

    Read the article

  • Shut Down took way too long because of "Background Programs"

    - by Christopher Chipps
    I tried shutting my desktop PC (with Windows 7) down, but after several attempts (like 4 or 5) at Start -- Shut Down, the GUI was still there and it was not shutting down. I didn't think there were any programs running when I pressed Shut Down, so I opened Task Manager (Ctrl + Alt + Del) to check the processes. Once I did that, a screen appeared with a message stating that there were "background programs" still running, and it gave me an option to "Force Shut Down", which I pressed, and it shut down normally. Does anyone know why this would happen?

    Read the article

  • Terminal services and memory limits

    - by Mark Wassell
    Is there a way in Terminal Services to set limits on memory-related parameters for a process? For example, working set size and, possibly, if it makes sense, total virtual memory allocation for the session? To turn the question around, we have an application which cannot allocate as much virtual memory running on a terminal server as it can when running on a desktop PC (both of which I would expect to have a 2 GB limit on user-mode address space), and I was wondering if there is another limit for processes or users on a terminal server. Perhaps even 2 GB per user rather than per process.

    Read the article

  • Drop-in solution for logging to DB

    - by Jake
    I'm considering setting up our servers to log to a MongoDB database rather than log files. Logs will then all be on one server, queryable, and overall easier to manage. I'd love to find a solution that will allow all the different processes I have running to write to the DB rather than files (or perhaps something to read the files, pass the logs on and truncate the files). I don't want to have to find a different solution for every process if I can avoid it. So, does anyone know of an existing solution to this problem?
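
    One direction I've been looking at (not tested yet, so treat it as a sketch): rsyslog has an ommongodb output module, so anything that can log to syslog could be funneled into MongoDB without touching the applications. On rsyslog 7 or later the config would be roughly:

        # /etc/rsyslog.d/mongo.conf -- assumes the rsyslog-mongodb package is installed
        module(load="ommongodb")
        *.* action(type="ommongodb" server="127.0.0.1" db="logs" collection="syslog")

    For processes that only write to flat files, rsyslog's imfile input module can follow those files and feed them through the same pipeline.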

    Read the article

  • Using Squid on Debian, Cannot Connect Error

    - by Zed Said
    I am trying to set up Squid on Debian and am getting a connection refused error:

        squidclient http://www.apple.com/ > test
        client: ERROR: Cannot connect to 127.0.0.1:3128: Connection refused

    Here is my config:

        visible_hostname none
        cache_effective_user proxy
        cache_effective_group proxy
        cache_dir ufs /var/spool/squid 2048 16 256
        cache_mem 512 MB
        cache_access_log /var/log/squid/access.log
        emulate_httpd_log on
        strip_query_terms off
        read_ahead_gap 128 Kb
        collapsed_forwarding on
        refresh_stale_hit 30 seconds
        retry_on_error on
        maximum_object_size_in_memory 1 MB
        acl all src 0.0.0.0/0.0.0.0
        acl purgehosts src 127.0.0.1/255.255.255.255
        # Caching static objects in __data is important.
        # Without that, apache processes sit around spooling static objects.
        acl QUERY urlpath_regex /cgi-bin/ /_edit /_admin /_login /_nocache /_recache /__lib /__fudge
        acl PURGE method PURGE
        acl POST method POST
        cache deny QUERY
        cache deny POST
        http_access allow PURGE purgehosts
        http_access deny PURGE
        http_access allow all
        http_port 127.0.0.1:80
        http_port 50.56.206.139:80
        cache_peer 127.0.0.1 parent 80 0 originserver no-query no-digest default
        redirect_rewrites_host_header off
        read_ahead_gap 128 Kb
        shutdown_lifetime 5 seconds

    Any ideas why this is happening? What have I missed?
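
    One thing I have not ruled out yet: squidclient talks to port 3128 by default, and nothing in the config above listens there (only port 80 on 127.0.0.1 and the public address), so the refusal may just be the test tool aiming at the wrong port rather than Squid being down. Something like this should tell:

        # point squidclient at the port squid is actually bound to
        squidclient -h 127.0.0.1 -p 80 http://www.apple.com/ > test

        # and confirm which ports squid has open
        netstat -lntp | grep squid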

    Read the article

  • Cannot run SSH or send commands to /etc/init.d/ssh

    - by ThinkBohemian
    When I attempt to execute any commands such as /etc/init.d/ssh restart or /etc/init.d/ssh start, I get no output. It just goes to the next command line (Ubuntu Hardy). I can even pass in junk parameters such as /etc/init.d/ssh asldkfjalskfdj and I get no warnings or error messages; it just goes to the next line. When I check with lsof -i :22, I don't see an ssh process. I also don't see my SSH process when I run netstat -na --inet. Any troubleshooting suggestions?
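
    Things I plan to try next, in case it helps anyone reproduce this (the sshd_not_to_be_run detail is from memory, so it's worth verifying):

        # trace the init script to see where it silently exits
        sh -x /etc/init.d/ssh start

        # on Debian/Ubuntu the script exits quietly if this flag file exists
        ls -l /etc/ssh/sshd_not_to_be_run

        # sanity-check the daemon config, then run it in the foreground with debugging
        /usr/sbin/sshd -t
        /usr/sbin/sshd -d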

    Read the article

  • MaxClients in Apache: how to know the size of my processes?

    - by Larry
    From http://httpd.apache.org/docs/2.2/misc/perf-tuning.html:

        The single biggest hardware issue affecting webserver performance is RAM. A webserver should never ever have to swap, as swapping increases the latency of each request beyond a point that users consider "fast enough". This causes users to hit stop and reload, further increasing the load. You can, and should, control the MaxClients setting so that your server does not spawn so many children it starts swapping. The procedure for doing this is simple: determine the size of your average Apache process, by looking at your process list via a tool such as top, and divide this into your total available memory, leaving some room for other processes.

    The main issue is that I can't work out how to determine the size: the size of httpd that I see is no more than 3888 (KB, I assume). But if I use that to determine the number for MaxClients with my 4 GB of RAM, I get 972, so should I set MaxClients to something like 900?
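
    To make my arithmetic concrete, this is roughly how I've been trying to measure it (the process name, the 1 GB reserve and the 30 MB-per-child figure below are just my own assumptions):

        # average resident size (MB) of the Apache children (name may be httpd or apache2)
        ps -C apache2 -o rss= | awk '{sum+=$1; n++} END {printf "%.1f MB avg over %d procs\n", sum/n/1024, n}'

        # example: leave ~1 GB for the OS and other daemons, divide the rest by the average
        #   (4096 MB - 1024 MB) / 30 MB per child  ~=  100   ->  MaxClients 100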

    Read the article

  • Find which php scripts cause high CPU with php-cgi

    - by Oli
    Background: I maintain a server for a client who has half a dozen WordPress sites on it. They all have the W3 Total Cache plugin installed, and eAccelerator is installed (might be APC). All the PHP sites run through a single batch of FastCGI php-cgi processes (it's actually php-fpm, but I'm not sure if that makes a difference).

    Problem: php-cgi's CPU usage is quite high. Not terminally high, but high enough to raise an eyebrow. The client wants to add more sites in the future and I want to avoid becoming CPU-limited if I can help it.

    Question: Is there any way I can find the scripts, or even just the requests, that are causing the high CPU usage? I realise I might not be able to do anything with the results, but it would give me a chance.
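
    Since it's php-fpm, the one lead I have so far is its per-pool slow log, which dumps a PHP stack trace for any request that runs longer than a threshold. I haven't enabled it yet, so this is just a sketch (the paths and the 5 s threshold are examples):

        ; in the pool configuration
        request_slowlog_timeout = 5s
        slowlog = /var/log/php-fpm/slow.log

        ; optional: expose the status page to see per-pool request counters
        pm.status_path = /fpm-status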

    Read the article

  • Problem with Windows activation on a VM (Virtual machine)

    - by Daisetsu
    I backed up a number of laptops to virtual machines before they were re-purposed, in case I need the data at some later time. While the physical-to-virtual conversions worked fine, I am encountering issues on some of the VMs. When I boot them I get an error message saying I MUST activate Windows in order to log in. This is expected because the hardware changed (from physical hardware to virtualized hardware). I click the OK button and expect to be prompted with ways to activate, but Windows sits there for quite a while and then tells me that "Windows has already been activated". I click OK at that message and get taken back to the beginning, where I am asked to activate Windows. I have done some fairly intensive Googling but haven't been able to find a real solution. EDIT: The laptops with the issues are two Sony Vaios; I believe they have the OEM version of the OS originally installed by the factory.

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little bit over a year. Of course it breaks 2 months after the warranty runs out. http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/ I have been using the box as a Home Media Center. Running mostly 24/7 often pausing a video overnight. Since last week the fan started running extremely loud. After some digging I found that the Intel Atom CPU in it is overheating and the built-in sensor is reporting temperatures way over 105°C. This got me worried, so I took the unit apart. Completely vacuumed the heat sink, oiled the fan, but the unit is still showing the same behaviour. After turning it on and just observing the hardware monitor in the BIOS the temperature slowly rises from 40°C to over 95°C in appx 5 min. I am running the newest BIOS and a lightweight Linux OPENELEC OS with XBMC on it. Now I am wondering if it could be a faulty heat sensor in the Atom. Recommended running temperature is up to 85°C, but I have not detected any performance hits when running at the above mentioned 105°C and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at 0 load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What shall I try to fix this? I would prefer not to damage the CPU, since it is hard fused into the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is properly transferring from the CPU to the heat pipe, the fan is running at full capacity, is recently oiled and warm air is making it out of the exhaust. Edit: One more note: The North-bridge (or whatever it is called nowadays) is on the same heat pipe.

    Read the article

  • Fullscreen Video stutters on second monitor laptop

    - by nobrandheroes
    Fullscreen video on my new 1080p monitor is choppy when it comes from my laptop. The same video plays fine when not fullscreen. This goes for all video (Flash, MKV, etc.), regardless of video resolution. I have an ATI Mobility Radeon HD 4200 Series card in my ThinkPad Edge, Turion X2 2 GHz. The computer plays 1080p fine. Things I've tried:

      - Updating drivers
      - Switching cables
      - Toggling hardware acceleration
      - Changing the video player's process priority
      - Rebooting
      - Turning off the laptop screen
      - Turning off unused processes

    Nothing works. What is the likelihood that my laptop cannot power a 1920x1080 display?

    Read the article

  • InnoDB: cannot allocate the memory for the buffer pool

    - by mingyeow
    My InnoDB keeps crashing. The error message is below; does anyone know why this keeps happening?

        InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        InnoDB: Note that in most 32-bit computers the process
        InnoDB: memory space is limited to 2 GB or 4 GB.
        InnoDB: We keep retrying the allocation for 60 seconds...
        0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        InnoDB: Fatal error: cannot allocate the memory for the buffer pool
        [ERROR] Default storage engine (InnoDB) is not available
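
    In case it matters, errno 12 is ENOMEM, so my working assumption is that at allocation time there simply isn't ~47 MB of free memory (plus swap) left for the buffer pool. What I plan to try is shrinking InnoDB's footprint; the values below are just the first numbers I'll test, not recommendations:

        # /etc/mysql/my.cnf, [mysqld] section -- example values only
        innodb_buffer_pool_size = 32M
        innodb_log_buffer_size  = 4M
        max_connections         = 50

    Checking what else is eating memory first (free -m, ps aux --sort=-rss | head) is probably step one.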

    Read the article

  • Run application or script on Windows RDC connection

    - by NickLarsen
    I checked this thread, but it did not solve my exact problem. I need to run a script when a connection is made across my network using Windows Remote Desktop Connection. The thread listed above works for the initial login; however, if I don't log out (staying logged in is necessary for some processes running on my network), it won't run the script again the next time someone connects to the system over Remote Desktop. Previously we were using pcAnywhere to achieve this, but after running into some graphical issues with pcAnywhere, we have decided to move away from it to RDC. For a little more information, we need to have an email sent out any time a connection is made to particular machines. The login name will always be the same for those systems, and we do not log off when closing the connection.

    Read the article

  • Batch copy gives errors, xcopy works fine

    - by ndm13
    I am writing a general file backup program. It searches the drive for files matching a set of types and then writes them to a folder on the desktop. I wrote it using xcopy on Windows XP, but upon learning that xcopy was deprecated in favor of robocopy in Vista and newer, and still wanting to maintain compatibility, I decided to switch to the non-deprecated copy. This is where the problems begin. I'm trying to fix the copy routine. I thought I had everything sorted out, but it doesn't copy anything; my output is zero files copied for every iteration.

    Original code using xcopy:

        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            echo f | xcopy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /q /y /g /c
        )

    Revised (broken) code using copy:

        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            copy "%%a" "%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap\%%~nxa" /d /y /z
        )

    Output:

        The system cannot find the path specified.
                0 files copied.

    I know that it seems everyone uses either xcopy or robocopy, but can anyone help with copy? Note: I'm using batch to keep it very lightweight and command-line accessible.
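
    My best guess so far (unverified): copy never creates missing directories, while xcopy does — the echo f | trick was only there to answer xcopy's file-or-directory prompt, after which xcopy creates any missing folders itself — so "The system cannot find the path specified" would just mean the Bitmap folder doesn't exist yet. Something along these lines is what I plan to try:

        rem Sketch: make sure the target folder exists, then copy into it
        set "DEST=%HOMEDRIVE%%HOMEPATH%\Desktop\LDR\Images\Bitmap"
        if not exist "%DEST%" mkdir "%DEST%"
        for /r %%a in (*.bmp *.dds *.gif *.jpg *.jpeg *.png *.psd *.pspimage *.tga *.thm *.tif *.tiff) do (
            copy /y "%%a" "%DEST%\%%~nxa" >nul
        )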

    Read the article

  • Killing an unresponsive process

    - by Sathya
    I had closed an instance of uTorrent. The task no longer appears in Applications, but the process utorrent.exe still appears in the Processes tab of Task Manager. I tried to kill it using:

      - the Kill Process button in Task Manager
      - the Kill Process option in Sysinternals Process Explorer
      - Suspend, resume, and restart in Sysinternals Process Explorer
      - the command prompt, using taskkill /f /im utorrent.exe
      - the Stop-Process cmdlet in Windows PowerShell

    All of these have failed; the process just doesn't end. I cannot restart uTorrent because of the existing process running. Is there any way I can kill this without having to resort to a system reboot? I'm using Windows 7 Ultimate, OEM.

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a filesystem that was previously mounted read-only as read-write:

        mount -o remount,rw /mountpoint

    Unfortunately it did not work:

        mount: /mountpoint not mounted already, or bad option

    dmesg reports:

        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead

    A umount does not work either:

        umount /mountpoint
        umount: /mountpoint: device is busy.
                (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1))

    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So, how can I clean up this unprocessed orphan inode list so that I can mount the filesystem again without rebooting the computer?

    Read the article

  • Why does a SIGHUP signal to httpd kill the tomcat process?

    - by Geo
    I have a server with a Tomcat process binding to port 80 and an httpd process binding to port 5000. For some reason, every time any process sends a SIGHUP signal to the httpd process, my Tomcat process disappears without any error. I fixed the issue on the server the following way: I added an explicit ServerName directive to httpd.conf, and that fixed it. I still don't understand why the SIGHUP to httpd killed the Tomcat process.

    NOTE 1: I replicated the kill signal with the following commands. Find out what the httpd PID is:

        cat /etc/httpd/run/httpd.pid
        4056

    then kill it with a SIGHUP signal:

        kill -s SIGHUP 4056

    NOTE 2: We troubleshot the issue and found that logrotate, running every morning at 4am, was sending a SIGHUP signal to release the logs so it could rotate them, thus killing Tomcat as well.

    Read the article

  • "tar -cfz" versus "tar cf - | gzip": are they different? (or how to improve a backup)

    - by I'm Dario
    I want to speed up my backup, which is done with "tar -cfz", the common way to do it. But day by day my backed-up files grow, so it becomes slower. I was thinking of taking advantage of the several cores available in my server, and I was wondering if there is any difference between doing the backup with "tar -cfz" and piping tar to gzip ("tar cf - | gzip"). I guess there isn't any difference, because the first spawns two processes (tar and gzip), in a similar way to piping. If there is no difference, do you know any good alternative to do this, without going incremental? I'm looking at pigz too and it looks fine.
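
    For context, these are the variants I'm comparing (paths are placeholders); my understanding is that the first two are equivalent, since gzip is single-threaded either way, and only the pigz forms use more than one core:

        # single-threaded, roughly equivalent: tar forks gzip itself
        tar -czf backup.tar.gz /data
        tar -cf - /data | gzip > backup.tar.gz

        # parallel compression with pigz (all cores, or -p N for a fixed count)
        tar -cf - /data | pigz -p 4 > backup.tar.gz

        # GNU tar can also be pointed at pigz directly
        tar -I pigz -cf backup.tar.gz /data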

    Read the article

  • On a Mac, how are connections (possibly by spyware) made to outside internet addresses during initia

    - by TT
    I am trying to secure a Mac after discovering that network links are being established to some unwanted Internet sites. Using 'lsof -i' (list open 'files', internet) I have seen that launchd, ntpd, firefox, dropbox and other processes are either 'LISTENING' or have 'ESTABLISHED' links to a site or sites which I suspect have to do with spyware. I have been trying to find the startup files and preference lists that initiate these links, but can't find them. I could easily reinstall the OS and restore data from a backup, but I'd prefer to know how to fix this, as I have six Macs to look after. Thanks...
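
    The places I've started digging through, in case it helps someone point me further (this is just my working checklist, not a definitive procedure):

        # show established connections with the owning process and PID
        lsof -i -P | grep -i established

        # list the jobs launchd is managing, per-user and system-wide
        launchctl list
        sudo launchctl list

        # the usual locations where software registers itself to start at login/boot
        ls ~/Library/LaunchAgents /Library/LaunchAgents /Library/LaunchDaemons /Library/StartupItems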

    Read the article

  • Alternatives to auditd and inotify for monitoring file deletion

    - by Tola Odejayi
    I'm trying to figure out which processes are deleting files from a specific directory on my CentOS server. I looked at inotify, but all it does is tell me how many file deletions are occurring; it does not tell me what process, run by which user, did the deletions, nor does it tell me when they happened. I also tried auditd, but I have had no luck getting it set up on my server. Does anyone have any other tool they can suggest that will do this?
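
    For reference, the auditd rule I was trying to get working is roughly this (the path and key name are placeholders), so what I'm really after is a tool that produces equivalent output with less setup:

        # watch the directory for writes/attribute changes (covers deletions of entries inside it)
        auditctl -w /path/to/dir -p wa -k dir-deletes

        # report matching events, with exe, pid, uid and timestamp decoded
        ausearch -k dir-deletes -i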

    Read the article

  • Linux: find thin server running on port 80 and kill it

    - by Andrew
    On my Linux server I ran:

        sudo thin start -p 80 -d

    Now I'd like to restart the server. The trouble is, I can't seem to find the old process in order to kill it. I tried:

        netstat -anp

    But what I see on port 80 is this:

        Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
        tcp        0      0 0.0.0.0:80       0.0.0.0:*          LISTEN   -

    So it didn't give me a PID to kill. I tried pgrep -l thin but that gave me nothing. Meanwhile pgrep -l ruby gives me something like six ruby processes. I don't really understand why multiple ruby processes would be running, or which one I need to kill. How do I kill / restart the thin daemon?
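
    One detail I haven't ruled out (mentioning it in case it's the whole answer): netstat only fills in the PID/Program name column for processes you own unless it's run as root, and whatever is bound to port 80 was almost certainly started as root, hence the "-". What I'm going to try:

        # run it as root so the PID column is populated
        sudo netstat -lntp | grep :80

        # or ask lsof / fuser which process owns the socket
        sudo lsof -i :80
        sudo fuser -v 80/tcp

        # thin writes a pid file when daemonized; stopping from the same directory,
        # with the same options it was started with, should find it
        sudo thin stop -p 80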

    Read the article

  • 1080p playback on external monitor

    - by xibalban
    My system (netbook) specs:

      - 1.6 GHz dual-core Atom N2600
      - 2 GB DDR3 RAM
      - Integrated GMA3600 GPU
      - Win 7 Professional 32 Bit
      - Daum Pot Player V-1.5

    The issue: I downloaded a 1.33 GB, 1920 x 1080 HD documentary (file extension: mp4) and it plays without a hitch using Pot Player. However, when I connected an external display (an LED Full HD 24" monitor) using D-Sub, the video plays fine but the player controls become very unresponsive. For instance, it takes about 40 seconds to exit full screen while playing.

    What I tried:

      - Installed/updated all the latest monitor drivers
      - Updated the VGA drivers for the GMA3600
      - No other programs ran while the video played, with most unwanted services and background processes disabled/killed
      - CPU usage goes only up to 49% when the video plays

    Question: What may I do in order to improve the player response/video playback experience? Do I need to play with (or change) Pot Player's default video decoders, etc.?

    Read the article

  • nginx connection pool race condition?

    - by wlf
    I have a shared hosting server with high traffic. I have a lightweight Apache mod_proxy for static content that from time to time has a "504 proxy error" problem proxying to apache/mod_php. The error log says:

        error reading status line from remote server 127.0.0.1:8080
        Error reading from remote server returned by /

    This is what the Apache documentation says about it:

        proxy-initial-not-pooled
        If this variable is set no pooled connection will be reused if the client connection is an initial connection. This avoids the "proxy: error reading status line from remote server" error message caused by the race condition that the backend server closed the pooled connection after the connection check by the proxy and before data sent by the proxy reached the backend. It has to be kept in mind that setting this variable downgrades performance, especially with HTTP/1.0 clients.

    I am really concerned about this downgrade in performance, so I started looking at nginx immediately. I am new to nginx and time is crucial right now; I can't afford to waste days studying it just to find out it has the same race condition issue. Is nginx affected by this connection pool race condition? Thanks
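
    For what it's worth, my current understanding (which is exactly what I'd like confirmed or corrected): the Apache-side mitigation is just an environment variable, and nginx in its default configuration doesn't reuse upstream connections at all — it opens a fresh HTTP/1.0 connection per proxied request — so the pooled-connection race shouldn't arise unless upstream keepalive is explicitly switched on.

        # Apache side: disable connection reuse only for initial client connections
        SetEnv proxy-initial-not-pooled 1

        # nginx side (sketch): default proxying opens a new upstream connection per request;
        # reuse only happens if keepalive is enabled in the upstream block
        upstream backend {
            server 127.0.0.1:8080;
            # keepalive 16;   # plus proxy_http_version 1.1 and an empty Connection header
        }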

    Read the article

< Previous Page | 106 107 108 109 110 111 112 113 114 115 116 117  | Next Page >