Search Results

Search found 10023 results on 401 pages for 'manage processes'.


  • Is it possible to ensure that multiple applications run in the same terminal server session using RemoteApp?

    - by mbrownnyc
    We are interested in implementing RemoteApp on Windows 2008 R2 to serve out a few programs. Since the developers use shared memory to pass messages between processes, we need to provide them with a solution that allows this. They have researched and discovered that if the applications run in the same terminal server session, they will be able to access the shared memory. Is there a way to absolutely ensure that multiple RemoteApps run within the same session (as the same user) so that they can access the same shared memory?

    Read the article

  • Diagnostic Policy does not run

    - by Magakahn
    I have got a strange error on another PC running Windows Vista. The problem seems to be that the Diagnostic Policy Service does not manage to run, so I cannot use it to find out why it fails. I have tried to start it manually with no luck; it returns an error saying that I have no permission. Since the Diagnostic Policy Service does not run, I am not able to connect to the internet. Can somebody please help me?

    Read the article

  • Is there any way to execute something when closing the laptop's lid?

    - by Matias
    I'm wondering if there is any way to execute a program, run a command, or do anything else when closing the laptop's lid. The question is meant to be generic, covering both Windows and Linux. Useful examples would be locking the laptop when the lid is closed (Win+L on Windows), or kicking off expensive tasks such as an antivirus scan without having to start them manually before closing the lid.
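
    On a Linux box running acpid this can be wired up with an event rule plus a shell script. A minimal sketch follows; the event string, the /proc/acpi path, the username and the lock command are all assumptions that vary by distribution (and locking from an acpid script may additionally need the user's X/DBus environment):

        # /etc/acpi/events/lid  (hypothetical rule file for acpid)
        event=button/lid.*
        action=/etc/acpi/lid.sh

        # /etc/acpi/lid.sh  (make it executable)
        #!/bin/sh
        # only act when the lid is actually closed, not when it is opened again
        grep -q closed /proc/acpi/button/lid/*/state || exit 0
        # run whatever you like here, e.g. lock the X session of user "alice" (placeholder user)
        su alice -c "DISPLAY=:0 gnome-screensaver-command --lock"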

    Read the article

  • How can I unbind a UDP port that has no entry in lsof?

    - by Chocohound
    On my Mac, I have a UDP port that is "already in use", but doesn't have an associated process. sudo netstat -na | grep "udp.*\.500\>" shows:

        udp4  0  0  192.168.50.181.500  *.*
        udp4  0  0  192.168.29.166.500  *.*

    sudo lsof doesn't show a process on port 500 (i.e. sudo lsof -i:500 -P reports nothing). Note that I'm using sudo on both commands, so it should show all processes. (Rebooting works, but I'm looking for something less disruptive.) How can I unbind port 500 so I can use it again?
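
    A few things worth checking before rebooting: UDP 500 is the IKE port, which on Mac OS X is normally held by the racoon IPsec daemon (or by launchd on its behalf for on-demand VPN), so the listener can come and go. A diagnostic sketch; the daemon label and plist path are assumptions based on a stock install, so adjust to whatever the first two commands reveal:

        # look for any holder of UDP 500 with numeric output
        sudo lsof -nP -iUDP:500
        # see whether the IPsec/IKE daemon is loaded
        sudo launchctl list | grep -i racoon
        # if it is, unloading it should release the port (assumed label/path)
        sudo launchctl unload /System/Library/LaunchDaemons/com.apple.racoon.plist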

    Read the article

  • Should I switch to Linux for development?

    - by Alex
    Is there any advantage to using a Linux machine to develop on instead of Windows? Everyone at work tells me to switch to Linux, since I'm developing hard-core on Linux anyway. I manage 40 servers, and do everything from the DB to the data backend to developing web services. I don't find anything wrong with PuTTY. I'm just too lazy to install another OS... What do you guys think?

    Read the article

  • Varnish -> Nginx -> Apache a good idea?

    - by Zoran Zaric
    Hey, I'm thinking about the architecture for a new web server. Would it be a good idea to have Varnish as a cache in front of Nginx as a reverse proxy (also serving static files), in front of Apache for all the heavy lifting? I'm going to run PHP and Ruby on Rails applications. Will there be too much overhead passing PHP requests to Apache through two other processes? Thanks a lot!
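
    One way to sanity-check such a stack once it is up is to look at the response headers with curl and see whether Varnish actually caches the PHP/Rails responses or just passes them through; a quick sketch (the hostname is a placeholder, and the header names are Varnish defaults):

        # Age > 0 and an X-Varnish header carrying two IDs indicate a cache hit;
        # Age: 0 on every request means Varnish is passing everything to the backends
        curl -sI http://www.example.com/some-page | egrep -i '^(age|x-varnish|via):'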

    Read the article

  • Can a process be frozen temporarily in linux?

    - by Pal Szasz
    I was wondering if there is a way to freeze any process for a certain amount of time. What I mean is: is it possible for one application (probably running as root) to pause the execution of another, already running process (any process, both GUI and command line) and then resume it later? In other words, I don't want certain processes to be scheduled by the Linux scheduler for a certain amount of time.
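
    The standard POSIX job-control signals do this: SIGSTOP takes a process off the run queue until a SIGCONT arrives, and SIGSTOP cannot be caught or ignored. A quick sketch; the process name and PID are placeholders:

        # freeze every process whose name matches "backup_job"
        pkill -STOP backup_job
        # ...later, let it run again
        pkill -CONT backup_job
        # or by PID, frozen for ten minutes
        kill -STOP 12345 && sleep 600 && kill -CONT 12345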

    Read the article

  • Impact of the L3 cache on performance - worth a dual-processor system?

    - by Dan Nissenbaum
    I will be purchasing a new high-end system, and I would like to have a better sense of whether a dual-processor Xeon system (I am looking at the new, high-end Xeon E5-2687W) might, realistically, provide a noticeable performance improvement due to the doubling of the L3 cache (20 MB per CPU). (This is in addition to the occasional added advantage due to the doubling of cores and RAM.)

    My usage scenario is, roughly, that I have many background applications running at any time - 3 or 4 data compression/backup applications, a low-impact web server, one or two virtual machines at any given time (usually fairly idle), and perhaps 20 utility programs that utilize a noticeable (but small) portion of the CPU cores. In total, when I am not actively using the computer, about 25% of the total CPU power is utilized in my current i7-970 6-core (12 thread) system. When I am doing routine work, the CPU utilization often exceeds 50%, and occasionally hits 75%-80%.

    The Xeon E5-2687W is not only a second-generation i7 (so should improve performance for that reason), but also has 8 cores (16 threads), rather than 6 cores. For this reason, I expect to run into the 75% CPU range even less frequently. Nonetheless, the ability to double the cores and the RAM is a consideration. However, in the end, I believe this decision comes down to whether the doubling of the L3 cache will provide a noticeable improvement.

    There are many benchmarks, and a lot of discussion, regarding CPU power. However, I find very little discussion of L3 cache utilization, and how increases in the L3 cache (such as doubling it with dual processors) affect performance. For example: if there are only two processes running, but each benefits from a large L3 cache (such as might be the case for background processes that frequently scan the file system), perhaps the overall system performance might noticeably improve with dual CPUs - even if only a single core is active on each CPU - due to each process having double the effective L3 cache.

    I am hoping that someone has a sense of the benefits of increasing (or doubling) the L3 cache size. Note: the CPU I am considering (the Xeon E5-2687W) has 20 MB of L3 cache, so a system with dual CPUs would have 40 MB of L3 cache.
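
    One way to get a feel for whether a given workload is actually limited by L3 capacity, rather than by core count, is to watch last-level-cache miss rates while the usual background load is running. On a Linux machine with perf this is a one-liner; treat it as a sketch, since the exact event names differ between kernels and CPUs:

        # system-wide LLC hit/miss counts over a 30 second sample;
        # a consistently high miss ratio under normal load suggests more L3 could help
        sudo perf stat -a -e LLC-loads,LLC-load-misses,LLC-stores sleep 30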

    Read the article

  • IIS EventLog Errors

    - by chris
    I keep getting this error in my event viewer on IIS 6. I'm trying to figure out if my error resets my connection (maybe recycles the worker processes?). The error is: An attempt was made to load filter 'C:\Program Files\Software Artisans\FileUp\FileUpIsapi.dll' but it requires the SF_NOTIFY_READ_RAW_DATA filter notification and this notification is not supported in Worker Process Isolation Mode. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    Read the article

  • CUPS in linux and printer

    - by yogesh
    How can I achieve the following behavior? The CUPS server communicates with the hardware printer using the PostScript 3 language and the LPD/LPR protocol to manage and transmit print jobs. The CUPS server must be configured to accept only the following file formats: PS, TXT, PDF, JPEG and PNG; only these file types get printed, and all others should be blocked. I want to connect to the actual hardware printer using IPP from the Linux machine.
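
    Attaching the printer over IPP is the easy part; a minimal sketch using lpadmin, where the queue name, printer address and driver choice are placeholders to adapt:

        # create a queue that talks IPP to the physical printer and enable it
        # (-m everywhere needs a reasonably recent CUPS; on older versions point -P at the printer's PostScript PPD instead)
        sudo lpadmin -p office-ps -E -v ipp://192.168.1.50/ipp/print -m everywhere
        # make it the default destination
        sudo lpoptions -d office-ps
        # test with one of the allowed formats
        lp -d office-ps test.pdf

    Restricting which input formats the server accepts is a separate exercise in the CUPS MIME type/conversion configuration rather than something lpadmin does.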

    Read the article

  • Dedicated hard disk for Informix SE dbname.dbs files & dedicated ramdisk for /tmp files.

    - by Frank Computer
    INFORMIX-SE 7.2: I would like to dedicate a hard disk exclusively to my dbname.dbs directory, which holds all the .dat and .idx files, and create a ramdisk for my /tmp temporary files in order to improve performance. I would also like to strip the OS of any unnecessary files and processes to minimize overhead for my dedicated application. Is this a good idea, and are there any roadmaps for accomplishing this?
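
    The ramdisk half of this is straightforward on Linux, where /tmp can simply be a tmpfs mount; a sketch, with the size a placeholder that should comfortably exceed the largest temporary sort file the engine creates:

        # try it immediately
        sudo mount -t tmpfs -o size=512m,mode=1777 tmpfs /tmp
        # make it permanent by adding the equivalent line to /etc/fstab
        echo 'tmpfs /tmp tmpfs size=512m,mode=1777 0 0' | sudo tee -a /etc/fstab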

    Read the article

  • Windows 7 sidebar, where is it?

    - by Martti Laine
    Hello, I installed Windows 7 a while ago, and now I'm wondering: is there no sidebar in Windows 7? I can't find it; running "sidebar" (Win+R) doesn't find it, and it's not in the process list either. Is there a way to install it? Martti Laine

    Read the article

  • killing all instances of chrome on the command line?

    - by Fedor
    In some cases killing a single tab/process doesn't do it and I need to close Chrome entirely. Since Chrome has multiple processes, how can I close all of them at once? I know that pgrep chrome returns all the PIDs; can someone tell me a trick that would allow me to close all of them by feeding them to another command, or merging them into a CSV or something?
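
    pgrep's companion pkill does exactly this, and piping the PIDs through xargs works too; a couple of equivalent sketches:

        # politely ask every chrome process to exit
        pkill chrome
        # same thing via the pgrep output mentioned in the question
        pgrep chrome | xargs -r kill
        # last resort if some processes ignore SIGTERM
        pkill -9 chrome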

    Read the article

  • What is the best way to change the replication scheme of 2 currently replicated slaves?

    - by mmattax
    I have MySQL replication set up in production as follows: DB1 -> DB2 and DB1 -> BAK, where DB2 and BAK are slaves of DB1. All 3 servers are in sync (0 seconds behind the master) and have 30+ GB of data. I'd like to put the servers in a new master-slave configuration: DB1 -> DB2 -> BAK. What is the best way to change the master host on BAK? Is there a way to avoid having to stop the slave thread on DB2 and take a mysqldump for BAK (a 5-6 hour process)?
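
    Assuming DB2 already has log-bin and log_slave_updates enabled, the usual trick is to briefly stop both slaves at the same point and re-point BAK at DB2 without any dump. A sketch, with hostnames, credentials and the binlog coordinates as placeholders:

        # 1. freeze both slaves
        mysql -h DB2 -u root -p -e "STOP SLAVE;"
        mysql -h BAK -u root -p -e "STOP SLAVE;"
        #    (check SHOW SLAVE STATUS\G on both and only continue once they report
        #     the same Relay_Master_Log_File / Exec_Master_Log_Pos)
        # 2. note DB2's own binlog position
        mysql -h DB2 -u root -p -e "SHOW MASTER STATUS;"
        # 3. point BAK at DB2 using that file/position
        mysql -h BAK -u root -p -e "CHANGE MASTER TO MASTER_HOST='DB2', MASTER_LOG_FILE='mysql-bin.000123', MASTER_LOG_POS=4567;"
        # 4. restart replication on both
        mysql -h BAK -u root -p -e "START SLAVE;"
        mysql -h DB2 -u root -p -e "START SLAVE;"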

    Read the article

  • Git: push via ssh to a root owned repository with ssh root logins disabled

    - by anthonysomerset
    Is that even possible? Summary: I'm running a puppet master on a server, and ideally we want root logins via SSH disabled; we want to force all access through sudo when root access is required. However, Puppet is set up with a git repo to manage the manifests, and this repo is currently owned by root. At the moment I only know of two (less than ideal) solutions: 1) allow root access via key auth only - if so, how can I lock it down to allow only the git push commands? 2) have the repo in /etc/puppet owned by a different user - will Puppet work reliably with this?
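
    For option 1, OpenSSH can restrict a root key to git traffic only: sshd supports PermitRootLogin forced-commands-only, and the key's forced command can hand the request to git-shell so that only fetch/push work against the repo. A sketch, where the repo path comes from the question and everything else is an assumption to adapt:

        # /etc/ssh/sshd_config
        PermitRootLogin forced-commands-only

        # /root/.ssh/authorized_keys - one deploy key, locked down to git commands only
        command="git-shell -c \"$SSH_ORIGINAL_COMMAND\"",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA... deploy@workstation

    With this in place, pushing to ssh://root@server/etc/puppet works, but an interactive root shell does not.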

    Read the article

  • Seeking past end of file causes Apache hang, and it never restarts.

    - by talkingnews
    I've actually solved my problem with a better script, but I'm still left wondering why Apache2 hung completely - this is an out-of-the-box ISPConfig 3.03 install, everything bang up to date, running perfectly. Until... The troublesome but innocent-looking script:

        $fp = fopen("/var/log/ispconfig/cron.log", "r");
        fseek($fp, -5000, SEEK_END);
        $line_buffer = array();
        while (!feof($fp)) {
            $line = fgets($fp, 1024);
            $line_buffer[] = $line;
            $line_buffer = array_slice($line_buffer, -10, 10);
        }
        foreach ($line_buffer as $line) {
            echo $line;
        }

    You get the idea - just a script I found on a forum somewhere. I did this for various logs, since it's a nice easy window on what's occurring (in a protected dir, of course!). One day, the logs having grown large and me having sorted all my cron, scripting and mail queue errors, I thought it was time to start afresh: updated, rebooted, archived and deleted the logs. When I ran my script a couple of hours later, it hung. And hung. I waited 8 minutes. Chrome timed the page out, of course, but the server never came back to life. htop showed /usr/sbin/apache2 -k restart using 100% CPU. It never came back until I did a service apache2 restart. It then ran fine - until the moment I hit that logfile page again... dead. So, I worked out it was the logfile script, and I worked out that seeking beyond the end of the file wasn't good, and I found a better script: http://www.php.net/manual/en/function.fseek.php#90450

    But what I'm left wondering is... why didn't something restart or kill the process? How was one hanging page able to bring down the whole server? It's running suPHP. I say "out of the box", but I've tweaked MySQL and Apache to fork and reserve sensible amounts of processes for the 512 MB of RAM the VPS has, and it'll handle multiple refreshes of large pages, and hadn't hung before. Any ideas how I'd avoid this? Google isn't my friend in this instance, beyond the recommendations above about number of processes vs. RAM available.
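
    As an aside, for the original goal (a window onto the last few lines of a log) shelling out avoids the seek arithmetic entirely; a minimal sketch, keeping the log path from the question, which the existing page could wrap in shell_exec():

        # the whole "last 10 lines" requirement, with no seeking at all
        tail -n 10 /var/log/ispconfig/cron.log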

    Read the article

  • How to fix Windows 2008 R2 showing less memory than available

    - by eugeneK
    I've got Windows 2008 R2 64-bit installed on a Dell R410 server with 8 GB of RAM. Dell OpenManage shows 8 GB total and 4 GB available for use. In Windows Control Panel > System I see 64-bit and 8 GB of RAM, while the Performance tab of Windows Task Manager shows only 4 GB of memory available. Dell support ran some checks and told me that if the BIOS shows 8 GB of RAM - and it does - then it's an operating system issue. I tried to search online for a resolution but found none. Please help, thanks.

    Read the article

  • Nginx fastcgi problems with django

    - by wizard
    I'm deploying my first Django app. I'm familiar with nginx and FastCGI from deploying php-fpm, but I can't get Django to recognize the URLs, and I'm at a loss on how to debug this further. I'd welcome solutions to this problem and tips on debugging FastCGI problems. Currently I get a 404 page regardless of the URL, and for some reason a double slash. For http://www.site.com/admin/ I get:

        Page not found (404)
        Request Method: GET
        Request URL: http://www.site.com/admin//

    My urls.py from the debug output - these work in the dev server:

        Using the URLconf defined in ahrlty.urls, Django tried these URL patterns, in this order:
        ^listings/
        ^admin/
        ^accounts/login/$
        ^accounts/logout/$

    My nginx config:

        server {
            listen 80;
            server_name beta.ahrlty.com;
            access_log /home/ahrlty/ahrlty/logs/access.log;
            error_log /home/ahrlty/ahrlty/logs/error.log;
            location /static/ {
                alias /home/ahrlty/ahrlty/ahrlty/static/;
                break;
            }
            location /media/ {
                alias /usr/lib/python2.6/dist-packages/django/contrib/admin/media/;
                break;
            }
            location / {
                include /etc/nginx/fastcgi_params;
                fastcgi_pass 127.0.0.1:8001;
                break;
            }
        }

    And my fastcgi_params:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param PATH_INFO $fastcgi_script_name;
        # PHP only, required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    And lastly, I'm running FastCGI from the command line with Django's manage.py:

        python manage.py runfcgi method=threaded host=127.0.0.1 port=8080 pidfile=mysite.pid minspare=4 maxspare=30 daemonize=false

    I'm having a hard time debugging this one. Does anything jump out at anybody?
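
    On the debugging side, one first sanity check is whether the FastCGI backend is actually listening where nginx's fastcgi_pass points (the config above passes to 127.0.0.1:8001 while runfcgi is started with port=8080), and whether nginx reports anything while the 404 is reproduced. A sketch, reusing the log path from the config:

        # confirm which port the Django FastCGI process is really bound to
        sudo netstat -ltnp | grep python
        # watch nginx's error log while requesting /admin/ again
        tail -f /home/ahrlty/ahrlty/logs/error.log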

    Read the article

  • Awesome WM: Goodbye UI

    - by Håvard Geithus
    Sometimes some of my UIs disappear, but the processes they belong to live on. So far it has happened to gnome-terminal, Emacs, Eclipse and Evince. I've only experienced this behavior with the Awesome window manager. Any ideas how to fix it? Update: it has also happened to a popped-out Gmail chat window. When I close the main Gmail window, it warns me that the invisible chat window will also be closed.

    Read the article

  • Explorer is missing half tray icons in XP

    - by Ither
    Hi, lately, and with no explanation, half of the tray icons disappear every time I start up XP SP3. I use Process Explorer (procexp.exe) to look for the missing processes and they are still there. When I kill and restart explorer.exe, the tray is complete again. I don't know how to diagnose or repair the problem. Any suggestions? Thanks in advance.

    Read the article

  • What does this httpd directive do?

    - by alsciende
    Hello, I stumbled upon an httpd.conf directive that I can't manage to understand:

        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>

    According to the docs, I would say that Satisfy doesn't have any effect since there is no Allow. Am I wrong? What do you think this directive does?

    Read the article
