Search Results

Search found 5679 results on 228 pages for 'kill processes'.


  • Most efficient way to connect an ISAPI Dll to a windows service

    - by Mike Trader
    I am writing a custom server for a client. They want scalability, so I must use a thread pool and probably an I/O completion port to regulate it. The main requirement is that a Windows service manage the HTTP requests, for a number of reasons. One is that a client session spans many requests, so continuity must be maintained. Another is that the ISAPI DLL will live in the IIS address space, so its code must be lean and very carefully implemented; the extensive processing belongs in the Windows service, which may get unruly over the course of a lengthy development, and if the service crashes it will not take out IIS. Anyway, the remaining decision is how to have these two processes communicate. We have talked about pipes, TCP, global memory, and even a single pipe with multiplexed data a la FastCGI. Would love to hear anyone's experience with a decision like this.


  • Git - post-receive hook with git pull "Failed to find a valid git directory"

    - by ludicco
    It's very weird, but when setting up a git repository and creating a post-receive hook with:

        echo "--initializing hook--"
        cd ~/websites/testing
        echo "--prepare update--"
        git pull
        echo "--update completed--"

    the hook does run, but it never manages to run git pull properly:

        6bfa32c..71c3d2a  master -> master
        --initializing hook--
        --prepare update--
        fatal: Not a git repository: '.'
        Failed to find a valid git directory.
        --update completed--

    So I'm asking myself: how is it possible to make the hook update the clone on post-receive? In this case the user running the processes is the same, and everything is inside that user's home folder, so I really don't understand -- because if I go in manually and run:

        cd ~/websites/testing
        git pull

    it works without any problem... Any help on that would be much appreciated. Thanks a lot
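
    The usual culprit here is that git runs hooks with the GIT_DIR environment variable set to '.', so even after the cd, git pull keeps looking for the repository in the hook's original directory. A minimal sketch of the fix (clearing GIT_DIR before pulling; the path is the one from the question):

        #!/bin/sh
        echo "--initializing hook--"
        # Hooks inherit GIT_DIR='.' from the receiving repository; clear it
        # so git commands resolve the repository from the working directory.
        unset GIT_DIR
        cd ~/websites/testing || exit 1
        echo "--prepare update--"
        git pull
        echo "--update completed--"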


  • unable to boot into safe mode even after fixing registry

    - by Anirudh Goel
    I have a Windows XP SP3 system which is infected by the Sality worm. The usual symptoms were there -- Task Manager and regedit disabled -- and I found I was unable to boot the system into safe mode. I then learned that the Sality worm removes the SAFEBOOT keys from the registry hive, so I downloaded the .reg file from http://support.kaspersky.com/faq/?qid=208279889 and successfully merged it into my system's registry. But still, when I hit F8 during boot and select the safe mode option, the machine restarts after loading mup.sys. I don't know what more to do to get into safe mode. The virus is now dormant; I can verify that because Task Manager and regedit are no longer disabled after restarting in normal mode, and I could browse any site without the browser process being killed. I also ran SalityKiller from the same link above and it healed all infected exe files. This is related to another question I have asked here, but I don't see how a common solution can solve both of those problems. Any help, folks? Thanks
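
    One quick sanity check (a sketch, assuming the Kaspersky .reg file merged where it should): confirm the SAFEBOOT keys actually exist again before suspecting anything else, since a restart right after mup.sys with intact keys points to a different cause, often the driver loaded next.

        :: Run from a normal-mode command prompt:
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal"
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network"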


  • Linux Email Server Auto-Reply

    - by Robert Smith
    I need to set up a mail server with the following functionality: if a user sends an email to a specific address on this server, the server must first check whether the email has a PDF attachment, do some processing on that PDF file, and then reply to the user's initial mail with the new PDF file attached. My question is how this functionality could be achieved, and what software / mail server you would recommend. I'm thinking it could be solved the following way: when the server receives a new email, it executes an external Python script that checks the attachment, processes the PDF file, and then sends it back to the user's mailbox. What mail server would be able to do this, and what configuration does it need?
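
    Most Unix MTAs (Postfix, Exim, Sendmail) can pipe incoming mail for one address into a program, which is all this needs. A minimal sketch, assuming Postfix and a hypothetical processing script at /usr/local/bin/process_pdf.py -- the alias line goes in /etc/aliases (followed by running newaliases):

        # /etc/aliases -- pipe mail for 'pdfbot' into a wrapper script:
        #   pdfbot: "|/usr/local/bin/pdf_responder"

        #!/bin/sh
        # /usr/local/bin/pdf_responder -- receives the raw message on stdin.
        tmp=$(mktemp)
        cat > "$tmp"
        # Hypothetical script: extracts the PDF, processes it, and mails
        # the result back to the sender's address.
        /usr/local/bin/process_pdf.py "$tmp"
        rm -f "$tmp"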


  • Configure mod_wsgi WSGIScriptAlias with mod_rewrite

    - by Lazik
    I want to redirect ex.com to www.ex.com, but I still want www.ex.com/ to point to my app.wsgi without it showing up in the URL. When I use the conf below and go to ex.com, I get a 404 error saying it can't find www.ex.com/app.wsgi/. If I change

        WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi

    to

        WSGIScriptAlias /app.wsgi /var/www/vhosts/ex/app.wsgi

    then all my URLs look like www.ex.com/app.wsgi/blabla/... Is it possible to use some kind of rule to redirect ex.com to www.ex.com while still keeping / as the app.wsgi root? My conf file:

        <VirtualHost *:80>
            ServerName www.ex.com
            ServerAlias ex.com *.ex.com

            RewriteEngine On
            RewriteCond %{HTTP_HOST} !^www\.
            RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

            WSGIDaemonProcess ex user=www-data group=www-data processes=1 threads=5
            WSGIScriptAlias / /var/www/vhosts/ex/app.wsgi

            <Directory /var/www/vhosts/ex>
                WSGIProcessGroup ex
                WSGIApplicationGroup %{GLOBAL}
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
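
    This looks like the known interaction between mod_rewrite and WSGIScriptAlias /: the path mod_rewrite captures can already include app.wsgi, which is why $1 produces www.ex.com/app.wsgi/ in the redirect. A sketch of the commonly recommended fix, using %{REQUEST_URI} (the original request path) instead of the captured pattern:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        RewriteRule ^ http://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    An alternative with the same effect is to move the redirect into a separate bare-domain <VirtualHost> that contains no WSGI directives at all.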


  • SAN with iSCSI-Target Performance Horrendous

    - by Justin
    We have a poor man's SAN set up in a 1U Ubuntu server running iSCSI-Target, with two 300GB drives in RAID-0. We are using it for block-level storage for virtual machines. The hypervisor is connected to the SAN via gigabit, on a dedicated VLAN and interfaces. We have only a single virtual machine set up, and we're doing some benchmarks. If we run hdparm -t /dev/sda1 from the virtual machine, we get 'ok' performance of 75MB/s from the virtual machine to the SAN. Then we basically compile a package with ./configure and make. Things start OK, but then all of a sudden the load average on the SAN grows to 7+ and things slow down to a crawl. When we SSH into the SAN and run top, sure enough the load is 7+, but the CPU usage is basically nothing, and the server has 1.5GB of memory available. When we kill the compile on the virtual machine, the load on the SAN slowly goes back to sub-1 figures. What in the world is causing this? How can we diagnose this further? Here are two screenshots from the SAN during high load. 1> Output of iotop on the SAN: 2> Output of top on the SAN:
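
    Worth keeping in mind while diagnosing: on Linux, the load average counts processes in uninterruptible I/O wait (state D) as well as runnable ones, so "load 7+ with an idle CPU" is the classic signature of a box that is disk-bound rather than CPU-bound. A small sketch of how to confirm that on the SAN during a compile:

        # High await/%util with an idle CPU points at the disks or the
        # iSCSI target layer rather than the processor:
        iostat -x 2
        # Count the processes stuck in uninterruptible sleep:
        ps -eo state,comm | awk '$1 == "D"' | sort | uniq -c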


  • Running perfmon continuously with periodic reports

    - by Sal
    I have a question very similar to this one, but I want to run perfmon continuously, across reboots and throughout the day. Further, I'd like it to generate a perfmon report every 10 minutes or so. The original question tells me how to run perfmon when the server is restarted, but I don't know how to make perfmon run continuously while writing out periodic files. I've tried setting it up as a scheduled task that runs every 10 minutes, but this is too sloppy: when the scheduled task kicks off another instance, the current perfmon report writer crashes and I get a garbage report. I've also tried writing a sloppy batch script that would fire off the task at scheduled intervals, but this has the same problem as the scheduled task. I'm sure I'm just missing something silly, but I don't see it. Ideas? (If it helps, I'm running Windows 7 locally, and I'm trying to set up the processes for boxes running Windows 2008.)
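
    A sketch of one way to get this without a re-launching task, using logman (the command-line front end to the same data collector engine; the counter names and output path here are placeholders): -cnf rolls the log into a new file every 10 minutes while the collector itself keeps running.

        logman create counter PerfLog -o C:\PerfLogs\PerfLog ^
            -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" ^
            -si 00:00:15 -cnf 00:10:00 -v mmddhhmm
        logman start PerfLog

    The collector can then be started once at boot (e.g. a startup-triggered scheduled task running logman start PerfLog) instead of a task that re-launches it every 10 minutes.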


  • Apache2 Segfault - need help interpreting this coredump (suspect cause is memcache / php session related)

    - by WayneDV
    Three Apache2 web servers running a PHP 5.2.3 web site. We're using Memcache to cache rendered pages, but also as the storage engine for PHP sessions. At peak traffic times we're getting Apache segmentation faults on all three web servers, and all HTTPD child processes segfault. My gut tells me that the increased Memcache traffic is stopping PHP sessions from being created or cleaned up, and thus the processes die. Is it possible for someone to confirm that from the following?

        #0  _zend_mm_free_int (heap=0x7fb67a075820, p=0x7fb67a011538) at /usr/src/debug/php-5.3.3/Zend/zend_alloc.c:2018
        #1  0x00007fb665d02e82 in mmc_buffer_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:50
        #2  mmc_request_free (request=0x7fb67a011548) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:169
        #3  0x00007fb665d031ea in mmc_pool_free (pool=0x7fb67a00e458) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_pool.c:917
        #4  0x00007fb665d0a2f1 in ps_close_memcache (mod_data=0x7fb66d625440) at /usr/src/debug/php-pecl-memcache-3.0.4/memcache-3.0.4/memcache_session.c:185
        #5  0x00007fb66d1b0935 in php_session_save_current_state () at /usr/src/debug/php-5.3.3/ext/session/session.c:625
        #6  php_session_flush () at /usr/src/debug/php-5.3.3/ext/session/session.c:1517
        #7  0x00007fb66d1b0c1b in zm_deactivate_session (type=<value optimized out>, module_number=<value optimized out>) at /usr/src/debug/php-5.3.3/ext/session/session.c:2171
        #8  0x00007fb66d2a719c in module_registry_cleanup (module=<value optimized out>) at /usr/src/debug/php-5.3.3/Zend/zend_API.c:2150
        #9  0x00007fb66d2b1994 in zend_hash_reverse_apply (ht=0x7fb66d629d60, apply_func=0x7fb66d2a7180 <module_registry_cleanup>) at /usr/src/debug/php-5.3.3/Zend/zend_hash.c:755
        #10 0x00007fb66d2a5c0d in zend_deactivate_modules () at /usr/src/debug/php-5.3.3/Zend/zend.c:866
        #11 0x00007fb66d2541b5 in php_request_shutdown (dummy=<value optimized out>) at /usr/src/debug/php-5.3.3/main/main.c:1607
        #12 0x00007fb66d32e037 in php_apache_request_dtor (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:509
        #13 php_handler (r=0x7fb67a229658) at /usr/src/debug/php-5.3.3/sapi/apache2handler/sapi_apache2.c:681
        #14 0x00007fb6784166f0 in ap_run_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:158
        #15 0x00007fb678419f58 in ap_invoke_handler (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/server/config.c:372
        #16 0x00007fb6784254f0 in ap_process_request (r=0x7fb67a229658) at /usr/src/debug/httpd-2.2.15/modules/http/http_request.c:282
        #17 0x00007fb678422418 in ap_process_http_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/modules/http/http_core.c:190
        #18 0x00007fb67841e1b8 in ap_run_process_connection (c=0x7fb67a2193a8) at /usr/src/debug/httpd-2.2.15/server/connection.c:43
        #19 0x00007fb678429f4b in child_main (child_num_arg=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:662
        #20 0x00007fb67842a21a in make_child (s=0x7fb679cd7860, slot=153) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:758
        #21 0x00007fb67842aea4 in perform_idle_server_maintenance (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:893
        #22 ap_mpm_run (_pconf=<value optimized out>, plog=<value optimized out>, s=<value optimized out>) at /usr/src/debug/httpd-2.2.15/server/mpm/prefork/prefork.c:1097
        #23 0x00007fb678402890 in main (argc=1, argv=0x7fff6fecacb8) at /usr/src/debug/httpd-2.2.15/server/main.c:740

    PHP.INI follows:

        [PHP]
        engine = On
        short_open_tag = On
        asp_tags = Off
        precision = 14
        y2k_compliance = On
        output_buffering = 4096
        zlib.output_compression = Off
        implicit_flush = Off
        unserialize_callback_func =
        serialize_precision = 100
        allow_call_time_pass_reference = Off
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        disable_functions =
        disable_classes =
        expose_php = On
        max_execution_time = 30
        max_input_time = 60
        memory_limit = 128M
        error_reporting = E_ALL & ~E_DEPRECATED
        display_errors = Off
        display_startup_errors = Off
        log_errors = Off
        log_errors_max_len = 1024
        ignore_repeated_errors = Off
        ignore_repeated_source = Off
        report_memleaks = On
        track_errors = Off
        html_errors = Off
        variables_order = "GPCS"
        request_order = "GP"
        register_globals = Off
        register_long_arrays = Off
        register_argc_argv = Off
        auto_globals_jit = On
        post_max_size = 8M
        magic_quotes_gpc = Off
        magic_quotes_runtime = Off
        magic_quotes_sybase = Off
        auto_prepend_file =
        auto_append_file =
        default_mimetype = "text/html"
        doc_root =
        user_dir =
        enable_dl = Off
        file_uploads = On
        upload_max_filesize = 2M
        allow_url_fopen = On
        allow_url_include = Off
        default_socket_timeout = 60

        [Date]
        [filter]
        [iconv]
        [intl]
        [sqlite]
        [sqlite3]
        [Pcre]
        [Pdo]
        [Phar]
        [Syslog]
        define_syslog_variables = Off

        [mail function]
        SMTP = localhost
        smtp_port = 25
        sendmail_path = /usr/sbin/sendmail -t -i
        mail.add_x_header = On

        [SQL]
        sql.safe_mode = Off

        [ODBC]
        odbc.allow_persistent = On
        odbc.check_persistent = On
        odbc.max_persistent = -1
        odbc.max_links = -1
        odbc.defaultlrl = 4096
        odbc.defaultbinmode = 1

        [MySQL]
        mysql.allow_persistent = On
        mysql.max_persistent = -1
        mysql.max_links = -1
        mysql.default_port =
        mysql.default_socket =
        mysql.default_host =
        mysql.default_user =
        mysql.default_password =
        mysql.connect_timeout = 60
        mysql.trace_mode = Off

        [MySQLi]
        mysqli.max_links = -1
        mysqli.default_port = 3306
        mysqli.default_socket =
        mysqli.default_host =
        mysqli.default_user =
        mysqli.default_pw =
        mysqli.reconnect = Off

        [PostgresSQL]
        pgsql.allow_persistent = On
        pgsql.auto_reset_persistent = Off
        pgsql.max_persistent = -1
        pgsql.max_links = -1
        pgsql.ignore_notice = 0
        pgsql.log_notice = 0

        [Sybase-CT]
        sybct.allow_persistent = On
        sybct.max_persistent = -1
        sybct.max_links = -1
        sybct.min_server_severity = 10
        sybct.min_client_severity = 10

        [bcmath]
        bcmath.scale = 0

        [browscap]

        [Session]
        session.save_handler = files
        session.save_path = "/var/lib/php/session"
        session.use_cookies = 1
        session.use_only_cookies = 1
        session.name = PHPSESSID
        session.auto_start = 1
        session.cookie_lifetime = 0
        session.cookie_path = /
        session.cookie_domain =
        session.cookie_httponly =
        session.serialize_handler = php
        session.gc_probability = 1
        session.gc_divisor = 1000
        session.gc_maxlifetime = 1440
        session.bug_compat_42 = Off
        session.bug_compat_warn = Off
        session.referer_check =
        session.entropy_length = 0
        session.entropy_file =
        session.cache_limiter = nocache
        session.cache_expire = 180
        session.use_trans_sid = 0
        session.hash_function = 0
        session.hash_bits_per_character = 5
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=fakeentry"

        [MSSQL]
        mssql.allow_persistent = On
        mssql.max_persistent = -1
        mssql.max_links = -1
        mssql.min_error_severity = 10
        mssql.min_message_severity = 10
        mssql.compatability_mode = Off
        mssql.secure_connection = Off

        [Assertion]
        [COM]
        [mbstring]
        [gd]
        [exif]
        [Tidy]
        tidy.clean_output = Off

        [soap]
        soap.wsdl_cache_enabled=1
        soap.wsdl_cache_dir="/tmp"
        soap.wsdl_cache_ttl=86400

    /etc/php.d/memcached.ini:

        session.save_path="tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"
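
    Two hedged observations rather than a confirmed diagnosis: the crash happens inside ps_close_memcache during request shutdown, and pecl/memcache 3.0.4 is from the extension's beta branch, so upgrading the extension is the usual first step. Also, this php.ini still says session.save_handler = files while the memcache handler is evidently active at runtime; making the handler explicit next to its save_path removes any dependence on .ini load order (a sketch, reusing the existing pool settings):

        ; /etc/php.d/memcached.ini
        session.save_handler = memcache
        session.save_path = "tcp://memcache1:11211?persistent=1&weight=1&timeout=3&retry_interval=15"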


  • AWS RDS connection count

    - by wmarbut
    I am using AWS RDS with MySQL for a project and have a "large" instance. The documentation is clear on what this means as far as compute resources and RAM go, but I can't find anything that documents how many open database connections I can have. The app I am using is PHP, and it uses PDO with persistent connections, which means the number of open connections could reach the maximum number of PHP child processes running at any given point. How do I ensure that my RDS instance has a max connections setting high enough for me to be comfortable with this?
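
    On RDS the limit is just MySQL's max_connections variable, which is controlled through the instance's DB parameter group, with a default derived from the instance class's memory. A sketch of how to check it against actual usage:

        -- From any MySQL client connected to the RDS endpoint:
        SHOW VARIABLES LIKE 'max_connections';
        -- Compare against what the app actually holds open:
        SHOW STATUS LIKE 'Threads_connected';

    If the value is too low for the worst-case Apache/PHP child count, create a custom parameter group with a larger max_connections and attach it to the instance.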


  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with Open Office: I open a simple Calc spreadsheet and it comes up normally, but then after anywhere from 5 seconds to several minutes (without even touching the Calc window) OO crashes, then comes up through recovery. If I let it "recover", it will do so and bring the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK" it would apparently do this all day. I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e., rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel, it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file, then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)
    How to reproduce:
    1. Start Open Office Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager, kill Open Office (soffice.exe/bin -- one of each).
    4. Double-click the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur -- allow it.
    6. When the Calc window comes up, don't touch it -- just wait about 10 seconds.
    7. The Calc window closes and OO puts up the recovery message again -- from now on the sequence repeats.
    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as Open Office Bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.



  • Windows 7: Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Details: Internet Explorer 9 and Windows 7 Professional, running on an HP TouchSmart (touch-screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites). Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll the website by dragging it with my finger, I can pinch-zoom, and I can touch-and-hold to right-click. I now change the default shell in Windows to Internet Explorer (i.e., IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in. However, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right-click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: when I kill explorer.exe, the touch functions keep working -- even if I close IE and start a new instance. Scenario 2: The exact same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). The same thing happens. What I've tried: Autorun programs: when explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by explorer, but just in case, I have manually started all the autorun programs so that the environment is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically, TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started. To sum it up: running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter whether explorer.exe is running -- as long as it has been run once. Does anyone know what causes this behavior? Or how I can circumvent it neatly? Thanks!


  • php-cgi cpu usage is super high

    - by Ryan Thompson
    I am seeing constantly high and wildly fluctuating CPU usage for php-cgi processes via "top" on my CentOS server. I have a Server Density account, and it seems this is a common trend:

        User  PID   CPU%  MEM%  VSZ     RSS    TT  Stat  Started  Time  Command
        500   6389  22.4  3     271136  32380  ?   S     20:26    0:40  /usr/bin/php-cgi

    There are about six such records in my process list at any given check-in. Any ideas what's causing this? I have FastCGI installed and the module is loading -- not sure why it isn't handling this, though. Any help would be greatly appreciated! Ryan
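
    A sketch of how to see what those processes are actually doing before tuning anything (the PID is the one from the listing above; substitute whichever php-cgi PIDs top shows):

        # Summarize the syscalls a busy php-cgi makes (Ctrl-C after ~30 s):
        strace -c -p 6389
        # Check how many FastCGI children are running and with what args:
        ps auxww | grep [p]hp-cgi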


  • lsass.exe memory leak on windows 2003 server

    - by thelsdj
    In the past month or so I noticed that lsass.exe has started to leak memory, reaching 500MB+ of RAM within a week of reboot. Before this, I had never noticed it using any significant amount of memory compared to other processes on the system. This is happening on two identical servers, neither of which has anything to do with Active Directory. Maybe a recent Windows Update has caused this? Any thoughts on things to check? As a side question, is there some way to recycle the memory usage of lsass.exe without rebooting? Edit: Here is what I'm seeing in Process Monitor: there are thousands of registry open/query/close operations a minute from lsass.exe. How can I track down what is triggering these?
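
    One narrowing step (a sketch; lsass.exe hosts several security-related services, and whichever one is leaking should show up here): list what actually runs inside the process, then filter Process Monitor on lsass.exe and group by registry path to see which keys dominate the churn.

        :: Show the services hosted by lsass.exe:
        tasklist /svc /fi "imagename eq lsass.exe"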


  • Problem with IIS 6.0 in WOW WCF 4 (.net 4.0)

    - by Kevin
    We just upgraded to WCF 4 on IIS 6 (running in WoW 32-bit mode), and all of a sudden the services started running into what appear to be concurrency problems. Upon finding out we had a problem, we changed the behavior configuration on the WCF server to the following:

        <serviceThrottling maxConcurrentCalls="1000"
                           maxConcurrentInstances="1000"
                           maxConcurrentSessions="1000" />

    We also changed the number of worker processes from 1 to 5. Doing all of this seemed to have no effect. The service seemed to be running, but throttled by something. Is there anything else that might need to be changed to remove the "artificial" throttling? We're using the default WCF configuration, which should be per-call (not singleton).


  • /var/run/httpd.pid missing...

    - by user38043
    Recently, httpd stopped working on one of our web servers, and I haven't been able to find the problem. Today I sat down and went through every directory referenced in httpd.conf and found an issue: /var/run/httpd.pid is missing from the folder. All other files are there and seem to be fine. I cannot create a new file with the same name in vi, and I have no idea what could have caused this. I imagine it was caused by a cold reboot at some stage, as no other extraordinary processes were running on this server at the time it went down. I am running CentOS 3. How can I reinstate this file?
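
    For what it's worth, httpd writes that pid file itself on every successful start, so its absence is a symptom (httpd isn't starting) rather than the cause. A short sketch of the checks that usually locate the real failure:

        # Confirm where httpd expects the pid file:
        grep -i '^PidFile' /etc/httpd/conf/httpd.conf
        # Syntax-check the config, then start and watch the error log:
        /usr/sbin/apachectl configtest
        /usr/sbin/apachectl start
        tail /var/log/httpd/error_log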


  • Mercurial SSH process blocks when run from Local System

    - by Liedman
    We are using Mercurial over SSH for our development. We use Hudson for continuous integration and have deployed it on Tomcat, running on a Windows 2003 Server using the Local System account. Mercurial is configured to use PuTTY's plink.exe as its ssh command in Mercurial.ini, together with a private key for SSH authentication. When Hudson attempts any Mercurial command over SSH, the operation just blocks. I can see the three processes being started: hg.exe, cmd.exe and plink.exe. On the remote machine, I can also see the SSH session being opened and the authentication key being accepted. After that, nothing appears to happen, and everything just blocks, seemingly forever. (As a side note, Subversion/SVN over SSH works from Hudson to the same server, using the same user and authentication key.) A solution would of course be the best, but at least a hint for how I should debug it to get further would be nice, since I'm stuck and haven't even got an error message right now.
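
    A classic cause with plink under Local System: the first connection to a host prompts to cache the server's host key, and that prompt is answered interactively -- which can never happen in a service session, so the process just waits. Two hedged things to try in Mercurial.ini (the key path is a placeholder):

        [ui]
        # -batch makes plink abort instead of hanging on any interactive
        # prompt, so at least the real error surfaces in Hudson's console:
        ssh = "C:\Program Files\PuTTY\plink.exe" -batch -i "C:\keys\hudson.ppk"

    If -batch reveals a host-key prompt, cache the key under the account that runs Tomcat -- for example by running plink once interactively via psexec -s -- since PuTTY stores known host keys per-user in the registry.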


  • COM+/Desktop Heap errors in IIS affecting sites at random?

    - by tresstylez
    We have a Win2K3 server that is hosting 30+ sites. Each site is configured with its own unique application pool, so that we can manually recycle specific sites if needed without killing sessions for the others. From what I've read, the consequence of this type of setup is that each application pool worker process gets allocated a desktop heap (normally 512 KB), which limits the number of app pools we can serve. http://blogs.msdn.com/b/david.wang/archive/2006/01/25/security-considerations-of-usesharedwpdesktop-on-iis6.aspx PROBLEM: What we're seeing is that occasionally COM+ errors get triggered, presumably by hitting our 512 KB desktop heap limit, and certain sites become unresponsive (or throw errors) until we manually recycle that specific app pool. I know that I can increase the desktop heap limit to 1024 and make other tweaks/tunes, but I've been tasked with finding out what exactly causes one site's heap to max out as opposed to another. It seems that when we start seeing COM+ errors, the sites affected are random -- small sites or big (more heavily used) sites. Is it based on process ID? Traffic? Any pointers on understanding this a little more would be excellent. Thanks! jg
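
    For reference, the knob in question is the third SharedSection value (the non-interactive desktop heap, in KB) in this registry value -- shown as a sketch, not as a recommendation for specific numbers:

        HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems\Windows
            ...SharedSection=1024,3072,512...

    Microsoft's Desktop Heap Monitor (dheapmon) runs on Server 2003 and shows per-desktop heap usage, which is probably the most direct way to see which app pool's heap is actually filling up rather than inferring it from the COM+ errors.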


  • Recommended setting for using Apache mod_mono with a different user

    - by Korrupzion
    Hello, I'm setting up an ASP.NET script on my Linux machine using mod_mono. The script spawns processes of a binary that belongs to another user, but the process is spawned by www-data because Apache runs as that user, and I need to spawn the process as the user that owns the file. I tried the setuid bit, but it has no effect. I discovered that if I kill mod-mono-server2.exe and run it as the user I need, everything works right, but I want to know the proper way to do this, because after a while Apache runs mod-mono-server2.exe as www-data again. The Mono Project webpage says: "How can I run mod-mono-server as a different user? Due to Apache's design, there is no straightforward way to start processes from inside an Apache child as a specific user. Apache's SuExec wrapper targets CGI and is useless for modules. mod_mono provides the MonoStartXSP option. You can set it to 'False' and start mod-mono-server manually as the specific user. Some tinkering with the Unix socket's permissions might be necessary, unless MonoListenPort is used, which turns on TCP between mod_mono and mod-mono-server. Another (very risky) way: use a setuid 'root' wrapper for the mono executable, inspired by the sources of Apache's SuExec." I want to know how to use the setuid wrapper, because I tried adding the setuid bit to the 'mono' binary and changing its owner to the user that I want, but that made mono crash. Or maybe there is a way to keep mod-mono-server2.exe running separately from Apache without it being closed (does anyone have a script?). My environment: Debian Lenny 2.6.26-2-amd64, Mono 1.9.1, mod_mono from the Debian repository, dedicated server (root access and stuff), Apache vhosts. I use mono only for that script. Thanks!
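
    Since the Mono FAQ already points at MonoStartXSP False plus a manual start, here is a minimal sketch of that approach (the user name, port and application path are placeholders): start the backend as the owning user at boot and let mod_mono connect to it over TCP, which sidesteps the Unix-socket permission tinkering.

        #!/bin/sh
        # Init-script-style sketch: run mod-mono-server2 as 'appuser'.
        # Assumes the Apache vhost sets MonoStartXSP False and points
        # MonoListenPort at the same port used below.
        su appuser -c 'mod-mono-server2 --nonstop \
            --applications /:/var/www/vhosts/app \
            --port 9000' &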


  • TicTac Photo and Windows 7

    - by Ben
    Hello, my wife has been creating a TicTac Photo album. I had to upgrade to Windows 7, as I'd had enough of Vista, so I backed up the TicTac Photo file and the photos to an external hard disk and performed a fresh install of Win7. Now here is the problem: TicTac Photo says it can't find the photos in the album. The locations were as follows: Vista: C:\Users\Kelly\Pictures. Win 7: C:\Users\Kelly\My Pictures. When I try to create a folder called Pictures under Kelly, it pops up a message about merging the two folders and simply moves the pictures to the My Pictures folder. Does anyone know a way to make a folder called Pictures, so I can eliminate the file path problem and then try again with TicTac Photo support to get them to fix my file? My wife is going to kill me, as it's our wedding album and she has spent upwards of 30 hours designing it, and me upgrading to Win 7 means it's all my fault. She does not understand file paths etc. I'm going to try to open the album file in a text editor and see if I can see anything, but thought I would ask here as well. Any help appreciated.
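
    One hedged observation: on Windows 7 the folder on disk is usually still named Pictures, with Explorer merely displaying it as "My Pictures" (a desktop.ini display name) -- which would also explain the merge prompt, since a folder called Pictures already exists. Worth checking the real name from a command prompt before editing the album file:

        dir C:\Users\Kelly

    If the real path turns out to match what the album expects, the problem is likely elsewhere (drive letter or subfolder names), which the look at the album file in a text editor should confirm.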


  • So Close: How to get this SSH login working (.bashrc)

    - by This_Is_Fun
    Objective: SSH login (+ eliminate the warning message), run two commands, and stay logged in. EDIT: Oops, I made a mistake (see below). This code does ~95% of what I wanted to do:

        # .bashrc
        # Run two commands and stay logged in to the new server.
        alias gr='ssh -t -p 5xx4x [email protected] 2> /dev/null "cd /var; ls; /bin/bash -i"'

    Now, after a successful login, the user is verified as logged in (root pts/0 2011-01-30 22:09), but trying to 'logout' gives: bash: logout: not login shell: use `exit'. I seem to have full root access without being logged into the shell? (The /bin/bash -i was added to stay logged in, but doesn't work quite as expected.) FYI: the question is "How to get this SSH login working", and it is mostly solved; sorry I made a mess... Original question follows:

        # .bashrc
        # Run two commands and stay logged in to the new server.
        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i"'

        # (hack) Hide the "map back to the address - POSSIBLE BREAK-IN ATTEMPT!" message.
        alias gr='ssh -p 5xx4x [email protected] 2> /dev/null'

    Both examples 'work' as shown. When I try to add the '2> /dev/null' to the first example, the whole thing breaks. I'm out of time trying to solve the warning message other ways, so is it possible to combine both examples to make example #1 work without the warning message? Thank you. PS: If you also know a proper way to kill the login warning message, please do tell (the 'standard' "edit the hosts file" advice isn't working for me).
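
    A hedged sketch of the combination: put the redirection at the end of the alias so it applies to the local ssh process as a whole. The warning goes to the client's stderr, while the interactive shell runs on the tty that -t allocates, so silencing stderr locally shouldn't break the session:

        alias gr='ssh -t -p 5xx4x [email protected] "cd /var; ls; /bin/bash -i" 2> /dev/null'

    The 'logout: not login shell' message is expected, by the way: /bin/bash -i starts an interactive shell, not a login shell, so exit is the right way out (or use /bin/bash -il to get a login shell where logout works).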


  • Bluehost: 1 Minute Delays?

    - by feklee
    On Bluehost shared hosting (Apache 2.2 + FastCGI + APC), I have the problem that some requests take almost exactly one minute to respond. Yet time spent in PHP is only two seconds. To demonstrate the issue, I created a temporary test page. Sample output: When asking Bluehost support about the issue, I got the following reply: “the fastcgi process don't stay running they will only stay running for a certan period which would explain the timeouts you are seeing it traffic would spawn new ones. [...]” I understand that spawning new FastCGI processes takes some time. But almost exactly one minute? That must be some timeout. But which timeout may that be? What I want in the end: No request should take longer than five seconds to respond, even if it fails. When I asked Bluehost support to set the Apache TimeOut directive accordingly, they told me: “we do not modify the Apache Config File even on a virtual host level.”
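
    A sketch of how to make the ~60-second pattern measurable from outside (any URL on the account works as the target; the -w fields are standard curl timing variables):

        curl -s -o /dev/null \
             -w 'connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n' \
             http://example.com/test.php

    If time_starttransfer is what sits at ~60 s while time_connect stays small, the wait is server-side -- consistent with a FastCGI process-manager timeout or spawn delay rather than anything in the PHP script itself.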


  • How can I close a port that appears to be orphaned by Xvfb?

    - by Jim Fiorato
    I'm running Xvfb on an FC8 Amazon EC2 image. On occasion Xvfb will crash (I'm unable at the moment to find out the reason for the crash), and after crashing, the TCP port appears to be orphaned: I'm unable to get a PID for any process that may be using it. I'm starting Xvfb with:

        Xvfb :7 -screen 0 1024x768x24 &

    Examples of what I'm working with are below; the Xvfb port is (was) 6007:

        # netstat -ap
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address                Foreign Address              State       PID/Program name
        tcp        0      0 *:ssh                        *:*                          LISTEN      1894/sshd
        tcp        0      0 *:6007                       *:*                          LISTEN      -
        tcp        0    352 ip-10-84-69-165.ec2.int:ssh  c-71-194-253-238.hsd1:51689  ESTABLISHED 2981/0
        udp        0      0 *:bootpc                     *:*                                      1817/dhclient
        udp        0      0 *:bootpc                     *:*                                      1463/dhclient
        Active UNIX domain sockets (servers and established)
        Proto RefCnt Flags   Type   State      I-Node PID/Program name    Path
        unix  2      [ ]     DGRAM             871    668/udevd           @/org/kernel/udev/udevd
        unix  2      [ ACC ] STREAM LISTENING  5385   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  6      [ ]     DGRAM             5353   1867/rsyslogd       /dev/log
        unix  2      [ ]     DGRAM             11861  2981/0
        unix  2      [ ]     DGRAM             5461   1974/crond
        unix  2      [ ]     DGRAM             5451   1904/console-kit-da
        unix  3      [ ]     STREAM CONNECTED  5438   1880/dbus-daemon    /var/run/dbus/system_bus_socket
        unix  3      [ ]     STREAM CONNECTED  5437   1904/console-kit-da
        unix  3      [ ]     STREAM CONNECTED  5396   1880/dbus-daemon
        unix  3      [ ]     STREAM CONNECTED  5395   1880/dbus-daemon
        unix  2      [ ]     DGRAM             5361   1871/rklogd

        # lsof -i
        COMMAND   PID  USER FD  TYPE DEVICE SIZE NODE NAME
        dhclient  1463 root 3u  IPv4 4704        UDP  *:bootpc
        dhclient  1817 root 4u  IPv4 5173        UDP  *:bootpc
        sshd      1894 root 3u  IPv4 5414        TCP  *:ssh (LISTEN)
        sshd      2981 root 3u  IPv4 11825       TCP  ip-10-84-69-165.ec2.internal:ssh->c-71-194-253-238.hsd1.il.comcast.net:51689 (ESTABLISHED)

    Attempting to force the port closed with iptables doesn't seem to work either:

        iptables -A INPUT -p tcp --dport 6007 -j DROP

    I'm at a loss as to how to reclaim/free the port. From what I can tell, this port will remain in this state until the EC2 instance is shut down. So, how can I close this port so I can restart Xvfb?
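
    Two hedged things worth trying before resorting to a reboot: fuser occasionally identifies a socket holder that netstat/lsof miss (e.g. a forked child that inherited the listening fd), and a crashed Xvfb leaves lock/socket files behind that can block a clean restart of the same display:

        # Look for any process still holding TCP port 6007:
        fuser -v -n tcp 6007
        # Clean up stale X artifacts for display :7 before restarting:
        rm -f /tmp/.X7-lock /tmp/.X11-unix/X7

    Alternatively, restarting Xvfb with -nolisten tcp sidesteps the TCP port entirely if clients only connect over the Unix socket.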


  • Writing a script for ash?

    - by rumtscho
    My VPN is behaving funny sometimes, and I have to restart it often. I want to write a script that does that for me. It doesn't have to be anything fancy, just a shortcut for the commands I have to type into the terminal. More specifically: it will look at the running processes; if it finds a running vpnc process, it will kill it; then it will start vpnc. I've written bash scripts of similar complexity, but now I don't have bash, only ash. Until now, the only difference I've noticed is that far fewer commands are available, but then, I don't use it very often. So I have some questions. Is writing ash scripts different from writing bash scripts? Is there something specific to consider when doing it? When the script is ready, how can I deploy it? For bash, I just put the executable file under /usr/lib and run it by typing the file name into the command line; will this work with ash? Are there any special pitfalls to watch out for in the script I want to write? I think the process-killing part may get hairy if I write something that kills the wrong process, but even then, running the script shouldn't break anything permanently, right?
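
    For what's described here, plain POSIX sh syntax is enough, and ash (e.g. BusyBox ash) runs it unchanged -- the things to avoid are bashisms like arrays, [[ ]] and ${var//...} substitutions. A minimal sketch, assuming pidof is available (it usually is on BusyBox systems):

        #!/bin/sh
        # Restart vpnc: kill any running instance, then start a fresh one.
        pid=$(pidof vpnc)
        if [ -n "$pid" ]; then
            kill $pid
            sleep 1
        fi
        vpnc

    Deployment works the same as with bash: make the file executable (chmod +x) and put it in a directory on $PATH, e.g. /usr/local/bin, rather than /usr/lib.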


  • Selenium server causes crazy load on server - how to prevent?

    - by Eric
    I'm running this Linux: Linux host.themepark.com 2.6.32-220.4.1.el6.x86_64 #1 SMP Tue Jan 24 02:13:44 GMT 2012 x86_64 x86_64 x86_64 GNU/Linux. And I run the Selenium stand-alone server on my box with this command:

        java -jar /home/l/cron/selenium-server-standalone-2.24.1.jar > /logs/selenium.log 2>&1 &

    Here's the problem: as soon as I do that, the server load starts skyrocketing. I even went back and downloaded older versions of the Selenium server, but got the same results with 2.23.1, 2.23.0, and 2.19.0. Note that the server load starts going nuts before I issue ANY commands to Selenium or do anything else; all I'm doing is firing up the server, per the command above. This used to work perfectly on my server without causing massive load, so something has changed, but I'm not sure what. My server is a managed VPS, so I don't know if there is some kind of auto-update script that kicked in or what... but it's a problem. (Incidentally, even though the server load climbs like crazy, everything still works: after firing up Selenium, my server creates a screen with Xvfb so Firefox will be happy, then a PHP script talks to Selenium to do what it needs to do before shutting everything down. It takes a LONG time, and the load gets all the way up to 8 [!!!] before it is finished, which kills my web server and makes the main site horribly unresponsive... but it does get everything done.) Any suggestions for what is going on, why it's started doing this, and/or, most importantly, how I can make Selenium not kill the server when it starts up would be GREATLY appreciated!
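
    A hedged diagnostic sketch (assumes the JDK tools are installed; on a JRE-only box jstack won't be present): establish whether the load is JVM CPU churn or something else before blaming Selenium versions.

        # Per-thread CPU usage for the Selenium JVM:
        top -H -p $(pgrep -f selenium-server-standalone)
        # Thread dump, to match the busy thread IDs against stack traces:
        jstack $(pgrep -f selenium-server-standalone)

    Also worth remembering that the load average includes processes waiting on I/O; if the busy threads turn out to be in D state, the problem may be the VPS's disk contention rather than the JVM itself.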

