Search Results

Search found 1675 results on 67 pages for 'kill krt'.

Page 60/67 | < Previous Page | 56 57 58 59 60 61 62 63 64 65 66 67  | Next Page >

  • Creating independent process!

    - by Neha
    I am trying to create a process from a service in C++. The new process is created as a child process, but I want an independent process, not a child process. I am using the CreateProcess function. Because the process I create is a child, killing the process tree at the service level kills it too. I don't want that to happen; the new process should run independently of the service. Please advise. Code:

        STARTUPINFO si;
        PROCESS_INFORMATION pi;
        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        // Start the child process.
        ZeroMemory(&pi, sizeof(pi));
        si.dwFlags = STARTF_USESHOWWINDOW;
        if (bRunOnWinLogonDesktop)
        {
            if (csDesktopName.empty())
                si.lpDesktop = _T("winsta0\\default");
            else
                _tcscpy(si.lpDesktop, csDesktopName.c_str());
        }
        if (bHide)
            si.wShowWindow = SW_HIDE;
        else
            si.wShowWindow = SW_SHOW;
        TCHAR szCmdLine[512];
        _tcscpy(szCmdLine, csCmdLine.c_str());
        if (!CreateProcess(NULL, szCmdLine, NULL, NULL, FALSE,
                           CREATE_NEW_PROCESS_GROUP, NULL, NULL, &si, &pi))
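
    A note on the flags: CREATE_NEW_PROCESS_GROUP mainly affects Ctrl+C/Ctrl+Break routing; it does not change who the parent is. What usually keeps a job-based tree kill from reaping the new process is creating it with DETACHED_PROCESS plus CREATE_BREAKAWAY_FROM_JOB (the service's job must permit breakaway). A tree kill that walks parent PIDs can still find it unless an intermediate launcher starts it and then exits. Below is a minimal C# P/Invoke sketch of such a call; the flag values come from the Win32 headers, and the surrounding class is an illustration rather than drop-in service code:

        using System;
        using System.ComponentModel;
        using System.Runtime.InteropServices;
        using System.Text;

        static class DetachedLauncher
        {
            const uint DETACHED_PROCESS          = 0x00000008;
            const uint CREATE_BREAKAWAY_FROM_JOB = 0x01000000;

            [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
            struct STARTUPINFO
            {
                public int cb;
                public string lpReserved, lpDesktop, lpTitle;
                public int dwX, dwY, dwXSize, dwYSize, dwXCountChars, dwYCountChars,
                           dwFillAttribute, dwFlags;
                public short wShowWindow, cbReserved2;
                public IntPtr lpReserved2, hStdInput, hStdOutput, hStdError;
            }

            [StructLayout(LayoutKind.Sequential)]
            struct PROCESS_INFORMATION
            {
                public IntPtr hProcess, hThread;
                public int dwProcessId, dwThreadId;
            }

            [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
            static extern bool CreateProcess(string lpApplicationName, StringBuilder lpCommandLine,
                IntPtr lpProcessAttributes, IntPtr lpThreadAttributes, bool bInheritHandles,
                uint dwCreationFlags, IntPtr lpEnvironment, string lpCurrentDirectory,
                ref STARTUPINFO lpStartupInfo, out PROCESS_INFORMATION lpProcessInformation);

            public static int StartDetached(string commandLine)
            {
                var si = new STARTUPINFO { cb = Marshal.SizeOf(typeof(STARTUPINFO)) };
                PROCESS_INFORMATION pi;
                // Detach from the service's console and break out of its job object,
                // so killing the service's job no longer takes the new process down.
                if (!CreateProcess(null, new StringBuilder(commandLine), IntPtr.Zero, IntPtr.Zero,
                        false, DETACHED_PROCESS | CREATE_BREAKAWAY_FROM_JOB,
                        IntPtr.Zero, null, ref si, out pi))
                    throw new Win32Exception(Marshal.GetLastWin32Error());
                return pi.dwProcessId;
            }
        }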

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who share my experience: you must give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time. For instance, if a 10K LoC sequential program executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if one ported the code to a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelization optimization techniques were applied? (But you're only given 2-4 days to report the performance. Assume you don't know the algorithm at all; or perhaps it'd be safer to assume the job is impossible to finish.) Therefore, I'm wondering: what would most likely be the fastest way to produce such a performance report? Is it safe to calculate solely from the hardware's capability, such as peak GFLOPS and memory bandwidth? Is there a mathematical way to calculate it? If there is, please demonstrate your method with a corresponding problem description and algorithm, along with the target hardware's specifications. Or does a tool already exist to (roughly) estimate the result of a code port? (Please don't answer 'kill yourself is the fastest way.')
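
    One quick way to bound such an estimate is a roofline-style calculation: a kernel can run no faster than the larger of its compute time (FLOPs divided by peak FLOP rate) and its memory time (bytes moved divided by peak bandwidth). A sketch of the arithmetic in C#; the Tesla C2075 peaks (roughly 515 double-precision GFLOP/s and 144 GB/s) are published figures, while the FLOP count, byte count and efficiency factor are stand-ins you would have to measure from the real algorithm:

        using System;

        class RooflineEstimate
        {
            static void Main()
            {
                // Published Tesla C2075 peaks.
                double peakFlops = 515e9;  // double-precision FLOP/s
                double peakBytes = 144e9;  // bytes/s of DRAM bandwidth

                // Hypothetical workload profile; replace with measured counts.
                double flops = 2e9;        // total floating-point operations
                double bytes = 8e9;        // total DRAM traffic

                double computeSec = flops / peakFlops;
                double memorySec  = bytes / peakBytes;
                double bound = Math.Max(computeSec, memorySec);

                // Assumption: untuned ports often reach only ~20-40% of the bound.
                double efficiency = 0.3;

                Console.WriteLine("Hard lower bound: {0:F2} ms", bound * 1e3);
                Console.WriteLine("Rough estimate:   {0:F2} ms", bound / efficiency * 1e3);
            }
        }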

    Read the article

  • [C++] Parent-Child scheme

    - by rubenvb
    I'm writing a class that holds a pointer to a parent object of the same type (think Qt's QObject system). Each object has one parent, and the parent should not be destroyed when a child is destroyed (obviously).

        class MyClass {
        public:
            MyClass(const MyClass* ptr_parent) : parent(parent) {};
            ~MyClass() { delete[] a_children; };
        private:
            const MyClass* ptr_parent;  // go to MyClass above
            MyClass* a_children;        // go to MyClass below
            size_t sz_numChildren;      // for iterating over a_children
        }

    (Excuse my inline coding, it's only for brevity.) Will destroying the "Master MyClass" take care of all children? No child should be able to kill its parent, because I would then have pointers in my main program to destroyed objects, correct? Why, you might ask? I need a way to iterate through all subdirectories and find all files at a platform-independent level. The creation of this tree will be handled by native APIs, the rest won't. Is this a good idea to start with? Thanks!

    Read the article

  • Web pages that take a long time to load keep reloading, just on Vista on my work n/w...

    - by Ralpharama
    I have a curious problem at work which I've been struggling with since the advent of Windows Vista. We send our own email newsletter to 40,000+ people once a week. The sending code has been in place for years; it's classic ASP/VBScript called through a browser, and it simply loops through each email address, sending to each one. The page takes 40 minutes or more to run, so it has a big timeout value to allow it to do so. All well and good; then suddenly, after Windows Vista was installed on the work PCs, the email sending page behaved oddly: after a period of time it seems to reload the page, endlessly, so the first 20% of our users get multiple copies of the newsletter until we kill the process! If we run the code on an XP machine on the same office network, it works fine. If we run it on Vista outside the office, say on my own ISP, it also works fine! Note, the same effect occurs in IE and FF... So, something about my office network and Vista is causing this. I recently rewrote the newsletter code to split the task into chunks of 100 users at a time, hoping this would fix it, but my most recent test shows that the office network Vista machine once again reloads the same page over and over, even though it takes 1/10th of the time to run... Does anyone have any ideas what it might be, how I can prove it, or, better, how I can get round it? Thanks for your advice :)

    Read the article

  • Need help managing MySql connections

    - by David Jenings
    I'm having trouble finding a clear explanation of connection pooling. I'm building an app using the .NET connector I downloaded from mysql.com. The app only needs one db connection but will be running simultaneously on about 6 machines on my network. Normally, I'd create the connection at startup and just leave it, but I'm seeing lots of posts from people who say that's bad practice. I'm also concerned about timeouts: my app will run 24/7 and there may be extended periods without database activity. I'm leaning toward the following:

        using (MySqlConnection conn = new MySqlConnection(connStr))
        {
            conn.Open();
            // use connection
        }

    But I'm not sure I understand what's going on in the background. Is this actually closing the connection and allowing GC to collect the object, or is there built-in pooling behavior that preserves the object and redelivers it the next time I try to create one? I certainly don't want the app reauthenticating across the network every time I hit the database. Can anyone offer me some advice?
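
    For what it's worth, Connector/NET pools connections by default: disposing a MySqlConnection returns the physical connection to the pool, and the next Open() with an identical connection string reuses it instead of reauthenticating across the network. A minimal sketch with the pooling keys spelled out (server name and credentials are placeholders):

        using System;
        using MySql.Data.MySqlClient;

        class PoolingDemo
        {
            // Placeholder credentials; pooling is on by default, shown here explicitly.
            const string ConnStr =
                "Server=db.example.local;Database=mydb;Uid=app;Pwd=secret;" +
                "Pooling=true;Minimum Pool Size=0;Maximum Pool Size=20;";

            static object QueryScalar(string sql)
            {
                // Dispose() returns the underlying connection to the pool,
                // so the next Open() with the same string reuses it cheaply.
                using (var conn = new MySqlConnection(ConnStr))
                using (var cmd = new MySqlCommand(sql, conn))
                {
                    conn.Open();
                    return cmd.ExecuteScalar();
                }
            }

            static void Main()
            {
                Console.WriteLine(QueryScalar("SELECT NOW()"));
            }
        }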

    Read the article

  • How to convert a string to a variable name

    - by p1xelarchitect
    I'm loading several external JSON files and want to check whether they have been successfully cached before the next screen is displayed. My strategy is to check the name of the array to see if it is an object. My code works perfectly when I check only a single array and hard-code the array name into my function. My question is: how can I make this dynamic?

    Not dynamic (this works):

        $("document").ready(){
            checkJSON("nav_items");
        }
        function checkJSON(){
            if(typeof nav_items == "object"){
                // success...
            }
        }

    Dynamic (this doesn't work):

        $("document").ready(){
            checkJSON("nav_items");
            checkJSON("foo_items");
            checkJSON("bar_items");
        }
        function checkJSON(item){
            if(typeof item == "object"){
                // success...
            }
        }

    Here is a larger context for my code:

        var loadAttempts = 0;
        var reloadTimer = false;

        $("document").ready(){
            checkJSON("nav_items");
        }

        function checkJSON(item){
            loadAttempts++;
            // if nav_items exists
            if(typeof nav_items == "object"){
                // if a timer is running, kill it
                if(reloadTimer != false){
                    clearInterval(reloadTimer);
                    reloadTimer = false;
                }
                console.log("found!!");
                console.log(nav_items[1]);
                loadAttempts = 0; // reset
                // load next screen....
            }
            // if nav_items does not yet exist, try 5 times, then give up!
            else {
                // set a timer
                if(reloadTimer == false){
                    reloadTimer = setInterval(function(){checkJSON(nav_items)},300);
                    console.log(item + " not found. Attempts: " + loadAttempts);
                }
                else {
                    if(loadAttempts <= 5){
                        console.log(item + " not found. Attempts: " + loadAttempts);
                    }
                    else {
                        clearInterval(reloadTimer);
                        reloadTimer = false;
                        console.log("Giving up on " + item + "!!");
                    }
                }
            }
        }
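
    The typeof check fails because item is the string name, not the variable it names. In JavaScript the usual fixes are to index the global object (typeof window[item] == "object", assuming nav_items and friends are globals) or, better, to park every loaded file on a single object keyed by name, so no string-to-variable conversion is needed. A sketch of that second idea in C# terms, with made-up names, since the pattern is the same in any language:

        using System;
        using System.Collections.Generic;

        class JsonCache
        {
            // Instead of variables named nav_items, foo_items, ..., keep one map keyed by name.
            static readonly Dictionary<string, object> Loaded = new Dictionary<string, object>();

            static void Store(string name, object parsedJson)
            {
                Loaded[name] = parsedJson;
            }

            static bool CheckJson(string name)
            {
                // The string is now a lookup key, not a variable name.
                return Loaded.ContainsKey(name);
            }

            static void Main()
            {
                Store("nav_items", new[] { "home", "about" });
                Console.WriteLine(CheckJson("nav_items")); // True
                Console.WriteLine(CheckJson("foo_items")); // False
            }
        }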

    Read the article

  • VB.NET Program Locks Up with Internet Explorer Opened

    - by aaronsj
    I'm using Visual Studio 2008 and developing a VB.NET application. I'm having strange lockup problems with my program, but only when Internet Explorer 8 is open. When I cover my form with another window and then uncover it, I find that it has locked up. My program has no references to IE, and the only thing it even has to do with IE is using Process.Start with a web address. My program works fine and exactly as it should, but only when IE is not open. Does anyone know why a program would lock up only while IE is running?

    Edit: I've done some digging and found the offending thread in my program. I don't know what starts this thread or what it does, but when I kill it, my program no longer freezes. The thread is one of the CreateApplicationContext threads; here are the last few items in its stack trace:

        6  ntkrnlpa.exe+0x897bc
        7  ntdll.dll!KiFastSystemCallRet
        8  mscorwrks.dll!LogHelp_TerminateOnAssert+0x61
        9  mscorwrks.dll!DllUnregisterServerInternal+0x10523
        10 mscorwrks.dll!DllUnregisterServerInternal+0x10542
        11 mscorwrks.dll!StrongNameErrorInfo+0x34387
        12 mscorwrks.dll!StrongNameErrorInfo+0x34815
        13 mscorwrks.dll!CreateApplicationContext+0xbc35
        14 KERNEL32.dll!GetModuleHandleA+0xdf

    Process Explorer says my program is using no CPU and not throwing any exceptions while it is hung.

    Read the article

  • IE8 error when using dynamic form actions

    - by user330711
    Hello all: Please go here to see an iframe-based web app. Click on the map of Australia, choose a city, then buy some tickets. Now you will see the cart form located in the lower right corner. The problem is that in IE8 I cannot delete checked rows from the table, whereas in other browsers such as Firefox 3.6, Opera 10, Safari 4 and Chrome 4, this action works fine. Below is the related JavaScript. It doesn't use jQuery, as part of the requirement is that no framework is allowed! And iframes are my best bet; Ajax would simply kill me under this restriction.

        /* cartForm.js */
        function toDeleteRoutes() // executed before the form is submitted
        {
            if(document.getElementsByClassName('delete_box').length > 0) // there are rows to delete
            {
                document.getElementById('cartForm').action = "./deleteRoutes.php";
                document.getElementById('cartForm').target = "section4";
                return true;  // this enables the form to be submitted as usual
            }
            else
                return false; // there is no more row in the table to delete!
        }

        function toSendEmail() // executed before the form is submitted
        {
            document.getElementById('cartForm').action = "./sendEmail.php";
            document.getElementById('cartForm').target = "section3";
            document.getElementById('delete_btn').disabled = true; // disable delete button now
            return true;  // this enables the form to be submitted as usual
        }

        function toCancelPurchase()
        {
            document.getElementById('cartForm').action = "./cancelPurchase.php";
            document.getElementById('cartForm').target = "section4";
            return true;  // this enables the form to be submitted as usual
        }

    I don't know which part is wrong, or is it just that IE8 screws everything up?

    Read the article

  • vs2002: c# multi threading question..

    - by dotnet-practitioner
    I would like to invoke the heavy-duty method dowork on a separate thread and kill it if it's taking longer than 3 seconds. Is there any problem with the following code?

        class Class1
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main(string[] args)
            {
                Console.WriteLine("starting new thread");
                Thread t = new Thread(new ThreadStart(dowork));
                t.Start();
                DateTime start = DateTime.Now;
                TimeSpan span = DateTime.Now.Subtract(start);
                bool wait = true;
                while (wait == true)
                {
                    if (span.Seconds > 3)
                    {
                        t.Abort();
                        wait = false;
                    }
                    span = DateTime.Now.Subtract(start);
                }
                Console.WriteLine("ending new thread after seconds = {0}", span.Seconds);
                Console.WriteLine("all done");
                Console.ReadLine();
            }

            static void dowork()
            {
                Console.WriteLine("doing heavy work inside hello");
                Thread.Sleep(7000);
                Console.WriteLine("*** finished**** doing heavy work inside hello");
            }
        }
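
    Two caveats about the loop above: it busy-waits, burning a full core for the whole three seconds, and Thread.Abort is unreliable (it can leave shared state corrupted, and it throws PlatformNotSupportedException on .NET Core and later). Even on .NET 1.x, Thread.Join with a timeout blocks cheaply instead of spinning; a minimal sketch:

        using System;
        using System.Threading;

        class TimedWork
        {
            static void Main()
            {
                Thread t = new Thread(new ThreadStart(DoWork));
                t.IsBackground = true;   // don't keep the process alive if we give up on it
                t.Start();

                // Block for up to 3 seconds instead of spinning in a while loop.
                if (!t.Join(TimeSpan.FromSeconds(3)))
                {
                    Console.WriteLine("still running after 3s; abandoning it");
                    // Last resort only; prefer cooperative cancellation (a volatile
                    // flag, or CancellationToken on .NET 4+) over Abort.
                    t.Abort();
                }
                Console.WriteLine("all done");
            }

            static void DoWork()
            {
                Console.WriteLine("doing heavy work");
                Thread.Sleep(7000);
                Console.WriteLine("finished heavy work");
            }
        }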

    Read the article

  • Looking for PyQt4 embeddable terminal widget

    - by redShadow
    I wrote an application that, among other things, launches some "backend" processes to do some stuff. These subprocesses are very likely to fail or show unexpected behavior, since they have to operate in quite hard conditions, so I prefer to give the operator full control over them. NOTE: I am running these processes using a subprocess-module-based class, instead of QProcess, to have some more control over the running process. At the moment I'm using a QPlainTextEdit widget, to which I append the standard output/error from the subprocess, plus some buttons to quickly send some common signals (INT, STOP, CONT, KILL, ...), but:

    - In some cases it would be useful to send some input too. Although it could be done with a text input box, I would prefer using something more "professional".
    - Of course, there is no direct way to interpret special control characters, such as color codes, cursor movement, etc.
    - I had to implement auto-scroll management for the console, but it is not guaranteed to work nicely 100% of the time (sometimes the scroll locking doesn't work as expected, etc.).

    So: does anyone know of something I could use to accomplish these needs? I found qtermwidget, but it seems more oriented toward running a shell process by itself (and the Python bindings seem to let you run /bin/bash only) than communicating with the I/O of an already existing process.

    Read the article

  • Multi threading question..

    - by dotnet-practitioner
    I would like to invoke the heavy-duty method dowork on a separate thread and kill it if it's taking longer than 3 seconds. Is there any problem with the following code?

        class Class1
        {
            /// <summary>
            /// The main entry point for the application.
            /// </summary>
            [STAThread]
            static void Main(string[] args)
            {
                Console.WriteLine("starting new thread");
                Thread t = new Thread(new ThreadStart(dowork));
                t.Start();
                DateTime start = DateTime.Now;
                TimeSpan span = DateTime.Now.Subtract(start);
                bool wait = true;
                while (wait == true)
                {
                    if (span.Seconds > 3)
                    {
                        t.Abort();
                        wait = false;
                    }
                    span = DateTime.Now.Subtract(start);
                }
                Console.WriteLine("ending new thread after seconds = {0}", span.Seconds);
                Console.WriteLine("all done");
                Console.ReadLine();
            }

            static void dowork()
            {
                Console.WriteLine("doing heavy work inside hello");
                Thread.Sleep(7000);
                Console.WriteLine("*** finished**** doing heavy work inside hello");
            }
        }
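
    A Task-based variant for newer runtimes (this sketch assumes .NET 4.5 for the CancellationTokenSource timeout constructor): run the work as a Task and stop waiting after 3 seconds. Note that the work itself only ends early if it cooperatively observes the token:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class TimedTask
        {
            static void Main()
            {
                // Token cancels itself automatically after 3 seconds.
                using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3)))
                {
                    Task work = Task.Run(() => DoWork(cts.Token), cts.Token);
                    try
                    {
                        work.Wait(cts.Token); // returns, or throws OperationCanceledException at 3s
                        Console.WriteLine("finished in time");
                    }
                    catch (OperationCanceledException)
                    {
                        Console.WriteLine("gave up after 3 seconds");
                    }
                }
            }

            static void DoWork(CancellationToken token)
            {
                Console.WriteLine("doing heavy work");
                // Sleep in slices so cancellation can be observed cooperatively.
                for (int i = 0; i < 70; i++)
                {
                    token.ThrowIfCancellationRequested();
                    Thread.Sleep(100);
                }
                Console.WriteLine("finished heavy work");
            }
        }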

    Read the article

  • SelectQuery eating up 100% CPU

    - by modernzombie
    I am doing a query for all the users on the machine, and when it executes it grabs 100% CPU and locks up the system. I have waited up to 5 minutes and nothing happens. In Task Manager, wmiprvse.exe is using all the CPU. When I kill that process, everything returns to normal. Here is my code:

        SelectQuery query = new SelectQuery("Win32_UserAccount",
            "LocalAccount=1 and Domain='" + GetMachine().DomainName + "'");
        using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(query))
        {
            IList<WindowsUser> users = new List<WindowsUser>();
            Console.WriteLine("Getting users...");
            foreach (ManagementObject envVar in searcher.Get())
            {
                Console.WriteLine("Getting " + envVar["Name"].ToString() + "...");
            }
        }

    In the console all I see is "Getting users..." and nothing else. The problem appears to be with searcher.Get(). Does anyone know why this query is taking 100% CPU? Thanks.

    EDIT: OK, I found that the WMI process is only eating 25% CPU, but it doesn't get released if I end the program (the query never finishes). The next time I start an instance, the process goes up to 50% CPU, etc., until it is at 100%. So my new question is: why is the CPU not getting released, and how long should a query like this take to complete?
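
    Win32_UserAccount is a notoriously slow class to enumerate, and on a domain-joined machine the account provider can end up doing far more resolution work than the Domain filter suggests. Two mitigations worth trying: stream the results semisynchronously with a bounded timeout instead of letting WMI buffer everything, and dispose each ManagementObject so wmiprvse.exe can wind down. A sketch (the timeout value is arbitrary):

        using System;
        using System.Management; // add a reference to System.Management.dll

        class LocalUsers
        {
            static void Main()
            {
                var scope = new ManagementScope(@"\\.\root\cimv2");
                var query = new SelectQuery("Win32_UserAccount",
                    "LocalAccount=1 AND Domain='" + Environment.MachineName + "'");

                var options = new EnumerationOptions
                {
                    ReturnImmediately = true,          // stream results instead of buffering all
                    Rewindable = false,                // forward-only enumeration is cheaper
                    Timeout = TimeSpan.FromSeconds(30) // don't hang forever if the provider stalls
                };

                using (var searcher = new ManagementObjectSearcher(scope, query, options))
                {
                    foreach (ManagementObject user in searcher.Get())
                    {
                        using (user) // dispose each object so the provider can release it
                        {
                            Console.WriteLine(user["Name"]);
                        }
                    }
                }
            }
        }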

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:

        > /c2/u1 show
        Unit   UnitType  Status      %RCmpl  %V/I/M  Port  Stripe  Size(GB)
        ------------------------------------------------------------------------
        u1     RAID-6    REBUILDING  4%(A)   -       -     64K     7450.5
        u1-0   DISK      OK          -       -       p5    -       931.312
        u1-1   DISK      OK          -       -       p2    -       931.312
        u1-2   DISK      OK          -       -       p1    -       931.312
        u1-3   DISK      OK          -       -       p4    -       931.312
        u1-4   DISK      OK          -       -       p11   -       931.312
        u1-5   DISK      DEGRADED    -       -       p6    -       931.312
        u1-6   DISK      OK          -       -       p7    -       931.312
        u1-7   DISK      DEGRADED    -       -       p3    -       931.312
        u1-8   DISK      WARNING     -       -       p9    -       931.312
        u1-9   DISK      OK          -       -       p10   -       931.312
        u1/v0  Volume    -           -       -       -     -       7450.5

    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And the "rebuild" has been stuck at 4% for ten hours now. So:

    - How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC], according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in.
    - Can I pull out the drive that is demonstrably failing (the WARNING drive), when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives?
    - Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad, and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt wrt the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action?

    Advice outside the spiritual would be much appreciated. Thanks.

    Read the article

  • Cacti: "An internal Net-Snmp error condition detected in Cacti snmp_count"

    - by Recc
    There's the odd forum topic about an error as obscure as this one, but I haven't seen any for snmp_count in particular. Also, I don't see graphing problems, though I can't simply go and eyeball all graphs. However, the poller does time out and has to be stopped by its internal process, preventing overruns. If I filter the flood of this error out of the log, I don't get anything else except the poller timeout:

        06/12/2014 12:48:00 PM - POLLER: Poller[0] Maximum runtime of 58 seconds exceeded. Exiting.
        06/12/2014 12:48:00 PM - SYSTEM STATS: Time:58.8566 Method:spine Processes:1 Threads:40 Hosts:1923 HostsPerProcess:1923 DataSources:61584 RRDsProcessed:0
        06/12/2014 12:48:00 PM - SPINE: Poller[0] ERROR: Spine Timed Out While Processing Hosts Internal

    In the running processes I saw /usr/local/spine/spine 0 2053, which is always left behind. When I kill it, the flooding of the error stops. Of course, it's the same on the next poll run as it goes through the devices. 2053 is apparently the DB ID for a device. I deleted that device completely to see if that stops it. It doesn't; instead 2052 is seen there. I suspect it'll be the same if I keep deleting devices, which I will not do. This started happening midday, when I wasn't doing anything to the Cacti server. I have tried reducing Maximum Threads per Process to 1 and Number of PHP Script Servers to 1. I've been running it at 10 script servers / 40 threads with a poll cycle time of about 20 seconds for months. I just found out that running snmpwalk on any host begins returning values but then times out halfway through. This doesn't happen from different servers on the network, which still suggests it's a problem local to the Cacti server. Any suggestions? For one polling cycle I changed to cmd.php instead; then I started getting errors like:

        CMDPHP: Poller[0] Host[45] DS[541] WARNING: Result from SNMP not valid. Partial Result: U

    Perhaps as expected. Looking closely, I see that every snmpwalk I do is interrupted at the same place, as if some byte limit is hit and the connection torn down.

    Read the article

  • No network upsets gnome

    - by Darren Cook
    An issue that has been bothering me for over a year now. My notebook, running Ubuntu 10.04, is almost always on a wired connection with a static IP address, and a remote DNS server. The network is configured with entries in /etc/network/interfaces and /etc/resolv.conf, rather than with whatever the GNOME UI tool was (*). But if I'm out, or simply unplug the network cable, a few things get weird. Specifically, the gnome-panel stops working: it is still there, but isn't updating. And opening a Nautilus window (e.g. to look at files on the local disk) has huge timeouts. By that I mean it will not open the window for something like 30 or 60 seconds, but when it does finally open I can see the files and it is perfectly usable. Everything else works fine: alt-tab between windows, etc. I use the command line to find the pid of gnome-panel, kill it, wait a couple of seconds, and it opens up a fresh panel which is normally usable. (Something like 10 minutes later it will have locked/crashed again; the same for the Nautilus windows.) I'm guessing this is a DNS issue? Would setting up a local DNS server help? Guess number 2 was related to having a file server mount (Samba, though running on another Linux box) and symbolic links to files and directories on that file server on my desktop. My question is a bit vague... Does anyone recognize these symptoms and have a suggestion? Or do you have some troubleshooting suggestions for narrowing down the problem?

    My /etc/hosts:

        127.0.0.1 localhost
        127.0.1.1 myhost

        # The following lines are desirable for IPv6 capable hosts
        ::1     localhost ip6-localhost ip6-loopback
        fe00::0 ip6-localnet
        ff00::0 ip6-mcastprefix
        ff02::1 ip6-allnodes
        ff02::2 ip6-allrouters
        ff02::3 ip6-allhosts
        127.0.0.1 testsite.local
        # Other test website URLs here

    UPDATE: Some timings for opening desktop folder icons, after pulling out the network cable. A subdirectory of the desktop took 23 seconds to open. Content appears immediately (just 8 files; it has no further subdirectories). The home directory icon took 12 seconds to open, but then took about 30 seconds for the files to appear. I closed it and tried again. This time it took 18 seconds to open, but then 70 seconds before anything appeared.

    *: I couldn't work out how to use the GNOME network tool for my needs, which include 3-4 static IPs for testing virtual hosts locally.

    Read the article

  • Touch gestures in IE not working without explorer.exe being run once

    - by Michael
    Edit: Rephrasing my question. Upon further troubleshooting, I can conclude that touch gestures (dragging, pinch to zoom, touch-and-hold right click) in Internet Explorer start to work when:

    - The system has been running for ~2 minutes. This coincides with the delayed start of services.
    - Explorer.exe is run, then killed. I assume explorer.exe starts some services?

    The services with delayed start are as follows:

    - Security Center
    - Software Protection
    - Windows Defender, Search and Update
    - Windows Font Cache Service
    - Microsoft .NET Framework NGEN v4.0.30319_X64 and X86

    I see no connection between these services and touch gestures, but just in case, I manually tried starting these services, without luck. What else happens delayed after system boot that also happens when Explorer is started?

    Old question: Details: Internet Explorer 9 and Windows 7 Professional, running on an HP TouchSmart (touch screen PC). It is going to be a kiosk PC (running a custom GUI for displaying websites).

    Scenario 1: When running Internet Explorer as a normal program in Windows 7, touch functions work perfectly. I can scroll the website by dragging it with my finger, I can pinch-zoom, and I can touch-and-hold to right click. I now change the default shell in Windows to Internet Explorer (i.e. IE starts instead of explorer.exe). Internet Explorer of course starts up when logging in. However, touch functions are reduced to basic clicking (no dragging, no pinch zooming, no touch-and-hold right click). Then I manually start explorer.exe, and the touch functions work again! And here is the weird part: when I kill explorer.exe, the touch functions keep working, even if I close IE and start a new instance.

    Scenario 2: Exactly the same, but instead of changing the default shell to Internet Explorer, I change it to my own program, which uses an embedded Internet Explorer ("WebBrowser"). The same thing happens.

    What I've tried: Autorun programs: when explorer.exe launches, it launches all the autorun programs. There are no relevant programs being run by Explorer, but just in case, I have manually started all the autorun programs, so that the environment is identical (but without explorer.exe) to a normal login. It still does not work (until I launch explorer.exe). Specifically, TabTip.exe, TabTip32.exe and wisptis.exe are all running. All services are also started.

    To sum it up: running explorer.exe once changes something in the touch capabilities of Internet Explorer. It doesn't matter whether explorer.exe is running, as long as it has been run once. Does anyone know what causes this behavior? Or how I can circumvent it neatly?

    Read the article

  • Apache taking up too much CPU

    - by andrewtweber
    I'm trying to manage a server on Amazon for a network of sites that receives about 100 million pageviews per month. Unfortunately, nobody out of my team of 5 developers has much server admin experience. Right now we have the MaxClients set to 1400. Currently our traffic is about average, and we have 1150 total Apache processes running, which use about 2% CPU each! Out of those 1150, 800 of them are currently sleeping, but still taking up CPU. I'm sure there are ways to optimize this. I have a few thoughts:

    - It appears Apache is creating a new process for every single connection. Is this normal?
    - Is there a way to more quickly kill the sleeping processes?
    - Should we turn KeepAlive on? Each page loads about 15-20 medium-sized graphics and a lot of javascript/css.

    So, here's our Apache setup. We do plan on contracting a server admin asap, but I would really appreciate some advice until we can find someone.

        Timeout 25
        KeepAlive Off
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5

        <IfModule prefork.c>
            StartServers        100
            MinSpareServers     20
            MaxSpareServers     50
            ServerLimit         1400
            MaxClients          1400
            MaxRequestsPerChild 5000
        </IfModule>

        <IfModule worker.c>
            StartServers        4
            MaxClients          400
            MinSpareThreads     25
            MaxSpareThreads     75
            ThreadsPerChild     25
            MaxRequestsPerChild 0
        </IfModule>

    Full top output:

        top - 23:44:36 up 1 day, 6:43, 4 users, load average: 379.14, 379.17, 377.22
        Tasks: 1153 total, 379 running, 774 sleeping, 0 stopped, 0 zombie
        Cpu(s): 71.9%us, 26.2%sy, 0.0%ni, 0.0%id, 0.0%wa, 0.0%hi, 1.9%si, 0.0%st
        Mem: 70343000k total, 23768448k used, 46574552k free, 527376k buffers
        Swap: 0k total, 0k used, 0k free, 10054596k cached

        PID   USER   PR NI VIRT  RES  SHR  S %CPU %MEM TIME+     COMMAND
        1756  mysql  20 0  10.2g 1.8g 5256 S 19.8 2.7  904:41.13 mysqld
        21515 apache 20 0  396m  18m  4512 R 2.1  0.0  0:34.42   httpd
        21524 apache 20 0  396m  18m  4032 R 2.1  0.0  0:32.63   httpd
        21544 apache 20 0  394m  16m  4084 R 2.1  0.0  0:36.38   httpd
        21643 apache 20 0  396m  18m  4360 R 2.1  0.0  0:34.20   httpd
        21817 apache 20 0  396m  17m  4064 R 2.1  0.0  0:38.22   httpd
        22134 apache 20 0  395m  17m  4584 R 2.1  0.0  0:35.62   httpd
        22211 apache 20 0  397m  18m  4104 R 2.1  0.0  0:29.91   httpd
        22267 apache 20 0  396m  18m  4636 R 2.1  0.0  0:35.29   httpd
        22334 apache 20 0  397m  18m  4096 R 2.1  0.0  0:34.86   httpd
        22549 apache 20 0  395m  17m  4056 R 2.1  0.0  0:31.01   httpd
        22612 apache 20 0  397m  19m  4152 R 2.1  0.0  0:34.34   httpd
        22721 apache 20 0  396m  18m  4060 R 2.1  0.0  0:32.76   httpd
        22932 apache 20 0  396m  17m  4020 R 2.1  0.0  0:37.34   httpd
        22933 apache 20 0  396m  18m  4060 R 2.1  0.0  0:34.77   httpd
        22949 apache 20 0  396m  18m  4060 R 2.1  0.0  0:34.61   httpd
        22956 apache 20 0  402m  24m  4072 R 2.1  0.0  0:41.45   httpd

    Read the article

  • xterm not wrapping text properly

    - by mulllhausen
    I'm configuring both my gnome-terminal and xterm columns (I still haven't picked which of these I will be using) and I have a couple of issues I would like to fix:

    - the typing area seems to be smaller (fewer columns) than the display area
    - typed text does not wrap to the next line when it reaches the end; it just continues back around on the same line, overwriting the prompt (I have set a custom bash prompt with PS1, in case this is relevant)

        $ lsb_release -a
        No LSB modules are available.
        Distributor ID: Debian
        Description:    Debian GNU/Linux 7.1 (wheezy)
        Release:        7.1
        Codename:       wheezy

        $ echo $TERM
        xterm

        [peter@pc ~] $ stty -a
        speed 38400 baud; rows 52; columns 126; line = 0;
        intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = M-^?; eol2 = M-^?; swtch = M-^?;
        start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
        -parenb -parodd cs8 hupcl -cstopb cread -clocal -crtscts
        -ignbrk brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc ixany imaxbel iutf8
        opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
        isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

        [peter@mine ~] $ # the column width only goes up to here ------------------------------------------------>

    The results are identical in both xterm and gnome-terminal 3.4.1.1, and as you can see, the output of the stty -a command goes right up to the edge of the screen, while the typing does not go that far. I have found that I can get the desired result by setting the columns to a very large number, e.g.:

        $ stty cols 1800

    This fixes both problems. Is it the right way to go about solving this problem? Will this "break" any of the output from programs? So far I have tried top and stty -a and these seem OK.

    More info, as requested in the comments: I found that if I cat some input into a file, then the columns actually stretch the full width of the terminal window:

        [peter@mine applications] $ cat > /tmp/asd
        aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaasssssssssssssssssssssssssssssssssssssssssssssssssssssssssssqqqqqqqqqqqqqqqqqqqq

    Does this imply that it is actually bash that is restricting the number of columns, and not the terminal? If so, how do I alter the number of columns in bash?

    Read the article

  • Setting Timeouts: SQL Server 2008/IIS 7.5

    - by Julie
    We have recently migrated from a Win 2003/SQL Server 2000 system to Win 2008 64 bit R2, SQL Server 2008 R2. Our websites are in classic asp, and this can't be changed to another scripting language at this time. On the old server, if I got stuck in some kind of endless loop, the page would throw an error. On the new server, I have a page that has some sort of looping problem, that even though the SQL SP is called only once (and runs fine run as a query on the server) it pegs SQL server and therefore locks all of our websites. I'll get my code figured out, no biggie. But I need to make sure the server times out when this happens. (The page I'm working on runs fine with certain instances of the query, and locks with others using a different query variable. I can't have something like that sneak up on me on a page I haven't touched for three years.) I can't figure out how an SP that runs once on the server, from an ASP page, is tying up SQL server this way. It's obviously some sort of a timeout issue, but I can't figure out where/which timeout values to change. I actually have to remote desktop to the server and kill the process in SQL server. I'm afraid I'm a generalist, and server management is not my thing, even though it's my responsibility, so I am almost certain to have questions about any answer that I receive. How can I track this down? What settings do I need to change? More info: It's not SQL Server On our test site, I created an ASP file that just did an endless loop (do while 1=1) and had the same problem - the other websites wouldn't load - without SQL server being involved. So I think the reason the process was hanging is that the page wasn't timing out as it should, and so the connection to SQL was never closed. Killing the process in SQL server would reset the page somehow. For my intentional endless loop, I had to refresh the app pool to get rid of it. This points more to either IIS or the ASP settings. The ASP timeouts are set to whatever the default were when the server was first loaded. I still can't figure out why one file is locking up all websites, though. Again, that didn't happen on the old server.

    Read the article

  • Django + gunicorn + virtualenv + Supervisord issue

    - by Florian Le Goff
    Dear all, I have a strange issue with my virtualenv + gunicorn setup, only when gunicorn is launched via supervisord. I do realize that it may very well be an issue with my supervisord, and I would appreciate any feedback on a better place to ask for help...

    In a nutshell: when I run gunicorn from my user shell, inside my virtualenv, everything works flawlessly. I'm able to access all the views of my Django project. When gunicorn is launched by supervisord at system startup, everything is OK. But if I have to kill the gunicorn_django processes, or if I perform a supervisord restart, once gunicorn_django has relaunched, every request is answered with a weird traceback:

        (...)
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/__init__.py", line 77, in
            connection = connections[DEFAULT_DB_ALIAS]
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 92, in __getitem__
            backend = load_backend(db['ENGINE'])
        File "/home/hc/prod/venv/lib/python2.6/site-packages/Django-1.2.5-py2.6.egg/django/db/utils.py", line 50, in load_backend
            raise ImproperlyConfigured(error_msg)
        TemplateSyntaxError: Caught ImproperlyConfigured while rendering: 'django.db.backends.postgresql_psycopg2'
        isn't an available database backend. Try using django.db.backends.XXX, where XXX is one of:
        'dummy', 'mysql', 'oracle', 'postgresql', 'postgresql_psycopg2', 'sqlite3'
        Error was: cannot import name utils

    Full stack available here: http://pastebin.com/BJ5tNQ2N

    I'm running...

    - Ubuntu/maverick (up-to-date)
    - Python = 2.6.6
    - virtualenv = 1.5.1
    - gunicorn = 0.12.0
    - Django = 1.2.5
    - psycopg2 = '2.4-beta2 (dt dec pq3 ext)'

    gunicorn configuration:

        backlog = 2048
        bind = "127.0.0.1:8000"
        pidfile = "/tmp/gunicorn-hc.pid"
        daemon = True
        debug = True
        workers = 3
        logfile = "/home/hc/prod/log/gunicorn.log"
        loglevel = "info"

    supervisord configuration:

        [program:gunicorn]
        directory=/home/hc/prod/hc
        command=/home/hc/prod/venv/bin/gunicorn_django -c /home/hc/prod/hc/gunicorn.conf.py
        user=hc
        umask=022
        autostart=True
        autorestart=True
        redirect_stderr=True

    Any advice? I've been stuck on this one for quite a while. It seems like some weird memory limit, as I'm not enforcing anything special:

        $ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 20
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 16382
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 8192
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) unlimited
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Thank you.

    Read the article

  • Archlinux/atheros WLAN configuration troubles

    - by GrinReaper
    I'm trying to configure Arch Linux to use my wireless network adapter. It's quite troublesome. From what I've gathered, it's an Atheros network adapter, using the ath5k driver/module... I can't get it to work; any ideas? Here's some of the output from my tinkering:

        # lspci | grep -i net
        00:0a.0 Ethernet controller: nVidia Corporation MCP67 Ethernet (rev a2)
        03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)

        # lsusb
        ...
        Bus 004 Device 003: ID 03f0:17d Hewlett Packard Wireless (Bluetooth + WLAN Interface [Integrated Module]

        # ping -c 3 www.google.com
        ping: unknown host www.google.com

        # ping -c 3 8.8.8.8
        ping: network is unreachable

        # lspci -v
        03:00.0 Ethernet controller: Atheros Communications Inc. AR5001 Wireless Network Adapter (rev 01)
        ...
        Kernel driver in use: ath5k
        Kernel modules: ath5k

        # dmesg | grep ath5k
        registered as phy0
        registered led device
        ath5k: atheros chip found
        PCI INT A disabled
        registered led device
        registered as phy1

        # ip addr | sed '/^[0-9]/!d;s/: <.*$//'
        1: lo
        2: eth1
        3: eth0

        # ip link set <interface> up
        RTNETLINK answers: Operation not possible due to RF-kill

    Also, is there a way to dump text from the command line to a text file so I can just copy pasta? Sorry, first time using a Linux distro...

    EDIT: So I just tried this. (I actually just did this twice. I can't tell which setting is on/off for my wireless adapter; the lights are blue all the time now.)

        # rfkill list
        0: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: yes
        1: hp-bluetooth: Bluetooth
                Soft blocked: no
                Hard blocked: yes
        3: phy1: Wireless LAN
                Soft blocked: no
                Hard blocked: yes

        # rfkill list
        0: hp-wifi: Wireless LAN
                Soft blocked: no
                Hard blocked: no
        1: hp-bluetooth: Bluetooth
                Soft blocked: no
                Hard blocked: no
        3: phy1: Wireless LAN
                Soft blocked: no
                Hard blocked: yes
        7: hci0: Bluetooth
                Soft blocked: no
                Hard blocked: no

    I've dug around some other articles and it seems ath5k is supposed to be preferable to madwifi, so should I be using madwifi? I'm 99% sure I disabled the hard block (by turning it ON) but, as shown above, the phy1 wireless LAN is STILL hard-blocked. What gives? Maybe I've made some more fundamental error in a basic config file?

    EDIT: I've fixed the hard block. I've tried pinging www.google.com, but to no avail. I get:

        ping: unknown host www.google.com

    In the Arch wiki: edit /etc/hosts and add the same HOSTNAME you entered in /etc/rc.conf:

        127.0.0.1 archlinux.domain.org localhost.localdomain localhost archlinux

    To my understanding, the hostname is just user-specified, based on preference(?)

    My /etc/rc.conf:

        HOSTNAME="gestalt"

    My /etc/hosts:

        127.0.0.1 localhost.localdomain localhost gestalt

    But should it be the following?

        127.0.0.1 localhost.domain.org localhost.localdomain localhost gestalt

    Read the article

  • Malware - Technical analysis

    - by nullptr
    Note: Please do not mod down or close. I'm not a stupid PC user asking you to fix my PC problem; I am intrigued and am having a deep technical look at what's going on. I have come across a Windows XP machine that is sending unwanted P2P traffic. I have run 'netstat -b', and explorer.exe is sending out the traffic. When I kill this process the traffic stops, and obviously Windows Explorer dies. Here is the header of the stream from the Wireshark dump (x.x.x.x is the machine's IP):

        GNUTELLA CONNECT/0.6
        Listen-IP: x.x.x.x:8059
        Remote-IP: 76.164.224.103
        User-Agent: LimeWire/5.3.6
        X-Requeries: false
        X-Ultrapeer: True
        X-Degree: 32
        X-Query-Routing: 0.1
        X-Ultrapeer-Query-Routing: 0.1
        X-Max-TTL: 3
        X-Dynamic-Querying: 0.1
        X-Locale-Pref: en
        GGEP: 0.5
        Bye-Packet: 0.1

        GNUTELLA/0.6 200 OK
        Pong-Caching: 0.1
        X-Ultrapeer-Needed: false
        Accept-Encoding: deflate
        X-Requeries: false
        X-Locale-Pref: en
        X-Guess: 0.1
        X-Max-TTL: 3
        Vendor-Message: 0.2
        X-Ultrapeer-Query-Routing: 0.1
        X-Query-Routing: 0.1
        Listen-IP: 76.164.224.103:15649
        X-Ext-Probes: 0.1
        Remote-IP: x.x.x.x
        GGEP: 0.5
        X-Dynamic-Querying: 0.1
        X-Degree: 32
        User-Agent: LimeWire/4.18.7
        X-Ultrapeer: True
        X-Try-Ultrapeers: 121.54.32.36:3279,173.19.233.80:3714,65.182.97.15:5807,115.147.231.81:9751,72.134.30.181:15810,71.59.97.180:24295,74.76.84.250:25497,96.234.62.221:32344,69.44.246.38:42254,98.199.75.23:51230

        GNUTELLA/0.6 200 OK

    So it seems the malware has hooked into explorer.exe and hidden itself quite well, as a Norton scan doesn't pick anything up. I have looked in Windows Firewall, and it shouldn't be letting this traffic through. I have had a look at the messages explorer.exe is sending in Spy++, and the only related ones I can see are socket connections etc... My question is: what can I do to look into this more deeply? What does malware achieve by sending P2P traffic? I know the easiest fix is to reinstall Windows, but I want to get to the bottom of it first, just out of interest.

    Edit: Had a look at Dependency Walker and Process Explorer. Both great tools. Here is an image of the TCP connections for explorer.exe in Process Explorer: http://img210.imageshack.us/img210/3563/61930284.gif

    Read the article

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after reboot, and about 50% of the time it is totally correct.

    Now the longer version: I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known bug in the new X server driver regarding Xinerama. This messes up behaviour if the mouse goes left of or above screen number 0, i.e. I had to make my left-most monitor screen 0. Everything finally worked out: I got my 3-monitor setup back, with Xinerama enabled to get one big desktop stretched over 3 screens.

    Now the fun part: every time I start up my machine, only one of the 3 monitors gets a signal and is woken up: it only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into this one screen. If I go into nvidia-settings I only see one physical device, although all 3 are connected and have power. When I look in xorg.conf I can still see my old setup with 3 devices, 3 screens, Xinerama active, etc. But I was totally unable to get the 3 monitors to work. (I tried unplugging monitors, reconfiguring the whole nvidia setup, ...)

    But it gets even better: when I restart my machine (i.e. choose the restart option from the Ubuntu menu) it shuts down and tries to restart. The restart then gets stuck after showing the Ubuntu splash screen with the 'loading bar' (the moving dots thingy), and I am forced to kill the machine by cutting power. But after the power cut the machine boots up normally, and suddenly I get my 3-monitor setup back, working. That is, until the next time I shut down and start up, where it all starts over again and I only have one monitor... (see above)

    I really have a hard time seeing where the error is. It must be that the restart boot somehow differs from the 'normal' boot. But the fact that it gets stuck, and that I need to cut power, which then basically triggers a 'normal' boot, does not really support this theory...

    My setup (please tell me if you need further info):

    - 3 monitors as 3 screens as one desktop (with Xinerama)
    - 2 nvidia cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1
    - Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...)

    I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...

    Read the article

  • Converting an EC2 AMI to vmdk image

    - by Reed G. Law
    I've come quite close to getting Amazon Linux to boot inside VirtualBox, thanks to this answer and these websites. A quick overview of the steps I've taken:

    1. Launch an EC2 instance with the Amazon Linux 2011.09 64-bit AMI.
    2. dd the contents of the EBS volume over ssh to a local image file.
    3. Mount the image file as a loopback device and then to a local mount point.
    4. Create a new empty disk image file, partition it with an offset for a bootloader, and create an ext4 filesystem.
    5. Mount the new image's partition and copy everything over from the EC2 image.
    6. Install grub (using Ubuntu's grub-legacy-ec2 package, not grub2).
    7. Convert the image file to vmdk using qemu-img.
    8. Create a new VirtualBox VM with the vmdk.

    Now the VM boots, grub loads, and the kernel is found. But it fails when it tries to mount the root device:

        dracut Warning: No root device "block:/dev/xvda1" found
        dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
        dracut Warning: Signal caught!
        dracut Warning: Boot has failed. To debug this issue add "rdshell" to the kernel command line.
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init Not tainted 2.6.35.14-107.1.39.amzn1.x86_64 #1

    I have tried changing /boot/grub/menu.lst to find the root device by label and UUID, but nothing works. I'm guessing the Xen kernel is not compatible with VirtualBox. The reasoning behind all this effort is to make a Vagrant box that is as close as possible to the production environment, so deploys can be tested locally. I know it's cheap to do test runs on EC2, but poor connectivity often ruins the experience. Plus, it would be really nice to have a virtual machine with the production environment so that co-workers don't have to install everything under the sun just to get up and running with app development. If I were to try running a different kernel, what kernel could I get that is as close as possible to Amazon Linux 2011.09?

    Read the article

  • How might I stop BACKSCATTER using Qmail?

    - by alecb
    New to ServerFault, please pardon if my details are too much. A Linux box acting as virtual host for domain hosting; runs CentOS; runs Parallels Plesk 9.x. Regardless of the following, the SPAM keeps flowing in at 1-3 messages per second.

    An explanation of the problem: "The xinetd service listens for SMTP connections and forwards to qmail-smtpd. The qmail service only processes the queue, but does not control messages coming into the queue... that's why stopping it has no effect. If you stop xinetd AND qmail, then kill any open qmail-smtpd processes, all mail flow comes to a stop SOMETIMES. The problem is, qmail-smtpd is not smart enough to check for valid mailboxes on the localhost before accepting the mail. So it accepts bad mail with a forged reply-to address, which gets processed in the queue by qmail. Qmail cannot deliver locally and bounces to the forged reply-to address."

    We believe the fix is to patch the qmail-smtpd process to give it the intelligence to check for the existence of local mailboxes BEFORE accepting the message. The problem is that when we try to compile the chkuser patch, we run into failures due to the Plesk control panel. Is anyone aware of something we could do differently or better?

    Other things that have NOT worked thus far:

    - Turning off any and all mail processes (to check, as an indicator, whether an individual account has been compromised; this has been verified as NOT the case)
    - Turning off mail AND HTTP server processes (in the case of a compromised formmail)
    - Running Exim in lieu of qmail (easy/quick install, but xinetd forces Exim to close and restarts qmail on its own)
    - Turning on SPF protection via the Plesk GUI; does not help
    - Turning on greylisting via the Plesk GUI; does not help
    - Disabling bounce notifications via the command line

    Things that MIGHT work but have complications:

    - Using Postfix instead of qmail (no knowledge of Postfix, and I don't want to bother with it unless anyone knows it has the potential to handle backscatter WELL before investing the time)
    - As mentioned above, compiling the chkuser patch, which we believe will STOP this problem along with qmail (because of Plesk in the mix, the compile fails every time, and Parallels Plesk support is unresponsive unless I cough up MONEY)

    If I don't clear the SPAM out of the outgoing mail queue nightly, it clogs up with millions of SPAMs and will bring down the OUTGOING email services. Any and all help welcome and appreciated!

    Read the article
