Search Results

Search found 5335 results on 214 pages for 'agile processes'.


  • Implementing an Intelligent File Transfer Software in Java over TCP/IP

    - by whyjava
    Hello, I am working on a proposal where we have to implement software that can move files from one source to a destination. The overall goal of this project is to create intelligent file transfer. The software will have three components:
    1) Broker: the module that communicates with other brokers, monitors files, moves files, retrieves configurations from the Configuration Manager, supplies process information to the Monitor, archives files, writes all process data to log files, and escalates issues if necessary.
    2) Configuration Manager: a web-based application used to configure brokers and deploy the configuration to all of them.
    3) Monitor: a web-based application used to monitor each Broker in the environment.
    The project has to be built in Java, with TCP/IP as the protocol for file transfer; the client does not want to use FTP. File transfer seems very easy, until there are several processes waiting to pick the file up automatically. Then several problems arise:
    - How can we guarantee the file is received at the destination?
    - If a file isn't received the first time, how do we retry (even after a restart or power breakdown)?
    - How does the receiver know that the file it received is complete?
    - How can we transfer multiple files synchronously?
    - How can we protect the bandwidth, so file transfer isn't blocking other processes?
    - How does one interoperate between multiple OS platforms?
    - What about authentication?
    - How can we monitor the workflow?
    - Auditing / logging
    - Archiving
    Can you please provide answers to some of these? Thanks
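    A minimal sketch of one common answer to the "is the file complete?" question, written in Python rather than Java purely for brevity: prefix each file with its length and a SHA-256 digest, and have the receiver acknowledge only after verifying both. The framing and names below are illustrative, not part of any existing product.

        import hashlib
        import struct

        def recv_exact(sock, n):
            # Loop because recv() may return fewer bytes than requested.
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise IOError("connection closed mid-transfer")
                buf += chunk
            return buf

        def send_file(sock, data):
            # Frame: 8-byte length, 32-byte SHA-256 digest, then the payload.
            sock.sendall(struct.pack(">Q", len(data)))
            sock.sendall(hashlib.sha256(data).digest())
            sock.sendall(data)
            if recv_exact(sock, 2) != b"OK":   # receiver verified the digest
                raise IOError("receiver rejected the file; retry later")

        def recv_file(sock):
            (length,) = struct.unpack(">Q", recv_exact(sock, 8))
            digest = recv_exact(sock, 32)
            data = recv_exact(sock, length)
            ok = hashlib.sha256(data).digest() == digest
            sock.sendall(b"OK" if ok else b"NO")
            return data if ok else None

    For crash-safe retries, the sender can keep each file in an "outbox" directory and delete it only after the OK arrives; a restart or power failure then simply means the file is sent again.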

    Read the article

  • Distributed Cache with Serialized File as DataStore in Oracle Coherence

    - by user226295
    Weird, but I am investigating Oracle Coherence as a substitute for a distributed cache. My primary problem is that we don't have a distributed cache as such in our app as of now. That's my major concern, and that's what I want to implement. So, let's say I take up a machine and start a new (third) reading process: it will be able to connect to the cache, listen to it, and hold a full copy of the cache, triplicated (as of now it's duplicated). That's a waste from a common-sense standpoint too. The size of the cache is 2 GB, and not going distributed is limiting us. That brings me to Coherence. But then, we don't have a database as a persistent store either: our archival processes are our persistent store (90 days' worth of data). OK, now multiply that by somewhere around 2 GB * 90 (that's the bare minimum we want to keep). During a preliminary/intermediate analysis of Coherence as a solution, a (supposedly) brilliant thought crossed my mind: why not have this as the persistent storage behind my distributed cache? Does Oracle Coherence support that? I would get rid of the archiving infrastructure too (I hate daemon archiving processes). For some strange reason, I don't want to go to the DB to replace those flat files. What say you: can Coherence be my savior? Any other stable alternatives? (Coherence is imposed on me by the big guys, FYI.)
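    For what it's worth, Coherence's documented way to plug a custom persistent store behind a cache is a CacheStore/CacheLoader implementation (read-through/write-through); whether a flat-file store at ~180 GB is wise is a separate question. A rough sketch of the write-through idea, in Python purely for illustration (the file layout and class are invented, not Coherence API):

        import os
        import pickle

        class FileBackedStore:
            """Write-through store: every put is also persisted, one file per key."""

            def __init__(self, directory):
                self.directory = directory
                os.makedirs(directory, exist_ok=True)

            def _path(self, key):
                return os.path.join(self.directory, "%s.bin" % key)

            def store(self, key, value):    # Coherence analogue: CacheStore.store()
                with open(self._path(key), "wb") as f:
                    pickle.dump(value, f)

            def load(self, key):            # Coherence analogue: CacheLoader.load()
                try:
                    with open(self._path(key), "rb") as f:
                        return pickle.load(f)
                except FileNotFoundError:
                    return None

            def erase(self, key):           # Coherence analogue: CacheStore.erase()
                try:
                    os.remove(self._path(key))
                except FileNotFoundError:
                    pass

    With a store like this behind the cache, the separate archiving daemons become redundant: the cache's own eviction and write-through policies decide what lives on disk.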

    Read the article

  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago when I first encountered the CMM for software I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving its processes. But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 - the highest level - when I can see first-hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses. So I'm wondering: has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as Six Sigma) have been equally or more beneficial? Does anyone still believe? As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.

    Read the article

  • Is there a better way to keep track of session variable creation/access throughout different pages?

    - by Brandon
    Here's what I am working on. At my website I have multiple processes, each containing multiple steps. In one of the processes, an error-checking routine is executed before proceeding to the next step of that process. A session var is set indicating the error status, and the user is either redirected back to the referrer or shown the next page's contents. This kind of functionality, I believe, is common throughout web development. The issue is that session vars are left lying around and are not cleaned up properly, and at times this introduces undesired behavior. My website is growing, and I find that I am requiring more and more session vars to keep track of different system and error states. So I was thinking about creating a kind of "session variable keeper" to keep track of session var usage. The idea is fairly simple: it has the notion of a context (e.g. the registration process) and allows access to a predefined set of session vars within that context. In addition, each var and context pair is tied to an action so processing can proceed to some form of event handling. And if you haven't noticed, I'm new to web development. Any thoughts or comments on the idea I am proposing would be greatly appreciated. The back-end is written in PHP/MySQL.
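    The back end here is PHP, but the "context" idea can be sketched in any language; below is a Python sketch in which a context owns its variables and tears them all down in one call, so nothing outlives the process it belongs to. All names are invented for illustration.

        class SessionKeeper:
            """Group session variables by context so they can be cleared together."""

            def __init__(self, session):
                self.session = session   # in PHP this would wrap $_SESSION

            def set(self, context, name, value):
                self.session.setdefault(context, {})[name] = value

            def get(self, context, name, default=None):
                return self.session.get(context, {}).get(name, default)

            def end_context(self, context):
                # One call cleans up every variable the context created.
                self.session.pop(context, None)

        # Usage: the registration process keeps its error state in its own bucket.
        session = {}
        keeper = SessionKeeper(session)
        keeper.set("registration", "error", "missing email")
        print(keeper.get("registration", "error"))   # "missing email"
        keeper.end_context("registration")            # no stray vars left behind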

    Read the article

  • Django multiprocess problem

    - by iKiR
    I have a Django application running under lighttpd via FastCGI. The FCGI launch script looks like:

        python manage.py runfcgi socket=<path>/main.socket method=prefork \
            pidfile=<path>/server.pid \
            minspare=5 maxspare=10 maxchildren=10 maxrequests=500

    I use SQLite, so I have 10 processes which all work with the same DB. Next I have 2 views:

        def view1(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)  # get_or_create returns a tuple
            obj.param1 = <some value>
            obj.save()

        def view2(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param2 = <some value>
            obj.save()

    If these views are executed in two different processes, I sometimes get a MyModel instance in the DB with id=1 where either param1 or param2 was updated (BUT not both); it depends on which process was first. (Of course in real life the id changes, but sometimes 2 processes do execute these two views with the same id.) The question is: what should I do to get an instance with both param1 and param2 updated? I need something for merging changes made in different processes. One solution is to create an interprocess lock object, but in that case the views execute sequentially and can no longer run simultaneously, so I'm asking for help.
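    A minimal sketch of the usual fix: the read-modify-write pattern above (fetch, assign, save(), which rewrites every column) lets the last writer win, so instead update only the column that changed and let the database serialise the writers. This is standard Django QuerySet.update(); the placeholder some_value stands in for the post's <some value>.

        def view1(request):
            # Make sure the row exists, then touch only param1 with a single UPDATE.
            MyModel.objects.get_or_create(id=1)
            MyModel.objects.filter(id=1).update(param1=some_value)

        def view2(request):
            MyModel.objects.get_or_create(id=1)
            MyModel.objects.filter(id=1).update(param2=some_value)

    update() issues UPDATE ... SET param1 = ... WHERE id = 1 without reading the row first, so two processes updating different columns no longer clobber each other.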

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to each of the other processes (keeping the first nlocal for itself). From there, process 0 should print out its results after running the CompareSplit algorithm, then receive the results from the other processes' runs of the algorithm, and finally print out the results it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints. Here is my code: http://pastie.org/3742474 Right now I'm pretty sure that the deadlock comes from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now until I figure out what is going wrong. Any help is appreciated.
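    Without seeing the pastie it's impossible to be definitive, but the classic cause of this deadlock is both partners of a compare-split pair blocking in Send at the same time, and the classic cure is a combined send/receive (MPI_Sendrecv in C), which pairs the two operations so neither can block the other. A sketch of the odd/even exchange in Python/mpi4py, only to show the shape:

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        size = comm.Get_size()
        nlocal = 4
        local = np.sort(np.random.rand(nlocal))   # this rank's sorted block

        for phase in range(size):
            # Even phases pair (0,1),(2,3),...; odd phases pair (1,2),(3,4),...
            partner = rank + 1 if (phase + rank) % 2 == 0 else rank - 1
            if 0 <= partner < size:
                # sendrecv pairs the send with the receive, so it cannot deadlock.
                other = comm.sendrecv(local, dest=partner, source=partner)
                merged = np.sort(np.concatenate((local, other)))
                local = merged[:nlocal] if rank < partner else merged[nlocal:]

    On the tag error: MPI tags must be non-negative and no larger than MPI_TAG_UB, and the tag in the Send must match what the matching Recv expects, so using a rank as a tag only works if both sides agree on whose rank it is.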

    Read the article

  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server 5.0.75 on Ubuntu, on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high while memory usage stays low, and the application server becomes unresponsive because it is waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140 MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling indicates either, and also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks!

    EDIT: I just discovered this in my slow query log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
          (SELECT external_id FROM contact_relations
           WHERE external_table = 'contacts'
             AND contact_id IN
               (SELECT contact_id FROM contacts
                WHERE (company_name LIKE '%%butan%%%' OR country LIKE '%%butan%%%'
                       OR city LIKE '%%butan%%%' OR email1 LIKE '%%butan%%%')
                  AND (company_name IS NOT NULL AND company_name != '')));

    Which actually brings up a different but related question. If I have a contact table containing

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example "actor" should bring up "Factory".
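    On the search question, a hedged note: a MyISAM FULLTEXT index matches whole words, and in boolean mode word prefixes ('factor*'), so "factory" will find "The Fun Factory"; but no fulltext mode matches an infix like "actor" inside "Factory". Infix search means LIKE '%actor%' (which cannot use an ordinary index) or maintaining an n-gram style side table. The query shape, wrapped in Python only to keep this page's examples in one language (column names taken from the post):

        # One-time setup (SQL):
        #   ALTER TABLE contacts
        #   ADD FULLTEXT idx_contacts_search (company_name, country, city, email1);

        def fulltext_search(cursor, term):
            cursor.execute(
                """SELECT * FROM contacts
                   WHERE MATCH(company_name, country, city, email1)
                         AGAINST (%s IN BOOLEAN MODE)""",
                (term + "*",),   # trailing * = prefix match, e.g. 'factor*'
            )
            return cursor.fetchall()

    Separately, the EXPLAIN for that slow query would very likely show the nested IN subqueries rescanning contacts (19,960,174 rows examined); rewriting them as JOINs is usually the first step on MySQL 5.0.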

    Read the article

  • Process is killed without an (obvious) reason and the program stops working

    - by Krzysiek Gurniak
    Here's what my program is supposed to do: create 4 child processes. Process 0 reads 1 byte at a time from STDIN and writes it into a FIFO; process 1 reads that byte from the FIFO and writes its value as HEX into shared memory; process 2 reads the HEX value from shared memory and writes it into a pipe; finally, process 3 reads from the pipe and writes to STDOUT (in my case: the terminal). I can't change the communication channels: FIFO, then shared memory, then pipe are the only options.

    My problem: the program stops at random moments when a file is directed into stdin (for example: ./program < /dev/urandom), sometimes after writing 5 HEX values, sometimes after 100. The weird thing is that when it is working and I run "pstree -c" in another terminal, there is 1 main process with 4 child processes (which is what I want), but when I run "pstree -c" after it has stopped writing (but is still running), there are only 3 child processes. For some reason one is gone, even though they all have while(1) in them. I think I might have a synchronization problem here, but I have been unable to spot it (I've tried for many hours). Here's the code (comments translated from Polish):

        #include <unistd.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <stdlib.h>
        #include <sys/shm.h>
        #include <sys/sem.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <sys/stat.h>
        #include <signal.h>

        #define BUFSIZE 1
        #define R 0
        #define W 1

        // process IDs
        pid_t p0, p1, p2, p3;

        // FIFO variables
        int fifo_fd;
        unsigned char bufor[BUFSIZE] = {};
        unsigned char bufor1[BUFSIZE] = {};

        // shared memory variables
        key_t key;
        int shmid;
        char * tab;

        // pipe variables
        int file_des[2];
        char bufor_pipe[BUFSIZE*30] = {};

        int semafor; // semaphore id (this declaration is missing from the original listing)

        void proces0()
        {
            ssize_t n;
            while(1)
            {
                fifo_fd = open("/tmp/fifo", O_WRONLY);
                if(fifo_fd == -1) { perror("blad przy otwieraniu kolejki FIFO w p0\n"); exit(1); }
                n = read(STDIN_FILENO, bufor, BUFSIZE);
                if(n < 0) { perror("read error w p0\n"); exit(1); }
                if(n > 0)
                {
                    if(write(fifo_fd, bufor, n) != n) { perror("blad zapisu do kolejki fifo w p0\n"); exit(1); }
                    memset(bufor, 0, n); // clear the buffer
                }
                close(fifo_fd);
            }
        }

        void proces1()
        {
            ssize_t m, x;
            char wartosc_hex[30] = {};
            while(1)
            {
                if(tab[0] == 0)
                {
                    fifo_fd = open("/tmp/fifo", O_RDONLY); // open the FIFO for reading
                    if(fifo_fd == -1) { perror("blad przy otwieraniu kolejki FIFO w p1\n"); exit(1); }
                    m = read(fifo_fd, bufor1, BUFSIZE);
                    x = m;
                    if(x < 0) { perror("read error p1\n"); exit(1); }
                    if(x > 0)
                    {
                        // conversion to HEX
                        if(bufor1[0] < 16)
                        {
                            if(bufor1[0] == 10) // newline (Enter)
                                sprintf(wartosc_hex, "0x0%X\n", bufor1[0]);
                            else
                                sprintf(wartosc_hex, "0x0%X ", bufor1[0]);
                        }
                        else
                        {
                            sprintf(wartosc_hex, "0x%X ", bufor1[0]);
                        }
                        // wait until the memory is empty (ready for writing)
                        strcpy(&tab[0], wartosc_hex);
                        memset(bufor1, 0, sizeof(bufor1)); // clear the buffer
                        memset(wartosc_hex, 0, sizeof(wartosc_hex)); // prepare the array for the next hex value
                        x = 0;
                    }
                    close(fifo_fd);
                }
            }
        }

        void proces2()
        {
            close(file_des[0]); // close the read end
            while(1)
            {
                if(tab[0] != 0)
                {
                    if(write(file_des[1], tab, strlen(tab)) != strlen(tab)) { perror("blad write w p2"); exit(1); }
                    // clear the shared memory to accept the next byte
                    memset(tab, 0, sizeof(tab)); // NOTE: sizeof(tab) is the size of a pointer here, not BUFSIZE*30
                }
            }
        }

        void proces3()
        {
            ssize_t n;
            close(file_des[1]); // close the write end
            while(1)
            {
                if(tab[0] == 0)
                {
                    if((n = read(file_des[0], bufor_pipe, sizeof(bufor_pipe))) > 0)
                    {
                        if(write(STDOUT_FILENO, bufor_pipe, n) != n) { perror("write error w proces3()"); exit(1); }
                        memset(bufor_pipe, 0, sizeof(bufor_pipe));
                    }
                }
            }
        }

        int main(void)
        {
            key = 5678;
            int status;

            // create the files that store the process IDs
            int des_pid[2] = {};
            char bufor_proces[50] = {};
            mknod("pid0", S_IFREG | 0777, 0);
            mknod("pid1", S_IFREG | 0777, 0);
            mknod("pid2", S_IFREG | 0777, 0);
            mknod("pid3", S_IFREG | 0777, 0);

            // create the semaphore
            key_t klucz;
            klucz = ftok(".", 'a'); // derive the semaphore key from a file plus a single character id
            if(klucz == -1) { perror("blad wyznaczania klucza semafora"); exit(1); }
            semafor = semget(klucz, 1, IPC_CREAT | 0777); // create the semaphore set from the key; 1 = number of semaphores
            if(semafor == -1) { perror("blad przy tworzeniu semafora"); exit(1); }
            if(semctl(semafor, 0, SETVAL, 0) == -1) // set the initial value (key, index in set from 0, command, argument)
            { perror("blad przy ustawianiu wartosci poczatkowej semafora"); exit(1); }

            // create the named FIFO
            if(access("/tmp/fifo", F_OK) == -1) // check whether the file exists; if not, create it
            {
                if(mkfifo("/tmp/fifo", 0777) != 0) { perror("blad tworzenia FIFO w main"); exit(1); }
            }

            // create the shared memory
            // list shared memory segments with the "ipcs" command
            // remove a segment with "ipcrm -m SEGMENT_ID"
            shmid = shmget(key, (BUFSIZE*30), 0666 | IPC_CREAT);
            if(shmid == -1) { perror("shmget"); exit(1); }
            tab = (char *) shmat(shmid, NULL, 0);
            if(tab == (char *)(-1)) { perror("shmat"); exit(1); }
            memset(tab, 0, (BUFSIZE*30));

            // create the anonymous pipe
            if(pipe(file_des) == -1) { perror("pipe"); exit(1); }

            // create the child processes
            if(!(p0 = fork()))
            {
                des_pid[W] = open("pid0", O_WRONLY | O_TRUNC | O_CREAT); // 1 = write, 0 = read
                sprintf(bufor_proces, "Proces0 ma ID: %d\n", getpid());
                if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces))
                { perror("blad przy zapisie pid do pliku w p0"); exit(1); }
                close(des_pid[W]);
                proces0();
            }
            else if(p0 == -1) { perror("blad przy p0 fork w main"); exit(1); }
            else
            {
                if(!(p1 = fork()))
                {
                    des_pid[W] = open("pid1", O_WRONLY | O_TRUNC | O_CREAT);
                    sprintf(bufor_proces, "Proces1 ma ID: %d\n", getpid());
                    if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces))
                    { perror("blad przy zapisie pid do pliku w p1"); exit(1); }
                    close(des_pid[W]);
                    proces1();
                }
                else if(p1 == -1) { perror("blad przy p1 fork w main"); exit(1); }
                else
                {
                    if(!(p2 = fork()))
                    {
                        des_pid[W] = open("pid2", O_WRONLY | O_TRUNC | O_CREAT);
                        sprintf(bufor_proces, "Proces2 ma ID: %d\n", getpid());
                        if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces))
                        { perror("blad przy zapisie pid do pliku w p2"); exit(1); }
                        close(des_pid[W]);
                        proces2();
                    }
                    else if(p2 == -1) { perror("blad przy p2 fork w main"); exit(1); }
                    else
                    {
                        if(!(p3 = fork()))
                        {
                            des_pid[W] = open("pid3", O_WRONLY | O_TRUNC | O_CREAT);
                            sprintf(bufor_proces, "Proces3 ma ID: %d\n", getpid());
                            if(write(des_pid[W], bufor_proces, sizeof(bufor_proces)) != sizeof(bufor_proces))
                            { perror("blad przy zapisie pid do pliku w p3"); exit(1); }
                            close(des_pid[W]);
                            proces3();
                        }
                        else if(p3 == -1) { perror("blad przy p3 fork w main"); exit(1); }
                        else
                        {
                            // parent process
                            waitpid(p0, &status, 0);
                            waitpid(p1, &status, 0);
                            waitpid(p2, &status, 0);
                            waitpid(p3, &status, 0);
                            //wait(NULL);
                            unlink("/tmp/fifo");
                            shmdt(tab); // detach the shared memory
                            shmctl(shmid, IPC_RMID, NULL); // remove the shared memory
                            printf("\nKONIEC PROGRAMU\n");
                        }
                    }
                }
            }
            exit(0);
        }

    Read the article

  • Exited event of Process is not raised?

    - by Kanags.Net
    In my application, I open an Excel sheet to show one of my Excel documents to the user. But before showing the Excel file, I save it to a folder on my local machine, which in fact is what gets shown. When the user closes the application, I wish to close the opened Excel files and delete all the Excel files present in my local folder. For this, in the logout event I have written code to close all the opened files, as shown below:

        Process[] processes = Process.GetProcessesByName(fileType);
        foreach (Process p in processes)
        {
            IntPtr pFoundWindow = p.MainWindowHandle;
            if (p.MainWindowTitle.Contains(documentName))
            {
                p.CloseMainWindow();
                p.Exited += new EventHandler(p_Exited);
            }
        }

    And in the process Exited event I wish to delete the Excel file whose process has exited, as shown below:

        void p_Exited(object sender, EventArgs e)
        {
            string file = strOriginalPath;
            if (File.Exists(file))
            {
                // Pdf issue fix
                FileStream fs = new FileStream(file, FileMode.Open, FileAccess.Read);
                fs.Flush();
                fs.Close();
                fs.Dispose();
                File.Delete(file);
            }
        }

    But the problem is that this Exited event is not called at all. On the other hand, if I delete the file right after closing the main window of the process, I get the exception "File already used by another process". Could anyone help me achieve my objective, or give me a reason why the process Exited event is not being called?
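    The most likely reason: .NET only raises Process.Exited when EnableRaisingEvents is set to true on the Process object, so setting p.EnableRaisingEvents = true; and attaching the handler before calling p.CloseMainWindow() should make the event fire. The synchronous alternative is p.WaitForExit() followed by File.Delete. That wait-then-delete pattern, sketched in Python for illustration only (the viewer command is a placeholder):

        import os
        import subprocess

        def show_then_delete(path):
            proc = subprocess.Popen(["excel-viewer", path])  # hypothetical viewer
            proc.wait()        # returns only once the process has really exited,
            os.remove(path)    # so nothing holds the file open any more

    Note also that p_Exited deletes strOriginalPath regardless of which process exited; with several documents open you would want to map each Process to its own file path.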

    Read the article

  • Using threads in menu options

    - by vbNewbie
    I have an app that has a console menu with 2-3 selections. One process involves uploading a file and performing a lengthy search on its contents, while another process involves SQL queries and is interactive with the user. I wish to use threads so that one process can run while the menu offers the option to run the second process. However, you cannot run the first process twice. I have created the threads and corrected some compilation errors, but the threading options are not working correctly. Any help appreciated. In main:

        Dim tm As Thread = New Thread(AddressOf loadFile)
        Dim ts As Thread = New Thread(AddressOf reports)
        ....
        While Not response.Equals("3")
            Try
                Console.Write("Enter choice: ")
                response = Console.ReadLine()
                Console.WriteLine()
                If response.Equals("1") Then
                    Console.WriteLine("Thread 1 doing work")
                    tm.SetApartmentState(ApartmentState.STA)
                    tm.IsBackground = True
                    tm.Start()
                    response = String.Empty
                ElseIf response.Equals("2") Then
                    Console.WriteLine("Starting a second Thread")
                    ts.Start()
                    response = String.Empty
                End If
                ts.Join()
                tm.Join()
            Catch ex As Exception
                errormessage = ex.Message
            End Try
        End While

    I realize that a form-based app would be easier to implement, perhaps just calling different forms to handle the processes, but I really don't have that option now, since the console app will be added to an API later. Here are the two processes behind the menu options. Also, I'm not sure what to do with the boolean variable again, as suggested below.

        Private Sub LoadFile()
            Dim dialog As New OpenFileDialog
            Dim response1 As String = Nothing
            Dim filepath As String = Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments)
            dialog.InitialDirectory = filepath
            If dialog.ShowDialog() = DialogResult.OK Then
                fileName = dialog.FileName
            ElseIf DialogResult.Cancel Then
                Exit Sub
            End If
            Console.ResetColor()
            Console.Write("Begin Search -- Discovery Search, y or n? ")
            response1 = Console.ReadLine()
            If response1 = "y" Then
                Search()
            ElseIf response1 = "n" Then
                Console.Clear()
                main()
            End If
            isRunning = False
        End Sub

    and the second one:

        Private Shared Sub report()
            Dim rptGen As New SearchBlogDiscovery.rptGeneration
            Console.WriteLine("Tread Process started")
            rptGen.main()
            Console.WriteLine("Thread Process ended")
            isRunning = False
        End Sub
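    Two observations, offered tentatively, plus a sketch. First, the ts.Join()/tm.Join() calls inside the menu loop block the menu until both threads finish, which defeats running anything in the background; dropping the Joins (or joining only at program exit) keeps the menu responsive. Second, a .NET Thread object cannot be Start()ed a second time once it has finished, so each run needs a fresh Thread. The "don't run the file process twice" guard, sketched in Python with invented names:

        import threading
        import time

        workers = {}

        def start_once(name, target):
            """Start target on a new thread unless a previous run is still alive."""
            t = workers.get(name)
            if t is not None and t.is_alive():
                print(name, "is already running")
                return
            t = threading.Thread(target=target, daemon=True)  # fresh thread per run
            workers[name] = t
            t.start()

        def load_file():
            time.sleep(5)   # stands in for the lengthy search process

        start_once("loadfile", load_file)
        start_once("loadfile", load_file)   # refused while the first is alive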

    Read the article

  • Which options do I have for Java process communication?

    - by Dmitriy Matveev
    We have a place in the code of the following form:

        void processParam(Object param) {
            wrapperForComplexNativeObject result = jniCallWhichMayCrash(param);
            processResult(result);
        }

    processParam is a method which is called with many different arguments. jniCallWhichMayCrash is a native method intended to do some complex processing of its parameter and to create some complex object; it can crash in some cases. wrapperForComplexNativeObject is a wrapper type generated by SWIG. processResult is a method written in pure Java which processes its parameter by creating several kinds of objects (by "kinds" I don't mean classes; think of something like hierarchies): 1 - some non-unique objects which reference each other (from the same hierarchy); these objects can have duplicates created by invocations of processParam() with different parameter values, and since it's costly to keep all the duplicates it's necessary to cache them. 2 - some unique objects which reference each other (from the same hierarchy) and some of the objects of the 1st kind. After processParam is executed for each of the arguments from some set, the data created in processResult will be processed together. The problem is that jniCallWhichMayCrash may crash the entire JVM, and this is very bad. The crash may happen for one argument value and not for another. We've decided that it's better to ignore crashes inside the JVM and just skip some chunks of data when they occur. In order to do this, we should run the processParam function inside a separate process and pass the result somehow (HOW? HOW?! This is the question) to the main process; then in case of any crashes we only lose some part of the data (that's OK) instead of everything else. So for now the main problem is implementing the transport between the different processes. Which options do I have? I can think of serialization and transmitting binary data over streams, but serialization may not be very fast due to object complexity. Maybe I have some other options for implementing this?
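    The common shape of the fix: run processParam in a disposable worker process and stream serialised results back over a pipe or local socket, so a native crash kills only the worker. In Java that transport would typically be ObjectOutputStream/ObjectInputStream over the child's stdin/stdout (or a loopback socket). A length-prefixed sketch in Python, purely to show the framing; the worker script name is hypothetical:

        import pickle
        import struct
        import subprocess
        import sys

        def write_msg(stream, obj):
            payload = pickle.dumps(obj)             # Java: ObjectOutputStream
            stream.write(struct.pack(">I", len(payload)) + payload)
            stream.flush()

        def read_msg(stream):
            header = stream.read(4)
            if len(header) < 4:                     # worker crashed mid-message
                return None
            (length,) = struct.unpack(">I", header)
            return pickle.loads(stream.read(length))

        # Parent side: one worker per risky call; a crash loses only this result.
        def process_param_safely(param):
            worker = subprocess.Popen([sys.executable, "worker.py"],  # hypothetical
                                      stdin=subprocess.PIPE, stdout=subprocess.PIPE)
            write_msg(worker.stdin, param)
            result = read_msg(worker.stdout)
            worker.wait()
            return result                           # None means "skip this chunk"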

    Read the article

  • How to debug a JBoss out-of-memory problem?

    - by user561733
    Hello, I am trying to debug a JBoss out-of-memory problem. When JBoss starts up and runs for a while, it uses memory as intended by the startup configuration. However, when some unknown user action is taken (or the log file grows to a certain size) in the sole web application JBoss is serving, memory increases dramatically and JBoss freezes. While frozen, it is difficult to kill the process or do anything because of the low memory. When the process is finally killed via kill -9 and the server is restarted, the log file is very small and contains only output from the startup of the new process, with no information on why the memory increased so much. This is why it is so hard to debug: server.log has no information from the killed process. The log is set to grow to 2 GB, and the log file for the new process is only about 300 KB, though it grows properly under normal memory circumstances.

    JBoss configuration:
    - JBoss (MX MicroKernel) 4.0.3
    - JDK 1.6.0 update 22
    - PermSize=512m, MaxPermSize=512m
    - Xms=1024m, Xmx=6144m

    Basic system info:
    - Operating system: CentOS Linux 5.5
    - Kernel and CPU: Linux 2.6.18-194.26.1.el5 on x86_64
    - Processor: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz, 8 cores

    Example system state under normal pre-freeze conditions, a few minutes after the jboss service startup:
    - Running processes: 183
    - CPU load averages: 0.16 (1 min), 0.06 (5 min), 0.09 (15 min)
    - CPU usage: 0% user, 0% kernel, 1% IO, 99% idle
    - Real memory: 17.38 GB total, 2.46 GB used
    - Virtual memory: 19.59 GB total, 0 bytes used
    - Local disk space: 113.37 GB total, 11.89 GB used

    When JBoss freezes, system information looks like this:
    - Running processes: 225
    - CPU load averages: 4.66 (1 min), 1.84 (5 min), 0.93 (15 min)
    - CPU usage: 0% user, 12% kernel, 73% IO, 15% idle
    - Real memory: 17.38 GB total, 17.18 GB used
    - Virtual memory: 19.59 GB total, 706.29 MB used
    - Local disk space: 113.37 GB total, 11.89 GB used

    Read the article

  • Sharing a global/static variable between a process and DLL

    - by minjang
    I'd like to share a static/global variable only between a process and a DLL that is invoked by the process. The exe and dll are in the same memory address space. I don't want the variable to be shared among other processes. Elaboration of the problem: say there is a static/global variable x in a.cpp. Both the exe foo.exe and the dll bar.dll include a.cpp, so the variable x is in both images. Now foo.exe loads bar.dll, dynamically (or statically). The question is whether the variable x is then shared by the exe and the dll or not. On Windows, these two never share x: the exe and dll each get a separate copy. However, on Linux, the exe and dll do share the variable x. Unfortunately, I want the Linux behavior. I first considered using #pragma data_seg on Windows, but even after correctly setting up the shared data segment, foo.exe and bar.dll do not share x. Recall that bar.dll is loaded into the address space of foo.exe. If I run another instance of foo.exe, then x is shared. But I don't want x to be shared by different processes. So using data_seg failed. I may use a memory-mapped file with a name made unique to the exe/dll pair, which is what I'm trying now. Two questions: (1) Why is the behavior of Linux and Windows different? Can anyone explain more about this? (2) What would be the easiest way to solve this problem on Windows?
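    For question 2, the usual C++ answer is to define x in exactly one module and export it (__declspec(dllexport) where it is defined, __declspec(dllimport) where it is used), so both images reference one copy rather than each compiling in its own. The memory-mapped-file plan also works if the mapping name includes something process-unique, such as the PID, so a second foo.exe never opens the same mapping. That naming idea, sketched in Python (whose mmap tagname argument is the Windows named-mapping mechanism, CreateFileMapping/MapViewOfFile underneath):

        import mmap
        import os

        # tagname is Windows-only. Including the PID makes the mapping unique to
        # this process, so another instance of foo.exe gets its own copy of x.
        TAG = "foo_shared_x_%d" % os.getpid()
        SIZE = 4096

        view_in_exe = mmap.mmap(-1, SIZE, tagname=TAG)   # the "exe side"
        view_in_dll = mmap.mmap(-1, SIZE, tagname=TAG)   # the "dll side"

        view_in_exe[0:5] = b"hello"
        print(bytes(view_in_dll[0:5]))   # b'hello': both views share one backing store

    As for question 1: by default the Linux dynamic linker resolves a global symbol to a single definition for the whole process, while Windows modules only share what they explicitly export and import, which is why each image ends up with its own x.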

    Read the article

  • Best-suited tool to document message processing done in a C program

    - by user3494614
    I am relatively new to UML, and it seems to be very vast. I have a small program which basically receives messages on a socket and then, depending on the message ID embedded as the first byte of the message, processes the buffer. There are around 5 different message IDs which it processes, it communicates on another socket, and it has around 8 major functions. In short the program is like this (I am not pasting the entire .c file or the main function, just some bits and pieces to give an idea of the program flow):

        int main(int argc, char** argv)
        {
            register_shared_mem();
            listen();
            while (get_next_message(buffer))
            {
                switch (((msg *) buffer)->id)   /* message ID is the first byte */
                {
                    case TYPE1:
                        process1();
                        answer();
                    .....
                }
            }
        }

    I want to document this pictorially, e.g. for message type 1 it calls this function, which calls another function, which calls another. Please let me know of any open-source tool which will allow me to quickly draw that kind of UML or sequence diagram, and will also let me write a brief description of what each function does. Thanks in advance.

    Read the article

  • Indy client receive string

    - by Eszee
    I'm writing an Indy chat app, and am wondering if there is a way for the server component to tell the client that there is a string waiting, or a way for the client to have an "OnExecute"-like event. This is what I have now. Server:

        procedure TServer.ServerExecute(AContext: TIdContext);
        var
          sResponse: string;
          I: Integer;
          list: TList;
        begin
          List := Server.Contexts.LockList;
          sResponse := AContext.Connection.Socket.ReadLn;
          try
            for I := 0 to List.Count - 1 do
            begin
              try
                TIdContext(List[I]).Connection.IOHandler.WriteLn(sResponse);
              except
              end;
            end;
          finally
            Server.Contexts.UnlockList;
          end;
        end;

    Client:

        procedure TForm1.Button1Click(Sender: TObject);
        var
          sMsg: string;
        begin
          Client.Socket.WriteLn(Edit1.Text);
          sMsg := Client.Socket.ReadLn;
          Memo1.Lines.Add(sMsg);
        end;

    The problem is that when I have 2 or more clients running, the messages keep stacking up, because the button only processes one message at a time. I'd like the client to wait for messages and, when triggered, process them the way it does now in the button procedure. I've tried putting the ReadLn part in a timer, but that causes some major problems. I'm using Delphi 2010 and Indy 10.
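    The usual shape of the fix is a dedicated reader on the client that blocks in ReadLn and hands each completed line to the UI, so incoming messages no longer wait for a button press; in Indy/Delphi that reader would typically live in a TThread and hand results to the form via TThread.Synchronize. The pattern, sketched in Python with an invented port number:

        import queue
        import socket
        import threading

        incoming = queue.Queue()

        def reader(sock):
            """Block on the socket and queue every complete line that arrives."""
            for line in sock.makefile("r"):
                incoming.put(line.rstrip("\n"))

        sock = socket.create_connection(("localhost", 6000))   # hypothetical server
        threading.Thread(target=reader, args=(sock,), daemon=True).start()

        def send(text):
            sock.sendall((text + "\n").encode())

        # UI side: a timer or idle hook drains the queue into the memo control,
        # so the display updates whenever messages arrive, not only on a click.
        def poll_messages():
            while not incoming.empty():
                print("memo:", incoming.get_nowait())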

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single, large-heap (up to 240 GB, though in the 20-40 GB range for most of this phase of execution) JVM [1] running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and the data created by those executables then has to be loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which is of course larger when read back in after the process finishes. Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that process creation [3] continually slows down: while starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically: why is it taking so long to create these processes as execution continues?
    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.
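    A plausible cause, offered tentatively: on this JDK/kernel combination, ProcessBuilder.start() goes through fork(), and fork must duplicate the parent's page tables; for a heap of tens to hundreds of GB that gets touched and fragmented as the run progresses, that duplication gets steadily more expensive, which would match creation times that worsen over time. A common workaround is to start one small "launcher" helper process very early, while the JVM is still small, and have it do all subsequent spawning on the JVM's behalf. The broker side of that idea, sketched in Python with invented names:

        # launcher.py - started once at JVM startup, while the parent is tiny.
        # It reads one command line per request and runs it, so the huge JVM
        # never has to fork again; it just writes to the launcher's stdin.
        import subprocess
        import sys

        for line in sys.stdin:
            cmd = line.strip()
            if not cmd:
                continue
            rc = subprocess.call(cmd, shell=True)   # spawn from the small process
            sys.stdout.write("%d\n" % rc)           # report the exit status back
            sys.stdout.flush()

    The JVM keeps a single pipe to this launcher for its whole life and writes command lines to it, instead of creating a fresh process per batch.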

    Read the article

  • Looking for PyQt4 embeddable terminal widget

    - by redShadow
    I wrote an application that, among other things, launches some "backend" processes to do some stuff. These subprocesses are very likely to fail or show unexpected behavior, since they have to operate under quite hard conditions, so I prefer to give the operator full control over them. NOTE: I am running these processes using a subprocess-module-based class instead of QProcess, to have some more control functionality over the running process. At the moment I'm using a QPlainTextEdit widget to which I append the standard output/error from the subprocess, plus some buttons to quickly send common signals (INT, STOP, CONT, KILL, ...), but: In some cases it would be useful to send some input too. Although this could be done with a text input box, I would prefer something more "professional". Of course, there is no direct way to interpret special control characters, such as color codes, cursor movement, etc. I had to implement auto-scroll management for the console, but it is not guaranteed 100% to work nicely (sometimes the scroll locking doesn't behave as expected, etc.). So: does anyone know of something I could use to accomplish these needs? I found qtermwidget, but it seems oriented more toward running a shell process by itself (and the Python bindings seem to let you run only /bin/bash) than toward communicating with an already existing process's I/O.
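    For the "send some input" part, a small sketch of one way to do it with the widgets already in use: a QLineEdit under the QPlainTextEdit whose returnPressed writes to the child's stdin. This is PyQt4 plus the subprocess module, matching the post's setup; the cat child is just for demonstration, and reading proc.stdout into the output pane is elided since the post already does that.

        import subprocess
        import sys

        from PyQt4 import QtGui

        class Console(QtGui.QWidget):
            """Read-only output pane plus an input line wired to the child's stdin."""

            def __init__(self, proc, parent=None):
                QtGui.QWidget.__init__(self, parent)
                self.proc = proc
                self.output = QtGui.QPlainTextEdit(self)
                self.output.setReadOnly(True)
                self.input = QtGui.QLineEdit(self)
                self.input.returnPressed.connect(self.send_line)
                layout = QtGui.QVBoxLayout(self)
                layout.addWidget(self.output)
                layout.addWidget(self.input)

            def send_line(self):
                # Forward the typed line to the backend process.
                self.proc.stdin.write(str(self.input.text()) + "\n")
                self.proc.stdin.flush()
                self.input.clear()

        if __name__ == "__main__":
            app = QtGui.QApplication(sys.argv)
            proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                                    stdout=subprocess.PIPE)
            w = Console(proc)
            w.show()
            sys.exit(app.exec_())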

    Read the article

  • Effective simulation of a compound Poisson process in Matlab

    - by Henrik
    I need to simulate a huge number of compound Poisson processes in Matlab on a very fine grid, so I am looking to do it as efficiently as possible. I need to run a lot of simulations on the same random numbers but with changing parameters, so it is practical to draw the uniforms and normals beforehand, even though it means I have to draw a lot more than I will probably need; that won't matter much, because it only needs to be done once, compared to on the order of 500*nrepl times for the actual compound process generation. My method is the following. Let T be how long I need to simulate and N the number of grid points; then my grid is:

        t = linspace(1, T, N);

    Let nrepl be the number of processes I need; then I simulate

        P = poissrnd(lambda, nrepl, 1); % Number of jumps for each replication
        U = (T-1)*rand(10000, nrepl) + 1; % Set of uniforms on (1,T) for jump times
        N = randn(10000, nrepl); % Set of normals for jump sizes

    Then, for replication j:

        Poiss = P(j); % Jumps for this replication
        Uni = U(1:Poiss, j); % Jump times
        Norm = mu + sigma*N(1:Poiss, j); % Jump sizes

    And this, I guess, is where I need your advice. I use this one-liner, but it seems very slow:

        CPP_norm = sum(bsxfun(@times, bsxfun(@gt, t, Uni), Norm), 1);

    For each jump, the inner expression creates a series of the same length as t that is 0 until the jump and 1 after; multiplying it by the jump size gives a grid that is zero until the jump arrives and then holds the jump size; finally, adding all of these produces the entire jump process on the grid. How can this be done more efficiently? Thank you very much.
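    A sketch of the usual speed-up, written in NumPy only to keep this page's examples in one language (the same sort/cumsum/binning idea translates to Matlab with sort, cumsum and histc): sort the jump times once, take the running sum of the jump sizes, and read the path off with a binary search per grid point, which is roughly O((jumps + N) log jumps) instead of the O(N x jumps) mask product.

        import numpy as np

        def compound_poisson_path(t, jump_times, jump_sizes):
            """Path value at each grid point in t for one replication."""
            order = np.argsort(jump_times)
            times = jump_times[order]
            # csum[k] = total jump size after k jumps, with csum[0] = 0.
            csum = np.concatenate(([0.0], np.cumsum(jump_sizes[order])))
            # How many jumps have occurred by each grid point:
            idx = np.searchsorted(times, t, side="right")
            return csum[idx]

        # Example: one replication on a fine grid.
        rng = np.random.default_rng(0)
        T, N, lam, mu, sigma = 10.0, 100_000, 50, 0.0, 1.0
        t = np.linspace(1, T, N)
        n = rng.poisson(lam)
        path = compound_poisson_path(
            t, rng.uniform(1, T, n), mu + sigma * rng.standard_normal(n))

    (One boundary detail: side="right" counts a jump exactly at a grid point as having happened, whereas the bsxfun(@gt, ...) version does not; use side="left" to match @gt exactly.)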

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can't tell if the title is just tipping over into smug, but in the balance of things that doesn't matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high.

    Here's the first part of my write-up of the sessions. It's a bit of a mammoth. It's also a bit of a mash-up of what was said and what I thought about it. I'll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I've not included Vanessa Northam's session on the power of internal comms to build brand ambassadors. It'll be in the next roundup, as this is already pushing 2.5k words.

    First, the important stuff. I was keeping a tally, and nobody said "synergy" or "leverage". I did, however, hear the term "marketeers" six times. Shame on you; you know who you are.

    1 – Branding in a post-digital world, Graham Hales

    This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like an overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you're going to care about it at all. This was the first iteration of what proved to be one of the event's emergent themes: do it throughout the stack or don't bother.

    Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequently, customers experience a number of "moments of deflection" in our sales funnels. Our control is lessened, and failure to engage can increasingly damage buying decisions. The clearest example given was the failure of NatWest's "caring bank" campaign, where staff in branches, customer support, and online presences didn't align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn't taken.

    What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn't mean (and Graham didn't say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It's going to have to be collaborative, and session 6 on internal comms handled this really well.
    The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we're building brands collaboratively and in the open, what about the cultural politics of trolling?

    2 – Challenging our core beliefs about human behaviour, Mark Earls

    This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren't rational; they're petty, reactive, emotional sacks of meat, and they'll go where they're led. Comforting stuff. Examples given were the spread of the London riots, the "discovery" of the mountains of Kong, and the popularity of Susan Boyle, which in turn made me think about Per Mollerup's concept of "social wayshowing". Mark boiled his thoughts down into four key points, which I completely failed to write down word for word:

    People do, then think – Changing minds to change behaviour doesn't work. Post-rationalization rules the day. See also: mere exposure effects.

    Spock < Kirk – Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so intervening with information may not be appropriate.

    Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice rather than evaluate, and will often follow social leads rather than think.

    Things tend to converge – Behaviour trends towards a consensus normal. When faced with choices, people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People "outsource the cognitive load" of choices to the crowd.

    Mark's headline quote was probably "the real influence happens at the table next to you". Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra's "creating bad-ass users" concept of designing to make people more awesome, rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalise the behaviours we want, people should make and post-rationalize the buying decisions we want.

    Where we need to be: "A bigger boy made me do it"
    Where we are: "a wizard did it and ran away"

    However, it's worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There's a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion, so this idea isn't really new. Geoffrey Moore's "chasm" idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalise and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect the purchase decision. Where the choice is personal and informed, this is true. But where it's personal and impulsive, or in any way social, "better" is trumped by popularity, endorsement, or "point of sale salience".
    UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat "more features". Easy to use, and easy to observe being used, will beat "what the user says they want". This made me think about the astounding stickiness of rational fallacies, "common sense", and the pathological willful simplifications of the media. Rational fallacies seem like they're basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I'd suggest deploying a boat-load in our messaging, to see if they're really as sticky and appealing as they look.

    4 – Changing behaviour through communication, Stephen Donajgrodzki

    This was a fantastic follow-up to Mark's session. Stephen basically talked us through some tactics used in public information/health communications that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they're predisposed not to do, and how communication can (and can't) make positive interventions. A couple of things stood out, in particular "implementation intentions" and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized), an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization, so campaigns, products, and customer interactions must be aligned.

    This is underscored by the second section of the presentation, which talked about interventions and preconditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this all gets a lot simpler. Coordinated, easily observed environmental pressures create preconditions for change and build motivation (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability). A triggering event leads to a change attempt (getting a cold and panicking about how bad the cough is). Interventions can be made to enable an attempt (NHS services, public information, nicotine patches). If it succeeds: yay. If it fails, there's strong negative reinforcement.

    Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and "bounded rationality", the idea being that to enable change you need to break through "automatic" thinking into "reflective" thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions, although they are presumably related.

    The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms.
    Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education.

    5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian)

    The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian's community sites, and learned a lot about how they come together, stabilise, grow, and react. Crucially, they can't be about sales or push messaging. A community is not just an audience. It's essential to start with what this particular segment or tribe is interested in, then what they want to hear. Eventually you can consider, in light of this, what they might want to buy, but you can't start with the product. A community won't cohere around one you're pushing. Her tips for community building were (again, sorry, not verbatim):

    Set goals – Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals.

    Think like a start-up – This is the "lean" stuff. Try things, fail quickly, respond.

    Don't restrict platforms – Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter.

    Track your stats – Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric: have you got CIOs, or just people who want to get jobs by mingling with CIOs?

    Build brand advocates – Do things to involve people and make them awesome, and they'll cheer-lead for you.

    The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen's models of social, endorsement-led, and example-led decision making. There's a lot here I haven't covered, and it may be worth some follow-up on community building.

    Thoughts

    I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven't done the background reading. So I'm going to, and if it seems to hold real water, and if it's possible to do it ethically (Stephen's presentation suggests it may be), then it's probably worth exploring. The message seemed to be: change what people do, and they'll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalise and support the decisions you want them to make, and they'll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I'm going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it's becoming an irritating cliché.

    Read the article

  • SQLAuthority News – Job Interviewing the Right Way (and for the Right Reasons) – Guest Post by Feodor Georgiev

    - by pinaldave
    Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. Feodor has written an excellent article on job interviewing the right way. Here is his article, in his own words.

    A while back I was thinking of starting a blog post series on interviewing and employing IT personnel. At that time I had just read the 'Smart and Gets Things Done' book (http://www.joelonsoftware.com/items/2007/06/05.html) and I was hyped up on some debatable topics regarding finding and employing the best people in the branch. I have no problem with hiring the best of the best; it's just the definition of 'the best of the best' that makes things a bit more complicated. One of the fundamental books one can read on the topic of interviewing is the one mentioned above. If you have not read it, then you must do so; not because it contains the ultimate truth, and not because it gives the answers to most questions on the subject, but because the book contains an extensive set of questions about interviewing and employing people. Of course, a big part of these questions have different answers depending on location, culture, available funds and so on. (What works in the US may not necessarily work in the Nordic countries or India, or it may work in a different way.) The only thing that is valid regardless of any external factor is this: curiosity.

    In my belief there are two kinds of people, curious and not-so-curious, regardless of profession. Think about it: professional success is directly proportional to the individual's curiosity + time of active experience in the field. (I say 'active experience' because vacations and any distractions do not count as experience. :) ) So curiosity is the factor which will distinguish a good employee from the not-so-good one. But let's shift our attention to something else for now: a few tips and tricks for successful interviews.

    Tip and trick #1: get your priorities straight.

    Your status usually dictates your priorities; for example, if the person looking for a job has just relocated to a new country, they might tend to ignore some of their priorities and overload others. In other words, setting priorities straight means defining the personal criteria by which the interview process is led. For example, questions similar to the following can help define the criteria for someone looking for a job: How badly do I need a (any) job? Is it more important to work in a clean and quiet environment, or is it important to get paid well (or both, if possible)? And so on. Furthermore, before going to the interview, the candidate should have a list of priorities, sorted by importance: e.g. I want a quiet environment, x amount of money, a great, helpful boss, a desk next to a window, and so on. It is also a good idea to be prepared and know which factors can be compromised on, and to what extent.

    Tip and trick #2: the interview is a two-way street.

    A job candidate should not forget that the interview process is not a one-way street. What I mean by this is that while the employer is interviewing the potential candidate, the job seeker should not miss the chance to interview the employer. Usually, the employer and the candidate will meet for an interview and talk about a variety of topics.
    In a quality interview the candidate will be introduced to key members of the team and will have the opportunity to ask them questions. By asking the right questions, both parties will form their opinion about each other. For example, if the candidate talks to one of the potential bosses during the interview process and notices that the potential manager has a hard time formulating a question, then it is up to the candidate to decide whether working with such a person is a red flag for them.

    There are as many interview processes out there as there are companies, and each one is different. Some bigger companies and corporations can afford pre-selection processes and 3 or even 4 stages of interviews; small companies usually settle for one interview. Some companies even give cognitive tests at the interview. Why not? In his book, Joel suggests that a good candidate should be pampered and spoiled beyond belief with a week-long vacation in New York, fancy hotels, food and who knows what. For all I can imagine, an interview might even take place at the top of the Eiffel Tower (right, Mr. Joel, right?). I doubt, however, that this is the optimal way to capture the attention of a good employee.

    The 'curiosity' topic

    What I have learned so far in my professional experience is that opinions can be subjective. Plus, opinions on technology subjects can also be subjective. According to Joel, only hiring the best of the best is worth it. If you ask me, there is no such thing as best of the best, simply because human nature (well, aside from some physical limitations, like putting your pants on through your head :) ) has no boundaries. And why would it have boundaries? I have seen many curious and interesting people, naturally good at technology, though as uninterested in it as one can possibly be; I have also seen plenty of people interested in technology who (in an ideal world) should have stayed far from it. At any rate, all of this sums up in the end to the 'supply and demand' factor. The interview-process big bang boils down to this: if there is a mutual benefit for both the employer and the potential employee to work together, then it all sorts out nicely; if there is no benefit, then it is much harder to get to a common place.

    Tip and trick #3: word-of-mouth is worth a thousand words

    Here I would just mention that the best thing a job candidate can get during the interview process is access to future team members or other employees of the new company. Nowadays the world has become quite small and everyone knows everyone. Look at LinkedIn, look at other professional networks, and you will realize how small the world really is. Knowing people is a good way to become more approachable and to approach them.

    Tip and trick #4: Be confident.

    It is true that for some people confidence is as natural as breathing, while others have to work hard to express it. Confidence is, however, a key factor in convincing the other side (potential employer or employee) that there is a great chance for success by working together. But it cannot get you very far if it's not backed up by talent, curiosity and knowledge.

    Tip and trick #5: The right reasons

    What really bothers me in Sweden (and I am sure there are similar situations in other countries) is that there is a tendency to fill quotas and to filter out candidates by criteria different from their skill and knowledge. In job ads I quite often see the phrases 'positive thinker', 'team player' and many similar hints about personality features.
    So my guess here is that discrimination has evolved to a new level. Let me clear up the definition of discrimination: 'unfair treatment of a person or group on the basis of prejudice'. And prejudice is 'partiality that prevents objective consideration of an issue or situation'. In other words, there is not much difference whether a job candidate is filtered out by race, gender or personality features; it is all a bad habit. And in reality, there is no proven correlation between technology knowledge paired with skills and personal features (gender, race, age, optimism). It is true that a significantly greater number of Darwin Awards were given to men than to women, but I am sure that somewhere there is a paper or theory explaining the genetics behind this. :)

    This topic actually brings to mind one of my favorite work-related stories. A while back I was working for a big company with many teams involved in its processes. One of the teams occupied two rooms: one had the team members and was full of light, colorful posters, chit-chat and giggles, whereas the other room was dark, lit only by a single monitor with a quiet person in front of it. Later on I realized that the 'dark room' person was the guru and the ultimate problem-solving brain, who did not like the chat and giggles and hence sat in a separate room. In reality, all severe problems which the chatty and cheerful team members could not solve, and all emergencies, were directed to 'the dark room'. And thus all worked out well. The moral of the story: personality has nothing to do with technology knowledge and skills. End of story.

    Summary: I'd like to stress the fact that there is no ultimately perfect candidate for a job, and there is no such thing as 'best-of-the-best'. From my personal experience, the main criterion by which I measure people (co-workers and bosses) is the curiosity factor; I know from experience that the more curious and inventive a person is, the better the chances are for great achievements in their field.

    Related stories (for extra credit):

    1) Get your priorities straight. A while back, as a consultant, I was working for a few days at a time at different offices and for different clients, and so I was able to compare and analyze the work environments. There were two different places which I compared, and recently I asked a friend of mine the following question: "Which one would you prefer as a work environment: a noisy office full of people, or a quiet office full of faulty smells because the office is rarely cleaned?" My friend was puzzled for a while, thought about it and said: "Hmm, you are talking about two different kinds of pollution… I will probably choose the second, since I can clean the workplace myself a bit…"

    2) The interview is a two-way street. One time, during a job interview, I met a potential boss who had a hard time phrasing a question. At that particular time it was clear to me that I would not have liked to work under this person. According to my work religion, a properly asked question contains at least half of the answer. And if I work with someone who cannot ask a question… then I'd be doing double or triple work. At another interview, after the technical part with the team leader of the department, I was introduced to one of the team members and we were left alone for 5 minutes.
I immediately jumped at the opportunity and asked the blunt question: 'What have you learned here over the past year, and how do you like your job?' The team member looked at me and said: 'Nothing really. I like playing with my cats at home, so I am out of here at 5pm and I don't have time for much.' I was disappointed at the time and did not take the job offer. I wasn't that shocked a few months later when the company went bankrupt.

3) The right reasons to take a job: personality check. A while back I was asked to serve as a job reference for a co-worker. I agreed, and after some weeks I got a phone call from the company where my colleague was applying for a job. The conversation started with the manager's questions about my colleague's personality and social skills. (You can probably guess what my internal reaction was… :) ) So, after 30 minutes of pouring common sense into the interviewer's head, we finally agreed that a shy or quiet personality has nothing to do with work skills and knowledge. Some years down the road, my former colleague took over the manager's position when the manager was demoted to a different department.

Reference: Feodor Georgiev, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • What is thnuclnt, vmware

    - by Viktor
    Hi, after starting vmware, i noticed that there are more 10 processes called 'thnuclnt'. they're listening on the port 4000. (since i use this port for something else, it's annoying) I'm wondering what this is, since i didn’t find anything about it. I use Mac 10.5.8 with VMware Fusion 2.0.2 thx and best, Viktor

    Read the article

  • Cacti: "An internal Net-Snmp error condition detected in Cacti snmp_count"

    - by Recc
    There's the odd forum topic about an error similarly obscure as this, but I haven't seen any for snmp_count in particular. Also I don't see graphing problems, though I can't simply go and eyeball all graphs. However the poller does time out and has to be stopped by its internal process preventing overruns. If I filter out the flood of this error in the log I dont get anything else except the poller timeout: 06/12/2014 12:48:00 PM - POLLER: Poller[0] Maximum runtime of 58 seconds exceeded. Exiting. 06/12/2014 12:48:00 PM - SYSTEM STATS: Time:58.8566 Method:spine Processes:1 Threads:40 Hosts:1923 HostsPerProcess:1923 DataSources:61584 RRDsProcessed:0 06/12/2014 12:48:00 PM - SPINE: Poller[0] ERROR: Spine Timed Out While Processing Hosts Internal I saw in the running processes /usr/local/spine/spine 0 2053 that's always left behind. When I kill it the flooding of the error stops. Of course it's the same on the next poll run as it goes through the devices. 2053 is apparently the DB ID for a device. I deleted it completely to see if that stops it. It doesn't, instead 2052 is seen there. I suspect It'll be the same if I keep deleting devices which I will not do. This started happening midday when I wasn't doing anything to the cacti server. I have tried reducing Maximum Threads per Process to 1 and Number of PHP Script Servers to 1. I've been running it at 10 script servers / 40 threads for months with poll cycle time of about 20 sec. I just found out Running snmpwalk on any host would begin returning the values but then timeout halfway through. This doesn't happen from different servers on the network this Cacti is suggesting still that it's a problem with it locally. Any suggestions? For one polling cycle I changed to use cmd.php instead. then I started getting errors like CMDPHP: Poller[0] Host[45] DS[541] WARNING: Result from SNMP not valid. Partial Result: U Perhaps as expected. Looking closely I see that every snmpwalk I do is interrupted at the same place as if some byte limit is hit and the connection torn down.

    Read the article

  • sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10 command returns error

    - by nyamka
    I'm trying to install Mongodb on Ubuntu 12 but when I run this command: sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10 This returned the error below: keyserver.ubuntu.com host not found gpgkeys: HTTP fetch error 7: couldn't connect: no such file or directory gpg:no valid openPGP data found gpg: Total number processes :0 I turned off Firewall on Iptables, but it don't work. Is there any idea?

    Read the article

  • Weird Apache error - Apache hangs & needs restarting regularly

    - by Moe
    I've been recently receiving this weird error where Apache just becomes unresponsive and completely stops until it is manually restarted. It gets to a point where I can not longer retrieve apache status from cPanel, and all websites running apache just hang on "connecting" until it times out. Has anyone else received this problem? This is a screenshot of my top when this weird problem occurs, usually the top has all httpd and php processes. Thanks for your help

    Read the article

  • IIS on localhost is very slow

    - by Nyla Pareska
    I am using IIS7 on Windows Vista dual core cpu. The first time hit on a WCF service or an ASP.NET webform sometimes takes way longer than a minute which is not really acceptable for me. I configured the application to use the Classic .NET application pool and tried playing with the Maximum worker processes, first setting it to 4 but put it back to 1 as it did not have the expected result. Are there any other things that I can try?

    Read the article
