Search Results

Search found 5335 results on 214 pages for 'agile processes'.

Page 110 of 214

  • Is it possible to make a web app proactive rather than reactive?

    - by Ari B.
    Web applications traditionally follow the request/response cycle, where a request is made by a user or by another web app. However, I'm curious whether it is possible to make a web app automatically initiate certain tasks upon its deployment to an app server. For example, let's say we have a web app that retrieves and processes data. Is it possible to configure this app to automatically retrieve and process data when certain criteria are met, rather than needing a request from a user or another web app?

    Read the article

  • Debugging utilities for Linux process hang issues?

    - by Niranjan
    I have a daemon process which does the configuration management; all the other processes must interact with this daemon to function. When I execute a large action, after a few hours the daemon process becomes unresponsive for 2 to 3 hours, and after that it works normally again. What debugging utilities are there for Linux process hang issues? How can I find out at what point the Linux process hangs?

    Read the article

  • How do I reconnect to Memcache when forking in Rails?

    - by Daniel Huckstep
    I have a Rails 3 application and a script called by rails runner. This script forks and does some work in other processes. I do the proper thing with ActiveRecord before forking, where I disconnect, fork, and reconnect, and all that jazz. My question is: I also use memcache for Rails.cache, but should I be disconnecting and reconnecting that too in my forks? If so, how would I go about that the Rails way?

    Read the article

  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming. Yesterday I wrote a multiprocess program. Modifications to shared resources were put in critical sections protected by P(mutex) and V(mutex), and that critical-section code was placed in a common library. The library is used by concurrent applications of my own; I had three applications that use the common code from the library and do their work independently.

        my library
        ----------
        work_on_shared_resource {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }

        my applications
        ---------------
        application1 {
            *[
                work_on_shared_resource
                do_something_else_non_critical
            ]
        }

        application2 {
            *[
                work_on_shared_resource
                do_something_else_non_critical
            ]
        }

        application3 {
            *[
                work_on_shared_resource
            ]
        }

        (*[...] denotes a loop.)

    I run the applications on Linux. I had assumed for years that the OS schedules all running processes fairly, in other words, that it gives every process an equal share of resource usage. When the first two applications were running, they worked perfectly well without deadlock. But once the third application started, it got the resource almost every time: since it does nothing in its non-critical region, it re-acquires the shared resource while the other tasks are busy doing something else. The other two applications were almost totally halted. When the third application was terminated forcefully, the other two resumed their work as before. I think this is a case of starvation: the first two applications were starved.

    Now, how can we ensure fairness? I have started to believe that the OS scheduler is innocent and blind; whoever wins the race gets the largest share of CPU and resources. Should we try to ensure fairness among resource users in the critical-section code in the library? Or should we leave it up to the applications to ensure fairness by being liberal rather than greedy? To my knowledge, adding fairness code to the common library would be an overwhelming task. On the other hand, relying on the applications will never ensure 100% fairness either: an application that does very little work after using the shared resource will always win the race, whereas an application that does heavy processing after its work with the shared resource will always starve. What is the best practice in this case? Where should we ensure fairness, and how?

    Sincerely, Srinivas Nayak
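
    One concrete way to impose fairness inside the library itself is a FIFO "ticket" lock: callers enter the critical section strictly in arrival order, so a tight-looping application cannot keep jumping back in ahead of the others. Below is a minimal sketch in C11, assuming the lock state lives in a shared-memory segment visible to all three applications; the names are illustrative and not taken from the original code.

        #include <stdatomic.h>
        #include <sched.h>

        /* Lock state; must be placed in memory shared by all participating
           processes (e.g. a POSIX shm segment), otherwise each process would
           only see its own private copy. */
        typedef struct {
            atomic_uint next_ticket;   /* ticket handed to the next arriving caller */
            atomic_uint now_serving;   /* ticket currently allowed into the critical section */
        } ticket_lock;

        void ticket_lock_acquire(ticket_lock *l)
        {
            /* take a ticket; callers proceed strictly in ticket order */
            unsigned my_ticket = atomic_fetch_add(&l->next_ticket, 1);
            while (atomic_load(&l->now_serving) != my_ticket)
                sched_yield();         /* give the CPU back instead of spinning hot */
        }

        void ticket_lock_release(ticket_lock *l)
        {
            atomic_fetch_add(&l->now_serving, 1);   /* admit the next ticket holder */
        }

    The simpler application-side fix is for application3 to yield (sched_yield() or a short sleep) between iterations, but as the question notes, that relies on every application being polite.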

    Read the article

  • Processor affinity settings for Linux kernel modules?

    - by Stephen Pape
    In Windows, I can set the processor affinity of driver code using KeSetSystemAffinityThread, and check which processor my code is running on using KeGetCurrentProcessorNumber. I'm trying to do something similar in a Linux kernel module, but the only affinity calls I can see are for userland processes. Is there any way to do this, so that I can run assembly code (e.g. sgdt) on a specific processor?
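
    There is no exact kernel-side twin of KeSetSystemAffinityThread, but one common approach from a module is smp_call_function_single(), which runs a short callback on a chosen CPU, while smp_processor_id() reports where code is currently executing. A minimal module sketch under that assumption (the target CPU number is arbitrary, and the callback runs in IPI context, so it must not sleep):

        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/smp.h>

        static void run_on_cpu(void *info)
        {
            /* executes on the requested CPU; keep it short and non-sleeping */
            pr_info("callback running on CPU %d\n", smp_processor_id());
            /* inline assembly such as sgdt could be issued here */
        }

        static int __init affinity_demo_init(void)
        {
            int target_cpu = 2;   /* arbitrary example CPU */

            /* run run_on_cpu() on target_cpu and wait for it to finish */
            smp_call_function_single(target_cpu, run_on_cpu, NULL, 1);
            return 0;
        }

        static void __exit affinity_demo_exit(void)
        {
        }

        module_init(affinity_demo_init);
        module_exit(affinity_demo_exit);
        MODULE_LICENSE("GPL");

    For longer-running work, the usual alternative is a kernel thread pinned with kthread_bind() before it is woken up.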

    Read the article

  • How to debug a SharePoint solution/feature through Visual Studio?

    - by pointlesspolitics
    Recently I tried to install a web part on a SharePoint site through the WSPBuilder utility. I created, built, and deployed the project to the 12 hive, then installed the solution through the Central Administration site and activated it in the site collection. I just wonder how I can debug a complex feature/solution. Because the two steps (build-deploy and activate) are totally independent, how can I attach the debugger to the worker process?

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back.

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the object a lot more often than we de-serialize it. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person who first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states, hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked: "How to Stream data from/to SQL Server BLOB fields?" and "Is there a SqlFileStream like class that works with Sql Server 2005?"

    Read the article

  • Changing the working directory for a process remotely

    - by Michael
    I've got an application that has a bug right now, but we're unable to update the end user to get the fix out. A possible workaround would be to change the working directory to the application's install directory, but from what I can tell, there's no way to do that from outside the program itself. Is there some sort of Windows API call that can change another process's working directory, or is that not available due to security issues? I figure it's not possible.

    Read the article

  • Threading in GWT (Client)

    - by 8EM
    From what I understand, the entire client side of a GWT application is converted to JavaScript when you build, so I suppose this question is related to both JavaScript and the possibilities that GWT offers. I have a couple of dozen processes that will need to be initiated in my GWT application, and each process will then continuously make calls to a server. Does GWT support threading on the client side?

    Read the article

  • SharePoint Wikis

    - by Keng
    Ok, I've seen a few posts that mention a few other posts about not using SP wikis because they suck. Since we are looking at doing our wiki in SP, I need to know why we shouldn't do it for a group of 6 automation-developers to document the steps in various automated processes and the changes that have to be made from time to time. Thanks.

    Read the article

  • How to delay putting a process in the background until it is ready to serve, in shell

    - by Jakub Narebski
    I have two processes: a server that should run in the background but only starts serving requests after a delay, and a client that should be started when the server is ready. The server prints a line containing "Acceptin connections" to its stderr when ready (the server's stderr is redirected to a file when running it in the background). How can I delay putting the server process in the background until it is ready to serve requests? Alternatively, how can I delay running the client until the server is ready? Language: shell script (or optionally Perl).
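
    The question asks for shell or Perl, but the underlying idea is language-independent: poll the file the server's stderr is redirected to until the readiness marker appears, and only then start the client (or only then consider the server "backgrounded"). A rough C sketch of that polling loop, with the log path as a placeholder:

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        int main(void)
        {
            const char *logpath = "server.log";           /* placeholder: file stderr is redirected to */
            const char *marker  = "Acceptin connections"; /* readiness line named in the question */
            char line[1024];
            FILE *log;

            while ((log = fopen(logpath, "r")) == NULL)
                sleep(1);                        /* wait for the log file to appear */

            for (;;) {
                if (fgets(line, sizeof line, log) != NULL) {
                    if (strstr(line, marker) != NULL)
                        break;                   /* server is ready */
                } else {
                    clearerr(log);               /* reached EOF: wait for more output */
                    sleep(1);
                }
            }
            fclose(log);
            /* at this point it is safe to start the client, e.g. via exec */
            return 0;
        }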

    Read the article

  • Concept of GUI's - Centralized or decentralized

    - by wvd
    Hello all, For a few months I've been learning Erlang, and now it is time to do some basic GUI work. After some quick research I saw there is an interesting library called 'wxi' (based on Haskell's Fudgets) which uses a different approach to GUIs: no central loop, and every widget processes its own events and handles its own data. What do you guys think about this? It seems like it can be efficient in languages such as Erlang, and it's an interesting approach. William van Doorn

    Read the article

  • About fork system call and global variables

    - by lurks
    I have this program in C++ that forks two new processes:

        #include <pthread.h>
        #include <iostream>
        #include <unistd.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <cstdlib>

        using namespace std;

        int shared;

        void func() {
            extern int shared;
            for (int i = 0; i < 10; i++)
                shared++;
            cout << "Process " << getpid() << ", shared " << shared
                 << ", &shared " << &shared << endl;
        }

        int main() {
            extern int shared;
            pid_t p1, p2;
            int status;
            shared = 0;
            if ((p1 = fork()) == 0) { func(); exit(0); }
            if ((p2 = fork()) == 0) { func(); exit(0); }
            for (int i = 0; i < 10; i++)
                shared++;
            waitpid(p1, &status, 0);
            waitpid(p2, &status, 0);
            cout << "shared variable is: " << shared << endl;
            cout << "Process " << getpid() << ", shared " << shared
                 << ", &shared " << &shared << endl;
        }

    The two forked processes and the parent each increment the shared variable. Because the variable belongs to the data segment of each process, the final value in each is 10: the increments are independent. However, the memory address of the shared variable is the same in every process; you can try compiling and watching the output of the program. How can that be explained? I cannot understand it; I thought I knew how fork() works, but this seems very odd. I need an explanation of why the address is the same, although they are separate variables.
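
    The addresses match because virtual addresses are per-process: after fork() the child gets a copy-on-write copy of the data segment at the same virtual addresses, so &shared prints the same value even though the underlying pages diverge. For contrast, a variable that really is shared across fork() has to live in a MAP_SHARED mapping; the following is a small illustrative sketch, not part of the question's code.

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main(void)
        {
            /* one int in an anonymous shared mapping, visible to parent and child */
            int *counter = mmap(NULL, sizeof *counter, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            if (counter == MAP_FAILED) {
                perror("mmap");
                return 1;
            }
            *counter = 0;

            if (fork() == 0) {            /* child: its writes are visible to the parent */
                for (int i = 0; i < 10; i++)
                    (*counter)++;
                _exit(0);
            }
            wait(NULL);                   /* no race: parent reads only after the child exits */
            printf("counter = %d at %p in the parent\n", *counter, (void *)counter);
            return 0;
        }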

    Read the article

  • Reading a file N lines at a time in ruby

    - by Sam
    I have a large file (hundreds of megs) that consists of filenames, one per line. I need to loop through the list of filenames and fork off a process for each one. I want a maximum of 8 forked processes at a time, and I don't want to read the whole filename list into RAM at once. I'm not even sure where to begin; can anyone help me out?
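
    The question is about Ruby, but the pattern itself is plain POSIX and small: read one filename at a time (never the whole list), fork a worker per line, and whenever 8 children are already running, wait for one to finish before forking the next. A rough sketch in C, with the list path and the per-file work as placeholders:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define MAX_CHILDREN 8

        static void process_file(const char *name)
        {
            /* placeholder for the real per-file work */
            printf("child %d handling %s\n", (int)getpid(), name);
        }

        int main(void)
        {
            FILE *list = fopen("filenames.txt", "r");    /* placeholder path */
            char line[4096];
            int running = 0;

            if (list == NULL) {
                perror("fopen");
                return 1;
            }
            while (fgets(line, sizeof line, list) != NULL) {
                line[strcspn(line, "\n")] = '\0';        /* strip the trailing newline */

                if (running == MAX_CHILDREN) {           /* throttle: free one slot first */
                    wait(NULL);
                    running--;
                }
                pid_t pid = fork();
                if (pid == 0) {                          /* child */
                    process_file(line);
                    fflush(stdout);                      /* _exit() does not flush stdio */
                    _exit(0);
                } else if (pid > 0) {
                    running++;
                }                                        /* pid < 0: fork failed, skip this file */
            }
            while (running-- > 0)                        /* reap the remaining children */
                wait(NULL);
            fclose(list);
            return 0;
        }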

    Read the article

  • How can I uniquely identify users trying to open() a kernel module?

    - by user349072
    Dear gurus, I'm working on a kernel module and I'm trying to uniquely identify each of the users trying to open() the module (they can be either processes or threads). What is the best way to identify them? Is there an ID I can get from a system call? I wish to keep all users in a list that specifies whether they are trying to open the module for read or write, and I need to know which one tried acting... Many thanks. Regards, IK
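
    One common approach is the kernel's current task pointer: in the driver's open() handler, current->tgid identifies the calling process, current->pid the individual thread, and current_uid() the user, while filp->f_mode says whether the caller asked for read or write access. A minimal sketch of such a handler (device registration and the bookkeeping list are omitted; this is illustrative, not a drop-in implementation):

        #include <linux/module.h>
        #include <linux/kernel.h>
        #include <linux/fs.h>
        #include <linux/sched.h>
        #include <linux/cred.h>
        #include <linux/uidgid.h>
        #include <linux/user_namespace.h>

        static int demo_open(struct inode *inode, struct file *filp)
        {
            pid_t tgid = current->tgid;                        /* process (thread-group) id */
            pid_t tid  = current->pid;                         /* individual thread id */
            uid_t uid  = from_kuid(&init_user_ns, current_uid());
            int wants_write = (filp->f_mode & FMODE_WRITE) ? 1 : 0;

            pr_info("open: tgid=%d tid=%d uid=%u write=%d\n", tgid, tid, uid, wants_write);
            /* a real driver would append these to a list protected by a lock */
            return 0;
        }

        /* wired into the driver's file_operations, e.g.: */
        static const struct file_operations demo_fops = {
            .owner = THIS_MODULE,
            .open  = demo_open,
        };

        MODULE_LICENSE("GPL");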

    Read the article

  • Multi-Process Configuration

    - by user200937
    Hi, I have a product built out of multiple processes. Each process internally uses Commons Configuration. Does anyone have an idea how to manage the config? That is, we do not want to duplicate variables, and each process should still be able to read them. Additionally, a DB solution is no good, as we do not want to depend on a DB for something like configuration. Thanks Yair

    Read the article

  • Munin MongoDB Plugin Not Showing...?

    - by alfredo
    I have installed munin and munin-node on my monitoring server, and installed munin-node on my MongoDB server. I have set them both up and all is working great. But the MongoDB plugins aren't showing on my monitoring server: I see the node listed with "Disk, Network, Processes, System", but not the mongo stuff. If I execute one of the plugins directly on the mongo server (python /usr/share/munin/plugins/mongo_btree), it returns output, but nothing shows on the monitoring server.

    Read the article
