Search Results

Search found 23098 results on 924 pages for 'multiple processes'.

Page 442/924

  • Why is my process not being displayed by top?

    - by drN
    I am running a Mathematica script (this question probably doesn't fit on Mathematica.SE, however) and I know that it generally takes up a lot of RAM and loads up my cores. However, although pgrep MathKernel shows a pid, I find that top doesn't list it among the top processes, even though it is taking up about 2.25 GB of the 8 GB available to me.

        pmap -x my_process_id
        total kB         2243132 1907404 1892108

        ps aux | grep MathKernel
        dnaneet 20837 12.6 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mathematica/8.0.4/SystemFiles/Kernel/Binaries/Linux-x86-64/MathKernel -runfirst $TopDirectory="/share/apps/mathematica/8.0.4" -script ./dcm_10micrometer_2x -- ./dcm_10micrometer_2x

    ps aux shows that the process is taking about 12% CPU (the line in asterisks):

        dnaneet   20601  0.0  0.0   68264    1660 pts/1 Ss 09:15 0:00 -bash
        **dnaneet 20837 12.2 23.3 2234944 1907404 pts/1 Sl 09:23 8:01 /share/apps/mat**
        dnaneet   21922  0.0  0.0   65604     948 pts/1 R+ 10:29 0:00 ps -aux

    Did this process fail, and is the MathKernel just lingering?
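
    A quick way to confirm what the kernel thinks of such a process, independent of top's sorting and refresh, is to query the pid directly. Below is a minimal sketch using the third-party psutil package; the pid is just the one from the ps output above, and the whole check is illustrative rather than part of the original post:

        import psutil

        PID = 20837  # the MathKernel pid reported by pgrep/ps

        p = psutil.Process(PID)
        print(p.status())                     # e.g. 'running', 'sleeping', or 'zombie'
        print(p.memory_info().rss / 1024**2)  # resident set size in MB
        print(p.cpu_percent(interval=1.0))    # CPU usage sampled over one second

    If status() reports 'zombie', the process has already exited and only its exit record lingers; a sleeping or running state with a stable RSS usually just means top sorted it below the visible rows.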

  • Entity Framework and WCF

    - by Nihilist
    Hi, I am a little confused about designing WCF services with EF. When using WCF and EF, where do we draw the line on which properties to return with the entity and which not to? Here is my scenario. I have a User, with these relations:

        User [1 to many] Address
        User [1 to many] Email
        User [1 to many] Phone

    Now, on page1 of the webform I can edit user information: a few properties on the User entity, and I can also edit the Address, Phone, and Email entities (add, delete, and update any of them). On page2, I can only update User properties and nothing related to the navigation properties (Address, Email, Phone).

    So when I return the User entity (or DTO), should I be returning the navigation properties too? Or should the client make multiple calls to get the navigation properties? Also, how does it go with Save? Should the client make multiple calls to save the user and related entities, or just one call to save the graph? Let's say I just have Save(User user) (where user carries all the related entities too); both page1 and page2 will call Save and pass me the user, but page1 needs a lot more information, while page2 just needs the user's primitive properties.

    So my question is: where do we draw this line, and how do we design these services? Is a WCF operation designed around the page and the fields it has? I hope I explained my problem well enough.

  • Logging off does not kill process in Windows Server 2003

    - by user25951
    I have a Windows Server 2003 (Enterprise, SP2) machine. My understanding was that any process created by a user would be terminated when that user logs off, but it's not happening. I log in via the Administrator account, start a simple Java process, and log off, but the process is not killed. Is there some configuration for this? I am mostly a software programmer and not much into servers, so I am stuck.

    I found out that while logging off:

    1) Win32 is supposed to send a CTRL_LOGOFF_EVENT to all processes started by that user.
    2) The JVM is supposed to handle this event and terminate the VM.

    But I can't understand why my Java process is not killed when I log off. Any idea?
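
    One way to see whether the logoff event even reaches a process is to install a console control handler and log what arrives. This is a hedged, Windows-only sketch using ctypes; the log path is a placeholder, and the constant 5 for CTRL_LOGOFF_EVENT comes from the Win32 headers:

        import ctypes
        from ctypes import wintypes

        CTRL_LOGOFF_EVENT = 5  # value from wincon.h

        HANDLER = ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.DWORD)

        @HANDLER
        def on_ctrl_event(ctrl_type):
            if ctrl_type == CTRL_LOGOFF_EVENT:
                # Log to a file, since the console is going away.
                with open(r"C:\temp\logoff.log", "a") as f:
                    f.write("received CTRL_LOGOFF_EVENT\n")
            return False  # let the default handler run as well

        ctypes.windll.kernel32.SetConsoleCtrlHandler(on_ctrl_event, True)
        input("waiting for console events...")

    Note that only processes attached to a console receive console control events; a process started detached from any console (or a GUI process, which gets WM_QUERYENDSESSION instead) never sees CTRL_LOGOFF_EVENT, which may be what is happening to the Java process here.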

  • Dependency injection in constructors

    - by andre
    Hello everyone. I'm starting a new project and setting up the base to work on. A few questions have arisen, and I'll probably be asking quite a few in here; hopefully I'll find some answers. The first step is handling dependencies between objects. I've decided to go with the dependency injection design pattern, to which I'm somewhat new, to handle all of this for the application.

    When actually coding it I came across a problem: if a class has multiple dependencies, and you want to pass those dependencies via the constructor (so that they cannot be changed after you instantiate the object), how do you do it without passing an array of dependencies and without using call_user_func_array(), eval(), or Reflection? This is what I'm looking for:

        <?php
        class DI
        {
            private $pool = array();

            public function getClass($classname)
            {
                if (!isset($this->pool[$classname])) {
                    // Load dependencies
                    $deps = $this->loadDependencies($classname);

                    // Here is where the magic should happen
                    $instance = new $classname($dep1, $dep2, $dep3);

                    // Add to pool
                    $this->pool[$classname] = $instance;
                    return $instance;
                } else {
                    return $this->pool[$classname];
                }
            }
        }

    Again, I would like to avoid the most costly ways of instantiating the class. Any other suggestions?
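
    For contrast, in a language with argument unpacking the variable-arity constructor call stops being a problem entirely. Here is a minimal Python sketch of the same container idea; the names and the registry shape are illustrative, not from the original post:

        class Container:
            def __init__(self):
                self._registry = {}  # class -> tuple of dependency classes
                self._pool = {}      # class -> shared instance

            def register(self, cls, *deps):
                self._registry[cls] = deps

            def get(self, cls):
                if cls not in self._pool:
                    # Resolve each dependency recursively, then unpack the
                    # whole list into the constructor in a single call.
                    deps = [self.get(d) for d in self._registry.get(cls, ())]
                    self._pool[cls] = cls(*deps)
                return self._pool[cls]

    (PHP later gained the same ability with the ...$args splat operator in 5.6, which postdates this question.)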

  • Resource mapping in a Ruby on Rails URL (RESTful API)

    - by randombits
    I'm having a bit of difficulty coming up with the right answer to this, so I will solicit my problem here. I'm working on a RESTful API. Naturally, I have multiple resources, some of which have parent-child relationships and some of which stand alone. Where I'm having a bit of difficulty is figuring out how to make things easier for the folks who will be building clients against my API.

    The situation is this. Hypothetically, I have a 'Street' resource. Each street has many homes, so Street has_many Homes and a Home belongs_to a Street. If a user wants to request an HTTP GET on a specific home resource, the following should work:

        http://mymap/streets/5/homes/10

    That allows a user to get information for the home with id 10. Straightforward. My question is: am I breaking the rules of the book by also giving the user access to:

        http://mymap/homes/10

    Technically that home resource exists on its own, without the street. It makes sense for it to exist as its own entity outside an encapsulating street, even though business logic says otherwise. What's the best way to handle this?

  • Drupal 6 - I am using drupal_execute to insert a CCK node into my site. Everything is working except the taxonomy

    - by rdurbin
        $form_state['values']['field_prx_mp3_labels'][0][value] = $mp3_labels;
        $form_state['values']['taxonomy'][0][value] = array('tags' => array('1' => 'Music'));
        $errs = drupal_execute('prx_content_node_form', $form_state, (object) $nodeTmp);

    This is a Drupal 6 site. I am using drupal_execute to create a node programmatically. The first line (for field_prx_mp3_labels) is working; the second (for taxonomy) is not. Here is what the select on the node-add form for my CCK type looks like:

        <select name="taxonomy[2][]" multiple="multiple" class="form-select" id="edit-taxonomy-2" size="9">
          <option value="">- None -</option>
          <option value="5">Music</option>
          <option value="6">-Rock/Pop</option>
          <option value="7">-Jazz/Blues</option>
          <option value="8">-Classical</option>
          <option value="9">-Music Documentaries</option>
          <option value="10">-Festivals/Concerts</option>
          <option value="11">Arts</option>
          <option value="19">-Literature</option>
          <option value="12">Nature</option>
          <option value="13">History</option>
          <option value="15">-Music</option>
          <option value="14">Culture</option>
          <option value="17">-American Indian</option>
          <option value="18">-Latino</option>
          <option value="16">-Youth Perspective</option>
        </select>

    I have tried many, many variations of the second line (the taxonomy one). This comment seemed close, but it hasn't worked for me: http://drupal.org/node/178506#comment-1155576 Thanks!

  • Multi-reader IPC solution?

    - by gct
    I'm working on a framework in C++ (just for fun for now) that lets the user write plugins which use a standard API to stream data between each other. There are going to be three basic transport mechanisms for the data: files, sockets, and some kind of IPC piping system. The system is set up so that, for the non-file transports, each stream can have multiple readers; i.e., once a server socket is set up, multiple computers can connect and stream the data.

    I'm a little stuck on the multi-reader IPC system, though. All my plugins run in threads, so they live in the same address space, and some kind of shared-memory system would work fine. I was thinking I'd write my own circular buffer with a write pointer and read pointers chasing it around the buffer, but I have my doubts that I can achieve the same performance as something like Linux pipes.

    I'm curious what people would suggest for a multi-reader solution to something like this. Is the overhead of pipes or domain sockets low enough that I could just open a connection to each reader and issue separate writes to each reader? This is intended to carry significant volumes of data (tens of mega-samples/sec), so performance is a must.
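
    The circular-buffer-with-chasing-read-pointers idea is easy to prototype before committing to a C++ implementation. Below is a minimal single-writer / multi-reader sketch in Python; the locking and the fixed capacity are illustrative assumptions, and a real C++ version chasing pipe-level performance would use atomics or a lock-free design instead:

        import threading

        class Broadcast:
            def __init__(self, capacity=1024):
                self.buf = [None] * capacity
                self.capacity = capacity
                self.head = 0  # total number of items ever written
                self.cond = threading.Condition()

            def new_reader(self):
                # A reader is nothing more than its own cursor into the stream.
                return {"pos": self.head}

            def write(self, item):
                with self.cond:
                    self.buf[self.head % self.capacity] = item
                    self.head += 1
                    self.cond.notify_all()

            def read(self, reader):
                with self.cond:
                    self.cond.wait_for(lambda: reader["pos"] < self.head)
                    if self.head - reader["pos"] > self.capacity:
                        # The writer lapped this reader; skip to the oldest
                        # sample that still survives in the ring.
                        reader["pos"] = self.head - self.capacity
                    item = self.buf[reader["pos"] % self.capacity]
                    reader["pos"] += 1
                    return item

    Each reader advances independently, so a slow reader never blocks the writer; it just loses the oldest samples once it falls a full buffer behind, which is the usual trade-off in broadcast ring buffers.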

  • Database schema for a library

    - by ABach
    Hi all - I'm designing a library management system for a department at my university, and I wanted to enlist your eyes on my proposed schema. This post is primarily concerned with how we store multiple copies of each book; something about what I've designed rubs me the wrong way, and I'm hoping you all can point out better ways to tackle things.

    For dealing with users checking out books, I've devised three tables: book, customer, and book_copy. The relationships between these tables are as follows: every book has many book_copies (to avoid duplicating the book's information while storing the fact that we have multiple copies of that book), and every customer has many book_copies (the other end of the relationship). The tables themselves are designed like this:

        book
        --------------------------------
        + id
        + title
        + author
        + isbn
        + etc.

        customer
        --------------------------------
        + id
        + first_name
        + last_name
        + email
        + address
        + city
        + state
        + zip
        + etc.

        book_copy
        --------------------------------
        + id
        + book_id (FK to book)
        + customer_id (FK to customer)
        + checked_out
        + due_date
        + etc.

    Something about this seems incorrect (or at least inefficient) to me; the perfectionist in me feels like I'm not normalizing this data correctly. What say ye? Is there a better, more effective way to design this schema? Thanks!

  • django join-like expansion of queryset

    - by jimbob
    I have a list of Persons, each of which has multiple fields that I usually filter on, using the object_list generic view. Each Person can have multiple Comments attached to it, each with a datetime and a text string. What I ultimately want is the option to filter the comments based on dates.

        class Person(models.Model):
            name = models.CharField("Name", max_length=30)
            ## has ~30 other fields, usually filtered on as well

        class Comment(models.Model):
            date = models.DateTimeField()
            person = models.ForeignKey(Person)
            comment = models.TextField("Comment Text", max_length=1023)

    What I want to do is get a queryset like

        Person.objects.filter(comment__date__gt=date(2011, 1, 1)).order_by('comment__date')

    send that queryset to object_list, and see only the comments in range, ordered by date, with only so many objects per page. E.g., if "Person A" has comments on 12/3/11, 1/2/11, and 1/5/11, "Person B" has no comments, and "Person C" has a comment on 1/3, I would see:

        "Person A", 1/2 - comment
        "Person C", 1/3 - comment
        "Person A", 1/5 - comment

    I would strongly prefer not to switch to filtering based on Comment.objects.filter(), as that would force me to repeat large sections of code in both the view and the template. Right now, executing the queryset above returns (PersonA, PersonC, PersonA), but if I render that in a template, each person's comment_set still contains all of their comments, even the ones outside the date range.

    Ideally there would be some functionality to expand a Person queryset's comment_set into a larger queryset that can be sorted and ordered by comment and handed to the object_list generic view. This is fairly simple to do in SQL with a JOIN, but I don't want to abandon the ORM, which I use everywhere else.
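
    On newer Django versions (1.7 and later) the "comment_set still contains everything" half of this has a direct answer in Prefetch objects. A hedged sketch, since it still does not flatten the result into one row per comment:

        from datetime import date
        from django.db.models import Prefetch

        cutoff = date(2011, 1, 1)

        people = (
            Person.objects
            .filter(comment__date__gt=cutoff)
            .distinct()
            .prefetch_related(
                Prefetch(
                    'comment_set',
                    queryset=Comment.objects.filter(date__gt=cutoff).order_by('date'),
                )
            )
        )

    For the flattened one-row-per-comment listing described above, paginating Comment.objects.filter(date__gt=cutoff).select_related('person') remains the idiomatic route, repeated template code notwithstanding.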

  • No grammar constraints (DTD or XML schema) detected for the document.

    - by fastcodejava
    I have this DTD: http://fast-code.sourceforge.net/template.dtd. But when I include it in an XML file, I get the warning: "No grammar constraints (DTD or XML schema) detected for the document." The XML is:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE templates PUBLIC "//UNKNOWN/" "http://fast-code.sourceforge.net/template.dtd">
        <templates>
          <template type="type">
            <description>Some</description>
            <variation></variation>
            <variation-field></variation-field>
            <allow-multiple-variation></allow-multiple-variation>
            <class-pattern></class-pattern>
            <getter-setter>setter</getter-setter>
            <allowed-file-extensions>java</allowed-file-extensions>
            <number-required-classes>1</number-required-classes>
            <template-body>
              <![CDATA[ Some Data ]]>
            </template-body>
          </template>
        </templates>

    Any clue?

  • Can a GPO Startup Script start a background process and exit immediately?

    - by pepoluan
    I have Googled and not yet found an answer. Scenario: one of my GPOs has a Startup Script that takes a long time to finish. For various reasons, we have to run the scripts synchronously. Naturally, this causes slow startup times (sometimes as long as 15 minutes!) before the logon screen appears.

    After profiling and analyzing the offending script, I conclusively determined that the slow step will not affect the result of the successive GPOs. In other words, that particular step (and all steps after it) could run in the background.

    My question: is it possible for the Startup Script to just 'trigger' another script/program that will run to completion even after the Startup Script exits? That is, can the "child processes" of the Startup Script continue to live even when the Startup Script's own process ends?

    Additional info: the Domain Controllers are 2008 and 2008 R2; the workstations are Windows XP.
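
    If the slow step can be wrapped in a small helper, spawning it detached is straightforward. A hedged sketch in Python; the script path is a placeholder, and the classic batch-file equivalent is start "" cmd /c slow-part.cmd:

        import subprocess

        # Windows-only flags (Python 3.7+ exposes DETACHED_PROCESS): the child
        # gets no console and is not waited on, so it keeps running after this
        # script exits.
        subprocess.Popen(
            ["cmd", "/c", r"C:\Scripts\slow-part.cmd"],  # hypothetical path
            creationflags=subprocess.DETACHED_PROCESS
            | subprocess.CREATE_NEW_PROCESS_GROUP,
        )
        # Fall through and exit immediately; the GPO engine sees the startup
        # script as finished while slow-part.cmd continues in the background.

    Whether the child survives depends on it not being tied to a job object that kills its members when the script host exits, so this is worth testing on the actual XP images.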

  • Problem with Ctrl key on an HP 2510p

    - by Ernelli
    I have a tricky problem with my corporate HP Compaq 2510p: the Ctrl key does not always work properly. I believe it is hooked into some filter chain that processes certain Ctrl+[key] combinations, which is very annoying. I would like some pointers on how to proceed in analysing what application or setup could cause this kind of keyboard filtering.

    Anyway, some more background info. Ctrl works together with X, C, and V, both in editors and in terminals (Ctrl-C, Ctrl-Z, etc.), but Ctrl-Shift-Esc and Ctrl-Alt-Del do not work; very annoying, so my only option for logging in is HP's security app. Shift-Arrow works for selecting text, and Ctrl-Arrow works for moving the caret word by word, but Ctrl-Shift-Arrow does not select word by word.

    Now, the strange thing is that everything works fine with an external USB keyboard, so it might be the driver. Still, searching for the problem description yields nothing on Google. I have VMware Player installed (but not running) and HP ProtectTools installed; either of these could affect the keyboard driver.

  • A way to auto-cycle through (and close) all screen sessions

    - by JBWhitmore
    I frequently use screen when I log into the interactive nodes of a supercomputer that I have access to, and I often run things and move on. There are about 20 separate nodes I can log into, and if I check any one of them I'll have something like 4 detached sessions, each of which may have about 5 screen sessions within it. Is there a quick way to cycle through all of these and close them down if they are not running any processes? My current process is to run screen -ls, then screen -r ####, and then type exit until I'm back to the base screen.
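
    One scriptable approach is to parse screen -ls and ask each detached session to quit; screen -S <session> -X quit is the documented way to terminate a session from outside. A hedged sketch: as written it closes every detached session without checking for running processes, so the "is it actually idle?" test is left as an assumption to fill in:

        import re
        import subprocess

        # `screen -ls` can exit nonzero even on success, so don't use check=True.
        out = subprocess.run(["screen", "-ls"], capture_output=True, text=True).stdout

        for line in out.splitlines():
            m = re.match(r"^\s+(\S+)\s+.*\(Detached\)", line)
            if m:
                session = m.group(1)  # e.g. "12345.pts-0.node01"
                subprocess.run(["screen", "-S", session, "-X", "quit"])
                print("closed", session)

    Looping this over all 20 nodes via ssh would clean everything up in one pass.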

  • uWSGI and python virtual env

    - by user27512
    I'm trying to use uWSGI with a virtual env in order to use the Trac bug tracker on it. I've installed uwsgi system-wide via pip. Next, I've installed Trac in a virtualenv:

        $ virtualenv venv
        $ . venv/bin/activate
        $ pip install trac

    I've then written a simple uWSGI configuration file:

        [uwsgi]
        master = true
        processes = 1
        socket = localhost:3032
        home = /srv/http/trac/venv/
        no-site = true
        gid = www-data
        uid = www-data
        env = TRAC_ENV=/srv/http/trac/projects/my_project
        module = trac.web.main:dispatch_request

    But when I try to launch it, it fails:

        $ uwsgi --http :8000 --ini /etc/uwsgi/vassals-available/my_project.ini --gid www-data --uid www-data
        ...
        Set PythonHome to /srv/http/trac/venv/
        ...
        *** Operational MODE: single process ***
        ImportError: No module named trac.web.main
        unable to load app 0 (mountpoint='') (callable not found or import error)

    I think uWSGI isn't using the virtual env. When inside the virtual env, I can import trac.web.main without having an ImportError. How can I do that? Thanks

  • Merging my code with Ajax code: problem

    - by sandy
    I'd like some help with the following. At the link below I found some nice Ajax code: http://www.w3schools.com/php/php_ajax_livesearch.asp

    I want to hook my code up to the code you see at the link above and replace the dropdown list with the live search, but I could not figure out where in the code to make the change so that my code works with Ajax. I wish somebody could help me.

        <?php
        include ("connect.php");

        print_r($_POST['sector_list']);
        $member_id = intval($_POST['sector_list']);

        if ($member_id == 0) {
            // Default choice was selected
        } else {
            $res = mysql_query("SELECT * FROM members WHERE MemberID = $member_id LIMIT 1");
            if (mysql_num_rows($res) == 0) {
                // Not a valid member
            } else {
                // The member is in the database
            }
        }
        ?>
        <form method="POST" action=<?php echo $_SERVER["PHP_SELF"]; ?> >
            <input type="hidden" name="sector" value="sector_list">
            <select name="sector_list[]" class="inputstandard" multiple="multiple">
                <option size="40" value="default">send to </option>
                <?php
                $result = mysql_query('SELECT * from members') or die(mysql_error());
                while ($row = mysql_fetch_assoc($result)) {
                    echo '<option value="' . $row['MemberName'] . '">' . $row['MemberName'] . '</option>';
                }
                ?>
            </select>
            <input type="submit" name="go" value="go">
        </form>

  • Can I optimize this mod_wsgi / Apache file further?

    - by tomwolber
    I am new to Django/Python/mod_wsgi, and I was wondering if I could optimize this file to reduce memory usage:

        ServerRoot "/home/<foo>/webapps/django_wsgi/apache2"

        LoadModule dir_module        modules/mod_dir.so
        LoadModule env_module        modules/mod_env.so
        LoadModule log_config_module modules/mod_log_config.so
        LoadModule mime_module       modules/mod_mime.so
        LoadModule rewrite_module    modules/mod_rewrite.so
        LoadModule setenvif_module   modules/mod_setenvif.so
        LoadModule wsgi_module       modules/mod_wsgi.so

        LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog /home/<foo>/logs/user/access_django_wsgi.log combined
        ErrorLog /home/<foo>/logs/user/error_django_wsgi.log

        KeepAlive Off
        Listen 12345
        MaxSpareThreads 3
        MinSpareThreads 1
        MaxClients 5
        MaxRequestsPerChild 300
        ServerLimit 4
        HostnameLookups Off
        SetEnvIf X-Forwarded-SSL on HTTPS=1
        ThreadsPerChild 5

        WSGIDaemonProcess django_wsgi processes=5 python-path=/home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6 threads=1
        WSGIPythonPath /home/<foo>/webapps/django_wsgi:/home/<foo>/webapps/django_wsgi/lib/python2.6
        WSGIScriptAlias /auctions /home/<foo>/webapps/django_wsgi/auctions.wsgi
        WSGIScriptAlias /achievers /home/<foo>/webapps/django_wsgi/achievers.wsgi

  • How to design data storage for a partitioned tagging system?

    - by Morgan Cheng
    How would you design data storage for a huge tagging system (like Digg or Delicious)? There is already some discussion of this, but it assumes a centralized database. Since the data is expected to grow, we'll need to partition it into multiple shards sooner or later. So the question becomes: how do you design data storage for a partitioned tagging system?

    The tagging system basically has 3 tables:

        Item       (item_id, item_content)
        Tag        (tag_id, tag_title)
        TagMapping (map_id, tag_id, item_id)

    This works fine for finding all items for a given tag, and all tags for a given item, as long as the tables are stored in one database instance. If we need to partition the data into multiple database instances, it is not that easy.

    For table Item, we can partition its content by its key item_id. For table Tag, we can partition by tag_id: to partition table Tag into K databases, we simply choose database number (tag_id % K) to store a given tag. But how do we partition table TagMapping?

    The TagMapping table represents a many-to-many relationship, and the only approach I can imagine is duplication. That is, the same TagMapping content gets two copies: one partitioned by tag_id and the other partitioned by item_id. To find the tags for a given item, we use the copy partitioned by item_id; to find the items for a given tag, we use the copy partitioned by tag_id. As a result there is data redundancy, and the application level has to keep all copies consistent, which looks hard. Is there any better solution to this many-to-many partitioning problem?
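
    The two-copy scheme is easy to state precisely in code. A toy sketch of the routing logic; the shard count and the in-memory "shards" are stand-ins for separate database instances:

        K = 4  # number of shards; an illustrative choice

        # Copy A: mappings bucketed by item_id, answers "tags for item?"
        by_item = [set() for _ in range(K)]
        # Copy B: mappings bucketed by tag_id, answers "items for tag?"
        by_tag = [set() for _ in range(K)]

        def add_mapping(item_id, tag_id):
            # The write must reach both copies (ideally atomically, which is
            # exactly the consistency problem described above).
            by_item[item_id % K].add((item_id, tag_id))
            by_tag[tag_id % K].add((item_id, tag_id))

        def tags_for_item(item_id):
            shard = by_item[item_id % K]  # one shard holds all mappings for this item
            return [t for (i, t) in shard if i == item_id]

        def items_for_tag(tag_id):
            shard = by_tag[tag_id % K]
            return [i for (i, t) in shard if t == tag_id]

    Each query touches exactly one shard; the price is doubled writes and the cross-copy consistency work.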

  • Can't open Control Panel or IE

    - by Josh
    I have an XP computer where, whenever I try to open Control Panel, nothing happens; nothing even flashes on the screen. Same thing with Internet Explorer. I've scanned the computer with Malwarebytes and Avast: Malwarebytes found some adware, which it removed without problems, and Avast found nothing. I looked at the running processes with Process Explorer; nothing malicious running. I also looked at Process Monitor output while trying to run IE; nothing obviously wrong, the process just decides to exit. What can I try next? I would suspect a corrupt IE install, but Control Panel doesn't work either.

    UPDATE: Neither works in Safe Mode under my user account (the only user on the computer). But in Safe Mode under the built-in Administrator account, both work. So whatever is broken is only broken in the one account. Is there anything under the HKCU registry key that could break this?

  • Bash: verify that process has stopped

    - by pfac
    I'm working on a script meant to start/stop a set of services. For stopping, it has to terminate many processes which take a while and might hang. The script needs to verify that the process has indeed terminated, and send an email if that does not happen within a given period. This is what I have:

        pkill -f "stuff"

        for i in {1..30}; do
            VERIFICATIONS=$i
            if verification_command; then
                echo "It's gone"
                break
            fi
            sleep 2
        done

        if [ $VERIFICATIONS -ge 30 ]; then
            echo "failed to terminate"
            # send mail
        fi

    Is there a better way to do this?

  • Separation of memory-oriented and CPU-oriented processes

    - by Jeevan Dongre
    I am a developer working for an e-commerce company, running an e-commerce application built with Ruby on Rails (Spree Commerce). I currently run two medium instances in production: one is a high-memory instance with 3.8 GB of RAM and a single-core CPU, and the other is a high-CPU instance with a dual-core CPU. AWS calls them m1.medium and c1.medium respectively.

    My question: is it possible to separate the processes according to whether they are CPU-intensive or memory-intensive, so that all the CPU-intensive processes run on the high-CPU instance and all the memory-intensive processes run on the high-memory instance? Is there any tool available to identify those processes? Kindly give me some heads-up. Thank you.
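
    For the "which tool identifies those processes" part, sampling each process's CPU and resident memory gives a reasonable first-pass classification. A sketch using the psutil package; the thresholds are arbitrary assumptions to be tuned per host:

        import psutil

        CPU_HUNGRY_PCT = 50.0  # assumed threshold, tune for your workload
        MEM_HUNGRY_MB = 500.0  # assumed threshold

        procs = list(psutil.process_iter(attrs=["pid", "name"]))
        for p in procs:
            try:
                p.cpu_percent(None)  # prime the per-process CPU counters
            except psutil.NoSuchProcess:
                pass
        psutil.cpu_percent(interval=1.0)  # sampling window of one second

        for p in procs:
            try:
                cpu = p.cpu_percent(None)
                rss_mb = p.memory_info().rss / 1024**2
            except psutil.NoSuchProcess:
                continue
            if cpu >= CPU_HUNGRY_PCT:
                print("CPU-bound:", p.info["name"], "pid", p.info["pid"])
            elif rss_mb >= MEM_HUNGRY_MB:
                print("RAM-bound:", p.info["name"], "pid", p.info["pid"])

    Once the processes are classified, the separation itself is a deployment decision, e.g. pointing the memory-hungry app servers at the m1.medium and the CPU-hungry workers at the c1.medium.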

  • Questions about the Backpropagation Algorithm

    - by Colemangrill
    I have a few questions concerning backpropagation. I'm trying to learn the fundamentals behind neural network theory, and wanted to start small by building a simple XOR classifier. I've read a lot of articles and skimmed multiple textbooks, but I can't seem to teach this thing the pattern for XOR.

    Firstly, I am unclear about the learning model for backpropagation. Here is some pseudocode to represent how I am trying to train the network. (Let's assume my network is set up properly, i.e., multiple inputs connect to a hidden layer, which connects to an output layer, all wired up properly.)

        SET guess = getNetworkOutput()  // Note this is using a sigmoid activation function.
        SET error = desiredOutput - guess
        SET delta = learningConstant * error * sigmoidDerivative(guess)

        For Each Node in inputNodes
            For Each Weight in inputNodes[n]
                inputNodes[n].weight[j] += delta;

        // At this point, I am assuming the first layer has been trained.
        // Then I recurse a similar function over the hidden layer and output layer,
        // the prime difference being that it further divvies up the adjustment delta.

    I realize this is probably not enough to go off of, and I will gladly expound on any part of my implementation. Using the above algorithm, my neural network does get trained, kind of, but not properly. The output is always:

        XOR 1 1 [smallest number]
        XOR 0 0 [largest number]
        XOR 1 0 [medium number]
        XOR 0 1 [medium number]

    I can never train the [1,1] and [0,0] cases to converge to the same value. If you have any suggestions, additional resources, articles, blogs, etc. for me to look at, I am very interested in learning more about this topic. Thank you for your assistance, I appreciate it greatly!
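
    Two things commonly bite XOR learners, and both are visible in the pseudocode above: the update adds the same delta to every weight without multiplying by the input feeding that weight, and XOR is unlearnable without a hidden layer and bias terms. For comparison, here is a compact working reference in NumPy; it is a generic 2-4-1 sigmoid network, not a reconstruction of the poster's exact code:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))  # input -> hidden
        W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))  # hidden -> output
        lr = 0.5

        for _ in range(20000):
            h = sigmoid(X @ W1 + b1)                # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)     # delta at the output layer
            d_h = (d_out @ W2.T) * h * (1 - h)      # delta shared back to hidden
            W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0, keepdims=True)
            W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(0, keepdims=True)

        print(out.round(3))  # should approach [[0], [1], [1], [0]]

    The d_h line is the "divvying up" step the pseudocode gestures at: each hidden unit receives a share of the output delta proportional to its outgoing weight, and every weight update multiplies that delta by its own input.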

  • SELECT returning dynamic columns

    - by Ascalonian
    I have two tables: Standards and Service Offerings. A Standard can have multiple Service Offerings, and each Standard can have a different number of Service Offerings associated with it. What I need to do is write a view that returns some common data and then lists the Service Offerings on one line. For example:

        Standard Id | Description | SO #1 | SO #2 | SO #3 | ... | SO #21 | SO Count
        1           | One         | A     | B     | C     | ... | G      | 21
        2           | Two         | A     |       |       | ... |        | 1
        3           | Three       | B     | D     | E     | ... |        | 3

    I have no idea how to write this. The number of SO columns is fixed at a specific number (21 in this case), so we cannot exceed that. Any ideas on how to approach this? A place I started is below; it just returned multiple rows for each Service Offering, when they need to be on one row.

        SELECT *
        FROM SERVICE_OFFERINGS
        WHERE STANDARD_KEY IN (SELECT STANDARD_KEY FROM STANDARDS)
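
    If the pivot may happen in application code rather than in the view itself, grouping rows by Standard and padding out to the fixed 21 columns takes only a few lines. A sketch that assumes rows of (standard_id, description, offering) have already been fetched; the sample data is hypothetical:

        from collections import defaultdict

        MAX_SO = 21  # fixed column count from the requirement

        # hypothetical fetched rows: (standard_id, description, offering)
        rows = [(1, "One", "A"), (1, "One", "B"), (2, "Two", "A"), (3, "Three", "B")]

        grouped = defaultdict(list)
        for std_id, desc, so in rows:
            grouped[(std_id, desc)].append(so)

        for (std_id, desc), sos in sorted(grouped.items()):
            padded = (sos + [""] * MAX_SO)[:MAX_SO]
            print([std_id, desc] + padded + [len(sos)])

    In pure SQL the usual equivalents are 21 MAX(CASE ...) expressions over a ROW_NUMBER() per Standard, or the database's PIVOT feature where one exists.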

  • Will USMT 4.0 in MDT 2010 Move/Migrate the .NK2 File for Outlook?

    - by Mitch
    We're about to begin a refresh project for about 100 XP Pro laptops and have a concern regarding the .NK2 file, which holds Outlook's cached email addresses(?). If possible we'd like to have USMT move/migrate this, but I can't find anything that confirms this happens automatically or has been done before. I see lots of manual processes, but at this point I'm not sure we can use those. Has anyone done this or seen it done? Perhaps you can point me to a resource that gives an idea of how it's done? Any information would be appreciated. USMT seems to cover a lot of the details, so missing this one seems odd. Thanks in advance for any responses.

  • NSFetchedResultsController on secondary UITableView - how to query data?

    - by Jason
    I am creating a Core Data-based navigation iPhone app with multiple screens. Let's say it is a flash-card application. The data model is very simple, with only two entities: Language and CardSet. There is a one-to-many relationship between the Language entity and the CardSet entities, so each Language may contain multiple CardSets. In other words, Language has a to-many relationship Language.cardSets, which points to the list of CardSets, and CardSet has a relationship CardSet.language, which points back to the Language.

    There are two screens: (1) an initial table view screen, which displays the list of Languages; and (2) a secondary table view screen, which displays the list of CardSets in the selected Language. On the initial screen, which lists the Languages, I am using NSFetchedResultsController to keep the list up to date. This screen passes the selected Language to the secondary screen.

    On the secondary screen, I am trying to figure out whether I should again use an NSFetchedResultsController to maintain the list of CardSets, or whether I should work through Language.cardSets and simply pull the list out of the object model. The latter makes the most sense programmatically, because I already have the Language, but then the list would not automatically be updated on changes.

    I have looked at the NSFetchedResultsController documentation, and it seems easy to create predicates based on attributes, but not on relationships. I.e., I can create the following:

        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name LIKE[c] 'Chuck Norris'"];

    How can I access my data through the direct relationship, Language.cardSets, and also have the table auto-update using NSFetchedResultsController? Is this possible?

  • Am I using too much memory? (Rails on EC2 with Resque)

    - by Stpn
    I am looking at the memory usage of a Rails application (it uses background processes via Resque), and since the common answer to the question "how many workers is too many?" was "test and see", I ran some memory commands and wonder if someone can help figure out whether memory usage is already high enough, or whether I can still add some extra workers. This is all under maximum load:

        $ free -t -m
                     total       used       free     shared    buffers     cached
        Mem:          1756       1532        223          0         12        229
        -/+ buffers/cache:       1291        464
        Swap:          895         10        885
        Total:        2652       1543       1108

        $ vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
         0  0  10588 156172  13400 326476    1    6     4     0    5    4  1  0 99  0

    If there is any extra info I can provide to help answer this, I would be happy to do so. If the question is strange in some way, please let me know and I'd be glad to fix it.
