Search Results

Search found 10536 results on 422 pages for 'cpu usage'.


  • Javascript dynamic table memory usage

    - by Dan
    I use a dynamic table:

        <html>
        <body>
        <button id="button">Build table</button>
        <div id="container">
        <script type="text/javascript">
        window.onload = function() {
          var table = null;
          var row = "<tr><td>111111111111111111111111111111111111111111111111111111</td>" +
                    "<td>222222222222222222222222222222222222222222222222222222</td>" +
                    "<td>333333333333333333333333333333333333333333333333333333</td></tr>";
          var data = null;
          for (var i = 0; i < 2000; i++) {
            data += row;
          }
          var obj = document.getElementById("button");
          obj.onclick = function buildTable() {
            document.getElementById("container").innerHTML =
              "<div><table><tbody>" + data + "</tbody></table></div>";
          };
        };
        </script>
        </body>
        </html>

    Using Chrome's task manager, each time new data is loaded the memory usage increases considerably and doesn't go down, so after some time the app consumes a lot of memory and requires the browser to be closed. Is there any change I can make to the code to solve this, or is it a browser-side problem?
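
    One possible mitigation (a sketch, not part of the original post; the element IDs follow the snippet above and the rebuildTable helper is hypothetical): explicitly drop the old table before injecting the new markup, so the detached nodes can be garbage-collected instead of accumulating.

        // Sketch: clear the previous DOM subtree before inserting new markup.
        var container = document.getElementById("container");

        function rebuildTable(data) {
          // Remove the old rows so nothing keeps them reachable.
          while (container.firstChild) {
            container.removeChild(container.firstChild);
          }
          // Insert the freshly built markup in one assignment.
          container.innerHTML = "<table><tbody>" + data + "</tbody></table>";
        }

    Two caveats: data starts out as null in the original code, so the generated markup begins with the literal text "null" (initializing it to an empty string avoids that), and Chrome's task-manager figure typically falls only after a garbage-collection pass, so a number that climbs between clicks is not by itself proof of a leak.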

    Read the article

  • Keeping the DI-container usage in the composition root in Silverlight and MVVM

    - by adrian hara
    It's not quite clear to me how I can design so I keep the reference to the DI-container in the composition root for a Silverlight + MVVM application. I have the following simple usage scenario: there's a main view (perhaps a list of items) and an action to open an edit view for one single item. So the main view has to create and show the edit view when the user takes the action (e.g. clicks some button). For this I have the following code: public interface IView { IViewModel ViewModel {get; set;} } Then, for each view that I need to be able to create I have an abstract factory, like so public interface ISomeViewFactory { IView CreateView(); } This factory is then declared a dependency of the "parent" view model, like so: public class SomeParentViewModel { public SomeParentViewModel(ISomeViewFactory viewFactory) { // store it } private void OnSomeUserAction() { IView view = viewFactory.CreateView(); dialogService.ShowDialog(view); } } So all is well until here, no DI-container in sight :). Now comes the implementation of ISomeViewFactory: public class SomeViewFactory : ISomeViewFactory { public IView CreateView() { IView view = new SomeView(); view.ViewModel = ???? } } The "????" part is my problem, because the view model for the view needs to be resolved from the DI-container so it gets its dependencies injected. What I don't know is how I can do this without having a dependency to the DI-container anywhere except the composition root. One possible solution would be to have either a dependency on the view model that gets injected into the factory, like so: public class SomeViewFactory : ISomeViewFactory { public SomeViewFactory(ISomeViewModel viewModel) { // store it } public IView CreateView() { IView view = new SomeView(); view.ViewModel = viewModel; } } While this works, it has the problem that since the whole object graph is wired up "statically" (i.e. the "parent" view model will get an instance of SomeViewFactory, which will get an instance of SomeViewModel, and these will live as long as the "parent" view model lives), the injected view model implementation is stateful and if the user opens the child view twice, the second time the view model will be the same instance and have the state from before. I guess I could work around this with an "Initialize" method or something similar, but it doesn't smell quite right. Another solution might be to wrap the DI-container and have the factories depend on the wrapper, but it'd still be a DI-container "in disguise" there :) Any thoughts on this are greatly appreciated. Also, please forgive any mistakes or rule-breaking, since this is my first post on stackoverflow :) Thanks! ps: my current solution is that the factories know about the DI-container, and it's only them and the composition root that have this dependency.
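
    One way to keep the container in the composition root (a sketch, not from the original question; it reuses the post's interface names and assumes ISomeViewModel derives from IViewModel) is to inject a delegate factory rather than a view-model instance, so every CreateView call produces a fresh view model and the factory never sees the container:

        using System;

        // Sketch: the factory depends on a Func<ISomeViewModel> delegate. Many DI
        // containers can supply such a delegate automatically; otherwise it can be
        // wired by hand in the composition root.
        public class SomeViewFactory : ISomeViewFactory
        {
            private readonly Func<ISomeViewModel> createViewModel;

            public SomeViewFactory(Func<ISomeViewModel> createViewModel)
            {
                this.createViewModel = createViewModel;
            }

            public IView CreateView()
            {
                IView view = new SomeView();
                view.ViewModel = createViewModel(); // fresh, fully injected instance per call
                return view;
            }
        }

        // Composition root, hand-wired variant (container registrations would differ):
        // var factory = new SomeViewFactory(() => new SomeViewModel(/* dependencies */));

    This avoids the stale-state problem because the second dialog gets a second view model, while only the composition root knows how view models are built.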

    Read the article

  • MemoryStream, XmlTextWriter and Warning 4 CA2202 : Microsoft.Usage

    - by rasx
    The Run Code Analysis command in Visual Studio 2010 Ultimate returns a warning when it sees a certain pattern with MemoryStream and XmlTextWriter. This is the warning:

        Warning 7 CA2202 : Microsoft.Usage : Object 'ms' can be disposed more than once in method
        'KinteWritePages.GetXPathDocument(DbConnection)'. To avoid generating a System.ObjectDisposedException
        you should not call Dispose more than one time on an object.: Lines: 421
        C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 421 Songhay.DataAccess.KinteWritePages

    This is the form:

        static XPathDocument GetXPathDocument(DbConnection connection)
        {
            XPathDocument xpDoc = null;
            var ms = new MemoryStream();
            try
            {
                using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
                {
                    using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                    {
                        writer.WriteStartDocument();
                        writer.WriteStartElement("data");
                        do
                        {
                            while(reader.Read())
                            {
                                writer.WriteStartElement("item");
                                for(int i = 0; i < reader.FieldCount; i++)
                                {
                                    writer.WriteRaw(String.Format("<{0}>{1}</{0}>", reader.GetName(i), reader[i].ToString()));
                                }
                                writer.WriteFullEndElement();
                            }
                        } while(reader.NextResult());
                        writer.WriteFullEndElement();
                        writer.WriteEndDocument();
                        writer.Flush();
                        ms.Position = 0;
                        xpDoc = new XPathDocument(ms);
                    }
                }
            }
            finally
            {
                ms.Dispose();
            }
            return xpDoc;
        }

    The same kind of warning is produced for this form:

        XPathDocument xpDoc = null;
        using(var ms = new MemoryStream())
        {
            using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                {
                    //...
                }
            }
        }
        return xpDoc;

    By the way, the following form produces another warning:

        XPathDocument xpDoc = null;
        var ms = new MemoryStream();
        using(XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
        {
            using(DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
            {
                //...
            }
        }
        return xpDoc;

    The above produces the warning:

        Warning 7 CA2000 : Microsoft.Reliability : In method 'KinteWritePages.GetXPathDocument(DbConnection)',
        object 'ms' is not disposed along all exception paths. Call System.IDisposable.Dispose on object 'ms'
        before all references to it are out of scope.
        C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 383 Songhay.DataAccess.KinteWritePages

    In addition to the following, what are my options? (1) Suppress warning CA2202. (2) Suppress warning CA2000 and hope that Microsoft is disposing of MemoryStream (because Reflector is not showing me the source code). (3) Rewrite my legacy code to recognize the wonderful XDocument and LINQ to XML.
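
    A third route (a sketch following the pattern the CA2202 documentation recommends, adapted to the method above; it is not from the original post) is to hand ownership of the stream to the writer and let the finally block dispose it only if the writer was never created:

        static XPathDocument GetXPathDocument(DbConnection connection)
        {
            XPathDocument xpDoc = null;
            MemoryStream ms = null;
            try
            {
                ms = new MemoryStream();
                using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
                {
                    MemoryStream buffer = ms; // keep a usable alias for Position/XPathDocument
                    ms = null;                // the writer now owns the stream; don't dispose it twice
                    using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                    {
                        // ... write the document exactly as in the original method ...
                        writer.Flush();
                        buffer.Position = 0;
                        xpDoc = new XPathDocument(buffer);
                    }
                }
            }
            finally
            {
                if (ms != null)
                {
                    ms.Dispose(); // only reached if the XmlTextWriter constructor threw
                }
            }
            return xpDoc;
        }

    Suppressing CA2202 with a justification is also defensible here, since disposing a MemoryStream more than once is harmless in practice; the rule is about the general IDisposable contract, not an actual failure in this code.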

    Read the article

  • How to show percentage of 'memory used' in a win32 process?

    - by pj4533
    I know that memory usage is a very complex issue on Windows. I am trying to write a UI control for a large application that shows a 'percentage of memory used' number, in order to give the user an indication that it may be time to clear up some memory, or more likely restart the application.

    One implementation used ullAvailVirtual from MEMORYSTATUSEX as a base, then used HeapWalk() to walk the process heap looking for additional free memory. The HeapWalk() step was needed because we noticed that, after the application had been running for a while, memory allocated and then freed by the heap never showed up again in the ullAvailVirtual number: after hours of intensive work, ullAvailVirtual no longer accurately reported the amount of memory available. However, this method proved less than ideal, due to occasional odd errors that HeapWalk() would return, even when the process heap was not corrupted. Further, since this is a UI control, the heap-walking code was executing every 5-10 seconds. I tried contacting Microsoft about why HeapWalk() was failing and escalated a case via MSDN, but never got an answer other than "you probably shouldn't do that".

    So, as a second implementation, I used PagefileUsage from PROCESS_MEMORY_COUNTERS as a base. Then I used VirtualQueryEx to walk the virtual address space, adding up all regions that weren't MEM_FREE and that returned a value for GetMappedFileNameA(). My thinking was that PagefileUsage was essentially 'private bytes', so if I added to that value the total size of the DLLs my process was using, it would be a good approximation of the amount of memory my process was using. This second method seems to (sorta) work; at least it doesn't cause crashes like the heap-walker method. However, when both methods are enabled, the values are not the same, so one of the methods is wrong.

    So, StackOverflow world... how would you implement this? Which method is more promising, or do you have a third, better method? Should I go back to the original method and further debug the odd errors? Should I stay away from walking the heap every 5-10 seconds? Keep in mind the whole point is to indicate to the user that things are getting 'dangerous', and that they should either free up memory or restart the application. Perhaps a 'percentage used' isn't the best solution to this problem? What is? Another idea I had was a color-based system (red, yellow, green), which I could base on more factors than just a single number.
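
    For comparison, here is a minimal sketch (not the asker's code) of the simplest variant of the second approach: take the process's private bytes straight from GetProcessMemoryInfo and express them as a percentage of physical memory. Whether "private bytes over total RAM" is the right ratio for the warning light is an assumption; for a 32-bit process the fraction of its 2 GB address space may be the more relevant number.

        #include <windows.h>
        #include <psapi.h>   /* link with psapi.lib */
        #include <stdio.h>

        int main(void)
        {
            PROCESS_MEMORY_COUNTERS_EX pmc = { sizeof(pmc) };
            MEMORYSTATUSEX msx = { sizeof(msx) };

            if (GetProcessMemoryInfo(GetCurrentProcess(),
                                     (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof(pmc)) &&
                GlobalMemoryStatusEx(&msx))
            {
                /* PrivateUsage is the "private bytes" figure the question refers to. */
                double pct = 100.0 * (double)pmc.PrivateUsage / (double)msx.ullTotalPhys;
                printf("Private bytes: %llu (%.1f%% of physical memory)\n",
                       (unsigned long long)pmc.PrivateUsage, pct);
            }
            return 0;
        }

    This is cheap enough to run every few seconds and needs neither HeapWalk() nor a VirtualQueryEx sweep; the trade-off is that it ignores shared mappings such as DLLs, which the second implementation deliberately counted.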

    Read the article

  • Usage of Document() function in XSLT 1.0

    - by infant programmer
    I am triggering the transformation from .NET code. Unless I add the "EnableDocumentFunction" property to the XsltSettings, the program throws an error saying "Usage of the document() function is prohibited". The program itself is not editable (it is effectively read-only), so is it possible to edit the XSL code itself so that I can use the document() function? The sample XSL and XML are here.

    Sample XSL:

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:msxsl="urn:schemas-microsoft-com:xslt" exclude-result-prefixes="msxsl">
          <xsl:output method="xml" indent="yes"/>
          <xsl:template match="@* | node()">
            <xsl:copy>
              <xsl:apply-templates select="@* | node()"/>
            </xsl:copy>
          </xsl:template>
          <xsl:variable name="State_Code_Trn">
            <State In="California" Out="CA"/>
            <State In="CA" Out="CA"/>
            <State In="Texas" Out="TX"/>
            <State In="TX" Out="TX"/>
          </xsl:variable>
          <xsl:template name="testing" match="test_node">
            <xsl:variable name="test_val">
              <xsl:value-of select="."/>
            </xsl:variable>
            <xsl:element name="{name()}">
              <xsl:choose>
                <xsl:when test="document('')/*/xsl:variable[@name='State_Code_Trn']/State[@In=$test_val]">
                  <xsl:value-of select="document('')/*/xsl:variable[@name='State_Code_Trn']/State[@In=$test_val]/@Out"/>
                </xsl:when>
                <xsl:otherwise>
                  <xsl:text>Other</xsl:text>
                </xsl:otherwise>
              </xsl:choose>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    And the sample XML:

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <test_node>California</test_node>
          <test_node>CA</test_node>
          <test_node> CA</test_node>
          <test_node>Texas</test_node>
          <test_node>TX</test_node>
          <test_node>CAA</test_node>
          <test_node></test_node>
        </root>
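
    If the host program cannot be changed, one workaround (a sketch, not from the original post) is to drop the document('') self-reference entirely and read the lookup table through the msxsl:node-set() extension, which the stylesheet's existing urn:schemas-microsoft-com:xslt namespace already makes available under .NET's processor and which needs no special settings:

        <!-- Inside the "testing" template, replacing the document('') lookups -->
        <xsl:when test="msxsl:node-set($State_Code_Trn)/State[@In=$test_val]">
          <xsl:value-of select="msxsl:node-set($State_Code_Trn)/State[@In=$test_val]/@Out"/>
        </xsl:when>

    The assumption is that the transform really runs on XslCompiledTransform (or the older XslTransform); other XSLT 1.0 processors would need exsl:node-set() or an equivalent extension instead.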

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, Background: I have a set of java background workers I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04LTS server (a media temple (ve) instance). They're both running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250MB, while on the server they use more than 2.7GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus, kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine does not seem to ever use memory for buffer or cache (according to htop), which seems a bit strange, but perhaps normal for a server?

    (edited) Some info:

        (server)
        root@devel:/app/axir/target# uname -a
        Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux

        (local)
        wiktor@beastie:~$ uname -a
        Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux

    (edited) Comparing PS output: (ps -eo "ppid,pid,cmd,rss,sz,vsz")

        PPID  PID   CMD                          RSS     SZ      VSZ
        (local)
        1588  1615  java -cp axir-distribution.  25484   234382  937528
        1615  1631  java -cp /home/wiktor/Code/  83472   163059  652236
        1615  1657  java -cp /home/wiktor/Code/  70624   89135   356540
        1615  1658  java -cp /home/wiktor/Code/  37652   77625   310500
        1615  1669  java -cp /home/wiktor/Code/  38096   77733   310932
        1615  1675  java -cp /home/wiktor/Code/  37420   61395   245580
        1615  1684  java -cp /home/wiktor/Code/  38000   77736   310944
        1615  1703  java -cp /home/wiktor/Code/  39180   78060   312240
        1615  1712  java -cp /home/wiktor/Code/  38488   93882   375528
        1615  1719  java -cp /home/wiktor/Code/  38312   77874   311496
        1615  1726  java -cp /home/wiktor/Code/  38656   77958   311832
        1615  1727  java -cp /home/wiktor/Code/  78016   89429   357716
        (server)
        22522 23560 java -cp axir-distribution.  24860   285196  1140784
        23560 23585 java -cp /app/axir/target/a  100764  161629  646516
        23560 23667 java -cp /app/axir/target/a  72408   92682   370728
        23560 23670 java -cp /app/axir/target/a  39948   97671   390684
        23560 23674 java -cp /app/axir/target/a  40140   81586   326344
        23560 23739 java -cp /app/axir/target/a  39688   81542   326168

    They look very similar. In fact, the question now is why, if I add up the virtual memory usage on the server (3.2GB), does it more closely reflect the 2.4GB of memory used (according to free), yet locally the virtual memory used adds up to a much more substantial 4.7GB but only actually uses ~250MB. It seems that perhaps memory isn't being shared as aggressively (if that's even possible). Thank you for your help, Wiktor
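
    A diagnostic sketch (an assumption, not from the original post): the server's kernel string (2.6.18-028stab...) suggests an OpenVZ/Virtuozzo container, and on such containers the reported "memory used" figure often tracks committed virtual memory rather than resident set size, so the same workers can look several times larger than they do locally. Comparing the two numbers directly, and checking the container's own accounting, narrows this down:

        # Resident (RSS) vs. virtual (VSZ) size of the workers:
        ps -o pid,rss,vsz,cmd -C java

        # Total resident memory of all java processes, in MB:
        ps -o rss= -C java | awk '{ sum += $1 } END { print sum/1024 " MB" }'

        # On OpenVZ guests, the container-level accounting and limits (needs root):
        egrep 'privvmpages|physpages' /proc/user_beancounters

    If the resident totals match between the two machines while only the virtual totals differ, the JVMs are behaving the same and the difference is in how the hosting platform counts memory.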

    Read the article

  • Mysql 100% CPU + Slow query

    - by felipeclopes
    I'm using the RDS database from amazon with a some very big tables, and yesterday I started to face 100% CPU utilisation on the server and a bunch of slow query logs that were not happening before. I tried to check the queries that were running and faced this result from the explain command +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ | 1 | SIMPLE | businesses | const | PRIMARY | PRIMARY | 4 | const | 1 | Using index; Using temporary; Using filesort | | 1 | SIMPLE | activities_businesses | ref | PRIMARY,index_activities_users_on_business_id,index_tweets_users_on_tweet_id_and_business_id | index_activities_users_on_business_id | 9 | const | 2252 | Using index condition; Using where | | 1 | SIMPLE | activities_b_taggings_975e9c4 | ref | taggings_idx | taggings_idx | 782 | const,myapp_production.activities_businesses.id,const | 1 | Using index condition; Using where | | 1 | SIMPLE | activities | eq_ref | PRIMARY,index_activities_on_created_at | PRIMARY | 8 | myapp_production.activities_businesses.activity_id | 1 | Using where | +----+-------------+-------------------------------+--------+----------------------------------------------------------------------------------------------+---------------------------------------+---------+-----------------------------------------------------------------+------+----------------------------------------------+ Also checkin in the process list, I got something like this: +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +----+-----------------+-------------------------------------+----------------------------+---------+------+--------------+------------------------------------------------------------------------------------------------------+ | 1 | my_app | my_ip:57152 | my_app_production | Sleep | 0 | | NULL | | 2 | my_app | my_ip:57153 | my_app_production | Sleep | 2 | | NULL | | 3 | rdsadmin | localhost:49441 | NULL | Sleep | 9 | | NULL | | 6 | my_app | my_other_ip:47802 | my_app_production | Sleep | 242 | | NULL | | 7 | my_app | my_other_ip:47807 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 8 | my_app | my_other_ip:47809 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 9 | my_app | my_other_ip:47810 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 10 | my_app | my_other_ip:47811 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | | 11 | my_app | my_other_ip:47813 | my_app_production | Query | 231 | Sending data | SELECT my_fields... | ... 
So based on the numbers, it looks like there is no reason to have a slow query, since the worst execution plan is the one that goes through 2k rows which is not much. Edit 1 Another information that might be useful is the slow query_log SET timestamp=1401457485; SELECT my_query... # User@Host: myapp[myapp] @ ip-10-195-55-233.ec2.internal [IP] Id: 435 # Query_time: 95.830497 Lock_time: 0.000178 Rows_sent: 0 Rows_examined: 1129387 Edit 2 After profiling, I got this result. The result have approximately 250 rows with two columns each. +----------------------+----------+ | state | duration | +----------------------+----------+ | Sending data | 272 | | removing tmp table | 0 | | optimizing | 0 | | Creating sort index | 0 | | init | 0 | | cleaning up | 0 | | executing | 0 | | checking permissions | 0 | | freeing items | 0 | | Creating tmp table | 0 | | query end | 0 | | statistics | 0 | | end | 0 | | System lock | 0 | | Opening tables | 0 | | logging slow query | 0 | | Sorting result | 0 | | starting | 0 | | closing tables | 0 | | preparing | 0 | +----------------------+----------+ Edit 3 Adding query as requested SELECT activities.share_count, activities.created_at FROM `activities_businesses` INNER JOIN `businesses` ON `businesses`.`id` = `activities_businesses`.`business_id` INNER JOIN `activities` ON `activities`.`id` = `activities_businesses`.`activity_id` JOIN taggings activities_b_taggings_975e9c4 ON activities_b_taggings_975e9c4.taggable_id = activities_businesses.id AND activities_b_taggings_975e9c4.taggable_type = 'ActivitiesBusiness' AND activities_b_taggings_975e9c4.tag_id = 104 AND activities_b_taggings_975e9c4.created_at >= '2014-04-30 13:36:44' WHERE ( businesses.id = 1 ) AND ( activities.created_at > '2014-04-30 13:36:44' ) AND ( activities.created_at < '2014-05-30 12:27:03' ) ORDER BY activities.created_at; Edit 4 There may be a chance that the indexes are not being applied due to difference in column type between the taggings and the activities_businesses, on the taggable_id column. mysql> SHOW COLUMNS FROM activities_businesses; +-------------+------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +-------------+------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | activity_id | bigint(20) | YES | MUL | NULL | | | business_id | bigint(20) | YES | MUL | NULL | | +-------------+------------+------+-----+---------+----------------+ 3 rows in set (0.01 sec) mysql> SHOW COLUMNS FROM taggings; +---------------+--------------+------+-----+---------+----------------+ | Field | Type | Null | Key | Default | Extra | +---------------+--------------+------+-----+---------+----------------+ | id | int(11) | NO | PRI | NULL | auto_increment | | tag_id | int(11) | YES | MUL | NULL | | | taggable_id | bigint(20) | YES | | NULL | | | taggable_type | varchar(255) | YES | | NULL | | | tagger_id | int(11) | YES | | NULL | | | tagger_type | varchar(255) | YES | | NULL | | | context | varchar(128) | YES | | NULL | | | created_at | datetime | YES | | NULL | | +---------------+--------------+------+-----+---------+----------------+ So it is examining way more rows than it shows in the explain query, probably because some indexes are not being applied. Do you guys can help m with that?
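
    Two things that are commonly checked in this situation (a sketch of suggestions, not part of the original post) are the shape of the taggings index and the type mismatch visible above: activities_businesses.id is INT while taggings.taggable_id is BIGINT, and the join also filters on taggable_type, tag_id and created_at. A composite index that matches the join predicates, plus aligned column types, gives the optimizer a chance to examine far fewer rows than the 1,129,387 reported in the slow query log:

        -- Hypothetical index name; columns follow the JOIN/WHERE clauses above.
        ALTER TABLE taggings
          ADD INDEX idx_taggings_taggable_tag (taggable_type, taggable_id, tag_id, created_at);

        -- Align the joined column types (or, alternatively, widen
        -- activities_businesses.id to BIGINT instead).
        ALTER TABLE taggings MODIFY taggable_id INT;

        -- Re-check the plan with the real constants afterwards:
        EXPLAIN SELECT activities.share_count, activities.created_at
        FROM activities_businesses
        INNER JOIN businesses ON businesses.id = activities_businesses.business_id
        INNER JOIN activities ON activities.id = activities_businesses.activity_id
        JOIN taggings t ON t.taggable_id = activities_businesses.id
                       AND t.taggable_type = 'ActivitiesBusiness'
                       AND t.tag_id = 104
                       AND t.created_at >= '2014-04-30 13:36:44'
        WHERE businesses.id = 1
          AND activities.created_at > '2014-04-30 13:36:44'
          AND activities.created_at < '2014-05-30 12:27:03'
        ORDER BY activities.created_at;

    Whether the MODIFY is safe depends on whether taggings rows ever reference ids beyond the INT range, so treat both statements as candidates to test on a copy of the data first.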

    Read the article

  • "The usage of semaphores is subtly wrong"

    - by Hoonose
    This past semester I was taking an OS practicum in C, in which the first project involved making a threads package, then writing a multiple producer-consumer program to demonstrate the functionality. However, after getting grading feedback, I lost points for "The usage of semaphores is subtly wrong" and "The program assumes preemption (e.g. uses yield to change control)" (We started with a non-preemptive threads package then added preemption later. Note that the comment and example contradict each other. I believe it doesn't assume either, and would work in both environments). This has been bugging me for a long time - the course staff was kind of overwhelmed, so I couldn't ask them what's wrong with this over the semester. I've spent a long time thinking about this and I can't see the issues. If anyone could take a look and point out the error, or reassure me that there actually isn't a problem, I'd really appreciate it. I believe the syntax should be pretty standard in terms of the thread package functions (minithreads and semaphores), but let me know if anything is confusing. #include <stdio.h> #include <stdlib.h> #include "minithread.h" #include "synch.h" #define BUFFER_SIZE 16 #define MAXCOUNT 100 int buffer[BUFFER_SIZE]; int size, head, tail; int count = 1; int out = 0; int toadd = 0; int toremove = 0; semaphore_t empty; semaphore_t full; semaphore_t count_lock; // Semaphore to keep a lock on the // global variables for maintaining the counts /* Method to handle the working of a student * The ID of a student is the corresponding minithread_id */ int student(int total_burgers) { int n, i; semaphore_P(count_lock); while ((out+toremove) < arg) { n = genintrand(BUFFER_SIZE); n = (n <= total_burgers - (out + toremove)) ? n : total_burgers - (out + toremove); printf("Student %d wants to get %d burgers ...\n", minithread_id(), n); toremove += n; semaphore_V(count_lock); for (i=0; i<n; i++) { semaphore_P(empty); out = buffer[tail]; printf("Student %d is taking burger %d.\n", minithread_id(), out); tail = (tail + 1) % BUFFER_SIZE; size--; toremove--; semaphore_V(full); } semaphore_P(count_lock); } semaphore_V(count_lock); printf("Student %d is done.\n", minithread_id()); return 0; } /* Method to handle the working of a cook * The ID of a cook is the corresponding minithread_id */ int cook(int total_burgers) { int n, i; printf("Creating Cook %d\n",minithread_id()); semaphore_P(count_lock); while ((count+toadd) <= arg) { n = genintrand(BUFFER_SIZE); n = (n <= total_burgers - (count + toadd) + 1) ? 
n : total_burgers - (count + toadd) + 1; printf("Cook %d wants to put %d burgers into the burger stack ...\n", minithread_id(),n); toadd += n; semaphore_V(count_lock); for (i=0; i<n; i++) { semaphore_P(full); printf("Cook %d is putting burger %d into the burger stack.\n", minithread_id(), count); buffer[head] = count++; head = (head + 1) % BUFFER_SIZE; size++; toadd--; semaphore_V(empty); } semaphore_P(count_lock); } semaphore_V(count_lock); printf("Cook %d is done.\n", minithread_id()); return 0; } /* Method to create our multiple producers and consumers * and start their respective threads by fork */ void starter(int* c){ int i; for (i=0;i<c[2];i++){ minithread_fork(cook, c[0]); } for (i=0;i<c[1];i++){ minithread_fork(student, c[0]); } } /* The arguments are passed as command line parameters * argv[1] is the no of students * argv[2] is the no of cooks */ void main(int argc, char *argv[]) { int pass_args[3]; pass_args[0] = MAXCOUNT; pass_args[1] = atoi(argv[1]); pass_args[2] = atoi(argv[2]); size = head = tail = 0; empty = semaphore_create(); semaphore_initialize(empty, 0); full = semaphore_create(); semaphore_initialize(full, BUFFER_SIZE); count_lock = semaphore_create(); semaphore_initialize(count_lock, 1); minithread_system_initialize(starter, pass_args); }
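
    For reference, here is a minimal sketch of the conventional bounded-buffer structure (not the graded code, and not a claim about what the graders meant). One thing a reviewer might point at is that head, tail and size are updated outside any mutual exclusion, so under preemption two cooks or two students can race on them; the counting semaphores only track slot availability. The sketch reuses the post's semaphore API and the globals buffer, head, tail, size and BUFFER_SIZE:

        /* slots counts free slots (initialized to BUFFER_SIZE),
         * items counts filled slots (initialized to 0),
         * mutex guards the buffer indices (initialized to 1). */
        semaphore_t slots, items, mutex;

        void produce(int burger) {
            semaphore_P(slots);              /* wait for a free slot */
            semaphore_P(mutex);              /* protect head/size */
            buffer[head] = burger;
            head = (head + 1) % BUFFER_SIZE;
            size++;
            semaphore_V(mutex);
            semaphore_V(items);              /* one more filled slot */
        }

        int consume(void) {
            int burger;
            semaphore_P(items);              /* wait for a filled slot */
            semaphore_P(mutex);              /* protect tail/size */
            burger = buffer[tail];
            tail = (tail + 1) % BUFFER_SIZE;
            size--;
            semaphore_V(mutex);
            semaphore_V(slots);              /* one more free slot */
            return burger;
        }

    In the posted code, empty (initialized to 0) actually counts filled slots and full (initialized to BUFFER_SIZE) counts free ones; the names are swapped relative to the usual convention but used consistently. count_lock, meanwhile, protects only the bookkeeping counters, not head/tail/size themselves. Whether that is the "subtle" issue the grader had in mind is an open question.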

    Read the article

  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reported memory usage in realtime on screen. I decided to use a Class.lua I found on http://lua-users.org/wiki/SimpleLuaClasses However, I noticed memory issues when purging object created by this using a simple Account class. Specifically, I would start with say 146k of memory used, create 1000 objects of a class that just holds an integer instance variable and store each object into a table. The memory used is now 300k I would then exit, iterating through the table and setting each element in the table to nil. But would never get back the 146k, usually after this I am left using 210k or something similar. If I run the load sequence again during the same session, it does not exceed 300k so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition I've tried a simpler class file (Account2.lua) that doesn't rely on a class.lua. This still incurs memory fragmentation but not as much as the one that uses Class.lua Can anybody explain what is going on here? How can I purge these objects and get back the memory? here is the code --------Class.lua------ -- class.lua -- Compatible with Lua 5.1 (not 5.0). --http://lua-users.org/wiki/SimpleLuaClasses function class(base,ctor) local c = {} -- a new class instance if not ctor and type(base) == 'function' then ctor = base base = nil elseif type(base) == 'table' then -- our new class is a shallow copy of the base class! for i,v in pairs(base) do c[i] = v end c._base = base end -- the class will be the metatable for all its objects, -- and they will look up their methods in it. c.__index = c -- expose a ctor which can be called by () local mt = {} mt.__call = function(class_tbl,...) local obj = {} setmetatable(obj,c) if ctor then ctor(obj,...) else -- make sure that any stuff from the base class is initialized! if base and base.init then base.init(obj,...) end end return obj end c.init = ctor c.instanceOf = function(self,klass) local m = getmetatable(self) while m do if m == klass then return true end m = m._base end return false end setmetatable(c,mt) return c end --------Account.lua------ --Import Class template require 'class' local classname = "Account" --Declare class Constructor Account = class(function(acc,balance) --Instance variables declared here. if(balance ~= nil)then acc.balance = balance else --default value acc.balance = 2097 end acc.classname = classname end) --------Account2.lua------ local account2 = {} account2.classname = "unnamed" account2.balance = 2097 -----------Constructor 1 do local metatable = { __index = account2; } function Account2() return setmetatable({}, metatable); end end --------Main.lua------ require 'Account' require 'Account2' MAX_OBJ = 5000; test_value = 1000; Obj_Table = {}; MODE_ACC0 = 0 --integers MODE_ACC1 = 1 --Account MODE_ACC2 = 2 --Account2 TEST_MODE = MODE_ACC0; Lua_mem = ""; print("##1) collectgarbage('count'): " .. collectgarbage('count')); function Load() for i=1, MAX_OBJ do if(TEST_MODE == MODE_ACC0 )then table.insert(Obj_Table, test_value); elseif(TEST_MODE == MODE_ACC1 )then table.insert(Obj_Table, Account(test_value)); --Account.lua elseif(TEST_MODE == MODE_ACC2 )then table.insert(Obj_Table, Account2()); --Account2.lua Obj_Table[i].balance = test_value; end end print("##2) collectgarbage('count'): " .. 
collectgarbage('count')); end function Purge() --metatable purge if(TEST_MODE ~= MODE_ACC0)then --purge stage 0: print("set each elements metatable to nil") for i=1, MAX_OBJ do setmetatable(Obj_Table[i], nil); end end --purge stage 1: print("set table element to nil") for i=1, MAX_OBJ do Obj_Table[i] = nil; end --purge stage 2: print("start table.remove..."); for i=1, MAX_OBJ do table.remove(Obj_Table, i); end print("...end table.remove"); --purge stage 3: print("create new object_table {}"); Obj_Table= {}; --purge stage 4: print("collectgarbage('collect')"); collectgarbage('collect'); print("##3) collectgarbage('count'): " .. collectgarbage('count')); end --Loop callback function OnUpdate() collectgarbage('collect'); Lua_mem = collectgarbage('count'); end ------------------- --NOTE: --On start of game runs Load(), another runs Purge() --Update I've updated the code with suggestions from comments below, and will post my findings later today.
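
    A measurement sketch that may help separate the two effects (it is not from the original post and reuses the Account class defined above): compare collectgarbage("count") against a baseline taken after a full collection, and release the objects by dropping the only reference to the table rather than by nil-ing and table.remove-ing each slot.

        -- Force a full collection before reading the counter; a second pass helps
        -- when objects have finalizers or weak references.
        local function mem_kb()
          collectgarbage("collect")
          collectgarbage("collect")
          return collectgarbage("count")
        end

        local baseline = mem_kb()

        local objs = {}
        for i = 1, 5000 do
          objs[i] = Account(1000)   -- class from Account.lua in the post
        end
        print("after load:  ", mem_kb() - baseline, "KB")

        objs = nil                  -- drop the only reference; no per-element nil needed
        print("after purge: ", mem_kb() - baseline, "KB")

    Even when the Lua-level count returns to the baseline, the process-level figure shown by the platform can stay higher, because the allocator does not necessarily hand freed pages back to the operating system; that residue is fragmentation at the C allocator level rather than objects kept alive by the metatables.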

    Read the article

  • Upgrade Intel Xeon Prestonia to a 64-bit processor

    - by IDisposable
    In theory, could I upgrade an mPGA604-socket motherboard with a Prestonia processor to some 64-bit Intel Xeon processor? I've got a Dell PowerEdge 1750 with dual 2.8GHz Xeon processors running my Windows Home Server machine. I want to upgrade to the upcoming Vail release, but it is 64-bit only. The processors are Prestonia-core, which is pre-64-bit, but I was wondering if it was possible to swap in some pin-compatible later-generation processor. According to Wikipedia, the mPGA604 socket continues to be used for several later generations that do have the same pinout. So, IN THEORY, could I swap in a 64-bit part, like a Nocona-core Xeon?

    Read the article

  • Is the Asus Lion Square compatible with an AMD Athlon II AM3?

    - by wag2639
    I bought an Asus Lion Square. Is it compatible with an AMD Athlon II X3 435 Socket AM3 processor? I know that, strictly speaking, the Lion Square specifies AM2, but I'm a little confused since AM2 and AM3 are supposed to be socket compatible (I'm a little confused here as well, but I assume it means an AM3 board will support AM2/AM2+ CPUs). However, will there be a problem with chip height and spacing? Or do people have experience asking ASUS for a standoff adapter?

    Read the article

  • Which factors determine the speed of a processor? [closed]

    - by Deb
    I think the clock rate of a processor determines the speed of a core; in my case it is 1.86GHz. But if I am not wrong, it also determines how much energy the processor will consume: the higher the frequency, the more power it draws. I chose the Power Saver scheme to increase my battery life, and it reduces my core speed to half of the actual speed. I understand this happens because of SpeedStep, but I don't see any slowdown on my computer. So my question is: why do we have such high-frequency cores if they use so much power? We could use low-frequency cores instead. I also get confused between the two terms "speed of the processor" and "frequency". So how important is the frequency of a core for any processor?

    Read the article

  • Socket 1155 vs 2011 vs Haswell

    - by woody
    The title says it all. I am trying to decide between sockets and just can't pinpoint which to get based on pros and cons. The build this will go into will be my primary PC. It will be used for everyday computing, coding, some multimedia and gaming. I have read that 1155 and 2011 will be dead within the new year and that Haswell will double the performance of Ivy Bridge. What is a general rundown on the different sockets? Pros and cons? More specifically, what are the technical differences between the three?

    Read the article

  • Home ZFS based NAS...What processor/chipset to use?

    - by MrBlargityBlarg
    So, I'm building a home/personal NAS. My plan is to expose both SMB fileshares for sharing files/media between hosts, but also to carve an iSCSI target LUN out of it for use by VMWare as a datastore. I want to use ZFS (software RAID) so that means I'll either be using FreeNAS, Solaris Express, or OpenIndiana. My question is basically: How much horsepower do I need? Obviously I/O is going to be my bottleneck but I want to be sure that I am not limiting my I/O because of a slow processor or chipset. So far the hardware plan is to use an Intel i3 and motherboard with one of the H87, Q87, or Z87 chipsets, a SAS controller (JBOD, no RAID) and if budget allows, I'm also hoping to get an SSD for the ZFS L2ARC and ZIL. Does anyone think I could get away with an Intel Atom or cheaper/less-capable processor/chipset than the i3 and [HQZ]87 listed above?

    Read the article

  • Can I upgrade my Soltek sl-67a-c motherboard from Pentium II to Pentium III?

    - by jedatu
    I have a Pentium III processor and I thought I would replace the Pentium II on my Soltek SL-67A-C. The motherboard has Slot 1, so the PIII fits in fine; however, when I turn on the computer the BIOS does not come up. Here's the link to some specs on the board: http://stason.org/TULARC/pc/motherboards/F/FLASH-TECH-INC-Pentium-II-Deschutes-SL-67A-C.html I know that I updated the BIOS at one time. I see references to a "67af3.zip" BIOS upgrade file out there but can't find any source to download it. I am thinking I may have to adjust the jumpers on the motherboard to match the speed of the processor, but I am not sure how to go about that. Any suggestions?

    Read the article

  • Is 850W SMPS enough for the below Config?

    - by bali208
    We are planning to assemble a file server with the following configuration. 1) Intel S2400SC2 motherboard. 2) Xeon 2407 processor x 2 (dual processors). 3) 8GB x 8 ECC DDR3 RAM. 4) One 2 TB Seagate HDD. 5) 8 cabinet fans fitted in an NZXT Switch 810 cabinet. Our question is: is a Cooler Master 850 W power supply enough for our machine, or do we have to use a 1050 W power supply? Please suggest the best brand and wattage of PSU to buy for our system. Thank you.

    Read the article

  • There are lots of "Core i" CPUs, but Dell only offers a few -- who builds systems with the others?

    - by Jesse
    PassMark shows many varieties of Core i3, i5, and i7 CPUs. Some of them, even at similar prices, are much faster than others. But Dell only offers a few options, and they're not the fast ones. For example, Dell offers the Core i5 650 (benchmark), which costs $220, and doesn't come close to the performance of the Core i3-2100 (benchmark), which costs $120. Does anyone sell systems with the faster, cheaper chips?

    Read the article

  • Intel Core i7-4960HQ vs. 4850HQ (Haswell) [on hold]

    - by Timothy R. Butler
    I'm looking at the new MacBook Pros and trying to decide between the Core i7-4960HQ (2.6 GHz) and i7-4850HQ (2.3 GHz). I've found some synthetic benchmarks comparing them, but I haven't found a lot of data, so I'd appreciate any pointers to good comparisons for the Haswell family (especially these two processors). My cursory analysis seems to suggest there isn't a huge gain from the extra 300 MHz. I'd like to determine not only whether this is generally true, but also whether the performance gains come at too high a cost. Is the 2.6 going to be pushing the limits of what can fit in a thin laptop without overheating? I've looked at some of Intel's documentation, but have not been able to determine what the normal and maximum operating temperature differences are for the models. In the past, there have been times that Intel's fastest models in a given range ran especially hot and/or consumed significantly more power compared to slightly slower models. Do those concerns factor into the current generation?

    Read the article

  • Does my motherboard support dual-core processors?

    - by Filip
    Hi there! I'm very confused about my motherboard (ConRoe945PL-GLAN). Its manual says that I can use only certain Conroe processors, but some pages on the internet say that I can plug in almost anything. For example, aria.co.uk says that I can even plug in a Core 2 Duo. It would be awesome if I did not have to buy a new motherboard! Anyway, if my motherboard will not let me plug in a Core 2 Duo, should I buy a very cheap 3.2 GHz Pentium 4 and overclock it heavily to get some performance, or buy a new motherboard plus a new processor for big money? Thanks for any answers! =)

    Read the article

  • How does ARM Cortex A8 compare with a modern x86 processor

    - by thomasrutter
    I was wondering how a modern ARM chip based on the Cortex-A8 compares, in clock-for-clock performance and capability, to a modern x86 chip such as a Core 2 Duo or Core i5. I realise that, due to the different instruction sets, it'll depend heavily on what you're doing. To put it another way: rendering a web page in WebKit on a 1 GHz Cortex-A8-based chip should be about equivalent to doing it on a Core i5 at __ MHz? Update, October 2013: since I asked this question years ago, it's become a lot more common, when reading about mobile devices, to see architecture-agnostic benchmarks that you can compare across platforms - for example, in-browser benchmarks like SunSpider in WebKit will run on just about anything, and you see these in reviews all the time now. And there are things like Geekbench now.

    Read the article

  • How is time measured in computer systems?

    - by DRK3
    Something I have been puzzled about is this: how exactly does a computer regulate and tell time? For example: if I were to write a program that did this: Do 2+2 then wait 5 seconds How does the processor know what "5 seconds" is? How is time measured in computer systems? Is there a specific chip for that sole purpose? How does it work? Thanks for any replies; I'm really interested in computer science, and would love any help you could give me =D.
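
    At the program level the usual answer is "ask the operating system, which programs a hardware timer and wakes the process up later". A small POSIX C illustration of that division of labour (a sketch, not from the original question):

        #include <stdio.h>
        #include <time.h>   /* may need -lrt on older glibc for clock_gettime */

        int main(void)
        {
            /* Do 2 + 2 */
            int sum = 2 + 2;
            printf("sum = %d\n", sum);

            /* "Wait 5 seconds": the program does not count cycles itself. It asks
             * the OS to sleep; the OS relies on hardware timekeeping (a counter
             * such as the TSC plus a programmable timer like the APIC timer or
             * HPET) to know when 5 seconds have elapsed and to wake it up. */
            struct timespec start, end, delay = { 5, 0 };
            clock_gettime(CLOCK_MONOTONIC, &start);
            nanosleep(&delay, NULL);
            clock_gettime(CLOCK_MONOTONIC, &end);

            printf("slept for about %ld seconds\n", (long)(end.tv_sec - start.tv_sec));
            return 0;
        }

    Wall-clock dates, by contrast, come from a battery-backed real-time clock chip that keeps counting while the machine is off, usually corrected over the network via NTP.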

    Read the article

  • Core i7 920 vs 870

    - by JL
    I am not sure which is better. Surely with processors you would think the 920 would be a higher version, because 920 > 870. What's bothering me is that the 870 seems to have a higher clock speed, so which one is the better processor?

    Read the article

  • atop-like tool for OS X

    - by Maciek Sawicki
    I'm trying to debug some issues with my Mac. This machine is used as a continuous integration server. It stops responding from time to time. I think it could be a software issue, since the machine is still working (i.e. it's not a kernel panic) - that is, when I go to the server room I see the login screen and I can move the mouse. Unfortunately I can't log in, either directly or via VNC or SSH. There is a nice tool that helps in debugging this type of problem called atop. It's like top but with history. Unfortunately it's Linux-only software. Is there a similar tool to atop for OS X?

    Read the article
