Search Results

Search found 12445 results on 498 pages for 'memory fragmentation'.

Page 25/498

  • Allocating More Than 4 GB Of Memory

    - by TPatti
    I am facing an issue with memory allocation. I have: Host OS: Microsoft Windows XP Professional x64 Edition, Version 2003, Service Pack 2. Host physical memory: 8 GB. Guest OS: Red Hat Enterprise Linux WS release 4 (Nahant Update 5). I am not sure if it is 32 or 64 bit; the lsb_release -a command reports LSB Version: core-3.0-ia32, so I guess that would be 32 bit. VMware Player version: 2.5.2 build-156735. I would like VMware Player to allocate more than 4 GB, but when I go to the memory setting, it only lists 4 GB. If I choose the "About" option, it actually says that I have 8 GB installed in the host machine. This VMware image was created by someone else and provided to me, apparently with VMware Workstation 5. Why can't I allocate 8 GB? Where is the problem: the VMware Player version, the guest OS or the host OS? How can I solve this? I understand that for this version of the Player there isn't one version for 32 bit and another for 64 bit.

    Read the article

  • How to solve QPixmap::fromImage memory leak?

    - by dodoent
    Hello everyone! I have a problem with Qt. Here is the part of the code that troubles me: void FullScreenImage::QImageIplImageCvt(IplImage *input) { help=cvCreateImage(cvGetSize(input), input->depth, input->nChannels); cvCvtColor(input, help, CV_BGR2RGB); QImage tmp((uchar *)help->imageData, help->width, help->height, help->widthStep, QImage::Format_RGB888); this->setPixmap(QPixmap::fromImage(tmp).scaled(this->size(), Qt::IgnoreAspectRatio, Qt::SmoothTransformation)); cvReleaseImage(&help); } void FullScreenImage::hideOnScreen() { this->hide(); this->clear(); } void FullScreenImage::showOnScreen(IplImage *slika, int delay) { QImageIplImageCvt(slika); this->showFullScreen(); if(delay>0) QTimer::singleShot(delay*1000, this, SLOT(hideOnScreen())); } So, the method showOnScreen uses the private method QImageIplImageCvt to create a QImage from an IplImage (used by OpenCV), which is then used to create a QPixmap in order to show the image in full screen. The FullScreenImage class inherits QLabel. After some delay, the fullscreen picture should be hidden, so I use QTimer to trigger an event after the delay. The event handler is the hideOnScreen method, which hides the label and should clear the memory. The problem is the following: whenever I call QPixmap::fromImage, it allocates memory for the pixmap data and copies the data from the QImage memory buffer to the QPixmap memory buffer. After the label is hidden, the QPixmap data still remains allocated, and even worse, after the next QPixmap::fromImage call a new chunk of memory is allocated for the new picture and the old data is not freed. This causes a memory leak (roughly 10 MB per method call with my test pictures). How can I solve that leak? I've even tried to create a private QPixmap variable, store the pixmap created by QPixmap::fromImage in it, and then call its destructor in the hideOnScreen method, but it didn't help. Is there a non-static way to create a QPixmap from a QImage? Or even better, is there a way to create a QPixmap directly from an IplImage*? Thank you in advance for your answers.
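    One direction worth trying, sketched below, is to keep a single QPixmap member, reassign it on every conversion, and explicitly reset both it and the label's copy when hiding. This is only a hedged illustration built on the code above, not a confirmed fix for the reported leak; the m_pixmap member is an assumed addition that is not in the original class.

        // Sketch only: assumes FullScreenImage (a QLabel subclass) gains a QPixmap
        // member named m_pixmap, and uses the same Qt/OpenCV headers as the original file.
        void FullScreenImage::QImageIplImageCvt(IplImage *input)
        {
            help = cvCreateImage(cvGetSize(input), input->depth, input->nChannels);
            cvCvtColor(input, help, CV_BGR2RGB);
            QImage tmp((uchar *)help->imageData, help->width, help->height,
                       help->widthStep, QImage::Format_RGB888);
            // Reassigning the member releases the previously held pixmap data
            // instead of leaving an extra copy behind on every call.
            m_pixmap = QPixmap::fromImage(tmp).scaled(size(), Qt::IgnoreAspectRatio,
                                                      Qt::SmoothTransformation);
            setPixmap(m_pixmap);
            cvReleaseImage(&help); // safe here: fromImage() already copied the image data
        }

        void FullScreenImage::hideOnScreen()
        {
            hide();
            m_pixmap = QPixmap();  // drop our reference to the pixel data
            setPixmap(QPixmap());  // drop the label's internal copy as well
        }

    QPixmap is implicitly shared, so assigning a new or null pixmap is enough to release the old buffer once no other copies of it remain alive.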

    Read the article

  • Does a CPU assign a value to memory atomically?

    - by Poni
    Hi! A quick question I've been wondering about for some time: does the CPU assign values atomically, or bit by bit (say, for a 32-bit integer)? If it's bit by bit, could another thread accessing this exact location get a "part" of the to-be-assigned value? Think of this: I have two threads and one shared "unsigned int" variable (call it "g_uiVal"). Both threads loop. One is printing "g_uiVal" with printf("%u\n", g_uiVal). The second just increases this number. Will the printing thread ever print something that is not, or is only part of, "g_uiVal"'s value? In code: unsigned int g_uiVal; void thread_writer() { g_uiVal++; } void thread_reader() { while(1) printf("%u\n", g_uiVal); }
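    The question is posed in plain C, but as a point of comparison, a minimal C++11 sketch (assuming a C++11 compiler; the threading scaffold is illustrative) sidesteps the question by making the shared counter std::atomic, so a read can never observe a torn value regardless of the platform's alignment guarantees.

        // Minimal C++11 sketch: an explicitly atomic shared counter.
        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<unsigned int> g_uiVal{0};

        void thread_writer() {
            for (;;)
                g_uiVal.fetch_add(1, std::memory_order_relaxed);  // atomic increment
        }

        void thread_reader() {
            for (;;)
                std::printf("%u\n", g_uiVal.load(std::memory_order_relaxed));  // whole value, never a fragment
        }

        int main() {
            std::thread writer(thread_writer);
            std::thread reader(thread_reader);
            writer.join();
            reader.join();
        }

    On common desktop hardware a naturally aligned 32-bit store is already atomic, but without std::atomic the compiler is free to keep g_uiVal in a register, so the reader might never see updates at all.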

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use a stored procedure - the most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates the memory requirement for a query at compile time. When a stored procedure or another plan caching mechanism like sp_executesql or a prepared statement is used, the memory requirement is estimated based on the first set of execution parameters. This is a common reason for a spill over tempdb and hence poor performance. Common memory-allocating queries are those that perform Sort and Hash Match operations like Hash Join, Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article; it covers an introduction and query memory for Sort. In most cases it is cheaper to pay the compilation cost of dynamic queries than the huge cost of a spill over tempdb, unless the memory requirement for a query does not change significantly based on predicates.

    This article covers underestimation / overestimation of memory for the Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can itself lead to poor performance.

    To read additional articles I wrote click here. The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts

    Let's create a CustomersState table that has 99% of customers in NY and the remaining 1% in WA. The Customers table used in Part I of this article is also used here. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.

        --Example provided by www.sqlworkshops.com
        drop table CustomersState
        go
        create table CustomersState (CustomerID int primary key, Address char(200), State char(2))
        go
        insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers
        update CustomersState set State = 'NY' where CustomerID % 100 != 1
        update CustomersState set State = 'WA' where CustomerID % 100 = 1
        go
        update statistics CustomersState with fullscan
        go

    Let's create a stored procedure that joins Customers with the CustomersState table with a predicate on State.

        --Example provided by www.sqlworkshops.com
        create proc CustomersByState @State char(2) as
        begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID
        from Customers e
        inner join CustomersState es on (e.CustomerID = es.CustomerID)
        where es.State = @State
        option (maxdop 1)
        end
        go

    Let's execute the stored procedure first with parameter value 'WA', which will select 1% of the data.

        set statistics time on
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 294 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is similar to the actual number of rows, 8000, and hence the memory estimation should be OK. There was no Hash Warning in SQL Profiler. (To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.)

    Now let's execute the stored procedure with parameter value 'NY', which will select 99% of the data.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    The stored procedure took 2922 ms to complete. It was granted 6704 KB based on 8000 rows being estimated. The estimated number of rows, 8000, is way off from the actual number of rows, 792000, because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'WA' in our case. This underestimation will lead to a spill over tempdb, resulting in poor performance. There was a Hash Warning (Recursion) in SQL Profiler.

    Let's recompile the stored procedure and then execute it first with parameter value 'NY'. In a production instance it is not advisable to use sp_recompile; instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile; refer to our webcasts, www.sqlworkshops.com/webcasts, for further details.

        exec sp_recompile CustomersByState
        go
        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'NY'
        go

    Now the stored procedure took only 1046 ms instead of 2922 ms. It was granted 146752 KB of memory. The estimated number of rows, 792000, is similar to the actual number of rows of 792000. The better performance of this execution is due to better estimation of memory and avoiding the spill over tempdb. There was no Hash Warning in SQL Profiler.

    Now let's execute the stored procedure with parameter value 'WA'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go

    The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms. This execution was granted more memory (146752 KB) than necessary (6704 KB), because parameter value 'NY' (792000 rows) was used for estimation instead of parameter value 'WA' (8000 rows). This is because the estimation is based on the first set of parameter values supplied to the stored procedure, which is 'NY' in this case. This overestimation leads to poor performance of this Hash Match operation; it might also affect the performance of other concurrently executing queries requiring memory, and hence overestimation is not recommended. The estimated number of rows, 792000, is much more than the actual number of rows of 8000.

    Intermediate summary: this issue can be avoided by not caching the plan for memory-allocating queries. Another possibility is to use the recompile hint or the optimize for hint to allocate memory for a predefined data range. Let's recreate the stored procedure with the recompile hint.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID
        from Customers e
        inner join CustomersState es on (e.CustomerID = es.CustomerID)
        where es.State = @State
        option (maxdop 1, recompile)
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 297 ms and 1102 ms, in line with the previous optimal execution times. The execution with parameter value 'WA' has good estimation like before; the estimated number of rows of 8000 is similar to the actual number of rows of 8000. The execution with parameter value 'NY' also has good estimation and memory grant like before, because the stored procedure was recompiled with the current set of parameter values; the estimated number of rows of 792000 is similar to the actual number of rows of 792000. The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit. There was no Hash Warning in SQL Profiler.

    Let's recreate the stored procedure with the optimize for hint of 'NY'.

        --Example provided by www.sqlworkshops.com
        drop proc CustomersByState
        go
        create proc CustomersByState @State char(2) as
        begin
        declare @CustomerID int
        select @CustomerID = e.CustomerID
        from Customers e
        inner join CustomersState es on (e.CustomerID = es.CustomerID)
        where es.State = @State
        option (maxdop 1, optimize for (@State = 'NY'))
        end
        go

    Let's execute the stored procedure initially with parameter value 'WA' and then with parameter value 'NY'.

        --Example provided by www.sqlworkshops.com
        exec CustomersByState 'WA'
        go
        exec CustomersByState 'NY'
        go

    The stored procedure took 353 ms with parameter value 'WA'; this is much slower than the optimal execution time of 294 ms we observed previously, because of overestimation of memory. The execution with parameter value 'NY' has an optimal execution time like before. The execution with parameter value 'WA' has an overestimation of rows because of the optimize for hint value of 'NY'; unlike before, more memory was estimated for this execution based on the optimize for hint value 'NY'. The execution with parameter value 'NY' has good estimation because of the optimize for hint value of 'NY': the estimated number of rows of 792000 is similar to the actual number of rows of 792000, and an optimal amount of memory was estimated for this execution based on the optimize for hint value 'NY'. There was no Hash Warning in SQL Profiler.

    This article covers underestimation / overestimation of memory for the Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations leads to a spill over tempdb and hence negatively impacts performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note that with Hash Match operations, overestimation of memory can itself lead to poor performance.

    Summary: a cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on the first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using the recompile hint, but that will lead to compilation overhead; however, in most cases it might be OK to pay for compilation rather than spilling a sort over tempdb, which could be very expensive compared to the compilation cost. The other possibility is to use the optimize for hint, but if one sorts more data than hinted by the optimize for hint, this will still lead to a spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In the case of Hash Match operations, this overestimation of memory might lead to poor performance. When the values used in the optimize for hint are archived out of the database, the estimation will be wrong, leading to worse performance, so one has to exercise caution before using the optimize for hint; the recompile hint is better in this case.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts. Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk. R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes, although disabled indexes are ignored. In its simplest form this dmv can be executed as shown above. The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database ID, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             …
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    This gives us the database and table names; adding the index name means joining to sys.indexes:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , …
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
            BEGIN PRINT N'Invalid database'; END
        ELSE IF @object_id IS NULL
            BEGIN PRINT N'Invalid object'; END
        ELSE
            BEGIN SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED'); END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. you are simply viewing the results in the SSMS results area), inconsistent results will simply be noticed when they appear, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level; we'll skip these as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:

    - avg_fragmentation_in_percent. The amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
    - fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
    - avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
    - page_count. Total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would have a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, and that means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024*(Col2/100))/Col3 = Col4*. avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index. avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index, purely offered as a guide to the DBA to better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - whether the columns used in the index should be analysed, to avoid new records needing to be inserted in the middle of the index and instead always be added to the end.

    * – it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly. Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk. Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Why does jquery leak memory so badly?

    - by Thomas Lane
    This is kind of a follow-up to a question I posted last week: http://stackoverflow.com/questions/2429056/simple-jquery-ajax-call-leaks-memory-in-ie I love the jquery syntax and all of its nice features, but I've been having trouble with a page that automatically updates table cells via ajax calls leaking memory. So I created two simple test pages for experimenting. Both pages do an ajax call every .1 seconds. After each successful ajax call, a counter is incremented and the DOM is updated. The script stops after 1000 cycles. One uses jquery for both the ajax call and to update the DOM. The other uses the Yahoo API for the ajax and does a document.getElementById(...).innerHTML to update the DOM. The jquery version leaks memory badly. Running in drip (on XP Home with IE7), it starts at 9MB and finishes at about 48MB, with memory growing linearly the whole time. If I comment out the line that updates the DOM, it still finishes at 32MB, suggesting that even simple DOM updates leak a significant amount of memory. The non-jquery version starts and finishes at about 9MB, regardless of whether it updates the DOM. Does anyone have a good explanation of what is causing jquery to leak so badly? Am I missing something obvious? Is there a circular reference that I'm not aware of? Or does jquery just have some serious memory issues? Here is the source for the leaky (jquery) version: <html> <head> <script type="text/javascript" src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load('jquery', '1.4.2'); </script> <script type="text/javascript"> var counter = 0; leakTest(); function leakTest() { $.ajax({ url: '/html/delme.x', type: 'GET', success: incrementCounter }); } function incrementCounter(data) { if (counter<1000) { counter++; $('#counter').text(counter); setTimeout(leakTest,100); } else $('#counter').text('finished.'); } </script> </head> <body> <div>Why is memory usage going up?</div> <div id="counter"></div> </body> </html> And here is the non-leaky version: <html> <head> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/yahoo/yahoo-min.js"></script> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/event/event-min.js"></script> <script type="text/javascript" src="http://yui.yahooapis.com/2.8.0r4/build/connection/connection_core-min.js"></script> <script type="text/javascript"> var counter = 0; leakTest(); function leakTest() { YAHOO.util.Connect.asyncRequest('GET', '/html/delme.x', {success:incrementCounter}); } function incrementCounter(o) { if (counter<1000) { counter++; document.getElementById('counter').innerHTML = counter; setTimeout(leakTest,100); } else document.getElementById('counter').innerHTML = 'finished.' } </script> </head> <body> <div>Memory usage is stable, right?</div> <div id="counter"></div> </body> </html>

    Read the article

  • Is special memory required for a MacBook Pro?

    - by user38900
    I have a MacBook Pro (MacBookPro5,2 / 2.8 GHz) with 4 GB of RAM (2x2 GB). I'm looking to upgrade to 8 GB. The memory in it now is DDR3 PC3-8500 1067. Checking out prices for 4 GB sticks of PC3-8500, there is about a $100 difference for "Apple certified" RAM. Will any DDR3 PC3-8500 module work, or is there really a difference?

    Read the article

  • How memory hungry is Jetty?

    - by Sanoj
    I am planning to use Jetty + MySQL on a small VPS with just 256 or 512 MB of memory, for serving a few websites. I haven't used Jetty before, only PHP on shared hosting. Is 256 or 512 MB too limited for a Jetty server, or should I go with an nginx + PHP + PHP-FPM setup instead? The websites will not have much traffic; they are just small sites.

    Read the article

  • MySQL: Load database to memory

    - by Adam Matan
    Hi, Is there a way to load an entire MySQL database into RAM, especially on an EC2 server? The database is quite small (~500 megabytes), I have enough memory, and speed is crucial: the resulting queries are used to serve a dynamic webpage. Thanks, Adam

    Read the article

  • Increase process memory cap.

    - by Janis Veinbergs
    Doing copy/paste in Visual Studio 2010 RTM on Windows 7, on a machine with 3 GB of RAM, I was unable to copy text because of an error. Task Manager shows that devenv.exe is using a little more than 500 MB, yet I still have almost 1 GB of free RAM available. Is that some kind of memory cap? If so, is there a way to increase it? It may be a bug, but maybe there is a workaround?

    Read the article

  • check history of cpu/memory usage in ubuntu?

    - by johnlai2004
    Is there a way for me to review CPU or memory usage on my Ubuntu Linux server? I've noticed my server (a LAMP setup) being slow at times, but by the time I log in as root and run a ps command, everything may have returned to normal. It would be great to have a log of what resources different parts of the server consumed.

    Read the article

  • Free memory on linux [closed]

    - by Julia Roberts
    Possible duplicate: Meaning of the buffers/cache line in the output of free. What would be a good setting to free memory on Linux? I have 8 GB but it gets used up fast. Current settings:

        kernel.sched_min_granularity_ns = 10000000
        kernel.sched_wakeup_granularity_ns = 15000000
        vm.dirty_ratio = 40
        kernel.pid_max = 4096
        vm.bdflush = 100 1200 128 512 15 5000 500 1884 2

    What settings would I need so that Linux frees old RAM faster?

    Read the article

  • Celery - minimize memory consumption

    - by Andrew
    We have ~300 celeryd processes running under Ubuntu 10.4 64-bit. When idle, every process takes ~19 MB RES and ~174 MB VIRT, so it's around 6 GB of RAM in idle for all processes. In the active state, a process takes up to 100 MB RES and ~300 MB VIRT. Every process uses minidom (the XML files are < 500 KB, simple structure) and urllib. The question is: how can we decrease RAM consumption, at least for idle workers? Probably some Celery or Python options may help. How do we determine which part takes most of the memory?

    Read the article

  • Why is the Task Manager Total Physical Memory not 2048 MB or 2 GB

    - by Dorothy
    I found 3 numbers for the total physical memory: in the Task Manager under the Performance tab, 1978 MB; in Computer Properties, 2 GB; and running wmic computersystem get TotalPhysicalMemory /format:list on the command line, 2074554368 bytes. Number 1 matches Number 3, except Number 1 is rounded. When I convert Number 3 to GB (2074554368 / 1024 / 1024 / 1024) I don't quite get 2 GB; I get 1.93207932 GB. Why do Number 1 and Number 3 not match Number 2?

    Read the article

  • monitoring services, CPU, memory remotely on a Windows server machine

    - by ToastMan
    I'm looking for a tool that is able to (remotely) monitor CPU and memory on a Windows server and, most importantly, show which service/process is using them. Or, is it possible to monitor a specific running service? We have a server that freezes on a regular basis and we're trying to find the culprit without using a local debugger. It would be great if the monitoring software came with an agent that we can install on the remote clients for maximum accuracy. Any suggestions are very much appreciated.

    Read the article

  • Centos swap cache memory leak

    - by user30008
    We have an image server that keeps running out of memory and crashing. We thought there was a hardware issue with the machine, because the code base has not changed and this is a new issue. We brought a new machine online with a newer kernel and a fresh CentOS 5.4 install, brought just one subdomain online, and the exact same error is occurring on the new machine. How should I try to troubleshoot this issue?

    Read the article

  • Identify an instance of Google Chrome by PID

    - by Laramie
    While working I generally need around 40 windows open at a time and run 100-200 processes. When memory constraints become an issue, I start picking off the processes that are the most resource-intensive and disposable. Often these are chrome.exe. It would be helpful to be able to match a particularly memory-hungry instance of Chrome to its PID so I can selectively close it. That is, if I knew the page title it is currently open to, I could choose whether it lives or dies. I've tried Process Explorer to no avail. Any ideas?

    Read the article

  • Current trends in Random Access Memory speed [closed]

    - by Vetal
    As far as I know, because of the laws of physics there will not be any tangible improvements in CPU cycles per second in the near future. However, because of the von Neumann bottleneck, that seems not to be an issue for non-server applications. So what about RAM: are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?

    Read the article
