Search Results

Search found 1188 results on 48 pages for 'heap fragmentation'.

Page 5 of 48

  • In windbg, how do I get a heap header address from !heap -l results?

    - by Kevin
    I am playing around with WinDbg's !heap command, in particular the -l switch, which detects memory leaks. When -l does detect a leak, I am having trouble navigating from its results to a stack trace for the source of the leak. Here is a snippet of the results from !heap -l:

        0:066> !heap -l
        Searching the memory for potential unreachable busy blocks.
        Entry     User      Heap      Segment   Size  PrevSize  Unused  Flags
        0324b500  0324b508  01580000  03230000    20        60       a  busy
        0324b520  0324b528  01580000  03230000    20        20       a  busy
        0324b5c8  0324b5d0  01580000  03230000    20        28       a  busy

    WinDbg's documentation for !heap tells me to use dt _DPH_BLOCK_INFORMATION with the header address, followed by dds on the block's StackTrace field. But the output of !heap -l doesn't include a header address! It only gives Entry, User, Heap, and Segment. I've racked my brain looking over the other commands but can't figure out how to get the header address from any of these fields. Can someone help?
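    One avenue worth trying (this assumes page heap with stack-trace collection was enabled via gflags +hpa +ust, which the post does not state): pass the User address from the listing to !heap -p -a, which resolves the containing block and prints its capture stack directly, without needing the header address by hand:

        0:066> !heap -p -a 0324b508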

    Read the article

  • Filesystem fragmentation on the level of set of files

    - by trismarck
    The file is stored in blocks by the file system. A block is the smallest unit of data the file system can assign to store a file. The classical definition of a fragmented file is one stored in blocks that are 'scattered' (physically non-contiguous) around the hard drive. What I want to ask about is a second type of fragmentation I've come up with. Let's suppose we install a program that has a great many files. When the program starts, it always loads the contents of those files sequentially. Now, even if the hard disk is defragmented, there is still a possibility that the files (as opposed to the blocks that make up each file) will be scattered around the disk, and thus the program's launch time will be longer. In fact, launch time could get longer because of defragmentation, as the defragmentation process not only glues fragmented files back together but also moves some files around to optimize the free space chunks. The questions: is the type of fragmentation I mentioned relevant for the file system? Is it possible to remedy this kind of fragmentation, and if so, how would you do it? Also, I'm not sure if this question should belong on superuser or serverfault (I'd guess filesystem fragmentation matters more in a server environment).

    Read the article

  • Too many heap subpools might break the upgrade

    - by Mike Dietrich
    Recently one of our new upcoming Oracle Database 11.2 reference customers upgraded their production database - a huge EBS system - from Oracle 9.2.0.8 to Oracle Database 11.2.0.2. They had tested very well, and we had optimized the upgrade process, the recompilation timings, etc. But once the live upgrade was done, it failed in the JAVA component piece with this error:

        begin
        if initjvmaux.startstep('CREATE_JAVA_SYSTEM') then
        *
        ORA-29553: class in use: SYS.javax/mail/folder
        ORA-06512: at "SYS.INITJVMAUX", line 23
        ORA-06512: at line 5

    Support's diagnosis was pretty quick, and referred to: Bug 10165223 - ORA-29553: class in use: sys.javax/mail/folder during database upgrade. But how could this happen? Actually I don't know, as we had used the same init.ora setup on test and production. The only difference: the prod system has more CPUs and RAM. Anyway, the bug names as workarounds either decreasing the SGA to less than 1 GB or decreasing the number of heap subpools to 1. This query helped diagnose the number of heap subpools:

        select count(distinct kghluidx) num_subpools
        from x$kghlu
        where kghlushrpool = 1;

    The result was 2, so we reran the upgrade with this parameter set:

        _kghdsidx_count=1

    And this time it worked fine. One sad thing: after the upgrade failed, Support recommended restoring the whole database - which took an additional 3-4 hours. As the ORACLE SERVER component had already been upgraded successfully by the stage where the error happened, it would have been fine to carry on with the manual upgrade and start the catupgrd.sql script; it would have detected that ORACLE SERVER was already upgraded and just picked up the non-upgraded components. The good news: I now have one extra slide to add to our workshop presentation.

    Read the article

  • Getting the start address of the current process's heap?

    - by beta
    Hey, I am exploring the lower-level workings of the system and was wondering how malloc determines the start address of the heap. Is the heap at a constant offset, or is there a call of some sort to get the start address? Does the stack affect the start address of the heap? Thanks, Braden McDorman

    Read the article

  • How to get just free heap size (not together w stack/method mem) in Java?

    - by Pentium10
    I want to calculate the heap usage for my app. I would like to get a percentage value of the heap size only. How do I get that value in code for the currently running app? EDIT: The upvoted answer is NOT complete/correct. The values returned by those methods include the stack and method area too, and I need to monitor only the heap. With that code I got a HeapError exception when the reported usage reached only 43%, so I can't use those methods to monitor just the heap.
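    A minimal sketch of reading heap-only figures through the standard java.lang.management API: MemoryMXBean reports heap and non-heap (method area) usage separately, though whether its numbers line up with the limit that triggered the exception above is not guaranteed.

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryMXBean;
        import java.lang.management.MemoryUsage;

        public class HeapMonitor {
            public static void main(String[] args) {
                MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
                // getHeapMemoryUsage() covers the heap only; stacks and the
                // method area fall under getNonHeapMemoryUsage() instead.
                MemoryUsage heap = memoryBean.getHeapMemoryUsage();
                long used = heap.getUsed();
                long max = heap.getMax(); // -1 when the maximum is undefined
                if (max > 0) {
                    System.out.printf("Heap used: %d of %d bytes (%.1f%%)%n",
                            used, max, 100.0 * used / max);
                }
            }
        }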

    Read the article

  • CVE-2012-5195 Heap Buffer Overrun vulnerability in Perl

    - by Ritwik Ghoshal
    CVE ID:                 CVE-2012-5195
    Description:            Heap Buffer Overrun vulnerability
    CVSSv2 Base Score:      5.1
    Component:              Perl 5.12
    Product and Resolution: Solaris 11.1, fixed in 11.1.7.5.0

    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this DMV that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns, is irrelevant - these are simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this DMV can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL) AS ddips

    The results contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scanning every database; simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs - or, more accurately, in order to choose when to have the NULLs - you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the DMV with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets: DEFAULT will have 12 columns full of data and NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. From this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set reveals some details worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter, and the duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first.

    Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id. These are pretty self-explanatory, and we can wrap them in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    Adding a join to [sys].[indexes] gives us the index name as well:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , .....
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i
            ON [ddips].[index_id] = [i].[index_id]
            AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the DMV: if you specify an object_id and an index_id in these, you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this DMV, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT *
            FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this DMV have no effect on other processes (i.e. you are simply viewing the results in the SSMS results area), it will simply be noticed when the results are not consistent with what was expected, and that is the method I have used in this blog. So, now that we can relate the values in these columns to something we recognise in the database, let's see what the other values in the DMV are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the DMV and they are pretty self-explanatory. The final columns revealed by querying this view in DEFAULT mode are:

    - avg_fragmentation_in_percent: the amount that the index is logically fragmented. It will show NULL when the DMV is queried in SAMPLED mode.
    - fragment_count: the number of pieces that the index is broken into. It will show NULL when the DMV is queried in SAMPLED mode.
    - avg_fragment_size_in_pages: the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the DMV is queried in SAMPLED mode.
    - page_count: the total number of index or data pages in use.

    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages: an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9). This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index were bigger than 72 KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data to satisfy the query increases, and performance starts to fall away.

    This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. That value is arrived at by an internal algorithm that scores the logical fragmentation of the index, taking into account multiple files, the type of allocation unit, and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. Some tables will have indexes that suffer rapid increases in fragmentation as part of normal daily business and will need regular defragmentation work to keep them in good order; other indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this DMV, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track. Querying the DMV with the fifth parameter set to 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a correlation between four of these columns: column 1 (page_count) is the number of 8 KB pages used by the index, column 2 (avg_page_space_used_in_percent) is how full each page is (how much of the 8 KB has actual data written on it), column 3 (record_count) is how many records are recorded in the index, and column 4 (avg_record_size_in_bytes) is the average size of each record. This approximates to:

        ((Col1 * 8 * 1024) * (Col2 / 100)) / Col3 = Col4*

    avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important because you can use it to get an idea of how many records fit in each page, and therefore in each fragment, reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be - details of the smallest and largest records in the index - offered purely as a guide to help the DBA better understand the storage practices taking place.

    So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:

    - the FILL_FACTOR of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
    - the columns used in the index, which should be analysed so that new records do not need to be inserted into the middle of the index but can always be added to the end.

    * - it's approximate because there are many factors, associated with things like the type of data and other database settings, that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred. Run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • How to reliably take Java Heap Dumps?

    - by karlcyr
    My team is running into difficulties when trying to take good heap dumps triggered by OutOfMemoryErrors. For specific reasons we are currently taking the dumps with jmap called from a bash script instead of using the HeapDumpOnOutOfMemoryError flag. We're using a 64-bit 1.6 JVM with a heap size around 3 GB. Our heap dumps fail 90% of the time (guesstimate). Is there anything we can do to improve our odds of getting a clean heap dump we can use to troubleshoot memory problems? I have read that jmap had major issues in Java 1.4 but that those issues should be mostly addressed now.
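    For comparison, a minimal sketch of an in-process alternative (not the poster's jmap setup): the HotSpot-specific HotSpotDiagnosticMXBean, available in Sun/Oracle 1.6 JVMs, can write an hprof dump from inside the running VM, avoiding the external attach step that tends to make scripted jmap calls fragile. The output path here is illustrative.

        import java.lang.management.ManagementFactory;
        import com.sun.management.HotSpotDiagnosticMXBean;

        public class DumpHeap {
            public static void main(String[] args) throws Exception {
                HotSpotDiagnosticMXBean diagBean = ManagementFactory.newPlatformMXBeanProxy(
                        ManagementFactory.getPlatformMBeanServer(),
                        "com.sun.management:type=HotSpotDiagnostic",
                        HotSpotDiagnosticMXBean.class);
                // Second argument: true = dump only reachable ("live") objects,
                // which forces a full GC before the dump is written.
                diagBean.dumpHeap("/tmp/app-heap.hprof", true);
            }
        }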

    Read the article

  • Default maximum heap size -- Ubuntu 10.04 LTS, openjdk6-jre

    - by sachin
    I just installed openjdk6-jre on Ubuntu 10.04:

        java version "1.6.0_20"
        OpenJDK Runtime Environment (IcedTea6 1.9.2) (6b20-1.9.2-0ubuntu1~10.04.1)
        OpenJDK 64-Bit Server VM (build 19.0-b09, mixed mode)

    Every time I run "java" I get this error:

        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    This can be solved by specifying a maximum heap size and running "java -Xmx256m". But is there any way to permanently fix this error (i.e. set the default heap size to 256 MB so that I do not need to specify the max heap size every time I run the command)?
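    One possibility (a suggestion based on standard JVM behaviour, not verified on this exact Ubuntu/OpenJDK build): HotSpot-derived JVMs read default options from the JAVA_TOOL_OPTIONS environment variable, so a persistent default can be set once in a shell profile such as ~/.bashrc:

        export JAVA_TOOL_OPTIONS="-Xmx256m"

    The JVM prints a "Picked up JAVA_TOOL_OPTIONS" notice on startup when the variable is in effect.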

    Read the article

  • What do the 'size' numbers mean in the windbg !heap output?

    - by pj4533
    I see output like this in my DMP file:

        Heap entries for Segment00 in Heap 00150000
        00150640: 00640 . 00040 [01] - busy (40)
        00150680: 00040 . 01808 [01] - busy (1800)
        00151e88: 01808 . 00210 [01] - busy (208)
        00152098: 00210 . 00228 [00]
        001522c0: 00228 . 00030 [01] - busy (22)
        001522f0: 00030 . 00018 [01] - busy (10)
        00152308: 00018 . 00048 [01] - busy (3c)

    The WinDbg docs say this:

        Heap entries for Segment00 in Heap 250000

        0x01 - HEAP_ENTRY_BUSY
        0x02 - HEAP_ENTRY_EXTRA_PRESENT
        0x04 - HEAP_ENTRY_FILL_PATTERN
        0x08 - HEAP_ENTRY_VIRTUAL_ALLOC
        0x10 - HEAP_ENTRY_LAST_ENTRY
        0x20 - HEAP_ENTRY_SETTABLE_FLAG1
        0x40 - HEAP_ENTRY_SETTABLE_FLAG2
        0x80 - HEAP_ENTRY_SETTABLE_FLAG3

        Entry     Prev   Cur
        Address   Size   Size   flags       (Bytes used)    (Tag name)
        00250000: 00000 . 00b90 [01] - busy (b90)
        00250b90: 00b90 . 00038 [01] - busy (38)
        00250bc8: 00038 . 00040 [07] - busy (24), tail fill (NTDLL!LDR Database)

    The spacing is weird in the docs, though. Does it mean 'entry address', 'prev size' and 'cur size', or do 'Entry', 'Prev' and 'Cur' not belong to the columns below? What do 'prev size' and 'cur size' mean, especially with regard to 'bytes used'? And what is the difference between 'bytes used' and 'cur size'?

    Read the article

  • Exceptional PowerShell DBA Pt 3 - Collation and Fragmentation

    In this final look into his everyday essentials, Laerte Junior provides some useful scripts for the DBA that use an alternative way of error-logging. He shows how to use a PowerShell script to check and, if necessary, to defragment your indexes, write data to a SQL Server table, and change the collation for a table. Being an exceptional DBA just got a little easier.

    Read the article

  • Using Bulk Operations with Coherence Off-Heap Storage

    - by jpurdy
    Some NamedCache methods (including clear(), entrySet(Filter), aggregate(Filter, …) and invoke(Filter, …)) may generate large intermediate results. The size of these intermediate results may cause out-of-memory exceptions on cache servers, and in some cases on cache clients. This is particularly problematic if out-of-memory exceptions occur on more than one server (since these operations may be cluster-wide), or if the exceptions cause additional memory use on the surviving servers as they take over partitions from the failed ones. It is also especially problematic for clusters that use off-heap storage (such as the NIO or Elastic Data storage options), since those storage options allow greater than normal cache sizes but do nothing to address the size of intermediate or final result sets. One workaround is to use a PartitionedFilter, which allows the application to break a larger operation up into a number of smaller operations, each targeting either a set of partitions (useful for reducing the load on each cache server) or a set of members (useful for managing client result set sizes). It is also possible to return a key set first, and then pull in the full entries using that key set. This also allows the application to take advantage of near caching, though that may be of limited value if the result is large enough to cause near cache thrashing.
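    A minimal sketch of the PartitionedFilter idiom described above, assuming a partitioned cache service; the cache name, filter and batch size are illustrative, not from the original post:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.net.PartitionedService;
        import com.tangosol.net.partition.PartitionSet;
        import com.tangosol.util.Filter;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.PartitionedFilter;

        public class BatchedQuery {
            public static void main(String[] args) {
                NamedCache cache = CacheFactory.getCache("orders");   // illustrative name
                Filter query = new EqualsFilter("getStatus", "OPEN"); // illustrative filter
                PartitionedService service = (PartitionedService) cache.getCacheService();
                int partitionCount = service.getPartitionCount();
                int batchSize = 16; // partitions per request; tune to the environment

                // Run the query a few partitions at a time so that no single
                // request materialises the full result set on any one member.
                for (int first = 0; first < partitionCount; first += batchSize) {
                    PartitionSet parts = new PartitionSet(partitionCount);
                    int last = Math.min(first + batchSize, partitionCount);
                    for (int p = first; p < last; p++) {
                        parts.add(p);
                    }
                    for (Object entry : cache.entrySet(new PartitionedFilter(query, parts))) {
                        // process each entry incrementally here
                    }
                }
            }
        }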

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding indexes. I made use of the amazing T-SQL script from MSDN to ALTER INDEX based on the fragmentation percent returned by dm_db_index_physical_stats, doing a REBUILD or a REORGANIZE depending on whether fragmentation exceeds 30 percent. What I found was that in the first iteration there were 87 records which needed defragmenting. I ran the script and all 87 indexes (clustered & nonclustered) were rebuilt or reorganized. When I got the stats from dm_db_index_physical_stats again, there were still 27 records which needed defragmenting, and all of these were NONCLUSTERED indexes; all the clustered indexes were fixed. No matter how many times I run the script to defragment these records, I still have the same indexes needing defragmentation, most of them with the same fragmentation %. Nothing seems to change after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations; still, the REBUILD/REORGANIZE did not result in any change. More information: using SQL 2008; script as available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx. Could you please explain why these 27 records of nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated. Nod

    Read the article

  • Linux memory fragmentation

    - by Raghu
    Hi all, Is there a way to detect memory fragmentation on Linux? I ask because on some long-running servers I have noticed performance degradation, and only after I restart the process do I see better performance. I noticed it more when using Linux huge page support -- are huge pages in Linux more prone to fragmentation? I have looked at /proc/buddyinfo in particular. I want to know whether there are any better ways to look at it (not just CLI commands per se; any program or theoretical background would do).

    Read the article

  • Java heap space

    - by Gandalf StormCrow
    I get this message during the build of my project:

        java.lang.OutOfMemoryError: Java heap space

    How do I increase the heap space? I've got 8 GB of RAM; it's impossible that Maven consumed that much. I found this article, http://vikashazrati.wordpress.com/2007/07/26/quicktip-how-to-increase-the-java-heap-memory-for-maven-2-on-linux/, on how to do it on Linux, but I'm on Windows 7. How can I change the Java heap space under Windows?
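    For reference, the Linux article linked above comes down to setting the MAVEN_OPTS environment variable, which Maven passes to the JVM it launches; the same variable works on Windows (the 512m figure is just an example value):

        rem cmd.exe, current session only:
        set MAVEN_OPTS=-Xmx512m

    To make it permanent, the variable can instead be added under Control Panel > System > Advanced system settings > Environment Variables.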

    Read the article
