Search Results

Search found 15543 results on 622 pages for 'memory editing'.

  • Why is my addSubview: method causing a leak?

    - by Nathan
    Okay, so I have done a ton of research on this and have been pulling my hair out for days trying to figure out why the following code leaks:

        [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
        UIImage *comicImage = [self getCachedImage:[NSString stringWithFormat:@"%@%@%@", @"http://url/", comicNumber, @".png"]];
        self.imageView = [[[UIImageView alloc] initWithImage:comicImage] autorelease];
        [self.scrollView addSubview:self.imageView];
        self.scrollView.contentSize = self.imageView.frame.size;
        self.imageWidth = [NSString stringWithFormat:@"%f", imageView.frame.size.width];
        [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;

    Both self.imageView and self.scrollView are @property (nonatomic, retain) and released in my dealloc. imageView isn't used anywhere else in the code. This code also runs on a thread off of the main thread. If I run this code on my device, it will quickly run out of memory if I continually load this view. However, I've found that if I comment out the addSubview: line:

        [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
        UIImage *comicImage = [self getCachedImage:[NSString stringWithFormat:@"%@%@%@", @"http://url/", comicNumber, @".png"]];
        self.imageView = [[[UIImageView alloc] initWithImage:comicImage] autorelease];
        //[self.scrollView addSubview:self.imageView];
        self.scrollView.contentSize = self.imageView.frame.size;
        self.imageWidth = [NSString stringWithFormat:@"%f", imageView.frame.size.width];
        [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;

    memory usage becomes stable, no matter how many times I load the view. I have gone over everything I can think of to see why this is leaking, but as far as I can tell I have all my releases straight. Can anyone see what I am missing?
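
    A minimal sketch of one commonly suggested restructuring (not a confirmed fix for the leak above), assuming the work really does run on a secondary thread as described; the method names loadComicInBackground and attachImageView are hypothetical:

        - (void)loadComicInBackground {
            // Secondary threads need their own autorelease pool, or autoreleased
            // objects created on them (such as the UIImageView above) are never reclaimed.
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

            UIImage *comicImage = [self getCachedImage:
                [NSString stringWithFormat:@"http://url/%@.png", comicNumber]];
            self.imageView = [[[UIImageView alloc] initWithImage:comicImage] autorelease];

            // UIKit is not thread-safe; view manipulation belongs on the main thread.
            [self performSelectorOnMainThread:@selector(attachImageView)
                                   withObject:nil
                                waitUntilDone:YES];
            [pool drain];
        }

        - (void)attachImageView {
            // Runs on the main thread.
            [self.scrollView addSubview:self.imageView];
            self.scrollView.contentSize = self.imageView.frame.size;
        }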

    Read the article

  • Objective-C - Releasing objects passed as parameters

    - by NobleK
    Ok, here goes. Being a Java developer, I'm still struggling with the memory management in Objective-C. I have all the basics covered, but once in a while I encounter a challenge. What I want to do is something which in Java would look like this:

        MyObject myObject = new MyObject(new MyParameterObject());

    The constructor of the MyObject class takes a parameter of type MyParameterObject which I initiate on-the-fly. In Objective-C I tried to do this using the following code:

        MyObject *myObject = [[MyObject alloc] init:[[MyParameterObject alloc] init]];

    However, running the Build and Analyze tool, this gives me a "Potential leak of an object" warning for the MyParameterObject instance, and the leak indeed occurs when I test it using Instruments. I do understand why this happens, since I am taking ownership of the object with the alloc method and not relinquishing it; I just don't know the correct way of doing it. I tried using

        MyObject *myObject = [[MyObject alloc] init:[[[MyParameterObject alloc] init] autorelease]];

    but then the Analyze tool told me that "Object sent -autorelease too many times". I could solve the issue by modifying the init method of MyParameterObject to say return [self autorelease]; instead of just return self;. Analyze still warns about a potential leak, but it doesn't actually occur. However, I believe that this approach violates the convention for managing memory in Objective-C, and I really want to do it the right way. Thanks in advance.
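
    For reference, the conventional pre-ARC pattern here, sketched on the assumption that init: retains the parameter it keeps, is to hold the argument in a local variable, pass it, then release it:

        MyParameterObject *param = [[MyParameterObject alloc] init];
        MyObject *myObject = [[MyObject alloc] init:param];
        [param release];  // our reference is no longer needed; MyObject retains its own

    If init: does not actually retain the argument it stores, releasing it here would leave MyObject with a dangling reference, so the ownership contract of init: is worth confirming first.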

    Read the article

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using LINQ to get the records, and I use .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database. So the average number of total Items that have to be created is around 2 million.

        while (objects.Count != 0)
        {
            using (dataContext = new LinqToSqlContext(new DataContext()))
            {
                foreach (Object objectRecord in objects)
                {
                    // Create a list of 0 - 4 random Items and add each Item to the Object
                    for (int i = 0; i < Random.Next(0, 4); i++)
                    {
                        Item item = new Item();
                        item.Id = Guid.NewGuid();
                        item.Object = objectRecord.Id;
                        item.Created = DateTime.Now;
                        item.Changed = DateTime.Now;
                        dataContext.InsertOnSubmit(item);
                    }
                }
                dataContext.SubmitChanges();
            }
            amountToSkip += 250;
            objects = objectCollection.Skip(amountToSkip).Take(250).ToList();
        }

    Now the problem arises when creating the Items. When running the application (and not even using dataContext), the memory increases consistently. It's as if the Items never get disposed. Does anyone notice what I'm doing wrong? Thanks in advance!

    Read the article

  • Setting synthesized array properties causing memory leaks with nested arrays

    - by webtoad
    Hello: Why is the following code causing a memory leak in an iPhone app? All of the initted objects below leak, including the arrays, the strings and the numbers. So I'm thinking it has something to do with the synthesized array property not releasing the object when I set the property again on the second and subsequent times this piece of code is called. Here is the code ("controller" below is my custom view controller class, which I have a reference to and am setting with this code snippet):

        sqlite3_stmt *statement;
        NSMutableArray *foo_IDs = [[NSMutableArray alloc] init];
        NSMutableArray *foo_Names = [[NSMutableArray alloc] init];
        NSMutableArray *foo_IDsBySection = [[NSMutableArray alloc] init];
        NSMutableArray *foo_NamesBySection = [[NSMutableArray alloc] init];

        // Get data:
        NSString *sql = @"select distinct p.foo_ID, p.foo_Name from foo as p ";
        if (sqlite3_prepare_v2(...) == SQLITE_OK) {
            while (sqlite3_step(statement) == SQLITE_ROW) {
                int p_id;
                NSString *foo_Name;
                p_id = sqlite3_column_int(statement, 0);
                char *str2 = (char *)sqlite3_column_text(statement, 1);
                foo_Name = [NSString stringWithCString:str2];
                [foo_IDs addObject:[NSNumber numberWithInt:p_id]];
                [foo_Names addObject:foo_Name];
            }
            sqlite3_finalize(statement);
        }

        // Pass the array itself into another array:
        // (normally there is more than one array in each array)
        [foo_IDsBySection addObject:foo_IDs];
        [foo_NamesBySection addObject:foo_Names];
        [foo_IDs release];
        [foo_Names release];

        // Set some synthesized properties (of type NSArray, nonatomic, retain)
        // in controller:
        controller.foo_IDsBySection = foo_IDsBySection;
        controller.foo_NamesBySection = foo_NamesBySection;
        [foo_IDsBySection release];
        [foo_NamesBySection release];

    Thanks for any help!
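
    One thing worth double-checking in a case like this (a sketch only, not a diagnosis) is that the controller's dealloc balances the retained properties; the names below are taken from the question:

        - (void)dealloc {
            [foo_IDsBySection release];   // retained by the synthesized setters above
            [foo_NamesBySection release];
            [super dealloc];
        }

    If such a dealloc is already in place, the Instruments leak trace would point somewhere else; this is only the cheapest thing to rule out.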

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (of varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of the k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e. so I can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. The results are ~100 x ~100 matrices on average; using single-precision floats, it works out to 400 GB overall. I need to 1) generate the cell array, or pieces of it, without trying to place the whole thing in memory, and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end;  %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var);  %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2, max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index could be used. Does anyone know/see a better way?

    Read the article

  • Memory Leakage using datatables

    - by Vix
    Hi, I have a situation in which I'm compelled to retrieve 30,000 records each into 2 DataTables. I need to do some manipulations and insert the records into SQL Server in the Manipulate(dt1, dt2) function, and I have to do this 15 times, as you can see in the for loop. Now I want to know which would be the more effective approach in terms of memory usage. I've used the first approach; please suggest the best one.

        // (1)
        for (int i = 0; i < 15; i++)
        {
            DataTable dt1 = GetInfo(i);
            DataTable dt2 = GetData(i);
            Manipulate(dt1, dt2);
        }

        // or (2)
        DataTable dt1 = new DataTable();
        DataTable dt2 = new DataTable();
        for (int i = 0; i < 15; i++)
        {
            dt1 = null;
            dt2 = null;
            dt1 = GetInfo();
            dt2 = GetData();
            Manipulate(dt1, dt2);
        }

    Thanks, Vix.
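
    As a sketch of how variant (1) can be made more explicit: DataTable implements IDisposable through its MarshalByValueComponent base class, so a using block documents the intended lifetime (whether memory is actually freed sooner still depends on the garbage collector):

        // Variant (1) with explicit disposal; GetInfo/GetData as in the question.
        for (int i = 0; i < 15; i++)
        {
            using (DataTable dt1 = GetInfo(i))
            using (DataTable dt2 = GetData(i))
            {
                Manipulate(dt1, dt2);
            } // dt1 and dt2 go out of scope here and become eligible for collection
        }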

    Read the article

  • SQL Server 2008 R2 still requires a trace flag for Lock Pages in Memory

    - by AaronBertrand
    Almost two years ago, I blogged that Lock Pages in Memory was finally available to Standard Edition customers (Enterprise Edition customers had long been deemed smart enough to not abuse this feature). In addition to applying a cumulative update (2005 SP3 CU4 or 2008 SP1 CU2), in order to take advantage of LPIM, you also had to enable trace flag 845. Since the trace flag isn't documented for SQL Server 2008 R2, several of us in the community assumed that it was no longer required (since it was introduced...(read more)

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #032

    - by Pinal Dave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes below them. Let me know which of the following is your favorite article from memory lane.

    2007

    Complete Series of Database Coding Standards and Guidelines: SQL SERVER – Database Coding Standards and Guidelines – Introduction; Part 1; Part 2; Complete List Download.

    Explanation and Example – SELF JOIN: Used when all of the data you require is contained within a single table, but the data you need to extract is related to other data in the same table. A typical example is Employee information, where the table may hold both an Employee's ID number for each record and a field with the ID number of that Employee's supervisor or manager. To retrieve the data, the table must be related/joined to itself.

    Insert Multiple Records Using One Insert Statement – Use of UNION ALL: This is a very interesting question I received from a new developer: how can I insert multiple values into a table using only one insert? The post shows the common way of doing this in T-SQL (a minimal illustration of the pattern appears at the end of this entry).

    Function to Display Current Week Date and Day – Weekly Calendar: A straight blog post with a script to find the current week's dates and days, based on the parameters passed into the function.

    2008

    In my beginning years I had almost the same confusion that many developers have in their early years. Here are two of the interesting questions I attempted to answer back then; even if you are an experienced developer, you may still like to read them: Order Of Column In Index, and Order of Conditions in WHERE Clauses.

    Example of DISTINCT in Aggregate Functions: Have you ever used DISTINCT with an aggregate function? Here is a simple example of how to do it.

    Create a Comma Delimited List Using SELECT Clause From Table Column: A straight-to-script example where I explain how to do something easily and quickly.

    Compound Assignment Operators: SQL SERVER 2008 introduced the concept of compound assignment operators, which have been available in many other programming languages for quite some time. A compound assignment operator is one where a variable is operated upon and assigned on the same line.

    PIVOT and UNPIVOT Table Examples: Here is a very interesting question, whose answer can be both YES and NO: "If we PIVOT any table and UNPIVOT that table, do we get our original table?" Read the blog post for the explanation.

    2009

    What is Interim Table – Simple Definition of Interim Table: The interim table is a table that is generated by joining two tables, and it is not the final result table. In other words, when two tables are joined they create an interim table as the resultset, but the resultset is not final yet. It may be that more tables are about to be joined onto the interim table, and more operations are still to be applied to it (e.g. ORDER BY, HAVING). Besides, it is possible that there is no interim table at all; sometimes the final table is what is generated when the query is run.

    2010

    Stored Procedure and Transactions: If a stored procedure were inherently transactional, it would roll back the complete transaction when it encounters any error. Well, that does not happen in this case, which proves that a stored procedure does not automatically provide transactional behavior to a batch of T-SQL.

    Generate Database Script for SQL Azure: When talking about SQL Azure, the most common complaint I hear is that the script generated from a stand-alone SQL Server database is not compatible with SQL Azure. This was true for some time, for sure, but not any more. If you have SQL Server 2008 R2 installed, you can follow the guideline below to generate a script which is compatible with SQL Azure.

    Convert IN to EXISTS – Performance Talk: It is NOT necessarily the case that every time IN is replaced by EXISTS it gives better performance. However, in our case listed above it certainly does. You can read about this subject in the associated blog post.

    Subquery or Join – Various Options – SQL Server Engine Knows the Best: In every single performance tuning exercise, I hear the conversation among developers where some prefer subqueries and some prefer joins. In this two-part blog post, I explain the matter in detail with examples. Part 1 | Part 2

    Merge Operations – Insert, Update, Delete in Single Execution: MERGE is a new feature that provides an efficient way to perform multiple DML operations. In earlier versions of SQL Server, we had to write separate statements to INSERT, UPDATE, or DELETE data based on certain conditions; at present, by using the MERGE statement, we can include the logic of such data changes in one statement that updates a row when the data is matched and inserts it when the data is unmatched.

    2011

    Puzzle – Statistics are not Updated but are Created Once: Here is the quick scenario of my setup: create a table; insert 1,000 records; check the statistics; now insert 10 times more records (10,000); check the statistics - they will NOT be updated - WHY?

    Question to You – When to use Function and When to use Stored Procedure: Personally, I believe that they are different things and cannot be compared; it would be like comparing apples and oranges. Each has its own unique use. However, they can be used interchangeably at many times, and in real life (i.e., production environments) I have personally seen both of these used interchangeably many times. This is the precise reason for asking this question.

    2012

    In 2012 I ran two interesting series on the blog. If there is no fun in learning, the learning becomes a burden. For that very reason, I decided to build a multi-part quiz around SEQUENCE; the quiz was to identify the next value of the sequence, and I encourage all of you to take part in this fun quiz: Guess the Next Value – Puzzle 1; Puzzle 2; Puzzle 3; Puzzle 4.

    Simple Example to Configure Resource Governor – Introduction to Resource Governor: Resource Governor is a feature which can manage SQL Server workload and system resource consumption. We can limit the amount of CPU and memory consumption by limiting/governing/throttling on the SQL Server. When different workloads run on SQL Server and each workload needs different resources, or when workloads compete with each other for resources and affect the performance of the whole server, Resource Governor becomes a very important tool.

    Tricks to Replace SELECT * with Column Names – SQL in Sixty Seconds #017 – Video: SELECT * retrieves unnecessary columns and increases network traffic; when new columns are added, views need to be refreshed manually; it leads to usage of a sub-optimal execution plan; it uses the clustered index in most cases instead of the optimal index; and it is difficult to debug.

    SQL SERVER – Load Generator – Free Tool From CodePlex: The best part of this SQL Server Load Generator is that users can run multiple simultaneous queries against SQL Server using different login accounts and different application names. The interface of the tool is extremely easy to use and very intuitive as well.

    A Puzzle – Swap Value of Column Without Case Statement: Let us assume there is a single column in the table called Gender. The challenge is to write a single UPDATE statement which will flip or swap the value in the column: for example, if the value in the Gender column is 'male', swap it with 'female', and if the value is 'female', swap it with 'male'.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
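
    As mentioned above, a minimal illustration of the UNION ALL insert pattern (table and column names are hypothetical):

        INSERT INTO MyTable (ID, Name)
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third';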

    Read the article

  • Random memory corruption going undetected by memtest86

    - by sds
    ThinkPad T520; Ubuntu 12.04.1 LTS; kernel 3.2.0-33-generic; 16 GB of RAM.

    Memtest86+ ran for 26 hours (9 passes): no errors. I booted into "recovery mode" and ran fsck on all filesystems: no errors. "Check all packages": no errors.

    Yet there is apparent random memory corruption: perl/R/chrome segfault every now and then, seemingly at random, and sort(1) produces corrupt, unsorted files. What could possibly be wrong, and how do I debug it?

    Read the article

  • MySQL Memory usage

    - by Rob Stevenson-Leggett
    Our MySQL server seems to be using a lot of memory. I've tried looking for slow queries and queries with no index, and have halved the peak CPU usage and Apache memory usage, but the MySQL memory stays constant at 2.2 GB (~51% of available memory on the server). Here's the graph from Plesk; running top in the SSH window shows the same figures. Does anyone have any ideas on why the memory usage is constant like this, rather than showing peaks and troughs with usage of the app? Here's the output of the MySQL Tuning Primer script:

        -- MYSQL PERFORMANCE TUNING PRIMER --
             - By: Matthew Montgomery -

        MySQL Version 5.0.77-log x86_64
        Uptime = 1 days 14 hrs 4 min 21 sec
        Avg. qps = 22
        Total Questions = 3059456
        Threads Connected = 13

        Warning: Server has not been running for at least 48hrs.
        It may not be safe to use these recommendations

        To find out more information on how each of these runtime variables
        effects performance visit:
        http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html
        Visit http://www.mysql.com/products/enterprise/advisors.html
        for info about MySQL's Enterprise Monitoring and Advisory Service

        SLOW QUERIES
        The slow query log is enabled.
        Current long_query_time = 1 sec.
        You have 6 out of 3059477 that take longer than 1 sec. to complete
        Your long_query_time seems to be fine

        BINARY UPDATE LOG
        The binary update log is NOT enabled.
        You will not be able to do point in time recovery
        See http://dev.mysql.com/doc/refman/5.0/en/point-in-time-recovery.html

        WORKER THREADS
        Current thread_cache_size = 0
        Current threads_cached = 0
        Current threads_per_sec = 2
        Historic threads_per_sec = 0
        Threads created per/sec are overrunning threads cached
        You should raise thread_cache_size

        MAX CONNECTIONS
        Current max_connections = 100
        Current threads_connected = 14
        Historic max_used_connections = 20
        The number of used connections is 20% of the configured maximum.
        Your max_connections variable seems to be fine.

        INNODB STATUS
        Current InnoDB index space = 6 M
        Current InnoDB data space = 18 M
        Current InnoDB buffer pool free = 0 %
        Current innodb_buffer_pool_size = 8 M
        Depending on how much space your innodb indexes take up it may be safe
        to increase this value to up to 2 / 3 of total system memory

        MEMORY USAGE
        Max Memory Ever Allocated : 2.07 G
        Configured Max Per-thread Buffers : 274 M
        Configured Max Global Buffers : 2.01 G
        Configured Max Memory Limit : 2.28 G
        Physical Memory : 3.84 G
        Max memory limit seem to be within acceptable norms

        KEY BUFFER
        Current MyISAM index space = 4 M
        Current key_buffer_size = 7 M
        Key cache miss rate is 1 : 40
        Key buffer free ratio = 81 %
        Your key_buffer_size seems to be fine

        QUERY CACHE
        Query cache is supported but not enabled
        Perhaps you should set the query_cache_size

        SORT OPERATIONS
        Current sort_buffer_size = 2 M
        Current read_rnd_buffer_size = 256 K
        Sort buffer seems to be fine

        JOINS
        Current join_buffer_size = 132.00 K
        You have had 16 queries where a join could not use an index properly
        You should enable "log-queries-not-using-indexes"
        Then look for non indexed joins in the slow query log.
        If you are unable to optimize your queries you may want to increase your
        join_buffer_size to accommodate larger joins in one pass.
        Note! This script will still suggest raising the join_buffer_size when
        ANY joins not using indexes are found.

        OPEN FILES LIMIT
        Current open_files_limit = 1024 files
        The open_files_limit should typically be set to at least 2x-3x that of
        table_cache if you have heavy MyISAM usage.
        Your open_files_limit value seems to be fine

        TABLE CACHE
        Current table_cache value = 64 tables
        You have a total of 426 tables
        You have 64 open tables.
        Current table_cache hit rate is 1%, while 100% of your table cache is in use
        You should probably increase your table_cache

        TEMP TABLES
        Current max_heap_table_size = 16 M
        Current tmp_table_size = 32 M
        Of 15134 temp tables, 9% were created on disk
        Effective in-memory tmp_table_size is limited to max_heap_table_size.
        Created disk tmp tables ratio seems fine

        TABLE SCANS
        Current read_buffer_size = 128 K
        Current table scan ratio = 2915 : 1
        read_buffer_size seems to be fine

        TABLE LOCKING
        Current Lock Wait ratio = 1 : 142213
        Your table locking seems to be fine

    The app is a Facebook game with about 50-100 concurrent users. Thanks, Rob
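
    For orientation only, a sketch of my.cnf lines matching the primer's own suggestions above; the values are illustrative starting points, not tuned recommendations:

        [mysqld]
        thread_cache_size = 8            # primer: threads created/sec are overrunning threads cached
        table_cache       = 512          # primer: 426 tables, but a table_cache of 64 is 100% in use
        query_cache_size  = 32M          # primer: query cache is supported but not enabled
        log-queries-not-using-indexes    # primer: 16 joins could not use an index properly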

    Read the article

  • Reduce memory usage

    - by Flintoff
    I have just installed the standard default desktop configuration of Ubuntu 12.10 (Quantal Quetzal). My PC only has 1 GB of RAM and is struggling a little. What steps can I take to reduce the memory overhead of the standard install? If it makes a difference, I use Firefox and a terminal most of the time. Simply running those two applications I see:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:           938        873         64          0          5        167
        -/+ buffers/cache:        701        237
        Swap:          959        158        801

    Read the article

  • non-volatile virtual memory for C++ containers

    - by arieberman
    Is there a virtual memory management process that would allow a program to use the standard container structures and classes, but retain these structures and their data when the program is not running (or being used), for use by the program at a later time? This should be possible, but can it be done without changing the source code and its (container) declarations? Is there a standard way of doing this?
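
    The standard library itself has no such mechanism, and the allocator type is part of a container's declaration, so container declarations generally must change. As a sketch of one existing approach, Boost.Interprocess can back standard-style containers with a memory-mapped file that persists between runs (the file name and capacity below are arbitrary):

        #include <boost/interprocess/managed_mapped_file.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>
        #include <boost/interprocess/containers/vector.hpp>

        namespace bip = boost::interprocess;
        typedef bip::allocator<int, bip::managed_mapped_file::segment_manager> IntAlloc;
        typedef bip::vector<int, IntAlloc> PersistentVector;

        int main() {
            // The backing file is created on first use and re-opened afterwards.
            bip::managed_mapped_file file(bip::open_or_create, "data.bin", 1 << 20);
            PersistentVector *v = file.find_or_construct<PersistentVector>("numbers")
                                      (file.get_segment_manager());
            v->push_back(42);  // the value is still there on the next run
        }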

    Read the article

  • loadparm.c:4864, leaking memory?

    - by RandomOzzy
    I'm running Ubuntu 14.04 Server with a LAMP stack, Samba, and FTP, with no GUI; I just SSH into the server and work on it. I'm having trouble tracking down a solution to this issue, but as far as I can Google, it might have something to do with Samba:

        no talloc stackframe at ../source3/param/loadparm.c:4864, leaking memory??

    The warning doesn't pop up at regular intervals or in response to the same or repeated actions. It pops up between things I'm doing (changing directories, editing files, copying stuff), and it often pops up when I first log in. Has anyone got experience fixing this issue?

    Read the article

  • ACDSee alternatives for batch editing images

    - by Oxwivi
    I am looking for free, preferably open, alternatives to ACDSee for batch editing work. While I can do much of the work well in ACDSee, it's not entirely satisfactory despite my having paid for it. I need at least the following batch editing functions:

    Resize using either height or width, maintaining the aspect ratio
    Auto contrast
    Text overlays
    Occasionally, cropping
    (Oh, and I make extensive use of renaming features as well.)

    A couple of issues with ACDSee: I always need to highlight the Exposure section or auto contrast will not be applied, despite it being saved in the preset; and I can't define or move around the cropping box, forcing me to crop tons of images manually. I'm not an advanced "power photo-editor"; I only need the basic steps I described to be automated. My personal feature wish list (I'm pretty sure something so niche doesn't exist) would be text overlays based on the image names (images are named image-1_1, image-1_2 or image-2_c1_1, image-2_c1_2, and the text overlay would be Image-1 and Image-2 C1 and Image-2 C2).

    I tried digiKam, but damn, that thing is huge. It runs very slowly on my Pentium 4 with 1.5 GB RAM. On top of being a program with over 1 GB of files, the KDE library it uses is always slow, regardless of whether it runs on Windows or Linux.
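
    As one hedged illustration, ImageMagick (a free command-line tool) can script most of the batch steps listed above; the file names and sizes below are placeholders, and -normalize only approximates ACDSee's auto contrast:

        # Width-based resize (aspect ratio preserved) plus contrast stretch, into ./out
        mogrify -path out -resize 800x -normalize *.jpg

        # A text overlay derived from the file name, one image at a time
        convert image-1_1.jpg -gravity south -annotate 0 "Image-1" out/image-1_1.jpg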

    Read the article

  • UIImageWriteToSavedPhotosAlbum showing memory leak with iPhone connected to Instruments

    - by user168739
    Hi, I'm using version 3.0.1 of the SDK. With the iPhone connected to Instruments, I'm getting a memory leak when I call UIImageWriteToSavedPhotosAlbum. Below is my code:

        NSString *gnTmpStr = [NSString stringWithFormat:@"%d", count];
        UIImage *ganTmpImage = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:gnTmpStr ofType:@"jpg"]];

        // Request to save the image to camera roll
        UIImageWriteToSavedPhotosAlbum(ganTmpImage, self, @selector(imageSavedToPhotosAlbum:didFinishSavingWithError:contextInfo:), nil);

    and the selector method:

        - (void)imageSavedToPhotosAlbum:(UIImage *)image didFinishSavingWithError:(NSError *)error contextInfo:(void *)contextInfo
        {
            NSString *message;
            NSString *title;
            if (!error) {
                title = @"Wallpaper";
                message = @"Wallpaper Saved";
            } else {
                title = @"Error";
                message = [error description];
            }
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:title
                                                            message:message
                                                           delegate:self
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            [alert show];
            [alert release];
        }

    Am I forgetting to release something once the image has been saved and the selector method imageSavedToPhotosAlbum is called? Or is there a possible known issue with UIImageWriteToSavedPhotosAlbum? Here is the stack trace from Instruments:

        Leaked Object: GeneralBlock-3584 size: 3.50 KB
        30 MyApp start
        29 MyApp main /Users/user/Desktop/MyApp/main.m:14
        28 UIKit UIApplicationMain
        27 UIKit -[UIApplication _run]
        26 GraphicsServices GSEventRunModal
        25 CoreFoundation CFRunLoopRunInMode
        24 CoreFoundation CFRunLoopRunSpecific
        23 GraphicsServices PurpleEventCallback
        22 UIKit _UIApplicationHandleEvent
        21 UIKit -[UIApplication sendEvent:]
        20 UIKit -[UIWindow sendEvent:]
        19 UIKit -[UIWindow _sendTouchesForEvent:]
        18 UIKit -[UIControl touchesEnded:withEvent:]
        17 UIKit -[UIControl(Internal) _sendActionsForEvents:withEvent:]
        16 UIKit -[UIControl sendAction:to:forEvent:]
        15 UIKit -[UIApplication sendAction:toTarget:fromSender:forEvent:]
        14 UIKit -[UIApplication sendAction:to:from:forEvent:]
        13 CoreFoundation -[NSObject performSelector:withObject:withObject:]
        12 UIKit -[UIBarButtonItem(Internal) _sendAction:withEvent:]
        11 UIKit -[UIApplication sendAction:to:from:forEvent:]
        10 CoreFoundation -[NSObject performSelector:withObject:withObject:]
         9 MyApp -[FlipsideViewController svPhoto] /Users/user/Desktop/MyApp/Classes/FlipsideViewController.m:218
         8 0x317fa528
         7 0x317e3628
         6 0x317e3730
         5 0x317edda4
         4 0x3180fc74
         3 Foundation +[NSThread detachNewThreadSelector:toTarget:withObject:]
         2 Foundation -[NSThread start]
         1 libSystem.B.dylib pthread_create
         0 libSystem.B.dylib malloc

    I did a test with a new project and only added the code below in viewDidLoad:

        NSString *gnTmpStr = [NSString stringWithFormat:@"DefaultTest"];
        UIImage *ganTmpImage = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:gnTmpStr ofType:@"png"]];

        // Request to save the image to camera roll
        UIImageWriteToSavedPhotosAlbum(ganTmpImage, nil, nil, nil);

    The same leak shows up right after the app loads. Thank you for the help. Bryan

    Read the article

  • Small objects allocator

    - by Felics
    Hello, has anybody used the SmallObjectAllocator from Modern C++ Design by Andrei Alexandrescu in a big project? I want to implement this allocator, but I need some opinions about it before using it in my project. I made some tests and it seems very fast, but the tests were made in a small test environment. I want to know how fast it is when there are lots of small objects (like events, smart pointers, etc.) and how much extra memory it uses.

    Read the article

  • What (tf) are the secrets behind PDF memory allocation (CGPDFDocumentRef)

    - by Kai
    For a PDF reader I want to prepare a document by taking 'screenshots' of each page and saving them to disc. The first approach is:

        CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef) someURL);
        for (int i = 1; i <= pageCount; i++) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            ... // getting + manipulating graphics context etc.
            CGContextDrawPDFPage(context, page);
            ...
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            ... // saving the image to disc
            [pool drain];
        }
        CGPDFDocumentRelease(document);

    This results in a lot of memory which seems not to be released after the first run of the loop (preparing the 1st document), but no more unreleased memory in additional runs:

        MEMORY BEFORE:          6 MB
        MEMORY DURING 1ST DOC: 40 MB
        MEMORY AFTER 1ST DOC:  25 MB
        MEMORY DURING 2ND DOC: 40 MB
        MEMORY AFTER 2ND DOC:  25 MB
        ...

    Changing the code to

        for (int i = 1; i <= pageCount; i++) {
            CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef) someURL);
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            ... // getting + manipulating graphics context etc.
            CGContextDrawPDFPage(context, page);
            ...
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            ... // saving the image to disc
            CGPDFDocumentRelease(document);
            [pool drain];
        }

    changes the memory usage to

        MEMORY BEFORE:          6 MB
        MEMORY DURING 1ST DOC:  9 MB
        MEMORY AFTER 1ST DOC:   7 MB
        MEMORY DURING 2ND DOC:  9 MB
        MEMORY AFTER 2ND DOC:   7 MB
        ...

    but is obviously a step backwards in performance. When I start reading a PDF (later in time, on a different thread), in the first case no more memory is allocated (staying at 25 MB), while in the second case memory goes up to 20 MB (from 7). In both cases, when I remove the CGContextDrawPDFPage(context, page); line, memory is (nearly) constant at 6 MB during and after all preparations of documents. Can anybody explain what's going on there?
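
    A possible middle ground, purely a sketch built on the assumption that cached pages belong to the document and are only freed with it, is to re-create the document every few pages, so the cache stays bounded without paying the open cost on every page:

        CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)someURL);
        for (int i = 1; i <= pageCount; i++) {
            if (i % 10 == 0) {  // hypothetical interval; tune speed against peak memory
                CGPDFDocumentRelease(document);
                document = CGPDFDocumentCreateWithURL((CFURLRef)someURL);
            }
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            // ... draw the page and save the image as in the original loop ...
            [pool drain];
        }
        CGPDFDocumentRelease(document);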

    Read the article

  • Web Services, Memory Leaks and CRM

    - by Neil
    Hi, I have a website that allows users to upload a CSV file. This calls a service that reads the information from the CSV, puts it into DynamicEntity objects, and calls the CRM service to create/update entities in CRM. When this service creates/updates an entity, this kicks off other plugins to apply certain business rules; these rules can also create or update entities in CRM. The issue here is that the handle count of the w3wp.exe process that the website is calling increases every time an entity is created or updated, and it never comes back down. I tried putting garbage collection code in the business rules, and this reduces the handle count of the CRM w3wp process (run by the Network Service), but not the other w3wp process. Should I have Dispose methods on the web service that calls the CRM service? I hope that makes sense; I'm not overly familiar with memory management issues, so any help is appreciated. Can anybody give me some tips on how to stop this from occurring? Thanks, Neil

    -- EDIT
    Okay, the handle count goes up when I call the Service.Create(DynamicEntity) method, so I don't think placing any code here would be beneficial. When I exit the method/class/service that contains this call, the handle count stays as it is. What I need to know is whether this is something I should be managing, or whether it is something CRM takes care of (or doesn't take care of, but that I can't do anything about).

    -- ANOTHER EDIT
    Right, this is how it works. 1) We have CRM and its related services. 2) We have another service, independent of CRM, that uses the CRM services (number 1 above) to create entities based on CSV info passed into it. 3) We have a website that allows a user to upload a CSV and calls service number 2 above to create/update entities in CRM. 4) We have plugins fired by CRM which use service 1 above to create/update entities. So the user uploads a CSV to the website (3), and this fires a service (2). When service 2 creates an entity using service 1, service 4 fires. Service 4 also uses service 1 to create entities, and when these services are called (using the Service.Create() method) the handle count of the process increases. When the methods/classes/services finish, the handle count remains the same, so when the whole process occurs again the handle count increases again.

    Read the article

  • dmidecode showing less Memory Capacity than Motherboard spec?

    - by starchx
    We got a Supermicro server, http://www.supermicro.com/products/system/1u/5016/sys-5016i-ur.cfm; according to the spec, the server supports up to 32 GB of memory when using ECC registered memory. However, when I try the dmidecode command, it says the max is 24 GB:

        [root@c1 ~]# dmidecode | grep Maximum
        Maximum Size: 256 kB
        Maximum Size: 1024 kB
        Maximum Size: 8192 kB
        Maximum Memory Module Size: 4096 MB
        Maximum Total Memory Size: 24576 MB
        Maximum Capacity: 24 GB
        Maximum Value: Unknown

    Which one should I trust?

    Read the article

  • IBM memory, what's wrong, bought the wrong type?

    - by michaelmore
    I have this server:

        IBM System x3250 M3
        Intel(R) Xeon(R) CPU X3450 @ 2.67GHz (8 CPUs), ~2.7GHz
        2 GB PC3-10600 ECC DDR3 SDRAM
        500 GB hot-plug HDD
        Windows Server 2008

    This is a screenshot of the default memory, a Micron PC3-10600R 2 GB module, which works well and boots into Windows: http://freakimage.com/images/113memory_ram_micron_PC3_.jpg

    I then wanted to move to more memory, so I bought this Samsung PC3-10600R 4 GB module: http://freakimage.com/images/602memory_ram_samsung_mic.jpg

    But with it the server hangs at uEFI boot and can't get to Windows. I have already googled this everywhere, with no solution yet. Please take a look at the screenshots: did I buy the wrong new memory? If so, maybe I can still exchange it with the seller for another type.

    Read the article

  • IIS 7 Application Pools using a different amount of memory on multiple servers behind a load balancer

    - by Jim March
    We have 6 servers in a web farm behind an F5. There are approximately 25 AppPools on each of these servers. On servers 1 - 5 the AppPools consume approx 500 MB Private Bytes and 5 GB Virtual Bytes. On server 6 the AppPools consume approx 800 MB Private Bytes and 8 GB Virtual Bytes. I cannot seem to figure out why we have this difference. The code is exactly the same on each box, and we replicate the applicationHost.config between the boxes, so the application configs are identical. The only difference seems to be that this box consumes more RAM, which in turn ends up using a lot more CPU. During Black Friday we observed the CPU on server 6 spiking to 100% and noticed that its % Memory Commit was also near 100%, while the rest of the farm was closer to 50% utilization. Pulling the 6th server from the load balancer dropped its CPU/memory back to normal and did not cause a noticeable strain on the other servers.

    Read the article

  • About the Linux memory cache

    - by cheneydeng
    I'm running a Python script to do some statistics. The memory it actually uses is low, about 10%, and no other process uses more memory. However, when I use free -m it shows that almost 95% of memory has been used. The point is that my script does a lot of reading from files, so I wonder if there is some Linux memory-caching mechanism that causes this. echo 1 >> /proc/sys/vm/drop_caches works, but it seems manual. How can I reduce the memory cost without a bad effect on reading files?
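
    For reading the numbers: the "-/+ buffers/cache" row of free(1) already subtracts the reclaimable cache. A short sketch (the drop_caches write needs root and is normally unnecessary, since the kernel evicts cache on its own under memory pressure):

        free -m                            # the "-/+ buffers/cache" row shows real usage
        sync                               # flush dirty pages before dropping caches
        echo 3 > /proc/sys/vm/drop_caches  # 1 = page cache, 2 = dentries/inodes, 3 = both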

    Read the article

  • Hyper-v dynamic memory client machines always use maximum memory

    - by Eric P
    When I create a virtual machine in Hyper-V and set it up to use dynamic memory, the virtual machine always uses the maximum memory within the virtualized OS. Hyper-V will show the assigned memory at 514 MB, but when I log into the server and pull up Task Manager, it shows 90% memory used. When I bump the maximum memory up to 4 GB, I get the same result: 90% memory usage. Nothing is even running on the virtual machine other than a clean install of Windows Server 2008 R2. I have also tried it with Windows 7, with the same results. Is this the expected behavior, or is something set up wrong?

    Read the article
