Search Results

Search found 12445 results on 498 pages for 'memory fragmentation'.

Page 10 of 498

  • High virtual memory usage in OpenVZ?

    - by freedrull
    We're having a lot of memory problems on a new OpenVZ box. It is supposed to have 1 GB of memory; I'm not sure how much of that is burstable or guaranteed memory. Programs in general seem to take up more virtual memory than they do on my box at home or on our other OpenVZ box. I wrote this simple C program:

        #include <stdio.h>
        #include <stdlib.h>

        int main(){
            char *thingy = malloc(500);
            getchar();
            return 0;
        }

    So it simply allocates 500 bytes, waits for a key press, and then returns. I ran the program on 3 computers. On my home machine and on our other OpenVZ box it shows about 1k bytes of virtual memory being used. On the new, problematic machine it's about 3k. I know this is just virtual memory and not resident memory, but why is this machine allocating so much virtual memory? Are there some settings I need to adjust in the OpenVZ memory settings? I tried changing the stack size with ulimit -s 256 and restarting some daemons, but I still saw the same results. I'm doing all of my monitoring with htop; is that even a good program to use with an OpenVZ VPS? I've read I should be parsing the output of /proc/user_beancounters instead.
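
    On the beancounters point, /proc/user_beancounters is plain text, so the privvmpages row (and its failcnt column) can be checked directly. Below is a rough sketch of that idea in C#; it assumes a Mono/.NET runtime is available inside the container and the usual held/maxheld/barrier/limit/failcnt column layout, so treat it as an illustration rather than a ready-made tool.

        // Rough sketch: print the privvmpages counters from an OpenVZ container.
        // Assumes the standard beancounters column layout; values are in pages
        // (typically 4 KB each). Illustrative only.
        using System;
        using System.IO;

        class BeancountersSketch
        {
            static void Main()
            {
                foreach (string line in File.ReadLines("/proc/user_beancounters"))
                {
                    // Split on whitespace; the resource name may follow a "uid:" prefix,
                    // so look for the token anywhere in the line.
                    string[] fields = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
                    int idx = Array.IndexOf(fields, "privvmpages");
                    if (idx >= 0 && fields.Length >= idx + 6)
                    {
                        Console.WriteLine("held={0} maxheld={1} barrier={2} limit={3} failcnt={4}",
                            fields[idx + 1], fields[idx + 2], fields[idx + 3],
                            fields[idx + 4], fields[idx + 5]);
                    }
                }
            }
        }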

    Read the article

  • TCP Server Memory management: #Connections Vs. #Requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only thing that happens is that each connection consumes some memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). For example, let's say we have 2 GB of server memory. Now there are two extreme cases:
    Case 1: If we allocate a 64 KB buffer for each connection (only to receive the incoming request), then 32,768 connections can consume all 2 GB of memory. That leaves no memory to queue or process incoming requests from those connections.
    Case 2: On the other hand, say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time; those buffers pile up in the server and eventually occupy most of the server's memory, leaving no memory for new connections thereafter.
    This is the real dilemma in server design that has been bugging me badly for many days. If I can decide on a maximum request-buffer size per connection and a maximum number of requests to allow in the queue per connection, then, based on available server memory, that automatically sets a limit on the maximum number of concurrent connections. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines or empirical data that someone could share with me, please?
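
    A rough sketch of the budgeting arithmetic described above, in C#: the 2 GB total and the 64 KB receive buffer are the question's own figures, while the per-connection queue limits are assumptions purely for illustration (with a queue budget of zero this reduces to Case 1's 32,768 connections).

        // Sketch of the memory-budget arithmetic from the question.
        // Only the 2 GB total and 64 KB receive buffer come from the question;
        // the queue-depth and request-size figures are assumptions.
        using System;

        class ConnectionBudget
        {
            static void Main()
            {
                long totalMemoryBytes   = 2L * 1024 * 1024 * 1024; // 2 GB (from the question)
                long receiveBufferBytes = 64 * 1024;               // 64 KB per connection (from the question)

                // Assumed policy: cap the queued-request backlog per connection.
                long maxQueuedRequests  = 16;          // assumption
                long maxRequestBytes    = 64 * 1024;   // assumption
                long queueBudgetPerConn = maxQueuedRequests * maxRequestBytes;

                long perConnectionBytes = receiveBufferBytes + queueBudgetPerConn;
                long maxConnections     = totalMemoryBytes / perConnectionBytes;

                Console.WriteLine("Per-connection budget: {0} KB", perConnectionBytes / 1024);
                Console.WriteLine("Max concurrent connections within 2 GB: {0}", maxConnections);
            }
        }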

    Read the article

  • Windows Displays Double the Actual Installed Physical Memory

    - by Andrew Barber
    I have a server on which I've installed Windows Web 2008 R2, and it is reporting double the physical memory that is actually installed. In msinfo32, "Installed Physical Memory" shows as 2x whatever the actual installed amount is, though "Total Physical Memory" shows the correct amount. The "System" info window shows installed memory as 2x, with the correct amount listed in parentheses as the "usable" amount. This server mistakenly had Windows Web 2008 (32-bit) installed on it just previously, and that OS reported the same faulty information that Win2K8R2 is reporting. The BIOS reports the correct amount, memtest was run on this server before installation, and a previous Windows 2000 instance installed on this system also reported the correct amount, as I recall. Server operation seems fine as well (it's only trying to use the correct amount of memory). The server is a generic pizza box running on a SuperMicro X6DVL-EG with dual Xeon 3.2s. The installed memory is 4 matching MT18VDDF12872G-335C3 sticks (1 GB PC2700 DDR ECC REG CL2.5). This behavior occurs whether two or all four are installed. So, has anyone seen something like this before? Any idea what's causing it, and how concerned I should be about it? Everything else seems good so far, and I'll be upgrading the memory before putting the server into service, but I don't want to spend too much time/money/effort on the server if something odd is going wrong here.
    UPDATE: There was a question I ran into regarding memory sparing in the BIOS and a possible (buggy) effect thereof; however, flipping that bit back and forth in the BIOS showed that isn't the issue. Still a bit flummoxed about this one, though I still have seen no negative impacts.
    Post-Answer Update (January 13, 2011): Upgrading the system with new, larger memory has fixed this issue.

    Read the article

  • Animating Tile with Blitting taking up Memory.

    - by Kid
    I am trying to animate a specific tile in my 2D array, using blitting. The animation consists of three different 16x16 sprites in a tile sheet. That works perfectly with the code below, BUT it's causing a memory leak: every second the Flash Player takes up about 140 KB more memory. What part of the following code could be causing the leak?

        // The Rectangle variable finds where on the 2d array we should clear the pixels.
        // fillRect follows up by setting alpha 0 at that spot before we copy in the next sprite.
        // tileType is a variable that holds what kind of tile the next tile in the animation is (from the tile sheet).
        // drawTile() gets the sprite from the tile sheet and copyPixels it into the right position on the canvas.
        public function animateSprite():void{
            tileGround.bitmapData.lock();
            if(anmArray[0].tileType > 42){
                anmArray[0].tileType = 40;
                frameCount = 0;
            }
            var rect:Rectangle = new Rectangle(anmArray[0].xtile * ts, anmArray[0].ytile * ts, ts, ts);
            tileGround.bitmapData.fillRect(rect, 0);
            anmArray[0].tileType = 40 + frameCount;
            drawTile(anmArray[0].tileType, anmArray[0].xtile, anmArray[0].ytile);
            frameCount++;
            tileGround.bitmapData.unlock();
        }

        public function drawTile(spriteType:int, xt:int, yt:int):void{
            var tileSprite:Bitmap = getImageFromSheet(spriteType, ts);
            var rec:Rectangle = new Rectangle(0, 0, ts, ts);
            var pt:Point = new Point(xt * ts, yt * ts);
            tileGround.bitmapData.copyPixels(tileSprite.bitmapData, rec, pt, null, null, true);
        }

        public function getImageFromSheet(spriteType:int, size:int):Bitmap{
            var sheetColumns:int = tSheet.width/ts;
            var col:int = spriteType % sheetColumns;
            var row:int = Math.floor(spriteType/sheetColumns);
            var rec:Rectangle = new Rectangle(col * ts, row * ts, size, size);
            var pt:Point = new Point(0,0);
            var correctTile:Bitmap = new Bitmap(new BitmapData(size, size, false, 0));
            correctTile.bitmapData.copyPixels(tSheet, rec, pt, null, null, true);
            return correctTile;
        }

    Read the article

  • Chrome memory and CPU footprint

    - by nmizar
    I've searched the forums but couldn't find quite the answer I was looking for [1], so I thought it could be of interest to more people around here. I carry out a big part of my job in the browser (or for the browser, if you want to put it that way). I tend to use Chrome because it natively has many of the newest features that I need (mainly, but not only, DevTools). BTW, I'm usually running the latest available Chrome version/build on a desktop Vaio with 4 GB of RAM and a dual-core CPU, with Ubuntu 12.04 as the distro and GNOME as the window manager. So, I was curious about a) why Chrome spawns so many threads even with only three or four tabs open, and b) whether there is any way to allocate more memory to Chrome to prevent its performance from degrading.
    Thanks in advance, Nacho
    PS [1] I found threads about Chrome freezing or running out of memory, but not about the reasons for this happening or about how to avoid it.
    PPS Of course, I could always buy a newer and more capable machine, and that is exactly what I'm trying to evaluate: is this a question of outdated hardware, or will the problem keep appearing on any (decently, but not hugely, sized) machine?

    Read the article

  • OpenGL memory issue - quite strange

    - by user4707
    Hello, I have heard that textures consume a lot of memory, but I am surprised by how much. I have 7 textures, 1024 at 16 bits each. While running my app it consumes 57 MB of memory. I think that is "a bit" too much. I am writing a 2D application (no cocos or other framework). The strange thing is that if I compile my app with the rendering call (glDrawArrays) disabled, it uses only 27 MB, which is about 30 MB less. Do you have any idea why? I am creating the textures before rendering, of course. The rendering looks like this:

        [EAGLContext setCurrentContext:context];
        glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glPushMatrix();
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        TeksturaObrazek *obrazek_klaw = [[AppDirector sharedAppDirector] obrazek_klaw];
        glBindTexture(GL_TEXTURE_2D, [[obrazek_klaw image_texture] name]);
        glVertexPointer(2, GL_FLOAT, 0, vertex1);
        glTexCoordPointer(2, GL_FLOAT, 0, vertex2);
        glColor4f(1, 1, 1, alpha);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_2D);
        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glPopMatrix();
        [context presentRenderbuffer:GL_RENDERBUFFER_OES];

    It looks like a standard routine. I have spent about 2 days looking for an answer and I still have no clue.

    Read the article

  • Android: Google extends the scope of the "Fragments" API to fight OS fragmentation

    Android: Google extends the scope of the Fragments API back to version 1.6 of its OS, in order to fight the fragmentation of its mobile platform. In the fight against Android fragmentation, Google has just extended the "Fragments" API, originally designed for Android 3.0 (aka Honeycomb), to older versions of the OS. Fragments was initially designed to make it easier to adapt older applications to devices with larger screens, notably the tablets targeted by...

    Read the article

  • Returning objects with autorelease but I still leak memory

    - by gok
    I am leaking memory with this code in my custom class:

        + (id)vectorWithX:(float)dimx Y:(float)dimy{
            return [[[Vector alloc] initVectorWithX:dimx Y:dimy] autorelease];
        }

        - (Vector*)add:(Vector*)q {
            return [[[Vector vectorWithX:x+q.x Y:y+q.y] retain] autorelease];
        }

    In the app delegate I instantiate it:

        Vector *v1 = [[Vector alloc] initVector];
        Vector *v2 = [[Vector alloc] initVector];
        Vector *vtotal = [[v1 add:v2] retain];
        [v1 release];
        [v2 release];
        [vtotal release];

    How does this leak? I release or autorelease them properly. The app crashes immediately if I don't retain these, because of an early release I guess. It also crashes if I add another release.

    Read the article

  • Cocoa Memory Usage

    - by user288024
    I'm trying to track down some peculiar memory behavior in my Cocoa desktop app. My app does a lot of image processing using NSImage and uploads those images to a website over HTTP using NSURLConnection. After uploading several hundred images (some very large), when I run Instruments I get no leaks. I've also run through MallocDebug and get no leaks. When I dig into object allocations using Instruments, I get output like this:

        GeneralBlock-9437184, Net Bytes 9437184, # Net 1
        GeneralBlock-192512, Net Bytes 2695168, # Net 14

    and so on for smaller sizes. When I look at these in detail, they're marked as being owned by 'Foundation' and created via NSConcreteMutableData initWithCapacity:. During the HTTP upload I'm creating a POST body using NSMutableData, so I'm guessing these are buffers Cocoa is caching for me when I create the NSMutableData objects. Is there a way to force Cocoa to free these? I'm 90% positive I'm releasing correctly (and Instruments and MallocDebug seem to confirm this), but I think Cocoa is keeping these around for performance reasons since I'm allocating so many NSMutableData buffers.

    Read the article

  • C# memory allocation and deallocation patterns

    - by Neal
    Since C# uses garbage collection, when is it necessary to call .Dispose() to free memory? I realize there are a few situations, so I'll try to list the ones I can think of. If I close a Form that contains GUI-type objects, are those objects dereferenced and therefore collected? If I create a local object using new, should I .Dispose() of it before the method exits or just let the GC take care of it? What is good practice in this case? Are there any times in which forcing a GC is understandable? Are events collected by the GC when their object is collected?
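
    As a hedged side note (not from the question itself): Dispose() is about releasing unmanaged resources deterministically rather than about freeing managed memory, and the usual pattern is a using block. A minimal C# sketch, with FileStream and the file name as stand-ins for any IDisposable:

        // Minimal sketch of deterministic disposal with IDisposable.
        // FileStream and "example.dat" are just stand-ins for any disposable resource.
        using System;
        using System.IO;

        class DisposeSketch
        {
            static void Main()
            {
                // Dispose() runs automatically when the using block exits, even if an
                // exception is thrown; the GC only reclaims the managed memory later,
                // on its own schedule.
                using (var stream = new FileStream("example.dat", FileMode.Create))
                {
                    stream.WriteByte(0x42);
                } // stream.Dispose() runs here
            }
        }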

    Read the article

  • NSXMLParser's delegate and memory leak

    - by dizzy_fingers
    Hello, I am using an NSXMLParser in my program and I assign a delegate to it. This delegate, though, gets retained by the setDelegate: method, resulting in a minor, yet annoying :-), memory leak. I cannot release the delegate class right after setDelegate: because the program will crash. Here is my code:

        self.parserDelegate = [[ParserDelegate alloc] init]; // retainCount: 1
        self.xmlParser = [[NSXMLParser alloc] initWithData:self.xmlData];
        [self.xmlParser setDelegate:self.parserDelegate]; // retainCount: 2
        [self.xmlParser parse];
        [self.xmlParser release];

    ParserDelegate is the delegate class. Of course, if I set 'self' as the delegate I have no problem, but I would like to know if there is a way to use a different class as the delegate with no leaks. Thank you in advance.

    Read the article

  • Memory usage in iOS

    - by varun
    My app has a simple UI with buttons, a date picker, a picker view, a table view, an action sheet, a toolbar, alert boxes, etc. No images, no network access - just a plain, simple UI. It accesses an SQLite database a lot. The ARC option is enabled. I have several questions:
    In the .h files I define IBOutlets like

        @property(nonatomic, retain) IBOutlet UIButton *bt;

    Where do I need to set bt = nil: in didReceiveMemoryWarning or viewDidLoad?
    Live Bytes in the Instruments tool is 4-5 MB. Is that acceptable, or do I need to reduce memory usage? If so, how? Please mention a few important points.
    Also, what needs to be added to the following: applicationDidReceiveMemoryWarning and UIApplicationDidReceiveMemoryWarningNotification?

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier) running Ubuntu Server 14.04 LTS (EBS)

    - by CMPSoares
    Hi, I'm working on my bachelor thesis and for that I need to host a node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30 GB of disk space (from what I know, the maximum in the free tier), which is barely used. Instead, I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area, as mentioned at "Why don't EC2 ubuntu images have swap?", with these commands:

        sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 &&
        sudo chmod 600 /var/swapfile &&
        sudo mkswap /var/swapfile &&
        echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab &&
        sudo swapon -a

    But somehow this swap area isn't being used. Is something missing in this approach, or is there another way of reducing memory consumption on this type of AWS instance?
    Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, by reducing memory usage, or both.

    Read the article

  • What does this IIS memory dump mean? (reserved memory)

    - by Jesse
    My w3wp worker processes are recycling every 60 seconds after using too much virtual memory. I ran the IIS Debug Diagnostic Tool to capture a memory dump before the worker process recycled; the most interesting part seems to be this:

        Virtual Allocation Summary
        Reserved memory        4.88 GBytes
        Committed memory       328.27 MBytes
        Mapped memory          17.36 MBytes
        Reserved block count   524 blocks
        Committed block count  1082 blocks
        Mapped block count     43 blocks

    That 4.88 GB of reserved memory seems really big, but neither the DotNetMemoryAnalysis nor the regular Memory Pressure Analyzer seems to tell me where the 4.88 GB went. How can I find out?

    Read the article

  • Tomcat memory usage grows until crash with no GC run

    - by Phil
    I'm administering a server running Tomcat that has been getting a lot of traffic lately. If I monitor memory usage in Task Manager, I can see the memory usage growing, and eventually Tomcat crashes around the 1 GB mark. Here are the memory-relevant bits I've set in the Tomcat properties (this is a Windows server):

        Initial memory pool: 1024 MB
        Maximum memory pool: 1024 MB
        -XX:MaxPermSize=256M

    The weird thing is that since these problems arose I've deployed Lambda Probe to the Tomcat instance, and the memory usage values I see there are much lower; for example, Task Manager might show 467 MB used while the "Total" used in Probe is 212 MB. Also, the Maximum Total listed in Probe is 1.29 GB, when I would have expected 1 GB, the maximum memory set above. If I force the garbage collector to run using Probe, I can keep Tomcat from crashing for a while (indefinitely, AFAIK). So why doesn't the GC run automatically and stop Tomcat from crashing? Thanks.

    Read the article

  • SQL 2008 Memory Usage

    - by Danilo Brambilla
    I have SQL Server 2008 (ver 10.0.1600) running on a Windows Server 2008 R2 Enterprise server with 8 GB of physical RAM. If I open Task Manager, I can see in the 'Physical Memory' section of the 'Performance' tab that only 340 MB are available of 8191 MB total, but I can't see any process using that amount of memory. Please note SQL Server is memory-limited to 6 GB (Maximum Server Memory = 6000). If I open Sysinternals Process Explorer, I can see that the sqlservr.exe process has:

        Private Bytes: 227.000 K
        Working Set:   140.000 K
        Virtual Size:  8.762.000 K

    What does this mean? Is there any way to free up this memory for other processes? Why does Virtual Size figure as allocated memory? I thought Virtual Size was 'reserved memory' only.

    Read the article

  • HP DL160 G6 memory PC3-10600R vs PC3-10600E

    - by Jeremy Hajek
    I am using an HP DL160 G6 server that, according to the specs, takes PC3 Registered or Unbuffered memory. When I combine the two types of memory below, the system will not POST; when I use just the first type listed, it POSTs fine. I have two sticks of HP memory that came with the server, labeled PC3-10600E-9-10-E0, and I have some Crucial memory labeled PC3-10600R-9-10-B0. I wager that the R means Registered memory and the E means ECC; in that case, shouldn't the Crucial memory boot with the system according to the HP specs? Or does the E mean it is Unbuffered, and therefore I shouldn't mix and match, according to this HP memory configuration doc?

    Read the article

  • Motherboard not recognizing memory anymore

    - by root
    I bought some new RAM and installed it on my motherboard, but the BIOS would not POST. There's an LED on my motherboard that shows error codes, and it showed the error: "No usable memory detected." So I removed the new memory and reinstalled the old memory, restoring the computer to its original configuration. But the BIOS still would not POST, still giving the error: "No usable memory detected." I've ensured that the memory and the power headers are seated properly. I've tried all possible combinations of memory slots, and I've also reset the CMOS, but the error remains the same. The computer was working fine before I tried upgrading the memory, and I originally assembled the computer myself. What are some possible causes of this problem?

    Read the article

  • Why is my addSubview: method causing a leak?

    - by Nathan
    Okay, so I have done a ton of research on this and have been pulling my hair out for days trying to figure out why the following code leaks:

        [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
        UIImage *comicImage = [self getCachedImage:[NSString stringWithFormat:@"%@%@%@", @"http://url/", comicNumber, @".png"]];
        self.imageView = [[[UIImageView alloc] initWithImage:comicImage] autorelease];
        [self.scrollView addSubview:self.imageView];
        self.scrollView.contentSize = self.imageView.frame.size;
        self.imageWidth = [NSString stringWithFormat:@"%f", imageView.frame.size.width];
        [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;

    Both self.imageView and self.scrollView are @property (nonatomic, retain) and are released in my dealloc; imageView isn't used anywhere else in the code. This code also runs on a thread off the main thread. If I run this code on my device, it will quickly run out of memory if I continually load this view. However, I've found that if I comment out the following line:

        [UIApplication sharedApplication].networkActivityIndicatorVisible = YES;
        UIImage *comicImage = [self getCachedImage:[NSString stringWithFormat:@"%@%@%@", @"http://url/", comicNumber, @".png"]];
        self.imageView = [[[UIImageView alloc] initWithImage:comicImage] autorelease];
        //[self.scrollView addSubview:self.imageView];
        self.scrollView.contentSize = self.imageView.frame.size;
        self.imageWidth = [NSString stringWithFormat:@"%f", imageView.frame.size.width];
        [UIApplication sharedApplication].networkActivityIndicatorVisible = NO;

    memory usage becomes stable, no matter how many times I load the view. I have gone over everything I can think of to see why this is leaking, but as far as I can tell I have all my releases straight. Can anyone see what I am missing?

    Read the article

  • Objective-C - Releasing objects passed as parameters

    - by NobleK
    OK, here goes. Being a Java developer, I'm still struggling with memory management in Objective-C. I have all the basics covered, but once in a while I encounter a challenge. What I want to do is something which in Java would look like this:

        MyObject myObject = new MyObject(new MyParameterObject());

    The constructor of the MyObject class takes a parameter of type MyParameterObject which I create on the fly. In Objective-C I tried to do this using the following code:

        MyObject *myObject = [[MyObject alloc] init:[[MyParameterObject alloc] init]];

    However, running the Build and Analyze tool, this gives me a "Potential leak of an object" warning for the MyParameterObject, and the leak indeed occurs when I test it using Instruments. I do understand why this happens, since I am taking ownership of the object with the alloc method and not relinquishing it; I just don't know the correct way of doing it. I tried using

        MyObject *myObject = [[MyObject alloc] init:[[[MyParameterObject alloc] init] autorelease]];

    but then the Analyze tool told me that "Object sent -autorelease too many times". I could solve the issue by modifying the init method of MyParameterObject to say return [self autorelease]; instead of just return self;. Analyze still warns about a potential leak, but it doesn't actually occur. However, I believe that this approach violates the conventions for managing memory in Objective-C, and I really want to do it the right way. Thanks in advance.

    Read the article

  • Out of memory when creating a lot of objects C#

    - by Bas
    I'm processing 1 million records in my application, which I retrieve from a MySQL database. To do so I'm using LINQ to get the records and .Skip() and .Take() to process 250 records at a time. For each retrieved record I need to create 0 to 4 Items, which I then add to the database. So the average number of total Items that have to be created is around 2 million.

        while (objects.Count != 0)
        {
            using (dataContext = new LinqToSqlContext(new DataContext()))
            {
                foreach (Object objectRecord in objects)
                {
                    // Create a list of 0 - 4 random Items and add each Item to the Object
                    for (int i = 0; i < Random.Next(0, 4); i++)
                    {
                        Item item = new Item();
                        item.Id = Guid.NewGuid();
                        item.Object = objectRecord.Id;
                        item.Created = DateTime.Now;
                        item.Changed = DateTime.Now;
                        dataContext.InsertOnSubmit(item);
                    }
                }
                dataContext.SubmitChanges();
            }
            amountToSkip += 250;
            objects = objectCollection.Skip(amountToSkip).Take(250).ToList();
        }

    Now the problem arises when creating the Items. When running the application (and not even using dataContext) the memory increases consistently. It's as if the Items never get disposed. Does anyone notice what I'm doing wrong? Thanks in advance!
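
    Not a diagnosis of the problem above, but as a hedged, self-contained way to check whether per-batch allocations are actually being released, one can measure reachable managed memory between batches with GC.GetTotalMemory. This sketch involves no database at all; the Item fields and the 250-record batch size mirror the question, while the 20-batch count and everything else are made up for illustration.

        // Self-contained sketch: create batches of Items, drop the references,
        // and report how much managed memory is still reachable after each batch.
        // If the reported figure keeps climbing, something is still holding references.
        using System;
        using System.Collections.Generic;

        class Item
        {
            public Guid Id;           // simplified version of the question's Item
            public DateTime Created;
            public DateTime Changed;
        }

        class BatchMemoryCheck
        {
            static void Main()
            {
                var random = new Random();

                for (int batch = 0; batch < 20; batch++)   // 20 batches: arbitrary, for illustration
                {
                    var items = new List<Item>();

                    // Simulate creating 0-4 items for each of 250 records in the batch.
                    for (int record = 0; record < 250; record++)
                    {
                        for (int i = 0; i < random.Next(0, 4); i++)
                        {
                            items.Add(new Item
                            {
                                Id = Guid.NewGuid(),
                                Created = DateTime.Now,
                                Changed = DateTime.Now
                            });
                        }
                    }

                    items = null; // drop the only reference before measuring

                    // forceFullCollection makes the numbers comparable batch to batch.
                    long bytes = GC.GetTotalMemory(true);
                    Console.WriteLine("After batch {0}: {1} KB reachable", batch, bytes / 1024);
                }
            }
        }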

    Read the article

  • Setting synthesized arrays causing memory leaks using nested arrays

    - by webtoad
    Hello: Why is the following code causing a memory leak in an iPhone app? All of the objects initialized below leak, including the arrays, the strings and the numbers. So I'm thinking it has something to do with the synthesized array property not releasing the object when the property is set again the second and subsequent times this piece of code is called. Here is the code ("controller" below is my custom view controller class, which I have a reference to and am setting with this snippet):

        sqlite3_stmt *statement;
        NSMutableArray *foo_IDs = [[NSMutableArray alloc] init];
        NSMutableArray *foo_Names = [[NSMutableArray alloc] init];
        NSMutableArray *foo_IDsBySection = [[NSMutableArray alloc] init];
        NSMutableArray *foo_NamesBySection = [[NSMutableArray alloc] init];

        // Get data:
        NSString *sql = @"select distinct p.foo_ID, p.foo_Name from foo as p ";
        if (sqlite3_prepare_v2(...) == SQLITE_OK) {
            while (sqlite3_step(statement) == SQLITE_ROW) {
                int p_id;
                NSString *foo_Name;
                p_id = sqlite3_column_int(statement, 0);
                char *str2 = (char *)sqlite3_column_text(statement, 1);
                foo_Name = [NSString stringWithCString:str2];
                [foo_IDs addObject:[NSNumber numberWithInt:p_id]];
                [foo_Names addObject:foo_Name];
            }
            sqlite3_finalize(statement);
        }

        // Pass the array itself into another array:
        // (normally there is more than one array in each array)
        [foo_IDsBySection addObject: foo_IDs];
        [foo_NamesBySection addObject: foo_Names];
        [foo_IDs release];
        [foo_Names release];

        // Set some synthesized properties (of type NSArray, nonatomic,
        // retain) in controller:
        controller.foo_IDsBySection = foo_IDsBySection;
        controller.foo_NamesBySection = foo_NamesBySection;
        [foo_IDsBySection release];
        [foo_NamesBySection release];

    Thanks for any help!

    Read the article

  • Out-of-memory algorithms for addressing large arrays

    - by reve_etrange
    I am trying to deal with a very large dataset. I have k = ~4200 matrices (of varying sizes) which must be compared combinatorially, skipping non-unique and self comparisons. Each of the k(k-1)/2 comparisons produces a matrix, which must be indexed against its parents (i.e., one can find out where it came from). The convenient way to do this is to (triangularly) fill a k-by-k cell array with the result of each comparison. These are ~100 x ~100 matrices, on average. Using single-precision floats, it works out to 400 GB overall. I need to 1) generate the cell array, or pieces of it, without trying to place the whole thing in memory, and 2) access its elements (and their elements) in like fashion. My attempts have been inefficient due to reliance on MATLAB's eval() as well as save and clear occurring in loops.

        for i=1:k
            [~,m] = size(data{i});
            cur_var = ['H' int2str(i)];
            %# if i == 1; save('FileName'); end;  %# If using a single MAT file and need to create it.
            eval([cur_var ' = cell(1,k-i);']);
            for j=i+1:k
                [~,n] = size(data{j});
                eval([cur_var '{i,j} = zeros(m,n,''single'');']);
                eval([cur_var '{i,j} = compare(data{i},data{j});']);
            end
            save(cur_var,cur_var);  %# Add '-append' when using a single MAT file.
            clear(cur_var);
        end

    The other thing I have done is to perform the split when mod((i+j-1)/2, max(factor(k(k-1)/2))) == 0. This divides the result into the largest number of same-size pieces, which seems logical. The indexing is a little more complicated, but not too bad, because a linear index could be used. Does anyone know/see a better way?

    Read the article

  • Memory leakage using DataTables

    - by Vix
    Hi, I have a situation in which I'm compelled to retrieve 30,000 records into each of 2 DataTables. I need to do some manipulations and insert the records into SQL Server in the Manipulate(dt1, dt2) function. I have to do this 15 times, as you can see in the for loop. Now I want to know which would be the more effective approach in terms of memory usage. I've used the first approach; please suggest the best one.

    (1)

        for (int i = 0; i < 15; i++)
        {
            DataTable dt1 = GetInfo(i);
            DataTable dt2 = GetData(i);
            Manipulate(dt1, dt2);
        }

    (OR)

    (2)

        DataTable dt1 = new DataTable();
        DataTable dt2 = new DataTable();
        for (int i = 0; i < 15; i++)
        {
            dt1 = null;
            dt2 = null;
            dt1 = GetInfo();
            dt2 = GetData();
            Manipulate(dt1, dt2);
        }

    Thanks, Vix.
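
    As a hedged aside rather than an answer: DataTable implements IDisposable, so a third variant makes each pair's lifetime explicit with using blocks. GetInfo, GetData and Manipulate below are stubs standing in for the question's own methods; whether this helps in practice depends on what Manipulate holds onto.

        // Sketch of option (1) with explicit per-iteration disposal.
        // GetInfo, GetData and Manipulate are stubs for the question's methods.
        using System;
        using System.Data;

        class DataTableLifetimeSketch
        {
            static DataTable GetInfo(int i) { return new DataTable("info"); }
            static DataTable GetData(int i) { return new DataTable("data"); }
            static void Manipulate(DataTable dt1, DataTable dt2) { /* insert into SQL Server here */ }

            static void Main()
            {
                for (int i = 0; i < 15; i++)
                {
                    // Each pair of tables lives for one iteration only; Dispose()
                    // runs when the using block exits, so nothing accumulates
                    // across the 15 passes.
                    using (DataTable dt1 = GetInfo(i))
                    using (DataTable dt2 = GetData(i))
                    {
                        Manipulate(dt1, dt2);
                    }
                }
            }
        }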

    Read the article
