Search Results

Search found 16731 results on 670 pages for 'memory limit'.


  • "Could not register destruction callback" warn message leads to memory leaks?

    - by Séb
    Hello all, I'm in the exact same situation as this old question: http://stackoverflow.com/questions/2077558/warn-could-not-register-destruction-callback In short: I see a warning saying that a destruction callback could not be registered for some beans. My question is: since the beans whose destruction callback cannot be registered are two persistence beans, could this be the source of a memory leak? I am experiencing a leak in my app. Although the session timeout is set (to 30 minutes), my profiler shows me more instances of the Hibernate SessionImpl each time I take a thread dump. The number of instances of SessionImpl is exactly the number of times I tried to log in between two thread dumps. Thanks for your help...
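
    One defensive pattern, if the destruction callbacks never run, is to close each Hibernate Session explicitly instead of relying on bean destruction. A minimal sketch; the helper class and its SessionFactory wiring are hypothetical, not from the question:

        // Hypothetical helper: every opened Session is closed in a finally
        // block, so SessionImpl instances cannot accumulate even when Spring
        // never runs a destruction callback for the bean that produced them.
        import org.hibernate.Session;
        import org.hibernate.SessionFactory;

        public class SessionScopedWork {
            private final SessionFactory sessionFactory; // assumed injected

            public SessionScopedWork(SessionFactory sessionFactory) {
                this.sessionFactory = sessionFactory;
            }

            public void doWork() {
                Session session = sessionFactory.openSession();
                try {
                    // ... persistence work ...
                } finally {
                    session.close(); // released regardless of callbacks
                }
            }
        }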

    Read the article

  • What is the complexity of the below code with respect to memory?

    - by Cshah
    Hi, I read about Big-O notation from here and had a few questions on calculating complexity. For the below code I have calculated the complexity; I need your input on it.

        private void reverse(String strToRevers) {
            if (strToRevers.length() == 0) {
                return;
            } else {
                reverse(strToRevers.substring(1));
                System.out.print(strToRevers.charAt(0));
            }
        }

    If memory is considered, then the complexity of the above code for a string of n characters is O(n^2). The explanation: for a string of n characters, the function is called recursively n-1 times, and each call creates a new string one character shorter than its argument (strToRevers.substring(1)). Hence it is n*(n-1)/2, which translates to O(n^2). Let me know if this is right?
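
    For contrast, an iterative version keeps the extra space at O(n), because it allocates a single buffer instead of n shrinking substrings. A minimal sketch:

        // O(n) extra space: one builder, no recursive substring copies
        public final class ReverseDemo {
            static String reverse(String s) {
                return new StringBuilder(s).reverse().toString();
            }

            public static void main(String[] args) {
                System.out.println(reverse("stack")); // prints "kcats"
            }
        }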

    Read the article

  • Getting "OutOfMemoryError: Java heap space", but VisualVM shows at most 50 MB of heap used

    - by ikky
    Hi! I'm using ASANT to run an XML file which points to a NARS.jar file (I do not have the project file of NARS.jar). I'm getting "java.lang.OutOfMemoryError: Java heap space". I used VisualVM to look at the heap while running NARS.jar, and it says that at most 50 MB of the heap space is used. I've set the initial and max heap size to 512 MB. Does anyone have an idea of what could be wrong? I have 1 GB of physical memory and created a 5 GB pagefile (for test purposes). Thanks in advance.
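
    One thing worth ruling out: if the build forks a separate JVM for the jar, that process may not inherit the 512 MB setting (Ant's <java> task takes fork and maxmemory attributes for this). A small hedged check that can be run inside the suspect JVM to confirm what it actually received:

        // Prints the max heap of the JVM this code runs in. If it reports
        // ~64 MB instead of 512 MB, the -Xmx setting is not reaching the
        // forked process.
        public final class HeapCheck {
            public static void main(String[] args) {
                long maxBytes = Runtime.getRuntime().maxMemory();
                System.out.printf("Max heap: %d MB%n", maxBytes / (1024 * 1024));
            }
        }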

    Read the article

  • zlib memory usage / performance, with 500 KB of data

    - by unixman83
    Is zlib worth it? Are there other better-suited compressors? I am using an embedded system. Frequently, I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about overhead, however. The buffer's average size will be 30 KB; this probably won't get compressed well by zlib. However, I will see occasional maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify? Does anyone know of a good compressor for extremely memory-limited environments? My sole considerations for compression are the RAM overhead of the algorithm and performance at least as good as zlib.
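
    A quick way to answer "is it worth it" is to measure a representative buffer. A minimal sketch in Java, whose java.util.zip.Deflater wraps zlib; the buffer contents here are a stand-in, and on the embedded target the same measurement would go through deflate() in C:

        import java.util.zip.Deflater;

        public final class RatioCheck {
            public static void main(String[] args) {
                byte[] input = new byte[30 * 1024]; // fill with representative data, not zeros

                Deflater deflater = new Deflater(Deflater.BEST_SPEED);
                deflater.setInput(input);
                deflater.finish();

                // Worst-case deflate output is only slightly larger than the
                // input, so a little slack is enough for one 30 KB buffer.
                byte[] out = new byte[input.length + 64];
                int compressed = 0;
                while (!deflater.finished()) {
                    compressed += deflater.deflate(out, compressed, out.length - compressed);
                }
                deflater.end();

                System.out.printf("in=%d out=%d ratio=%.2f%n",
                        input.length, compressed, (double) compressed / input.length);
            }
        }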

    Read the article

  • How do I load the Oracle schema into memory instead of the hard drive?

    - by Andrew
    I have a certain web application that makes upwards of ~100 updates to an Oracle database in succession. This can take anywhere from 3-5 minutes, which sometimes causes the webpage to time out. A re-design of the application is scheduled soon, but someone told me that there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file", but all I can find is information about SQL*Loader, the bulk data loader. Does anyone know what he's talking about? Is this really possible, and is it a feasible quick fix, or should I just wait until the application is re-designed?

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work; I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops.

    Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?

    Method 1: Round-robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

        void method_one()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    Data* current_item = channel_cursors[i];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET)
                    {
                        continue;
                    }

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID onto the update list. Now we just need to monitor the single update list, and check the IDs we get from it.

        void method_two()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                if(update_cursor->state_ == NOT_FILLED_YET)
                {
                    continue;
                }

                ::uint32_t update_id = update_cursor->list_id_;
                Data* current_item = channel_cursors[update_id];

                if(current_item->state_ == NOT_FILLED_YET)
                {
                    std::cerr << "This should never print." << std::endl; // it doesn't
                    continue;
                }

                log_latency(current_item->tv_sec_, current_item->tv_usec_);

                channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
                update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
            }
        }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list, but loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the ID pushed onto it. If it hasn't, check the channel we've currently iterated to.

        void method_three()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    std::size_t idx = i;

                    ACQUIRE_MEMORY_BARRIER;
                    if(update_cursor->state_ != NOT_FILLED_YET)
                    {
                        //std::cerr << "Found via update" << std::endl;
                        i--;
                        idx = update_cursor->list_id_;
                        update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                    }

                    Data* current_item = channel_cursors[idx];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET)
                    {
                        continue;
                    }

                    found_an_update = true;

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • How to get accurate memory usage on an iPhone device

    - by Favo Yang
    I want to output accurate memory usage on an iPhone device. The method I used was taken from http://landonf.bikemonkey.org/code/iphone/Determining%5FAvailable%5FMemory.20081203.html

        natural_t mem_used = (vm_stat.active_count +
                              vm_stat.inactive_count +
                              vm_stat.wire_count) * pagesize;
        natural_t mem_free = vm_stat.free_count * pagesize;
        natural_t mem_total = mem_used + mem_free;

    The issue is that the total value changes from run to run on the device!

        used: 60200.0KB free: 2740.0KB total: 62940.0KB
        used: 53156.0KB free: 2524.0KB total: 55680.0KB
        used: 52500.0KB free: 2544.0KB total: 55044.0KB

    Have a look at the function implementation: it already sums the active, inactive, wired and free pages. Is there anything I am missing here?

    Read the article

  • Why is memory management so visible in Java VM?

    - by Emil
    I'm playing around with writing some simple Spring-based web apps and deploying them to Tomcat. Almost immediately, I run into the need to customize the Tomcat JVM settings with -XX:MaxPermSize (and -Xmx and -Xms); without this, the server easily runs out of PermGen space. Why is this such an issue for Java VMs compared to other garbage-collected languages? Comparing counts of "tune X memory usage" for X in Java, Ruby, Perl and Python shows that Java easily has an order of magnitude more hits in Google than the other languages combined. I'd also be interested in references to technical papers/blog posts/etc. explaining the design choices behind JVM GC implementations, across different JVMs or compared to other interpreted-language VMs (e.g. comparing the Sun or IBM JVM to Parrot). Are there technical reasons why JVM users still have to deal with non-auto-tuning heap/PermGen sizes?
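
    One way to see where the pressure actually is, before reaching for flags, is to list the JVM's memory pools at runtime via the standard java.lang.management API. A minimal sketch; pool names such as "PS Perm Gen" vary by vendor and collector:

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;

        public final class PoolDump {
            public static void main(String[] args) {
                // One line per pool, e.g. "PS Perm Gen", "PS Old Gen",
                // "Code Cache", with its current/committed/max usage.
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    System.out.printf("%-20s %s%n", pool.getName(), pool.getUsage());
                }
            }
        }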

    Read the article

  • Memory for drawables: is it better to have resources inside the APK, outside the APK, or is it the same?

    - by Daniel Benedykt
    Hi, I have an application that draws a lot of graphics and changes them. Since I have many graphics, I thought of keeping the images outside the APK, downloaded from the internet as needed and saved in the application's files folder. But I started to get OutOfMemory exceptions. The question is: does Android handle memory differently if I load a graphic from the APK than if I load it from 'disk'?

    Code using the APK:

        topView.setBackgroundResource(R.drawable.bg);

    Code if the image is outside the APK:

        Drawable d = Drawable.createFromPath(pathName);
        topView.setBackgroundDrawable(d);

    Thanks
    Daniel
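
    For the disk-loading path, the usual way to keep the decoded bitmap small is BitmapFactory.Options.inSampleSize, which decodes a subsampled image instead of the full-resolution one. A minimal sketch; the helper and the factor of 2 are illustrative, not from the question:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;
        import android.graphics.drawable.BitmapDrawable;

        public final class DrawableLoader {
            // Decodes the file at half resolution in each dimension,
            // i.e. roughly a quarter of the memory of a full decode.
            static BitmapDrawable loadScaled(String pathName) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inSampleSize = 2; // pick based on the target view size
                Bitmap bitmap = BitmapFactory.decodeFile(pathName, opts);
                return new BitmapDrawable(bitmap);
            }
        }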

    Read the article

  • Local variable assign versus direct assign; properties and memory.

    - by Typeoneerror
    In Objective-C I see a lot of sample code where the author assigns an object to a local variable, assigns that to a property, then releases the local variable. Is there a practical reason for doing this? I've been just assigning directly to the property for the most part. Would that cause a memory leak in any way? I guess I'd like to know if there's any difference between this:

        HomeScreenBtns *localHomeScreenBtns = [[HomeScreenBtns alloc] init];
        self.homeScreenBtns = localHomeScreenBtns;
        [localHomeScreenBtns release];

    and this:

        self.homeScreenBtns = [[HomeScreenBtns alloc] init];

    assuming that homeScreenBtns is a property like so:

        @property (nonatomic, retain) HomeScreenBtns *homeScreenBtns;

    I'm getting ready to submit my application to the App Store, so I'm in full optimize/QA mode.

    Read the article

  • UIScrollView: Is an 806k image too much to handle? (Crash, out of memory)

    - by Jordan
    I'm loading a 2400x1845 PNG image into a scroll view. The program crashes out of memory. Is there a better way to handle this? mapScrollView is a UIScrollView in IB, along with a couple of UIButtons.

        -(void)loadMapWithName:(NSString *)mapName {
            NSString* bundlePath = [[NSBundle mainBundle] bundlePath];
            UIImage *image = [UIImage imageWithContentsOfFile:
                [NSString stringWithFormat:@"%@/path/%@", bundlePath, [maps objectForKey:mapName]]];
            UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
            CGSize imgSize = image.size;
            mapScrollView.contentSize = imgSize;
            [mapScrollView addSubview:imageView];
            [imageView release];
            [self.view addSubview:mapScrollView];
        }

    Read the article

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs; they are usually compiled as a DLL that the native application loads and starts. I've been using EQATEC's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as EQATEC does?

    Read the article

  • Is there any way to determine what type of memory the segments returned by VirtualQuery() are?

    - by bdbaddog
    Greetings. I'm able to walk a process's memory map using logic like this:

        MEMORY_BASIC_INFORMATION mbi;
        void *lpAddress = (void*)0;
        while (VirtualQuery(lpAddress, &mbi, sizeof(mbi))) {
            fprintf(fptr, "Mem base:%-10x start:%-10x Size:%-10x Type:%-10x State:%-10x\n",
                    mbi.AllocationBase, mbi.BaseAddress, mbi.RegionSize, mbi.Type, mbi.State);
            lpAddress = (void *)((unsigned int)mbi.BaseAddress + (unsigned int)mbi.RegionSize);
        }

    I'd like to know if a given segment is used for static allocation, stack, and/or heap and/or other. Is there any way to determine that?

    Read the article

  • What Android tools and methods work best to find memory/resource leaks?

    - by jottos
    I've got an Android app developed, and I'm at the point of phone app development where everything seems to be working well and you want to declare victory and ship, but you know there just have to be some memory and resource leaks in there. There's only 16 MB of heap on Android, and it's apparently surprisingly easy to leak in an Android app. I've been looking around and so far have only been able to dig up info on 'hprof' and 'traceview', and neither gets a lot of favorable reviews. What tools or methods have you come across or developed, and would you care to share them, maybe in an OS project?
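
    On the hprof front, one low-tech approach that works on-device is to write two heap dumps around a suspected leak and diff them in a heap analyzer. A minimal sketch using android.os.Debug; the helper and the output path are illustrative:

        import android.os.Debug;
        import java.io.IOException;

        public final class HeapDumper {
            // Writes an hprof snapshot; take one before and one after the
            // suspected leak and compare which objects accumulate.
            static void dump(String tag) {
                try {
                    Debug.dumpHprofData("/sdcard/" + tag + ".hprof"); // illustrative path
                } catch (IOException e) {
                    // a real app should log the failure
                }
            }
        }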

    Read the article

  • Why do virtual memory addresses for linux binaries start at 0x8048000?

    - by muteW
    Disassembling an ELF binary on an Ubuntu x86 system, I couldn't help but notice that the code (.text) section starts at virtual address 0x8048000, and all lower memory addresses seem to be unused. This seems rather wasteful, and all Google turns up is either folklore involving STACK_TOP or protection against null-pointer dereferences. The latter case looks like it could be handled with a single page instead of leaving a 128 MB gap. So my question is this: is there a definitive answer to why the layout was fixed at these values, or is it just an arbitrary choice?

    Read the article

  • WPF DataTemplate + ItemsControl: each item uses > 1 MB of memory?

    - by Matt H.
    Does that sound right to anyone? I have an ItemsControl that displays data from a custom object that implements INotifyPropertyChanged. The DataTemplate consists of:

    - a Border
    - 3 buttons
    - 5 textboxes
    - an ellipse
    - a bindable RichTextBox (a custom class that inherits from RichTextBox, so I could make Document a dependency property to support binding)
    - several grids and stackpanels for layout

    It uses styles (stored in a resource dictionary higher up the tree). The styles affect colors, thicknesses, and text properties, which are data-bound to a "settings" class that implements INotifyPropertyChanged so the user can change display settings. That's it! So what gives? I've also noticed that when I empty and remove the ItemsControl, memory isn't freed. Over 5000 instances of "CommandBindingCollection" and "WeakReference" are CREATED (using ANTS profiler), and a huge number of EffectiveValueEntry objects are created too. So really, what gives!!! :-) Thanks for your insight! Management needs this project soon, but in its current state it's unreleasable.

    Read the article

  • Java: Best approach to have a long list of variables needed all the time without consuming memory?

    - by evilReiko
    I wrote an abstract class to contain all the rules of the application, because I need them almost everywhere in my application. So most of what it contains is static final variables, something like this:

        public abstract class appRules {
            public static final boolean IS_DEV = true;

            public static final String CLOCK_SHORT_TIME_FORMAT = "something";
            public static final String CLOCK_SHORT_DATE_FORMAT = "something else";
            public static final String CLOCK_FULL_FORMAT = "other thing";

            public static final int USERNAME_MIN = 5;
            public static final int USERNAME_MAX = 16;

            // etc.
        }

    The class is big and contains LOTS of such variables. My question: doesn't having static variables mean these variables are floating in memory all the time? Do you suggest that instead of an abstract class I use an instantiable class with non-static variables (just public final), so I instantiate the class and use the variables only when I need them? Or is what I am doing a completely wrong approach, and would you suggest something else?
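
    For scale: a class of compile-time constants like this typically costs only a handful of interned strings and primitives, so it is rarely a memory problem in itself. If some value were genuinely expensive to build, the initialization-on-demand holder idiom defers the cost until first use. A minimal sketch with a hypothetical expensive table:

        public abstract class LazyRules {
            // Holder is not initialized until expensiveTable() is first called,
            // so the cost is paid only if the value is actually used.
            private static final class Holder {
                static final int[] EXPENSIVE_TABLE = buildTable();
            }

            public static int[] expensiveTable() {
                return Holder.EXPENSIVE_TABLE;
            }

            private static int[] buildTable() {
                return new int[1024]; // stand-in for real initialization work
            }
        }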

    Read the article

  • Apache Bad Request "Size of a request header field exceeds server limit" with Kerberos SSO

    - by Aurelin
    I'm setting up an SSO for Active Directory users through a website that runs on Apache (Apache2 on SLES 11.1), and when testing with Firefox it all works fine. But when I try to open the website in Internet Explorer 8 (Windows 7), all I get is:

        Bad Request
        Your browser sent a request that this server could not understand.
        Size of a request header field exceeds server limit.
        Authorization: Negotiate [ultra long string]

    My vhost.cfg looks like this:

        <VirtualHost hostname:443>
            LimitRequestFieldSize 32760
            LimitRequestLine 32760
            LogLevel debug
            <Directory "/data/pwtool/sec-data/adbauth">
                AuthName "Please login with your AD-credentials (Windows Account)"
                AuthType Kerberos
                KrbMethodNegotiate on
                KrbAuthRealms REALM.TLD
                KrbServiceName HTTP/hostname
                Krb5Keytab /data/pwtool/conf/http_hostname.krb5.keytab
                KrbMethodK5Passwd on
                KrbLocalUserMapping on
                Order allow,deny
                Allow from all
            </Directory>
            <Directory "/data/pwtool/sec-data/adbauth">
                Require valid-user
            </Directory>
            SSLEngine on
            SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
            SSLCertificateFile /etc/apache2/ssl.crt/hostname-server.crt
            SSLCertificateKeyFile /etc/apache2/ssl.key/hostname-server.key
        </VirtualHost>

    I also made sure that the cookies are deleted, and I tried several smaller values for LimitRequestFieldSize and LimitRequestLine. Another thing that seems weird to me is that even with LogLevel debug I won't get any logs about this. The log's last line is:

        ssl_engine_kernel.c(1879): OpenSSL: Write: SSL negotiation finished successfully

    Does anyone have an idea about that?

    Read the article

  • MSSQL Server Management Studio Express 2005 shows a license limit error that should no longer apply

    - by Erik Vold
    I just tried restoring a 250 MB database from a backup on my local machine, and got the following message:

        TITLE: Microsoft SQL Server Management Studio Express
        ------------------------------
        Restore failed for Server 'MULTIVIS-A0D9F3\SQLEXPRESS'. (Microsoft.SqlServer.Express.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Restore+Server&LinkId=20476
        ------------------------------
        ADDITIONAL INFORMATION:
        System.Data.SqlClient.SqlError: CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 4096 MB per database. (Microsoft.SqlServer.Express.Smo)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=9.00.3042.00&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I had 3 DBs on the machine: one ~4.1 GB DB and two other DBs of less than 10 MB each. So I did some googling on this error and saw the suggestion to try shrinking my other DBs to free up some space. I did so on the 4.1 GB DB, and now when I go to 'properties' for that DB it says it is using ~2.4 GB. So I should have space now, I figure, but whenever I try to restore the ~250 MB database I still get the error message above. I tried restarting as well, but that wasn't helpful. Any idea what the issue is?

    Read the article

  • WSS Search fills 10 GB limit on SBS server 2011

    - by Kactus
    I've got an SBS Server 2011 Standard SP1 that isn't very busy: 2 local users and 2 remote. We have SharePoint with maybe a dozen small documents at most. I've just started getting the following two errors:

        Could not allocate space for object 'dbo.MSSBatchHistory'.'IX_MSSBatchHistory' in database 'WSS_Search_SERVER' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    and

        CREATE DATABASE or ALTER DATABASE failed because the resulting cumulative database size would exceed your licensed limit of 10240 MB per database.

    Digging around in SQL manager I see that the WSS Search DB file size is 10241 MB; the log file is only 147 MB. Firstly, why is WSS Search taking up so much space? How can I stop it from doing so, and what can I do now to get things running OK? I know about log file truncating, and that isn't the issue here since the log is tiny. There is plenty of free space on the disk (791 GB free). Any help is appreciated. Thanks, Kactus

    Read the article

  • bind9 named.conf zones size limit

    - by mox601
    I am trying to set up a test environment on my local machine, and I am trying to start a DNS daemon that loads the configuration from a named.conf.custom file. As long as that file holds 3-4 zones, the bind9 daemon loads fine, but when I use the config file I need (about 10000 lines long), bind can't start up, and in the syslog I find this message:

        starting BIND 9.7.0-P1 -u bind
        Jun 14 17:06:06 cibionte-pc named[9785]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
        Jun 14 17:06:06 cibionte-pc named[9785]: adjusted limit on open files from 1024 to 1048576
        Jun 14 17:06:06 cibionte-pc named[9785]: found 1 CPU, using 1 worker thread
        Jun 14 17:06:06 cibionte-pc named[9785]: using up to 4096 sockets
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration from '/etc/bind/named.conf'
        Jun 14 17:06:06 cibionte-pc named[9785]: /etc/bind/named.conf.saferinternet:1: unknown option 'zone'
        Jun 14 17:06:06 cibionte-pc named[9785]: loading configuration: failure
        Jun 14 17:06:06 cibionte-pc named[9785]: exiting (due to fatal error)

    Are there any limits on the file size bind9 is allowed to load?

    Read the article

  • ixgbe driver: Limit the max number of cores

    - by Shellex Wai
    I have a Linux workstation with 48 cores that runs the ixgbe driver for a fiber interface, and I have to test a project named Netmap on it. Netmap is a high-performance network framework for high-speed interfaces, which has been ported to Linux recently. For some reasons I must try it on this machine. So I compiled it and followed the instructions to run the test programs, but it doesn't work. I checked dmesg and it says:

        [10399.085736] 794.159015 netmap_set_ringid [486] ringid o4o1 set to all 48 HW RINGS
        [10399.085742] 794.282011 netmap_obj_malloc [220] netmap_if request size 816 too large

    I asked the author of netmap for help. He told me that I have too many cores in the machine and it should work if I tell ixgbe to use fewer cores (2 to 4 is OK). I am not familiar with driver development and I don't know how to limit the number of rings by passing arguments to the ixgbe driver. I checked the spec on Intel's website but found nothing about it. So I come here for more help. Thank you.

    Read the article

  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other.

        // set up the fetch request
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series"
                                       inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];

        // grab all of the series in the core data store
        NSError *error = nil;
        availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]];
        [fetchRequest release];

        // grab one of the series
        Series *currentSeries = [availableSeries objectAtIndex:1];

        // load all of the tiles attached to the series through the relationship
        NSArray *myTiles = [currentSeries.tile allObjects]; // 16 byte leak here!

    Instruments reports that the final line has a 16-byte leak caused by NSPlaceholderString. Stack trace:

        2  UIKit UIApplicationMain
        3  UIKit -[UIApplication _run]
        4  CoreFoundation CFRunLoopRunInMode
        5  CoreFoundation CFRunLoopRunSpecific
        6  GraphicsServices PurpleEventCallback
        7  UIKit _UIApplicationHandleEvent
        8  UIKit -[UIApplication sendEvent:]
        9  UIKit -[UIApplication handleEvent:withNewEvent:]
        10 UIKit -[UIApplication _runWithURL:sourceBundleID:]
        11 UIKit -[UIApplication _performInitializationWithURL:sourceBundleID:]
        12 Memory -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49
        13 UIKit -[UIViewController view]
        14 Memory -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58
        15 CoreData -[_NSFaultingMutableSet allObjects]
        16 CoreData -[_NSFaultingMutableSet willRead]
        17 CoreData -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:]
        18 CoreData -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:]
        19 CoreData -[NSSQLCore newFetchedPKsForSourceID:andRelationship:]
        20 CoreData -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:]
        21 Foundation +[NSString stringWithFormat:]
        22 Foundation -[NSPlaceholderString initWithFormat:locale:arguments:]
        23 CoreFoundation _CFStringCreateWithFormatAndArgumentsAux
        24 CoreFoundation _CFStringAppendFormatAndArgumentsAux
        25 Foundation _NSDescriptionWithLocaleFunc
        26 CoreFoundation -[NSObject respondsToSelector:]
        27 libobjc.A.dylib class_respondsToSelector
        28 libobjc.A.dylib lookUpMethod
        29 libobjc.A.dylib _cache_addForwardEntry
        30 libobjc.A.dylib _malloc_internal

    I think I'm missing something obvious but I can't quite figure out what. Thanks for your help! Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?

    Read the article

  • How to simulate an inner join on very large files in Java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using Java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is that I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key, and I iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file, so I need to account for the Cartesian product situations:

        left_01, 1
        left_02, 1
        right_01, 1
        right_02, 1
        right_03, 1

        left_01 joins to right_01 using key 1
        left_01 joins to right_02 using key 1
        left_01 joins to right_03 using key 1
        left_02 joins to right_01 using key 1
        left_02 joins to right_02 using key 1
        left_02 joins to right_03 using key 1

    My concern is one of memory. I will run out of memory if I use the approach below, but I still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part, keeping in mind that these files may potentially be huge?

        public class Joiner {

            private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable {
                BufferedReader _left = left;
                BufferedReader _right = right;
                BufferedWriter _output = output;
                Record _leftRecord;
                Record _rightRecord;

                _leftRecord = read(_left);
                _rightRecord = read(_right);

                while( _leftRecord != null && _rightRecord != null ) {
                    if( _leftRecord.getKey() < _rightRecord.getKey() ) {
                        write(_output, _leftRecord, null);
                        _leftRecord = read(_left);
                    } else if( _leftRecord.getKey() > _rightRecord.getKey() ) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                    } else {
                        List<Record> leftList = new ArrayList<Record>();
                        List<Record> rightList = new ArrayList<Record>();

                        _leftRecord = readRecords(leftList, _leftRecord, _left);
                        _rightRecord = readRecords(rightList, _rightRecord, _right);

                        for( Record equalKeyLeftRecord : leftList ) {
                            for( Record equalKeyRightRecord : rightList ) {
                                write(_output, equalKeyLeftRecord, equalKeyRightRecord);
                            }
                        }
                    }
                }

                if( _leftRecord != null ) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                    while(_leftRecord != null) {
                        write(_output, _leftRecord, null);
                        _leftRecord = read(_left);
                    }
                } else {
                    if( _rightRecord != null ) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                        while(_rightRecord != null) {
                            write(_output, null, _rightRecord);
                            _rightRecord = read(_right);
                        }
                    }
                }

                _left.close();
                _right.close();
                _output.flush();
                _output.close();
            }

            private Record read(BufferedReader reader) throws Throwable {
                Record record = null;
                String data = reader.readLine();
                if( data != null ) {
                    record = new Record(data.split("\t"));
                }
                return record;
            }

            private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable {
                int key = record.getKey();
                list.add(record);
                record = read(reader);
                while( record != null && record.getKey() == key ) {
                    list.add(record);
                    record = read(reader);
                }
                return record;
            }

            private void write(BufferedWriter writer, Record left, Record right) throws Throwable {
                String leftKey = (left == null ? "null" : Integer.toString(left.getKey()));
                String leftData = (left == null ? "null" : left.getData());
                String rightKey = (right == null ? "null" : Integer.toString(right.getKey()));
                String rightData = (right == null ? "null" : right.getData());
                writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n");
            }

            public static void main(String[] args) {
                try {
                    BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT"));
                    BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT"));
                    BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT"));
                    Joiner joiner = new Joiner();
                    joiner.join(leftReader, rightReader, output);
                } catch (Throwable e) {
                    e.printStackTrace();
                }
            }
        }

    After applying the ideas from the proposed answer, I changed the loop to this:

        private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable {
            long _pointer = 0;
            RandomAccessFile _left = left;
            RandomAccessFile _right = right;
            BufferedWriter _output = output;
            Record _leftRecord;
            Record _rightRecord;

            _leftRecord = read(_left);
            _rightRecord = read(_right);

            while( _leftRecord != null && _rightRecord != null ) {
                if( _leftRecord.getKey() < _rightRecord.getKey() ) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                } else if( _leftRecord.getKey() > _rightRecord.getKey() ) {
                    write(_output, null, _rightRecord);
                    _pointer = _right.getFilePointer();
                    _rightRecord = read(_right);
                } else {
                    long _tempPointer = 0;
                    int key = _leftRecord.getKey();

                    while( _leftRecord != null && _leftRecord.getKey() == key ) {
                        _right.seek(_pointer);
                        _rightRecord = read(_right);

                        while( _rightRecord != null && _rightRecord.getKey() == key ) {
                            write(_output, _leftRecord, _rightRecord );
                            _tempPointer = _right.getFilePointer();
                            _rightRecord = read(_right);
                        }

                        _leftRecord = read(_left);
                    }

                    _pointer = _tempPointer;
                }
            }

            if( _leftRecord != null ) {
                write(_output, _leftRecord, null);
                _leftRecord = read(_left);
                while(_leftRecord != null) {
                    write(_output, _leftRecord, null);
                    _leftRecord = read(_left);
                }
            } else {
                if( _rightRecord != null ) {
                    write(_output, null, _rightRecord);
                    _rightRecord = read(_right);
                    while(_rightRecord != null) {
                        write(_output, null, _rightRecord);
                        _rightRecord = read(_right);
                    }
                }
            }

            _left.close();
            _right.close();
            _output.flush();
            _output.close();
        }

    UPDATE: While this approach worked, it was terribly slow, so I have modified it to create files as buffers, and this works very well. Here is the update:

        private long getMaxBufferedLines(File file) throws Throwable {
            long freeBytes = Runtime.getRuntime().freeMemory() / 2;
            return (freeBytes / (file.length() / getLineCount(file)));
        }

        private void join(File left, File right, File output, JoinType joinType) throws Throwable {
            BufferedReader leftFile = new BufferedReader(new FileReader(left));
            BufferedReader rightFile = new BufferedReader(new FileReader(right));
            BufferedWriter outputFile = new BufferedWriter(new FileWriter(output));
            long maxBufferedLines = getMaxBufferedLines(right);
            Record leftRecord;
            Record rightRecord;

            leftRecord = read(leftFile);
            rightRecord = read(rightFile);

            while( leftRecord != null && rightRecord != null ) {
                if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0 ) {
                    if( joinType == JoinType.LeftOuterJoin ||
                        joinType == JoinType.LeftExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, leftRecord, null);
                    }
                    leftRecord = read(leftFile);
                } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) {
                    if( joinType == JoinType.RightOuterJoin ||
                        joinType == JoinType.RightExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, null, rightRecord);
                    }
                    rightRecord = read(rightFile);
                } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) {
                    String key = leftRecord.getKey();
                    List<File> rightRecordFileList = new ArrayList<File>();
                    List<Record> rightRecordList = new ArrayList<Record>();
                    rightRecordList.add(rightRecord);
                    rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines);

                    while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) {
                        processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType);
                        leftRecord = read(leftFile);
                    }

                    // need a dispose for deleting files in list
                } else {
                    throw new Exception("DATA IS NOT SORTED");
                }
            }

            if( leftRecord != null ) {
                if( joinType == JoinType.LeftOuterJoin ||
                    joinType == JoinType.LeftExclusiveJoin ||
                    joinType == JoinType.FullExclusiveJoin ||
                    joinType == JoinType.FullOuterJoin ) {
                    write(outputFile, leftRecord, null);
                }
                leftRecord = read(leftFile);
                while(leftRecord != null) {
                    if( joinType == JoinType.LeftOuterJoin ||
                        joinType == JoinType.LeftExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, leftRecord, null);
                    }
                    leftRecord = read(leftFile);
                }
            } else {
                if( rightRecord != null ) {
                    if( joinType == JoinType.RightOuterJoin ||
                        joinType == JoinType.RightExclusiveJoin ||
                        joinType == JoinType.FullExclusiveJoin ||
                        joinType == JoinType.FullOuterJoin ) {
                        write(outputFile, null, rightRecord);
                    }
                    rightRecord = read(rightFile);
                    while(rightRecord != null) {
                        if( joinType == JoinType.RightOuterJoin ||
                            joinType == JoinType.RightExclusiveJoin ||
                            joinType == JoinType.FullExclusiveJoin ||
                            joinType == JoinType.FullOuterJoin ) {
                            write(outputFile, null, rightRecord);
                        }
                        rightRecord = read(rightFile);
                    }
                }
            }

            leftFile.close();
            rightFile.close();
            outputFile.flush();
            outputFile.close();
        }

        public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable {
            for(File rightFile : rightFiles) {
                BufferedReader rightReader = new BufferedReader(new FileReader(rightFile));
                Record rightRecord = read(rightReader);
                while(rightRecord != null) {
                    if( joinType == JoinType.LeftOuterJoin ||
                        joinType == JoinType.RightOuterJoin ||
                        joinType == JoinType.FullOuterJoin ||
                        joinType == JoinType.InnerJoin ) {
                        write(outputFile, leftRecord, rightRecord);
                    }
                    rightRecord = read(rightReader);
                }
                rightReader.close();
            }
            for(Record rightRecord : rightRecords) {
                if( joinType == JoinType.LeftOuterJoin ||
                    joinType == JoinType.RightOuterJoin ||
                    joinType == JoinType.FullOuterJoin ||
                    joinType == JoinType.InnerJoin ) {
                    write(outputFile, leftRecord, rightRecord);
                }
            }
        }

        /**
         * Consume all records having key (either to a single list or multiple files). Each file will
         * store a buffer full of data. The right record returned represents the outside flow (key is
         * already positioned to the next one or null), so we can't use this record in the while loop
         * below, or within this block in general, when comparing the current key. The trick is to keep
         * consuming from a List. When it becomes empty, re-fill it from the next file until all files
         * have been consumed (and the last node in the list is read). The next outside iteration will
         * be ready to be processed (either it will be null or it points to the next biggest key).
         * @throws Throwable
         */
        private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines) throws Throwable {
            boolean processComplete = false;
            Record record = records.get(records.size() - 1);
            while(!processComplete) {
                long recordCount = records.size();
                if( record.getKey().compareTo(key) == 0 ) {
                    record = read(reader);
                    while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) {
                        records.add(record);
                        recordCount++;
                        record = read(reader);
                    }
                }
                processComplete = true;
                // if record is null, we are done
                if( record != null ) {
                    // if the key has changed, we are done
                    if( record.getKey().compareTo(key) == 0 ) {
                        // Same key means we have exhausted the buffer.
                        // Dump the entire buffer into a file. The list of file
                        // pointers will keep track of the files ...
                        processComplete = false;
                        dumpBufferToFile(records, files);
                        records.clear();
                        records.add(record);
                    }
                }
            }
            return record;
        }

        /**
         * Dump all records in the List of Record objects to a file. Then add that
         * file to the List of File objects.
         *
         * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list)
         *
         * @param records
         * @param files
         * @throws Throwable
         */
        private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable {
            String prefix = "joiner_" + files.size() + 1;
            String suffix = ".dat";
            File file = File.createTempFile(prefix, suffix, new File("cache"));
            BufferedWriter writer = new BufferedWriter(new FileWriter(file));
            for( Record record : records ) {
                writer.write( record.dump() );
            }
            files.add(file);
            writer.flush();
            writer.close();
        }

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it for the Solaris and Linux platforms to measure the performance of applying this code to my application. Initially, using the sbrk(0) system call, I took the base address of the heap region. After that I allocated 1.5 GB of memory using malloc, then used memcpy to copy 1.5 GB of content into the allocated area. Then I freed the allocated memory. After freeing, I used the sbrk(0) system call again to view the heap size.

    This is where I am a little confused. On Solaris, even though I freed the memory allocated (nearly 1.5 GB), the heap size of the process stays huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process is equal to the size of the heap before the 1.5 GB allocation.

    I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to release the memory immediately after the free() call. Also, please explain why the same problem does not come up under Linux. Can anyone help me out of this? Thanks, Santhosh.

    Read the article
