Search Results

Search found 12445 results on 498 pages for 'memory fragmentation'.

  • Improve collision detection memory usage (blocks with bullets)

    - by Eddy
    I am making an action-platformer 2D game, something like Megaman, using XNA. I have already made player physics, collisions, bullets, enemies and AI, a map editor, and an X/Y scrolling camera (about 75% of the game is finished). As I progressed I noticed that my game would be more interesting to play if bullets were destroyed on collision with regular (stationary) map blocks. The only problem is that with my collision detection (each bullet against each block) the game sometimes begins to lag. (By the way, if a bullet exits the screen it is removed from the bullet list.) So how can I improve my collision detection so that its memory/CPU cost is not so high? :) (Maps are around 300x300 blocks, for example; I don't think bigger maps should be needed.)

        int block = 0;
        int bulet = 0;
        bool destroy_bullet = false;
        while (bulet < bullets.Count)
        {
            while (block < blocks.Count)
            {
                // bullets and blocks are Lists that hold objects of the bullet and block classes;
                // P_Bul_rec is just the bullet's rectangle
                if (bullets[bulet].P_Bul_rec.Intersects(blocks[block].rect))
                {
                    destroy_bullet = true;
                }
                block++;
            }
            if (destroy_bullet)
            {
                bullets.RemoveAt(bulet);
                destroy_bullet = false;
            }
            else
            {
                bulet++;
            }
            block = 0;
        }
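
    The nested scan above performs bullets × blocks intersection tests per frame, which is what causes the lag on a 300x300 map. Since the blocks are stationary and sit on a fixed grid, a spatial-hash broad phase can test each bullet against only the few cells its rectangle overlaps. A minimal sketch of the idea in Java (XNA is C#, but the structure carries over directly; all names are hypothetical, and one block per cell is assumed since blocks sit on the grid):

        import java.util.HashMap;
        import java.util.Map;

        class Rect {
            int x, y, w, h;
            Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
            boolean intersects(Rect o) {
                return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
            }
        }

        class BlockGrid {
            private final int tile;                          // block/tile size in pixels
            private final Map<Long, Rect> cells = new HashMap<>();

            BlockGrid(int tileSize) { tile = tileSize; }

            // Called once per stationary block when the map is loaded.
            void add(Rect block) { cells.put(key(block.x / tile, block.y / tile), block); }

            private static long key(int cx, int cy) { return ((long) cx << 32) | (cy & 0xffffffffL); }

            // Tests only the cells the bullet overlaps: constant work per bullet
            // instead of a pass over all blocks.
            boolean hits(Rect bullet) {
                for (int cx = bullet.x / tile; cx <= (bullet.x + bullet.w - 1) / tile; cx++) {
                    for (int cy = bullet.y / tile; cy <= (bullet.y + bullet.h - 1) / tile; cy++) {
                        Rect b = cells.get(key(cx, cy));
                        if (b != null && b.intersects(bullet)) return true;
                    }
                }
                return false;
            }
        }

    The per-frame loop then reduces to bullets.removeIf(b -> grid.hits(b.rect)) (List.RemoveAll in C#): cost scales with the number of live bullets rather than bullets × blocks, and nothing is allocated per frame.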

  • Am I running out of memory or do I have two logical drives instead of one

    - by user30904
    I did a complete reinstall of Ubuntu 13.04 a couple of months ago. Since then I have switched my motherboard for another, keeping the same hard drive, and I just upgraded to 13.10. Since that upgrade I keep getting the message that I'm running out of memory. I checked my system usage and was surprised by what I found: I believed I had installed Ubuntu as a fresh install, but the system usage view seems to show two logical drives. I did the basic install, so I was expecting to see only one partition, but instead I see two: one small 300 MB partition, and the 300 GB partition I was expecting. Can anyone tell me if I really have two partitions and/or logical drives and, if so, how I can fix this? I seem to have been running on the smaller drive and now I'm obviously out of space; I want to be able to use the bigger one at least.

  • Entry level engineer question regarding memory management

    - by Ealianis
    It has been a few months since I started my position as an entry-level software developer. Now that I am past some learning curves (e.g. the language, jargon, and syntax of VB and C#), I'm starting to focus on more esoteric topics so as to write better software. A simple question I put to a fellow coworker was answered with "you're focusing on the wrong things." While I respect this coworker, I disagree that this is a "wrong thing" to focus on. Here is the code (in VB), followed by the question. Note: the function GenerateAlert() returns an Integer.

        Dim alertID As Integer = GenerateAlert()
        _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), alertID))

    vs...

        _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert()))

    I originally wrote the latter and rewrote it with the "Dim alertID" so that someone else might find it easier to read. But here was my concern and question: written with the Dim alertID, it would in fact take up more memory; finite, but more, and should this method be called many times, could it lead to an issue? How will .NET handle this alertID? Outside of .NET, should one manually dispose of the object after use (near the end of the sub)? I want to ensure I become a knowledgeable programmer who does not just rely upon garbage collection. Am I overthinking this? Am I focusing on the wrong things?
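
    For what it's worth, the two forms are allocation-equivalent: the only heap allocation in either is the ErrorInfo object, and a local Integer is a stack slot that disappears when the method returns, so there is nothing to dispose. A Java analogue of the comparison (hypothetical names; the reasoning carries over to .NET, where Integer is likewise a value type):

        import java.util.Date;
        import java.util.HashMap;
        import java.util.Map;

        class ErrorInfo {
            final Date when;
            final int alertId;
            ErrorInfo(Date when, int alertId) { this.when = when; this.alertId = alertId; }
        }

        class Demo {
            static final Map<Integer, ErrorInfo> errors = new HashMap<>();
            static int generateAlert() { return 42; }        // stands in for GenerateAlert()

            static void withLocal(int errorId) {
                int alertId = generateAlert();               // stack slot, no heap allocation
                errors.put(errorId, new ErrorInfo(new Date(), alertId));
            }

            static void inlined(int errorId) {
                // Identical heap traffic: one ErrorInfo either way.
                errors.put(errorId, new ErrorInfo(new Date(), generateAlert()));
            }
        }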

  • "Could not register destruction callback" warn message leads to memory leaks?

    - by Séb
    Hello all, I'm in the exact same situation as this old question: http://stackoverflow.com/questions/2077558/warn-could-not-register-destruction-callback In short: I see a warning saying that a destruction callback could not be registered for some beans. My question is: since the beans whose destruction callback cannot be registered are two persistence beans, could this be the source of a memory leak? I am experiencing a leak in my app: although the session timeout is set (to 30 minutes), my profiler shows more instances of the Hibernate SessionImpl each time I take a thread dump, and the number of SessionImpl instances is exactly the number of times I tried to log in between two thread dumps. Thanks for your help...

  • What is the complexity of the code below with respect to memory?

    - by Cshah
    Hi, I read about Big-O notation from here and had a few questions on calculating complexity. For the code below I have worked out the complexity and need your input on it.

        private void reverse(String strToRevers) {
            if (strToRevers.length() == 0) {
                return;
            } else {
                reverse(strToRevers.substring(1));
                System.out.print(strToRevers.charAt(0));
            }
        }

    If memory is the factor considered, then the complexity of the above code for a string of n characters is O(n^2). The explanation: for a string of n characters, the function is called recursively n-1 times, and each call creates a new substring (strToRevers.substring(1)) one character shorter than its argument. Hence it works out to roughly n*(n-1)/2 allocated characters, which translates to O(n^2). Let me know if this is right?
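
    A tighter way to count it, assuming substring copies its characters (true of JDK 7u6 and later; earlier JDKs shared the backing array): at the deepest point of the recursion all n frames are live at once, holding strings of length n-1, n-2, ..., 1, 0, so the peak memory in characters is

        S(n) = \sum_{k=0}^{n-1} k = \frac{n(n-1)}{2} = O(n^2)

    plus O(n) for the stack frames themselves. With the old sharing behavior each substring was a constant-size object over the original array, making the peak O(n) instead.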

  • Getting "Out of memory: Java heap space", but viewing the heap shows it uses at most 50 MB

    - by ikky
    Hi! I'm using ASANT to run an XML build file which points to a NARS.jar file (I do not have the project source for NARS.jar). I'm getting "java.lang.OutOfMemoryError: Java heap space". I used VisualVM to look at the heap while running NARS.jar, and it says the program uses at most 50 MB of heap space. I've set the initial and maximum heap sizes to 512 MB. Does anyone have an idea of what could be wrong? I have 1 GB of physical memory and created a 5 GB pagefile (for test purposes). Thanks in advance.

  • zlib memory usage / performance with 500 KB of data

    - by unixman83
    Is zlib worth it? Are there other, better-suited compressors? I am using an embedded system; frequently I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about overhead, however. The buffers' average size will be 30 KB, which probably won't compress much with zlib. Does anyone know of a good compressor for extremely memory-limited environments? However, I will see occasional maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify? My sole considerations for compression are the algorithm's RAM overhead and performance at least as good as zlib.
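
    For the RAM-overhead half of the question, zlib's own headers document deflate's working memory as roughly (1 << (windowBits+2)) + (1 << (memLevel+9)) bytes, about 256 KB at the defaults (windowBits=15, memLevel=8); deflateInit2() with smaller windowBits and memLevel trades compression ratio for a much smaller footprint, and inflate needs only about 1 << windowBits bytes. Whether the ratio is worth it on 30-500 KB buffers is easy to measure off-target. A sketch using Java's zlib-backed Deflater to probe ratios on a representative buffer (file path taken from the command line; this estimates ratio only, not the embedded footprint):

        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.zip.Deflater;

        public final class RatioProbe {
            // Compressed size of buf at a given zlib level (1 = fastest, 9 = best ratio).
            static int compressedSize(byte[] buf, int level) {
                Deflater d = new Deflater(level);
                d.setInput(buf);
                d.finish();
                byte[] chunk = new byte[8192];
                int total = 0;
                while (!d.finished()) {
                    total += d.deflate(chunk);               // drain until the stream completes
                }
                d.end();
                return total;
            }

            public static void main(String[] args) throws Exception {
                byte[] sample = Files.readAllBytes(Paths.get(args[0]));
                for (int level : new int[] {1, 6, 9}) {
                    System.out.println("level " + level + ": " + sample.length
                            + " -> " + compressedSize(sample, level) + " bytes");
                }
            }
        }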

  • How do I load the Oracle schema into memory instead of the hard drive?

    - by Andrew
    I have a certain web application that makes upwards of ~100 updates to an Oracle database in succession. This can take anywhere from 3 to 5 minutes, which sometimes causes the web page to time out. A redesign of the application is scheduled soon, but someone told me there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file", but all I can find is information about SQL*Loader, the bulk data loader. Does anyone know what he's talking about? Is this really possible, and is it a feasible quick fix, or should I just wait until the application is redesigned?
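
    I can't confirm a "loader file" feature matching that description; Oracle already keeps hot blocks in its buffer cache, so ~100 small updates taking minutes usually points at per-statement round trips and per-statement commits rather than disk-bound reads. If the code can be touched at all before the redesign, batching the updates into one prepared statement and one transaction is a low-risk experiment. A sketch assuming plain JDBC (table and column names hypothetical):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.util.Map;

        final class BatchUpdater {
            // Sends all updates in a single round trip and commits once.
            static void applyUpdates(Connection con, Map<Integer, String> changes) throws Exception {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(
                        "UPDATE orders SET status = ? WHERE id = ?")) {   // hypothetical statement
                    for (Map.Entry<Integer, String> e : changes.entrySet()) {
                        ps.setString(1, e.getValue());
                        ps.setInt(2, e.getKey());
                        ps.addBatch();
                    }
                    ps.executeBatch();
                }
                con.commit();
            }
        }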

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list's node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work, I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops. Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: Why is the 3rd method fastest? Method 1: Round robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED': void method_one() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { Data* current_item = channel_cursors[i]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment)); } } } Method 1 gave very low latency when the number of channels was small. But when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried... Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID on to the update list. Now we just need to monitor the single update list, and check the IDs we get from it. void method_two() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { if(update_cursor->state_ == NOT_FILLED_YET) { continue; } ::uint32_t update_id = update_cursor->list_id_; Data* current_item = channel_cursors[update_id]; if(current_item->state_ == NOT_FILLED_YET) { std::cerr << "This should never print." << std::endl; // it doesn't continue; } log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment)); update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } } Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach... Method 3: Keep the update list. But loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to. 
void method_three() { std::vector<Data*> channel_cursors; for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i) { Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment)); channel_cursors.push_back(current_item); } UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment)); while(true) { for(std::size_t i = 0; i < channel_list.size(); ++i) { std::size_t idx = i; ACQUIRE_MEMORY_BARRIER; if(update_cursor->state_ != NOT_FILLED_YET) { //std::cerr << "Found via update" << std::endl; i--; idx = update_cursor->list_id_; update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment)); } Data* current_item = channel_cursors[idx]; ACQUIRE_MEMORY_BARRIER; if(current_item->state_ == NOT_FILLED_YET) { continue; } found_an_update = true; log_latency(current_item->tv_sec_, current_item->tv_usec_); channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment)); } } } The latency of this method was as good as Method 1, but scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'found via update' part, it prints between EVERY LATENCY LOG MESSAGE. Which means things are only ever found on the update list! So I don't understand how this method can be faster than method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

  • How to get accurate memory usage on an iPhone device

    - by Favo Yang
    I want to output accurate memory usage on an iPhone device. The method I used was taken from http://landonf.bikemonkey.org/code/iphone/Determining%5FAvailable%5FMemory.20081203.html

        natural_t mem_used = (vm_stat.active_count + vm_stat.inactive_count + vm_stat.wire_count) * pagesize;
        natural_t mem_free = vm_stat.free_count * pagesize;
        natural_t mem_total = mem_used + mem_free;

    The issue is that the total value keeps changing while testing on the device:

        used: 60200.0KB free: 2740.0KB total: 62940.0KB
        used: 53156.0KB free: 2524.0KB total: 55680.0KB
        used: 52500.0KB free: 2544.0KB total: 55044.0KB

    Looking at the function implementation, it already sums the active, inactive, wired and free pages. Is there anything I am missing here?

  • Why is memory management so visible in Java VM?

    - by Emil
    I'm playing around with writing some simple Spring-based web apps and deploying them to Tomcat. Almost immediately, I run into the need to customize Tomcat's JVM settings with -XX:MaxPermSize (and -Xmx and -Xms); without this, the server easily runs out of PermGen space. Why is this such an issue for Java VMs compared to other garbage-collected languages? Comparing counts of "tune X memory usage" for X in Java, Ruby, Perl and Python shows that Java easily has an order of magnitude more hits on Google than the other languages combined. I'd also be interested in references to technical papers/blog posts/etc. explaining the design choices behind JVM GC implementations, across different JVMs or compared to other interpreted-language VMs (e.g. comparing the Sun or IBM JVM to Parrot). Are there technical reasons why JVM users still have to deal with non-auto-tuning heap/permgen sizes?

  • Memory of drawables: is it better to have resources inside the APK, outside the APK, or is it the same?

    - by Daniel Benedykt
    Hi, I have an application that draws a lot of graphics and changes them. Since I have many graphics, I thought of keeping the images outside the APK: downloaded from the internet as needed and saved in the application's files folder. But I started to get OutOfMemory exceptions. The question is: does Android handle memory differently if I load a graphic from the APK than if I load it from 'disk'? Code using the APK:

        topView.setBackgroundResource(R.drawable.bg);

    Code if the image is outside the APK:

        Drawable d = Drawable.createFromPath(pathName);
        topView.setBackgroundDrawable(d);

    Thanks, Daniel
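
    Either way, the decoded image ends up as a raw Bitmap on the heap, costing roughly width × height × 4 bytes regardless of where the compressed file lived, so the APK-versus-disk distinction matters much less than pixel dimensions (resources do additionally get density scaling applied). A common mitigation is decoding at reduced resolution with BitmapFactory.Options.inSampleSize; a sketch for the from-disk case (target size is a placeholder):

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        final class Bitmaps {
            // Decode bounds first, then reload downsampled so the in-memory bitmap
            // is close to the size it will actually be drawn at.
            static Bitmap decodeScaled(String pathName, int reqWidth, int reqHeight) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inJustDecodeBounds = true;              // read dimensions only, no pixel allocation
                BitmapFactory.decodeFile(pathName, opts);
                int sample = 1;
                while (opts.outWidth / (sample * 2) >= reqWidth
                        && opts.outHeight / (sample * 2) >= reqHeight) {
                    sample *= 2;                             // the decoder honors powers of two best
                }
                opts.inJustDecodeBounds = false;
                opts.inSampleSize = sample;
                return BitmapFactory.decodeFile(pathName, opts);
            }
        }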

  • Local variable assign versus direct assign; properties and memory.

    - by Typeoneerror
    In Objective-C I see a lot of sample code where the author assigns to a local variable, assigns that to a property, then releases the local variable. Is there a practical reason for doing this? For the most part I've just been assigning directly to the property. Would that cause a memory leak in any way? I guess I'd like to know if there's any difference between this:

        HomeScreenBtns *localHomeScreenBtns = [[HomeScreenBtns alloc] init];
        self.homeScreenBtns = localHomeScreenBtns;
        [localHomeScreenBtns release];

    and this:

        self.homeScreenBtns = [[HomeScreenBtns alloc] init];

    assuming that homeScreenBtns is a property like so:

        @property (nonatomic, retain) HomeScreenBtns *homeScreenBtns;

    I'm getting ready to submit my application to the App Store, so I'm in full optimize/QA mode.

  • UIScrollView: Is an 806 KB image too much to handle? (Crash, out of memory)

    - by Jordan
    I'm loading a 2400x1845 PNG image into a scroll view. The program crashes out of memory; is there a better way to handle this? mapScrollView is a UIScrollView in IB, along with a couple of UIButtons.

        -(void)loadMapWithName:(NSString *)mapName {
            NSString *bundlePath = [[NSBundle mainBundle] bundlePath];
            UIImage *image = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%@/path/%@", bundlePath, [maps objectForKey:mapName]]];
            UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
            CGSize imgSize = image.size;
            mapScrollView.contentSize = imgSize;
            [mapScrollView addSubview:imageView];
            [imageView release];
            [self.view addSubview:mapScrollView];
        }
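
    For sizing intuition: a decoded image costs pixels, not PNG bytes. Assuming 32-bit RGBA,

        2400 × 1845 × 4 B = 17,712,000 B ≈ 17.7 MB

    which is far more than the 806 KB file suggests and a lot for early iOS hardware to hold in a single UIImageView. Tiling the image so only the visible portion is decoded (CATiledLayer is the usual tool) is the standard way out.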

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs, usually compiled as a DLL that it is up to the native application to load and start. I've been using EQATEC's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as EQATEC does?

  • Is there any way to determine what type of memory the segments returned by VirtualQuery() are?

    - by bdbaddog
    Greetings, I'm able to walk a process's memory map using logic like this:

        MEMORY_BASIC_INFORMATION mbi;
        void *lpAddress = (void*)0;
        while (VirtualQuery(lpAddress, &mbi, sizeof(mbi))) {
            fprintf(fptr, "Mem base:%-10x start:%-10x Size:%-10x Type:%-10x State:%-10x\n",
                    mbi.AllocationBase, mbi.BaseAddress, mbi.RegionSize, mbi.Type, mbi.State);
            lpAddress = (void *)((unsigned int)mbi.BaseAddress + (unsigned int)mbi.RegionSize);
        }

    I'd like to know whether a given segment is used for static allocation, stack, heap, and/or something else. Is there any way to determine that?

  • What Android tools and methods work best to find memory/resource leaks?

    - by jottos
    I've got an Android app developed, and I'm at the point of phone app development where everything seems to be working well and you want to declare victory and ship, but you know there just have to be some memory and resource leaks in there. There is only 16 MB of heap on Android, and it's apparently surprisingly easy to leak in an Android app. I've been looking around and so far have only been able to dig up information on hprof and traceview, and neither gets a lot of favorable reviews. What tools or methods have you come across or developed, and would you care to share them, maybe in an open-source project?

  • WPF DataTemplate + ItemsControl: each item uses > 1 MB of memory?

    - by Matt H.
    Does that sound right to anyone???? I have an ItemsControl that displays data from a custom object that implements INotifyPropertyChanged. The DataTemplate consists of:

        - a Border
        - 3 buttons
        - 5 textboxes
        - an ellipse
        - a bindable RichTextBox (a custom class that inherits from RichTextBox so I could make Document a dependency property, to support binding)
        - several grids and stackpanels for layout

    It uses styles (stored in a resource dictionary higher up the tree); the styles affect colors, thicknesses, and text properties, which are data-bound to a "settings" class that implements INotifyPropertyChanged so the user can change display settings. That's it! So what gives? I've also noticed that when I empty and remove the ItemsControl, memory isn't freed: over 5000 instances of CommandBindingCollection and WeakReference are CREATED (using ANTS profiler), and a huge number of EffectiveValueEntry objects are created too. So really, what gives!!! :-) Thanks for your insight! Management needs this project soon, but in its current state it's unreleasable.

  • Why do virtual memory addresses for linux binaries start at 0x8048000?

    - by muteW
    Disassembling an ELF binary on an Ubuntu x86 system, I couldn't help but notice that the code (.text) section starts at the virtual address 0x8048000, and all lower memory addresses seem to be unused. This seems rather wasteful, and all Google turns up is either folklore involving STACK_TOP or protection against null-pointer dereferences. The latter case looks like it could be handled with a single page instead of leaving a 128 MB gap. So my question is this: is there a definitive answer to why the layout was fixed at these values, or is it just an arbitrary choice?

  • Java: Best approach to have a long list of variables needed all the time without consuming memory?

    - by evilReiko
    I wrote an abstract class to contain all the rules of the application, because I need them almost everywhere in my application. So most of what it contains is static final variables, something like this:

        public abstract class appRules {
            public static final boolean IS_DEV = true;
            public static final String CLOCK_SHORT_TIME_FORMAT = "something";
            public static final String CLOCK_SHORT_DATE_FORMAT = "something else";
            public static final String CLOCK_FULL_FORMAT = "other thing";
            public static final int USERNAME_MIN = 5;
            public static final int USERNAME_MAX = 16;
            // etc.
        }

    The class is big and contains LOTS of such variables. My questions: Doesn't declaring static variables mean these variables are floating in memory all the time? Do you suggest that, instead of an abstract class, I have an instantiable class with non-static variables (just public final), so that I instantiate the class and use the variables only when I need them? Or is what I am doing a completely wrong approach, and would you suggest something else?
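
    A note on the actual cost: static final primitives and String literals are compile-time constants. They live once in the loaded class (a few hundred bytes for a class like this, kept as long as the class is), and the compiler inlines them at use sites, so the per-use cost is zero; instantiating a holder object whenever the values are needed would cost more, not less. The conventional idiom is a non-instantiable constants class rather than an abstract one, as in this sketch:

        // Same constants, expressed as the conventional "constants holder".
        public final class AppRules {       // final: nothing should extend it
            private AppRules() {}           // private constructor: nothing should instantiate it

            public static final boolean IS_DEV = true;
            public static final int USERNAME_MIN = 5;
            public static final int USERNAME_MAX = 16;
        }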

  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other.

        // set up the fetch request
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series" inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];

        // grab all of the series in the core data store
        NSError *error = nil;
        availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]];
        [fetchRequest release];

        // grab one of the series
        Series *currentSeries = [availableSeries objectAtIndex:1];

        // load all of the tiles attached to the series through the relationship
        NSArray *myTiles = [currentSeries.tile allObjects]; // 16-byte leak here!

    Instruments reports that the final line has a 16-byte leak caused by NSPlaceholderString. Stack trace:

        2 UIKit UIApplicationMain
        3 UIKit -[UIApplication _run]
        4 CoreFoundation CFRunLoopRunInMode
        5 CoreFoundation CFRunLoopRunSpecific
        6 GraphicsServices PurpleEventCallback
        7 UIKit _UIApplicationHandleEvent
        8 UIKit -[UIApplication sendEvent:]
        9 UIKit -[UIApplication handleEvent:withNewEvent:]
        10 UIKit -[UIApplication _runWithURL:sourceBundleID:]
        11 UIKit -[UIApplication _performInitializationWithURL:sourceBundleID:]
        12 Memory -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49
        13 UIKit -[UIViewController view]
        14 Memory -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58
        15 CoreData -[_NSFaultingMutableSet allObjects]
        16 CoreData -[_NSFaultingMutableSet willRead]
        17 CoreData -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:]
        18 CoreData -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:]
        19 CoreData -[NSSQLCore newFetchedPKsForSourceID:andRelationship:]
        20 CoreData -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:]
        21 Foundation +[NSString stringWithFormat:]
        22 Foundation -[NSPlaceholderString initWithFormat:locale:arguments:]
        23 CoreFoundation _CFStringCreateWithFormatAndArgumentsAux
        24 CoreFoundation _CFStringAppendFormatAndArgumentsAux
        25 Foundation _NSDescriptionWithLocaleFunc
        26 CoreFoundation -[NSObject respondsToSelector:]
        27 libobjc.A.dylib class_respondsToSelector
        28 libobjc.A.dylib lookUpMethod
        29 libobjc.A.dylib _cache_addForwardEntry
        30 libobjc.A.dylib _malloc_internal

    I think I'm missing something obvious but I can't quite figure out what. Thanks for your help! Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?

  • How to simulate inner join on very large files in Java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under the Solaris and Linux platforms to measure the performance of applying this code to my application. The program works as follows: initially, using sbrk(0), I take the base address of the heap region. After that I allocate 1.5 GB of memory using malloc, then use memcpy to copy 1.5 GB of content into the allocated area. Then I free the allocated memory. After freeing, I use sbrk(0) again to view the heap size. This is where I get a little confused. On Solaris, even though I freed the (nearly 1.5 GB) allocation, the heap size of the process stays huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process equals the heap size before the 1.5 GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to free the memory immediately after the free() call. Also, please explain why the same behavior does not occur under Linux? Can anyone help me out with this? Thanks, Santhosh.

  • How to properly cast a global memory array using the uint4 vector in CUDA to increase memory throughput?

    - by charis
    There are generally two techniques for increasing the memory throughput of global memory in a CUDA kernel: coalescing memory accesses, and accessing words of at least 4 bytes. With the first technique, accesses to the same memory segment by threads of the same half-warp are coalesced into fewer transactions, while by accessing words of at least 4 bytes the memory segment is effectively increased from 32 bytes to 128. To access 16-byte instead of 1-byte words when there are unsigned chars stored in global memory, the uint4 vector is commonly used by casting the memory array to uint4:

        uint4 *text4 = ( uint4 * ) d_text;
        var = text4[i];

    In order to extract the 16 chars from var, I am currently using bitwise operations. For example:

        s_array[j * 16 + 0] = var.x & 0x000000FF;
        s_array[j * 16 + 1] = (var.x >> 8) & 0x000000FF;
        s_array[j * 16 + 2] = (var.x >> 16) & 0x000000FF;
        s_array[j * 16 + 3] = (var.x >> 24) & 0x000000FF;

    My question is: is it possible to recast var (or, for that matter, *text4) to unsigned char in order to avoid the additional overhead of the bitwise operations?
