Search Results

Search found 24560 results on 983 pages for 'memory model'.


  • Getting OutOfMemoryError: Java heap space, but heap monitoring shows at most 50 MB in use

    - by ikky
    Hi! I'm using ASANT to run an XML file which points to a NARS.jar file (I do not have the project files for NARS.jar). I'm getting "java.lang.OutOfMemoryError: Java heap space". I used VisualVM to look at the heap while running NARS.jar, and it says the run uses at most 50 MB of heap space. I've set the initial and maximum heap size to 512 MB. Does anyone have an idea of what could be wrong? I have 1 GB of physical memory and created a 5 GB pagefile (for test purposes). Thanks in advance.
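    A common cause of this symptom is that the build tool forks a separate JVM to run the jar, and that forked process never receives the 512 MB setting, so it runs with the default heap while the 50 MB reading comes from a different (or the parent) process. A minimal diagnostic sketch, assuming you can run a small class under the same target (the class name is illustrative, not part of NARS.jar):

        // HeapCheck.java - hypothetical diagnostic: run it under the same ASANT/ant
        // target to see which heap limits the forked JVM actually received.
        public class HeapCheck {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long mb = 1024L * 1024L;
                System.out.println("max heap   (MB): " + rt.maxMemory() / mb);   // reflects -Xmx
                System.out.println("total heap (MB): " + rt.totalMemory() / mb); // currently committed
                System.out.println("free heap  (MB): " + rt.freeMemory() / mb);
            }
        }

    If the reported maximum is far below 512 MB, the heap options are not reaching the process that actually runs the jar.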

    Read the article

  • zlib memory usage / performance with 500 KB of data

    - by unixman83
    Is zlib worth it? Are there other, better-suited compressors? I am using an embedded system; frequently I have only 3 MB of RAM or less available to my application, so I am considering using zlib to compress my buffers. I am concerned about overhead, however. The buffers' average size will be 30 KB, which probably won't get compressed much by zlib. Does anyone know of a good compressor for extremely memory-limited environments? I will also see occasional maximum buffer sizes of 700 KB, with 500 KB much more common. Is zlib worth it in this case, or is the overhead too much to justify? My sole considerations are the RAM overhead of the algorithm and performance at least as good as zlib's.
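    As a side note, zlib's deflate algorithm is exposed in Java as java.util.zip.Deflater, which gives a quick way to estimate the compression ratio on representative buffers before committing on the embedded side (the buffer contents below are placeholders, not real data); in C, zlib's own memory footprint can also be tuned downward through deflateInit2's windowBits and memLevel parameters:

        import java.util.zip.Deflater;

        // Rough sketch: see how well deflate (the algorithm zlib implements) does on a
        // representative 30 KB buffer at the fastest compression level.
        public class DeflateRatio {
            public static void main(String[] args) {
                byte[] input = new byte[30 * 1024];          // stand-in for a typical buffer
                for (int i = 0; i < input.length; i++) {
                    input[i] = (byte) (i % 64);              // synthetic, fairly compressible data
                }
                Deflater d = new Deflater(Deflater.BEST_SPEED);
                d.setInput(input);
                d.finish();
                byte[] out = new byte[input.length + 1024];  // large enough for worst case here
                int compressed = d.deflate(out);
                d.end();
                System.out.println(input.length + " -> " + compressed + " bytes");
            }
        }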

    Read the article

  • How do I load the Oracle schema into memory instead of the hard drive?

    - by Andrew
    I have a certain web application that makes upwards of ~100 updates to an Oracle database in succession. This can take anywhere from 3-5 minutes, which sometimes causes the webpage to time out. A re-design of the application is scheduled soon, but someone told me that there is a way to configure a "loader file" which loads the schema into memory and runs the transactions there instead of on the hard drive, supposedly improving speed by several orders of magnitude. I have tried to research this "loader file", but all I can find is information about SQL*Loader, the bulk data loader. Does anyone know what he's talking about? Is this really possible, and is it a feasible quick fix, or should I just wait until the application is re-designed?

    Read the article

  • Of these 3 methods for reading linked lists from shared memory, why is the 3rd fastest?

    - by Joseph Garvin
    I have a 'server' program that updates many linked lists in shared memory in response to external events. I want client programs to notice an update on any of the lists as quickly as possible (lowest latency). The server marks a linked list node's state_ as FILLED once its data is filled in and its next pointer has been set to a valid location. Until then, its state_ is NOT_FILLED_YET. I am using memory barriers to make sure that clients don't see the state_ as FILLED before the data within is actually ready (and it seems to work; I never see corrupt data). Also, state_ is volatile to be sure the compiler doesn't lift the client's checking of it out of loops.

    Keeping the server code exactly the same, I've come up with 3 different methods for the client to scan the linked lists for changes. The question is: why is the 3rd method fastest?

    Method 1: Round-robin over all the linked lists (called 'channels') continuously, looking to see if any nodes have changed to 'FILLED':

        void method_one()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    Data* current_item = channel_cursors[i];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);

                    channel_cursors[i] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    Method 1 gave very low latency when the number of channels was small, but when the number of channels grew (250K+) it became very slow because of looping over all the channels. So I tried...

    Method 2: Give each linked list an ID. Keep a separate 'update list' to the side. Every time one of the linked lists is updated, push its ID onto the update list. Now we just need to monitor the single update list and check the IDs we get from it.

        void method_two()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                if(update_cursor->state_ == NOT_FILLED_YET) {
                    continue;
                }

                ::uint32_t update_id = update_cursor->list_id_;

                Data* current_item = channel_cursors[update_id];

                if(current_item->state_ == NOT_FILLED_YET) {
                    std::cerr << "This should never print." << std::endl; // it doesn't
                    continue;
                }

                log_latency(current_item->tv_sec_, current_item->tv_usec_);

                channel_cursors[update_id] = static_cast<Data*>(current_item->next_.get(segment));
                update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
            }
        }

    Method 2 gave TERRIBLE latency. Whereas Method 1 might give under 10us latency, Method 2 would inexplicably often give 8ms latency! Using gettimeofday it appears that the change in update_cursor->state_ was very slow to propagate from the server's view to the client's (I'm on a multicore box, so I assume the delay is due to cache). So I tried a hybrid approach...

    Method 3: Keep the update list, but loop over all the channels continuously, and within each iteration check if the update list has updated. If it has, go with the number pushed onto it. If it hasn't, check the channel we've currently iterated to.

        void method_three()
        {
            std::vector<Data*> channel_cursors;
            for(ChannelList::iterator i = channel_list.begin(); i != channel_list.end(); ++i)
            {
                Data* current_item = static_cast<Data*>(i->get(segment)->tail_.get(segment));
                channel_cursors.push_back(current_item);
            }

            UpdateID* update_cursor = static_cast<UpdateID*>(update_channel.tail_.get(segment));

            while(true)
            {
                for(std::size_t i = 0; i < channel_list.size(); ++i)
                {
                    std::size_t idx = i;

                    ACQUIRE_MEMORY_BARRIER;
                    if(update_cursor->state_ != NOT_FILLED_YET) {
                        //std::cerr << "Found via update" << std::endl;
                        i--;
                        idx = update_cursor->list_id_;
                        update_cursor = static_cast<UpdateID*>(update_cursor->next_.get(segment));
                    }

                    Data* current_item = channel_cursors[idx];

                    ACQUIRE_MEMORY_BARRIER;
                    if(current_item->state_ == NOT_FILLED_YET) {
                        continue;
                    }

                    found_an_update = true;

                    log_latency(current_item->tv_sec_, current_item->tv_usec_);
                    channel_cursors[idx] = static_cast<Data*>(current_item->next_.get(segment));
                }
            }
        }

    The latency of this method was as good as Method 1's, but it scaled to large numbers of channels. The problem is, I have no clue why. Just to throw a wrench in things: if I uncomment the 'Found via update' part, it prints between EVERY latency log message, which means things are only ever found on the update list! So I don't understand how this method can be faster than Method 2. The full, compilable code (requires GCC and boost-1.41) that generates random strings as test data is at: http://pastebin.com/e3HuL0nr

    Read the article

  • How to get accurate memory usage on an iPhone device

    - by Favo Yang
    I want to output accurate memory usage on an iPhone device. The method I used was taken from http://landonf.bikemonkey.org/code/iphone/Determining%5FAvailable%5FMemory.20081203.html

        natural_t mem_used  = (vm_stat.active_count + vm_stat.inactive_count + vm_stat.wire_count) * pagesize;
        natural_t mem_free  = vm_stat.free_count * pagesize;
        natural_t mem_total = mem_used + mem_free;

    The issue is that the total value keeps changing between runs on the device!

        used: 60200.0KB free: 2740.0KB total: 62940.0KB
        used: 53156.0KB free: 2524.0KB total: 55680.0KB
        used: 52500.0KB free: 2544.0KB total: 55044.0KB

    Looking at the function implementation, it already sums the active, inactive, wired and free pages; is there anything I'm missing here?

    Read the article

  • Why is memory management so visible in Java VM?

    - by Emil
    I'm playing around with writing some simple Spring-based web apps and deploying them to Tomcat. Almost immediately, I run into the need to customize the Tomcat's JVM settings with -XX:MaxPermSize (and -Xmx and -Xms); without this, the server easily runs out of PermGen space. Why is this such an issue for Java VMs compared to other garbage collected languages? Comparing counts of "tune X memory usage" for X in Java, Ruby, Perl and Python, shows that Java has easily an order of magnitude more hits in Google than the other languages combined. I'd also be interested in references to technical papers/blog-posts/etc explaining design choices behind JVM GC implementations, across different JVMs or compared to other interpreted language VMs (e.g. comparing Sun or IBM JVM to Parrot). Are there technical reasons why JVM users still have to deal with non-auto-tuning heap/permgen sizes?
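    For reference, the JVM does expose its pool sizing and usage at runtime through java.lang.management, which makes it easy to watch the PermGen pool fill up as Spring and Tomcat load classes. A minimal sketch (pool names vary between JVM vendors, so an exact "Perm Gen" label is not guaranteed):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryUsage;

        // Print each memory pool's current usage and configured maximum.
        public class PoolDump {
            public static void main(String[] args) {
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    MemoryUsage u = pool.getUsage();
                    System.out.printf("%-25s used=%,d max=%,d%n",
                            pool.getName(), u.getUsed(), u.getMax());
                }
            }
        }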

    Read the article

  • Memory of drawables: is it better to have resources inside the APK, outside the APK, or is it the same?

    - by Daniel Benedykt
    Hi. I have an application that draws a lot of graphics and changes them. Since I have many graphics, I thought of keeping the images outside the APK, downloaded from the internet as needed and saved in the application's files folder. But I started to get OutOfMemory exceptions. The question is: does Android handle memory differently if I load a graphic from the APK than if I load it from 'disk'?

    Code using the APK:

        topView.setBackgroundResource(R.drawable.bg);

    Code if the image is outside the APK:

        Drawable d = Drawable.createFromPath(pathName);
        topView.setBackgroundDrawable(d);

    Thanks, Daniel
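    Independent of where the file lives, the decoded bitmap size (roughly width x height x 4 bytes) is usually what triggers the OutOfMemory, so downsampling at decode time often helps. A minimal sketch using BitmapFactory, assuming the image sits on disk at pathName as in the question; targetWidth is an illustrative parameter, not from the original code:

        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        // Decode bounds first, pick a power-of-two inSampleSize so the decoded bitmap
        // is no wider than roughly targetWidth, then decode for real.
        public static Bitmap decodeScaled(String pathName, int targetWidth) {
            BitmapFactory.Options opts = new BitmapFactory.Options();
            opts.inJustDecodeBounds = true;            // read dimensions only, no pixel allocation
            BitmapFactory.decodeFile(pathName, opts);

            int sample = 1;
            while (opts.outWidth / (sample * 2) >= targetWidth) {
                sample *= 2;                           // each doubling quarters the pixel memory
            }

            opts = new BitmapFactory.Options();
            opts.inSampleSize = sample;
            return BitmapFactory.decodeFile(pathName, opts);
        }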

    Read the article

  • Local variable assignment versus direct assignment; properties and memory

    - by Typeoneerror
    In Objective-C I see a lot of sample code where the author creates an object in a local variable, assigns it to a property, then releases the local variable. Is there a practical reason for doing this? I've been just assigning directly to the property for the most part. Would that cause a memory leak in any way? I guess I'd like to know if there's any difference between this:

        HomeScreenBtns *localHomeScreenBtns = [[HomeScreenBtns alloc] init];
        self.homeScreenBtns = localHomeScreenBtns;
        [localHomeScreenBtns release];

    and this:

        self.homeScreenBtns = [[HomeScreenBtns alloc] init];

    Assuming that homeScreenBtns is a property like so:

        @property (nonatomic, retain) HomeScreenBtns *homeScreenBtns;

    I'm getting ready to submit my application to the App Store, so I'm in full optimize/QA mode.

    Read the article

  • UIScrollView: Is an 806 KB image too much to handle? (Crash, out of memory)

    - by Jordan
    I'm loading a 2400x1845 PNG image into a scroll view. The program crashes out of memory; is there a better way to handle this? mapScrollView is a UIScrollView in IB, along with a couple of UIButtons.

        -(void)loadMapWithName:(NSString *)mapName {
            NSString* bundlePath = [[NSBundle mainBundle] bundlePath];
            UIImage *image = [UIImage imageWithContentsOfFile:[NSString stringWithFormat:@"%@/path/%@", bundlePath, [maps objectForKey:mapName]]];
            UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
            CGSize imgSize = image.size;
            mapScrollView.contentSize = imgSize;
            [mapScrollView addSubview:imageView];
            [imageView release];
            [self.view addSubview:mapScrollView];
        }

    Read the article

  • MVC (model-view-controller) - can it be explained in simple terms?

    - by DVK
    I need to explain to a not-very-technical manager the MVC (model-view-controller) concept and ran into trouble. The problem is that the explanation needs to be on a "your grandma will get it" level - e.g. even the fairly straightforward explanation offered on MVC Wiki page didn't work, at least with my commentary. Does anyone have a reference to a good MVC explanation in simple terms? It would ideally be done with non-techie metaphor examples (e.g. similar to "Decorator pattern is like glasses") - one reason I failed was that all MVC examples I could come up with were development related. I once saw a list of pattern explanations but to the best of my memory MVC was not on it. Thanks!

    Read the article

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET based plug-ins for other programs, which are usually compiled as a DLL that it is up to the native application to start up. I've been using Equatec's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as Equatec does?

    Read the article

  • Is there any way to determine what type of memory the segments returned by VirtualQuery() are?

    - by bdbaddog
    Greetings. I'm able to walk a process's memory map using logic like this:

        MEMORY_BASIC_INFORMATION mbi;
        void *lpAddress = (void*)0;
        while (VirtualQuery(lpAddress, &mbi, sizeof(mbi))) {
            fprintf(fptr, "Mem base:%-10x start:%-10x Size:%-10x Type:%-10x State:%-10x\n",
                    mbi.AllocationBase, mbi.BaseAddress, mbi.RegionSize, mbi.Type, mbi.State);
            lpAddress = (void *)((unsigned int)mbi.BaseAddress + (unsigned int)mbi.RegionSize);
        }

    I'd like to know if a given segment is used for static allocation, stack, and/or heap, and/or something else. Is there any way to determine that?

    Read the article

  • What is the state of model driven development in 2012?

    - by eriktheblond
    I haven't heard a lot of talk lately about model driven development, but some of my company's customers are using it. I know briefly what it is, but after trying to refresh my memory using Google I found that most of the information is from 2006-2007. Was it something that never really took off? More worrying is the reason the customers are asking: they want IEC 61508 compliance. Is this actually a "not-so-good" idea that will live on simply because it is prescribed by a standard? I have too little experience to say whether MDD is a good idea or not (I'm guessing the right answer starts with "It depends"...), but the reason for using it, namely standards compliance, seems just wrong.

    Read the article

  • CakePHP: How can I disable auto-increment on Model.id?

    - by tomws
    CakePHP 1.3.0, mysqli I have a model, Manifest, whose ID should be the unique number from a printed form. However, with Manifest.id set as the primary key, CakePHP is helping me by setting up auto-increment on the field. Is there a way to flag the field via schema.php and/or elsewhere to disable auto-increment? I need just a plain, old primary key without it. The only other solution I can imagine is adding on a separate manifest number field and changing foreign keys in a half dozen other tables. A bit wasteful and not as intuitive.

    Read the article

  • What Android tools and methods work best to find memory/resource leaks?

    - by jottos
    I've got an Android app developed, and I'm at the point of phone app development where everything seems to be working well and you want to declare victory and ship, but you know there just have to be some memory and resource leaks in there; there's only 16 MB of heap on Android and it's apparently surprisingly easy to leak in an Android app. I've been looking around and so far have only been able to dig up info on 'hprof' and 'traceview', and neither gets a lot of favorable reviews. What tools or methods have you come across or developed, and would you care to share them, maybe in an OS project?
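    Beyond hprof and traceview, one low-tech approach is to dump heap snapshots programmatically at suspicious moments (say, after rotating an Activity a few times) and diff them in Eclipse MAT after converting with hprof-conv. A sketch, assuming external storage is writable; the class and method names are illustrative:

        import android.os.Debug;
        import android.os.Environment;
        import java.io.File;
        import java.io.IOException;

        // Write an hprof snapshot to external storage for off-device analysis.
        public final class HeapSnapshot {
            private HeapSnapshot() {}

            public static void dump(String tag) {
                File out = new File(Environment.getExternalStorageDirectory(), tag + ".hprof");
                try {
                    Debug.dumpHprofData(out.getAbsolutePath());
                } catch (IOException e) {
                    // ignored in this sketch; real code should log the failure
                }
            }
        }

    Comparing two snapshots taken around a suspected leak (MAT's dominator tree and "histogram diff") usually points straight at the retained Activities, Bitmaps or listeners.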

    Read the article

  • What are the differences between MVP, Presentation Model, MVVM and MVC?

    - by Nicholas
    I have a pretty good idea of how each of these patterns works and of some of the minor differences between them, but are they really all that different from each other? It seems to me that the Presenter, Presentation Model, ViewModel and Controller are essentially the same concept. Why couldn't I classify all of these concepts as controllers? I feel like it might simplify the entire idea a great deal. Can anyone give a clear description of their differences? I want to clarify that I do understand how the patterns work, and have implemented most of them in one technology or another. What I am really looking for is someone's experience with one of these patterns, and why they would not consider their ViewModel a controller, for instance.

    Read the article

  • Where does the data model go in a Prism app?

    - by HawkeyeJoeS
    I'm having trouble deciding where to put the data model in our Prism app. Most, if not all, of our data will be coming from web services, and the web services are unique to each module. Unfortunately, there will be objects that need to be shared (such as a person/user object). I'm really torn about whether to add these services directly to each module, so that each is truly independent, or to create a separate project to house the web service proxies and business entities. The modules are being built by different teams, but will all live in the same solution (and source control, of course).

    Read the article

  • WPF Datatemplate + ItemsControl each item uses > 1 MB Memory?

    - by Matt H.
    Does that sound right to anyone???? I have an ItemsControl that displays data from a custom object that implements INotifyPropertyChanged. The DataTemplate consists of: a Border, 3 Buttons, 5 TextBoxes, an Ellipse, a bindable RichTextBox (a custom class that inherits from RichTextBox so I could make Document a dependency property, to support binding), and several Grids and StackPanels for layout. It uses styles (stored in a resource dictionary higher up the tree); the styles affect colors, thicknesses, and text properties, which are data-bound to a "settings" class that implements INotifyPropertyChanged so the user can change display settings. That's it! So what gives? I've also noticed that when I empty and remove the ItemsControl, memory isn't freed. Over 5000 instances of "CommandBindingCollection" and "WeakReference" are CREATED (using ANTS profiler), and a huge number of EffectiveValueEntry objects are created too. So really, what gives!!! :-) Thanks for your insight! Management needs this project soon, but in its current state it's unreleasable.

    Read the article

  • Why do virtual memory addresses for linux binaries start at 0x8048000?

    - by muteW
    Disassembling an ELF binary on a Ubuntu x86 system I couldn't help but notice that the code(.text) section starts from the virtual address 0x8048000 and all lower memory addresses seem to be unused. This seems to be rather wasteful and all Google turns up is either folklore involving STACK_TOP or protection against null-pointer dereferences. The latter case looks like it can be fixed by using a single page instead of leaving a 128MB gap. So my question is this - is there a definitive answer to why the layout has been fixed to these values or is it just an arbitrary choice?

    Read the article

  • Java: Best approach to have a long list of variables needed all the time without consuming memory?

    - by evilReiko
    I wrote an abstract class to contain all the rules of the application, because I need them almost everywhere in my application. So most of what it contains is static final variables, something like this:

        public abstract class appRules {
            public static final boolean IS_DEV = true;
            public static final String CLOCK_SHORT_TIME_FORMAT = "something";
            public static final String CLOCK_SHORT_DATE_FORMAT = "something else";
            public static final String CLOCK_FULL_FORMAT = "other thing";
            public static final int USERNAME_MIN = 5;
            public static final int USERNAME_MAX = 16;
            // etc.
        }

    The class is big and contains LOTS of such variables. My question: doesn't declaring static variables mean these variables are floating in memory all the time? Do you suggest that instead of having an abstract class, I have an instantiable class with non-static variables (just public final), so that I instantiate the class and use the variables only when I need them? Or is what I am doing completely the wrong approach, and would you suggest something else?
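    For scale: a handful of static final primitives and Strings live once per loaded class, not per instance or per use, and compile-time constants are inlined into call sites anyway, so the memory cost is negligible either way. A sketch of the conventional non-abstract holder shape, with illustrative values and a name that is not from the original class:

        // Hypothetical constants holder: final and non-instantiable instead of abstract.
        // The fields cost a few bytes once per loaded class; instantiating a holder per
        // use would add allocations rather than save memory.
        public final class AppRules {
            private AppRules() {}                     // nothing to construct or garbage-collect

            public static final boolean IS_DEV = true;
            public static final String CLOCK_SHORT_TIME_FORMAT = "HH:mm";   // illustrative value
            public static final int USERNAME_MIN = 5;
            public static final int USERNAME_MAX = 16;
        }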

    Read the article

  • Add fields to Django ModelForm that aren't in the model

    - by Cyclic
    I have a model that looks like:

        class MySchedule(models.Model):
            start_datetime = models.DateTimeField()
            name = models.CharField('Name', max_length=75)

    With it comes its ModelForm:

        class MyScheduleForm(forms.ModelForm):
            startdate = forms.DateField()
            starthour = forms.ChoiceField(choices=((6,"6am"),(7,"7am"),(8,"8am"),(9,"9am"),(10,"10am"),(11,"11am"),
                                                   (12,"noon"),(13,"1pm"),(14,"2pm"),(15,"3pm"),(16,"4pm"),(17,"5pm"),
                                                   (18,"6pm"),(19,"7pm"),(20,"8pm"),(21,"9pm"),(22,"10pm"),(23,"11pm")))
            startminute = forms.ChoiceField(choices=((0,":00"),(15,":15"),(30,":30"),(45,":45")))

            class Meta:
                model = MySchedule

            def clean(self):
                starttime = time(int(self.cleaned_data.get('starthour')), int(self.cleaned_data.get('startminute')))
                try:
                    self.instance.start_datetime = datetime.combine(self.cleaned_data.get("startdate"), starttime)
                except TypeError:
                    raise forms.ValidationError("There's a problem with your start or end date")
                return self.cleaned_data

    Basically, I'm trying to break the DateTime field in the model into 3 more easily usable form fields -- a date picker, an hour dropdown, and a minute dropdown. Then, once I've gotten the three inputs, I reassemble them into a DateTime and save it to the model. A few questions:

    1) Is this totally the wrong way to go about doing it? I don't want to create fields in the model for hours, minutes, etc., since that's all basically just intermediary data, so I'd like a way to break the DateTime field into sub-fields.

    2) The difficulty I'm running into is when the startdate field is blank -- it seems like it never gets checked for non-blankness, and just ends up throwing a TypeError later when the program expects a date and gets None. Where does Django check for blank inputs and raise the error that eventually goes back to the form? Is this my responsibility? If so, how do I do it, since it doesn't evaluate clean_startdate() because startdate isn't in the model?

    3) Is there some better way to do this with inheritance? Perhaps inherit MyScheduleForm in a BetterScheduleForm and add the fields there? How would I do this? (I've been playing around with it for over an hour and can't seem to get it.)

    Thanks!

    [Edit:] Left off the return self.cleaned_data in clean() -- lost it in the copy/paste originally.

    Read the article

  • Core data relationship memory leak

    - by cfihelp
    I have a strange (to me) memory leak when accessing an entity in a relationship. Series and Tiles have an inverse relationship to each other. // set up the fetch request NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Series" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // grab all of the series in the core data store NSError *error = nil; availableSeries = [[NSArray alloc] initWithArray:[managedObjectContext executeFetchRequest:fetchRequest error:&error]]; [fetchRequest release]; // grab one of the series Series *currentSeries = [availableSeries objectAtIndex:1]; // load all of the tiles attached to the series through the relationship NSArray *myTiles = [currentSeries.tile allObjects]; // 16 byte leak here! Instruments reports back that the final line has a 16 byte leak cause by NSPlaceHolderString. Stack trace: 2 UIKit UIApplicationMain 3 UIKit -[UIApplication _run] 4 CoreFoundation CFRunLoopRunInMode 5 CoreFoundation CFRunLoopRunSpecific 6 GraphicsServices PurpleEventCallback 7 UIKit _UIApplicationHandleEvent 8 UIKit -[UIApplication sendEvent:] 9 UIKit -[UIApplication handleEvent:withNewEvent:] 10 UIKit -[UIApplication _runWithURL:sourceBundleID:] 11 UIKit -[UIApplication _performInitializationWithURL:sourceBundleID:] 12 Memory -[AppDelegate_Phone application:didFinishLaunchingWithOptions:] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/AppDelegate_Phone.m:49 13 UIKit -[UIViewController view] 14 Memory -[HomeScreenController_Phone viewDidLoad] /Users/cfish/svnrepo/Memory/src/Memory/iPhone/HomeScreenController_Phone.m:58 15 CoreData -[_NSFaultingMutableSet allObjects] 16 CoreData -[_NSFaultingMutableSet willRead] 17 CoreData -[NSFaultHandler retainedFulfillAggregateFaultForObject:andRelationship:withContext:] 18 CoreData -[NSSQLCore retainedRelationshipDataWithSourceID:forRelationship:withContext:] 19 CoreData -[NSSQLCore newFetchedPKsForSourceID:andRelationship:] 20 CoreData -[NSSQLCore rawSQLTextForToManyFaultStatement:stripBindVariables:swapEKPK:] 21 Foundation +[NSString stringWithFormat:] 22 Foundation -[NSPlaceholderString initWithFormat:locale:arguments:] 23 CoreFoundation _CFStringCreateWithFormatAndArgumentsAux 24 CoreFoundation _CFStringAppendFormatAndArgumentsAux 25 Foundation _NSDescriptionWithLocaleFunc 26 CoreFoundation -[NSObject respondsToSelector:] 27 libobjc.A.dylib class_respondsToSelector 28 libobjc.A.dylib lookUpMethod 29 libobjc.A.dylib _cache_addForwardEntry 30 libobjc.A.dylib _malloc_internal I think I'm missing something obvious but I can't quite figure out what. Thanks for your help! Update: I've copied the offending chunk of code to the first part of applicationDidFinishLaunching and it still leaks. Could there be something wrong with my model?

    Read the article

  • How to simulate inner join on very large files in java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }
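    For reference, a compact sketch of the plain sort-merge inner join that the buffered code above elaborates. The types and names here are simplified stand-ins, not the original Record/Joiner classes; only the right-hand rows sharing the current key are held in memory, whereas the version above additionally spills that group to temp files when it grows too large:

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        public class MergeJoinSketch {
            static final class Row {
                final int key; final String data;
                Row(int key, String data) { this.key = key; this.data = data; }
            }

            // Both iterators must be sorted by key. Equal keys produce the full
            // Cartesian product, as in the question's example.
            static void innerJoin(Iterator<Row> left, Iterator<Row> right, List<String> out) {
                Row l = next(left), r = next(right);
                while (l != null && r != null) {
                    if (l.key < r.key)      { l = next(left); }
                    else if (l.key > r.key) { r = next(right); }
                    else {
                        int key = l.key;
                        List<Row> group = new ArrayList<Row>();   // right rows sharing this key
                        while (r != null && r.key == key) { group.add(r); r = next(right); }
                        while (l != null && l.key == key) {
                            for (Row g : group) { out.add(l.data + "|" + g.data); }
                            l = next(left);
                        }
                    }
                }
            }

            static Row next(Iterator<Row> it) { return it.hasNext() ? it.next() : null; }
        }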

    Read the article

  • Custom DateTime model binder in Asp.net MVC

    - by Robert Koritnik
    I would like to write my own model binder for the DateTime type. First of all, I'd like to write a new attribute that I can attach to my model property, like:

        [DateTimeFormat("d.M.yyyy")]
        public DateTime Birth { get; set; }

    This is the easy part. But the binder part is a bit more difficult. I would like to add a new model binder for the type DateTime. I can either implement the IModelBinder interface and write my own BindModel() method, or inherit from DefaultModelBinder and override its BindModel() method. My model has a property as seen above (Birth). So when the binder tries to bind request data to this property, my model binder's BindModel(controllerContext, bindingContext) gets invoked. Everything is OK, but: how do I get the property's attributes from the controllerContext/bindingContext, to parse my date correctly? How can I get to the PropertyDescriptor of the property Birth?

    Edit: Because of separation of concerns, my model class is defined in an assembly that doesn't (and shouldn't) reference the System.Web.MVC assembly. Setting custom binding attributes (similar to Scott Hanselman's example) is a no-go here.

    Read the article

  • Explaining the forecasts from an ARIMA model

    - by Samik R.
    I am trying to explain to myself the forecasting result from applying an ARIMA model to a time-series dataset. The data is from the M1-Competition; the series is MNB65. For quick reference, I have a Google Docs spreadsheet with the data. I am trying to fit the data to an ARIMA(1,0,0) model and get the forecasts. I am using R. Here are some output snippets:

        > arima(x, order = c(1,0,0))
        Series: x
        ARIMA(1,0,0) with non-zero mean
        Call: arima(x = x, order = c(1, 0, 0))
        Coefficients:
                 ar1  intercept
              0.9421  12260.298
        s.e.  0.0474    202.717

        > predict(arima(x, order = c(1,0,0)), n.ahead=12)
        $pred
        Time Series:
        Start = 53
        End = 64
        Frequency = 1
        [1] 11757.39 11786.50 11813.92 11839.75 11864.09 11887.02 11908.62 11928.97 11948.15 11966.21 11983.23 11999.27

    I have a few questions:

    (1) How do I explain that, although the dataset shows a clear downward trend, the forecast from this model trends upward? This also happens for ARIMA(2,0,0), which is the best ARIMA fit for the data using auto.arima (forecast package), and for an ARIMA(1,0,1) model.

    (2) The intercept value for the ARIMA(1,0,0) model is 12260.298. Shouldn't the intercept satisfy the equation C = mean * (1 - sum(AR coeffs)), in which case the value should be 715.52? I must be missing something basic here.

    (3) This is clearly a series with a non-stationary mean. Why is an AR(2) model still selected as the best model by auto.arima? Could there be an intuitive explanation?

    Thanks.
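    Regarding question (2): R's arima() labels the estimated series mean as "intercept" when a mean term is included, which reconciles the numbers. For reference, the standard AR(1) identity written out (not output from this dataset):

        % AR(1) in the mean-centred form R fits, and the implied regression-form constant
        \[
          x_t - \mu = \phi_1 (x_{t-1} - \mu) + \varepsilon_t
          \quad\Longleftrightarrow\quad
          x_t = c + \phi_1 x_{t-1} + \varepsilon_t,
          \qquad c = \mu (1 - \phi_1).
        \]
        % With the printed estimates \hat{\mu} = 12260.298 (labelled "intercept") and
        % \hat{\phi}_1 = 0.9421, the regression-form constant c is roughly
        % 12260.298 * (1 - 0.9421), i.e. about 710 with the rounded coefficients shown;
        % the 12260.298 itself is the mean, not c.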

    Read the article
