Search Results

Search found 19301 results on 773 pages for 'hyper v dynamic memory'.

Page 51/773 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem: my concern is the following. I am storing a relatively large dataset in a classical Python list and, in order to process the data, I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item from a Python list costs O(N), since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm. I am hoping to find a solution that is cost effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate?

    Keeping a local index:

        while processingdata:
            index = 0
            while index < len(somelist):
                item = somelist[index]
                dosomestuff(item)
                if somecondition(item):
                    del somelist[index]
                else:
                    index += 1

    This is the original solution I came up with. Not only is it not very elegant, but I am hoping there is a better way to do it that remains time and memory efficient.

    Walking the list backwards:

        while processingdata:
            for i in xrange(len(somelist) - 1, -1, -1):
                item = somelist[i]
                dosomestuff(item)
                if somecondition(somelist, i):
                    somelist.pop(i)

    This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wishes to process the items in the same order as they appear in the original list.

    Making a new list:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            newlist = []
            for item in somelist:
                if somecondition(item):
                    newlist.append(item)
            somelist = newlist
            gc.collect()

    This is a very naive strategy for eliminating elements from a list and requires lots of memory, since an almost full copy of the list must be made.

    Using list comprehensions:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist[:] = [x for x in somelist if somecondition(x)]

    This is very elegant, but under the cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge, and that any solution that iterates through it only once per run will probably always win.

    Using the filter function:

        while processingdata:
            for i, item in enumerate(somelist):
                dosomestuff(item)
            somelist = filter(lambda x: not subtle_condition(x), somelist)

    This also creates a new list occupying lots of RAM.

    Using itertools' filter function:

        from itertools import ifilterfalse
        while processingdata:
            for item in ifilterfalse(somecondition, somelist):
                dosomestuff(item)

    This version of the filter call does not create a new list, but it will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list.

    Moving items up the list while walking:

        while processingdata:
            index = 0
            for item in somelist:
                dosomestuff(item)
                if not somecondition(item):
                    somelist[index] = item
                    index += 1
            del somelist[index:]

    This is a subtle method that seems cost effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.

    Abandoning Python lists:

        class Doubly_Linked_List:
            def __init__(self):
                self.first = None
                self.last = None
                self.n = 0
            def __len__(self):
                return self.n
            def __iter__(self):
                return DLLIter(self)
            def iterator(self):
                return self.__iter__()
            def append(self, x):
                x = DLLElement(x)
                x.next = None
                if self.last is None:
                    x.prev = None
                    self.last = x
                    self.first = x
                    self.n = 1
                else:
                    x.prev = self.last
                    x.prev.next = x
                    self.last = x
                    self.n += 1

        class DLLElement:
            def __init__(self, x):
                self.next = None
                self.data = x
                self.prev = None

        class DLLIter:
            etc...

    This type of object resembles a Python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here, since this would require massive amounts of code refactoring almost everywhere.
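    For what it's worth, the difference between the del-based loop and the single-pass compaction is easy to measure directly. The sketch below is a made-up micro-benchmark (the data set and the keep_even predicate are stand-ins for the real somelist and somecondition):

        import random
        import timeit

        def delete_in_place(items, keep):
            """O(N^2): every del shifts the whole tail of the list down by one."""
            i = 0
            while i < len(items):
                if keep(items[i]):
                    i += 1
                else:
                    del items[i]
            return items

        def compact_in_place(items, keep):
            """O(N): overwrite kept items at the front, then truncate once at the end."""
            write = 0
            for item in items:
                if keep(item):
                    items[write] = item
                    write += 1
            del items[write:]
            return items

        keep_even = lambda x: x % 2 == 0                      # stand-in for somecondition
        data = [random.randrange(100) for _ in range(20000)]  # stand-in for somelist

        for fn in (delete_in_place, compact_in_place):
            t = timeit.timeit(lambda: fn(list(data), keep_even), number=5)
            print("%s: %.3f s" % (fn.__name__, t))

    On CPython the compaction version typically wins by a wide margin once the list gets large, and it reuses the existing list object rather than building a copy.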

    Read the article

  • How do I get the value of a DynamicControl?

    - by Telos
    I'm using ASP.NET Dynamic Data functionality to do something a little weird. Namely, create a dynamic list of fields as children of the main object. So basically I have Ticket.Fields. The main page lists all the fields for Ticket, and the Fields property has a DynamicControl that generates a list of controls to collect more data. The tricky part is that this list ALSO uses Dynamic Data to generate the controls, so each field can be any of the defined FieldTemplates... meaning I don't necessarily know what the actual data control will be when I try to get the value. So, how do I get the value of a DynamicControl? Do I need to create a new subclass of FieldTemplate that provides a means to get at the value?

    Read the article

  • Why calling ISet<dynamic>.Contains() compiles, but throws an exception at runtime?

    - by Andrey Breslav
    Please help me explain the following behavior:

        dynamic d = 1;
        ISet<dynamic> s = new HashSet<dynamic>();
        s.Contains(d);

    The code compiles with no errors/warnings, but at the last line I get the following exception:

        Unhandled Exception: Microsoft.CSharp.RuntimeBinder.RuntimeBinderException:
        'System.Collections.Generic.ISet<object>' does not contain a definition for 'Contains'
           at CallSite.Target(Closure , CallSite , ISet`1 , Object )
           at System.Dynamic.UpdateDelegates.UpdateAndExecuteVoid2[T0,T1](CallSite site, T0 arg0, T1 arg1)
           at FormulaToSimulation.Program.Main(String[] args) in ...

    As far as I can tell, this is related to dynamic overload resolution, but the strange things are: (1) if the type of s is HashSet<dynamic>, no exception occurs; (2) if I use a non-generic interface with a method accepting a dynamic argument, no exception occurs. Thus, it looks like this problem is related particularly to generic interfaces, but I could not find out what exactly causes it. Is it a bug in the compiler/type system, or legitimate behavior?
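    Two workarounds follow from the question's own observations (a hedged sketch, not an explanation of the root cause): giving the argument a static type avoids dynamic binding entirely, and binding against the concrete HashSet<dynamic> sidesteps the interface lookup that fails.

        using System;
        using System.Collections.Generic;

        class Program
        {
            static void Main()
            {
                dynamic d = 1;
                ISet<dynamic> s = new HashSet<dynamic>();

                // Statically typed argument: the call binds at compile time,
                // so the runtime binder never has to find Contains on ISet<object>.
                bool a = s.Contains((object)d);

                // Concrete receiver type: per observation (1) in the question,
                // binding against HashSet<dynamic> does not throw.
                bool b = ((HashSet<dynamic>)s).Contains(d);

                Console.WriteLine(a + " " + b);
            }
        }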

    Read the article

  • Basic C question, concerning memory allocation and value assignment

    - by VHristov
    Hi there, I have recently started working on my master's thesis in C, a language I haven't used in quite a long time. Being used to Java, I'm now facing all kinds of problems all the time. I hope someone can help me with the following one, since I've been struggling with it for the past two days. I have a really basic model of a database: tables, tuples, attributes, and I'm trying to load some data into this structure. These are the definitions:

        typedef struct attribute {
            int type;
            char * name;
            void * value;
        } attribute;

        typedef struct tuple {
            int tuple_id;
            int attribute_count;
            attribute * attributes;
        } tuple;

        typedef struct table {
            char * name;
            int row_count;
            tuple * tuples;
        } table;

    Data is coming from a file with inserts (generated for the Wisconsin benchmark), which I'm parsing. I have only integer or string values. A sample row looks like:

        insert into table values (9205, 541, 1, 1, 5, 5, 5, 5, 0, 1, 9205, 10, 11, 'HHHHHHH', 'HHHHHHH', 'HHHHHHH');

    I've "managed" to load and parse the data and also to assign it. However, the assignment bit is buggy, since all values point to the same memory location, i.e. all rows look identical after I've loaded the data. Here is what I do:

        char value[10]; // assuming no value is longer than 10 chars
        int i, j, k;
        table * data = (table*) malloc(sizeof(data));
        data->name = "table";
        data->row_count = number_of_lines;
        data->tuples = (tuple*) malloc(number_of_lines*sizeof(tuple));
        tuple* current_tuple;
        for(i=0; i<number_of_lines; i++) {
            current_tuple = &data->tuples[i];
            current_tuple->tuple_id = i;
            current_tuple->attribute_count = 16; // static in our system
            current_tuple->attributes = (attribute*) malloc(16*sizeof(attribute));
            for(k = 0; k < 16; k++) {
                current_tuple->attributes[k].name = attribute_names[k];
                // for int values:
                current_tuple->attributes[k].type = DB_ATT_TYPE_INT;
                // write data into value-field
                int v = atoi(value);
                current_tuple->attributes[k].value = &v;
                // for string values:
                current_tuple->attributes[k].type = DB_ATT_TYPE_STRING;
                current_tuple->attributes[k].value = value;
            }
            // ...
        }

    While I am perfectly aware of why this is not working, I can't figure out how to make it work. I've tried the following things, none of which worked:

        memcpy(current_tuple->attributes[k].value, &v, sizeof(int));

    This results in a bad access error. Same for the following (since I'm not quite sure which one would be the correct usage):

        memcpy(current_tuple->attributes[k].value, &v, 1);

    I'm not even sure memcpy is what I need here. I've also tried allocating memory, by doing something like:

        current_tuple->attributes[k].value = (int *) malloc(sizeof(int));

    only to get "malloc: * error for object 0x100108e98: incorrect checksum for freed object - object was probably modified after being freed." As far as I understand this error, memory has already been allocated for this object, but I don't see where that happened. Doesn't malloc(sizeof(attribute)) only allocate the memory needed to store an integer and two pointers (i.e. not the memory those pointers point to)? Any help would be greatly appreciated! Regards, Vassil
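    For reference, a minimal self-contained sketch of the two fixes usually suggested for this pattern: allocate sizeof(*data) rather than sizeof(data) (the latter is only the size of a pointer, which is the likely source of the heap-corruption message), and give every value its own storage instead of pointing all attributes at the same stack buffer. The attribute names and the parsing step are made-up stand-ins:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define DB_ATT_TYPE_INT    1
        #define DB_ATT_TYPE_STRING 2

        typedef struct attribute { int type; char *name; void *value; } attribute;
        typedef struct tuple { int tuple_id; int attribute_count; attribute *attributes; } tuple;
        typedef struct table { char *name; int row_count; tuple *tuples; } table;

        int main(void)
        {
            int number_of_lines = 2;
            char value[10];                              /* shared parse buffer, reused per field */

            table *data = malloc(sizeof(*data));         /* NOT sizeof(data): that is a pointer's size */
            data->name = "table";
            data->row_count = number_of_lines;
            data->tuples = malloc(number_of_lines * sizeof(tuple));

            for (int i = 0; i < number_of_lines; i++) {
                tuple *t = &data->tuples[i];
                t->tuple_id = i;
                t->attribute_count = 2;                  /* 16 in the real system */
                t->attributes = malloc(t->attribute_count * sizeof(attribute));

                snprintf(value, sizeof(value), "%d", 9205 + i);   /* stand-in for the parser */

                /* integer attribute: each value gets its own heap storage */
                int *iv = malloc(sizeof(*iv));
                *iv = atoi(value);
                t->attributes[0].type  = DB_ATT_TYPE_INT;
                t->attributes[0].name  = "unique1";      /* stand-in attribute name */
                t->attributes[0].value = iv;

                /* string attribute: copy the characters out of the shared buffer */
                t->attributes[1].type  = DB_ATT_TYPE_STRING;
                t->attributes[1].name  = "stringu1";     /* stand-in attribute name */
                t->attributes[1].value = strdup(value);  /* or malloc(strlen(value)+1) + strcpy */
            }

            /* both rows now keep distinct values */
            printf("%d %s\n", *(int *)data->tuples[1].attributes[0].value,
                              (char *)data->tuples[1].attributes[1].value);
            return 0;
        }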

    Read the article

  • Are Dynamic Prepared Statements Bad? (with php + mysqli)

    - by John
    I like the flexibility of dynamic SQL and I like the security + improved performance of prepared statements. So what I really want is dynamic prepared statements, which is troublesome to build because bind_param and bind_result accept a "fixed" number of arguments. So I made use of an eval() statement to get around this problem, but I get the feeling this is a bad idea. Here's example code of what I mean:

        // array of WHERE conditions
        $param = array('customer_id'=>1, 'qty'=>'2');

        $stmt = $mysqli->stmt_init();
        $types = '';
        $bindParam = array();
        $where = '';
        $count = 0;

        // build the dynamic sql and param bind conditions
        foreach($param as $key=>$val) {
            $types .= 'i';
            $bindParam[] = '$p'.$count.'=$param["'.$key.'"]';
            $where .= "$key = ? AND ";
            $count++;
        }

        // prepare the query -- SELECT * FROM t1 WHERE customer_id = ? AND qty = ?
        $sql = "SELECT * FROM t1 WHERE ".substr($where, 0, strlen($where)-4);
        $stmt->prepare($sql);

        // assemble the bind_param command
        $command = '$stmt->bind_param($types, '.implode(', ', $bindParam).');';

        // evaluate the command -- $stmt->bind_param($types, $p0=$param["customer_id"], $p1=$param["qty"]);
        eval($command);

    Is that last eval() statement a bad idea? I tried to avoid code injection by encapsulating values behind the variable name $param. Does anyone have an opinion or other suggestions? Are there issues I need to be aware of?
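    A common eval()-free alternative is to pass the bind arguments through call_user_func_array(), which works because mysqli_stmt::bind_param takes its values by reference. A sketch, assuming $mysqli is the connection from the question and that the keys of $param are trusted or whitelisted column names (they are still interpolated into the SQL):

        // array of WHERE conditions -- keys must be whitelisted column names
        $param = array('customer_id' => 1, 'qty' => 2);

        $types = str_repeat('i', count($param));
        $where = implode(' = ? AND ', array_keys($param)) . ' = ?';

        $stmt = $mysqli->prepare("SELECT * FROM t1 WHERE $where");

        // bind_param() needs references, so build an array of references to the values
        $args = array($types);
        foreach ($param as $key => $value) {
            $args[] = &$param[$key];
        }
        call_user_func_array(array($stmt, 'bind_param'), $args);

        $stmt->execute();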

    Read the article

  • Aligning music notes using String matching algorithms or Dynamic Programming

    - by Dolphin
    Hi, I need to compare two sets of musical pieces: a performance (taken in MIDI format, with note details extracted and saved in a database table) against sheet music (converted to XML format). When evaluating the performance against the sheet music (i.e. note details: pitch, duration, rhythm), note alignment needs to be done in order to identify missed, extra, incorrect or swapped notes relative to the reference (sheet music) notes. I have roughly 1800-2500 notes in one piece (it can be even more with polyphonic music; right now I'm working with monophonic). So will I have to load all of these into an array? Will that overload memory or cause a stack overflow? There are string matching algorithms like KMP and Boyer-Moore, but note alignment can also be done through dynamic programming. How can I use dynamic programming to approach this? What are the available algorithms? Is it about approximate string matching? Which approach is more productive: string matching algorithms like Boyer-Moore, or dynamic programming? How can I assess which is more effective? I would greatly appreciate any insight or suggestions. Thanks in advance
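    As a concrete starting point, global alignment (Needleman-Wunsch, essentially edit distance with traceback) maps directly onto this problem: gaps in the played sequence are missed notes, gaps in the reference are extra notes, and mismatches are incorrect notes. A sketch in Python, with notes represented as whatever hashable form is extracted (e.g. (pitch, duration) tuples); for roughly 2500 notes the score table has about six million cells, which is manageable on a desktop machine, and Hirschberg's variant reduces the space to linear if memory becomes a concern:

        def align(reference, played, gap=-1, mismatch=-1, match=2):
            """Global alignment (Needleman-Wunsch) of two note sequences.

            Returns a list of (ref_note, played_note) pairs where None marks a
            missed note (gap in 'played') or an extra note (gap in 'reference').
            """
            n, m = len(reference), len(played)
            # score[i][j] = best score aligning reference[:i] with played[:j]
            score = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(1, n + 1):
                score[i][0] = i * gap
            for j in range(1, m + 1):
                score[0][j] = j * gap
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    step = match if reference[i - 1] == played[j - 1] else mismatch
                    score[i][j] = max(score[i - 1][j - 1] + step,
                                      score[i - 1][j] + gap,
                                      score[i][j - 1] + gap)

            # Trace back from the bottom-right corner to recover the alignment.
            pairs, i, j = [], n, m
            while i > 0 or j > 0:
                step = match if i > 0 and j > 0 and reference[i - 1] == played[j - 1] else mismatch
                if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + step:
                    pairs.append((reference[i - 1], played[j - 1]))   # match or incorrect note
                    i, j = i - 1, j - 1
                elif i > 0 and score[i][j] == score[i - 1][j] + gap:
                    pairs.append((reference[i - 1], None))            # missed note
                    i -= 1
                else:
                    pairs.append((None, played[j - 1]))               # extra note
                    j -= 1
            return list(reversed(pairs))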

    Read the article

  • Hyper-V Live Migration across Sites!

    - by Ryan Roussel
    One of the great sessions I sat in on at Tech Ed this week was stretching a Windows 2008 R2 Hyper-V Failover Cluster across sites. With this ability, you could actually implement a Hyper-V cluster where you could migrate or even Live Migrate VMs across sites. With this area's propensity for hurricanes, this will be a very popular topic for me over the next few months. While this technology is possible today, it's also very complicated and can be very expensive to implement.

    First, your WAN connection has to support the ability to trunk your VLAN across both sites in order to Live Migrate. This means you can't use a Layer 3 routed connection like MPLS. It has to be a Metro Ethernet connection or "dark fiber", which is unused fiber already in the ground that can be leased from various providers. Both of these connections would allow you to trunk layer 2 across your WAN. Cisco does have the ability to trunk layer 2 across a routed connection by muxing the traffic, but this is only available in their Nexus product line, which has a very steep price tag. If you are stuck with MPLS or the like and Nexus switching is not a realistic possibility, you will have to implement a multi-subnet cluster, in which case Live Migration won't be possible. However, you can still fail over VMs to the remote site with some planning and manual intervention. The consideration here is that the VMs will be on a different subnet once migrated, so you will have to change the IP addressing of your VMs. This also has ramifications for DNS and name resolution to control your downtime. DHCP with reservations for your VMs is the preferred method to achieve the IP changes, as this automates that part of the process.

    Secondly, you will have to have a mechanism to replicate your storage across both sites. Many SAN vendors natively support hardware-based synchronous and asynchronous replication. Some even support Cluster Shared Volumes, which were introduced in 2008 R2. If your SANs do not support this natively, there are alternative file-based replication products, either software-based like Double Take or hardware appliances like EMC's. Be sure to check with your vendor on the support of Disk Majority if you're replicating your quorum disk between SANs.

    The last consideration is the ability to maintain quorum for your cluster. If your replication provider does not support Disk Majority through replication, you will have to explore Node Majority with File Share Witness. This will affect your design, as a 3-node cluster with 1 node at the remote site and the FSW at the production site would not be able to maintain quorum if the production site were lost. MS best practice would be to implement an even-node cluster with 2 nodes at each site and the FSW at a third site.

    And there you have it. While some considerations and research go into implementing this solution, even a multi-subnet solution would be invaluable to organizations in the implementation of "warm" DR sites.

    Read the article

  • pushViewController causes memory leak

    - by hookjd
    The Leaks application tells me that the following function is causing a memory leak and I can't figure out why.

        -(void)viewGameList {
            GameListController *gameListViewController = [[GameListController alloc] initWithNibName:@"GameListController" bundle:nil];
            gameListViewController.rootController = self;
            [self.navigationController pushViewController:gameListViewController animated:YES];
            [gameListViewController release];
        }

    It tells me that this line causes a 128 byte memory leak:

        [self.navigationController pushViewController:gameListViewController animated:YES];

    Am I missing something obvious?
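    One thing worth ruling out (a guess, since the property declaration isn't shown): if rootController is declared as a retain property, the two controllers retain each other and neither is ever deallocated, which Leaks can attribute to the push. A back-pointer like this is normally declared non-retaining in pre-ARC code:

        // GameListController.h -- a back-reference to the presenting controller
        // should not retain it, or the two controllers keep each other alive.
        @property (nonatomic, assign) UIViewController *rootController;

    If the property is already assign, small one-off allocations inside UIKit are sometimes reported against the first push and are usually harmless.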

    Read the article

  • Reed-Solomon encoder for embedded application (memory-efficient)

    - by bjarkef
    Hi, I am looking for a very memory-efficient implementation (max. 500 bytes of memory for lookup tables etc.) of a Reed-Solomon encoder for use in an embedded application. I am interested in coding blocks of 10 bytes with 5 bytes of parity. Speed is of little importance. Do you know of any freely available implementations that I can use for this purpose? Thanks in advance.

    Read the article

  • Linux Shared Memory

    - by Betamoo
    The function which creates shared memory in *nix programming takes a key as one of its parameters. What is the meaning of this key, and how can I use it? Edit: I do not mean the shared memory id.
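    In short, the key is just an integer that unrelated processes agree on so that shmget() can hand them the same segment; ftok() is the usual way to derive one from an existing file path plus a project id. A minimal System V sketch (the path /tmp/myapp.key is an arbitrary example and must already exist):

        #include <stdio.h>
        #include <sys/ipc.h>
        #include <sys/shm.h>

        int main(void)
        {
            /* ftok() turns a file path + project id into a key that every
               cooperating process can recompute independently. */
            key_t key = ftok("/tmp/myapp.key", 'A');
            if (key == (key_t)-1) { perror("ftok"); return 1; }

            int shmid = shmget(key, 4096, IPC_CREAT | 0666);  /* create-or-open 4 KB */
            if (shmid == -1) { perror("shmget"); return 1; }

            char *mem = shmat(shmid, NULL, 0);                /* attach to this process */
            if (mem == (void *)-1) { perror("shmat"); return 1; }

            /* ...use mem...; another process calling ftok() with the same path
               and id gets the same key, hence the same segment. */
            shmdt(mem);
            return 0;
        }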

    Read the article

  • UIImagePickerController Memory Leak

    - by Watson
    I am seeing a huge memory leak when using UIImagePickerController in my iPhone app. I am using standard code from the Apple documents to implement the control:

        UIImagePickerController* imagePickerController = [[UIImagePickerController alloc] init];
        imagePickerController.delegate = self;

        if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
            switch (buttonIndex) {
                case 0:
                    imagePickerController.sourceType = UIImagePickerControllerSourceTypeCamera;
                    [self presentModalViewController:imagePickerController animated:YES];
                    break;
                case 1:
                    imagePickerController.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
                    [self presentModalViewController:imagePickerController animated:YES];
                    break;
                default:
                    break;
            }
        }

    And for the cancel:

        -(void) imagePickerControllerDidCancel:(UIImagePickerController *)picker {
            [[picker parentViewController] dismissModalViewControllerAnimated: YES];
            [picker release];
        }

    The didFinishPickingMediaWithInfo callback is just as standard, although I do not even have to pick anything to cause the leak. When all I do is open the UIImagePickerController, pick the photo library, and press cancel, repeatedly, Instruments shows the memory growing steadily, and eventually this causes my iPhone app to slow down tremendously. I opened the image picker 24 times, and each time it malloc'd 128kb which was never released. Basically 3mb out of my total 6mb is never released. This memory stays leaked no matter what I do. Even after navigating away from the current controller, it remains the same. I have also implemented the picker control as a singleton with the same results. Any help here would be greatly appreciated! Again, I do not even have to choose an image. All I do is present the controller and press cancel.

    Update 1: I downloaded and ran Apple's example of using the UIImagePickerController and I see the same leak happening there when running Instruments (both in the simulator and on the phone): http://developer.apple.com/library/ios/#samplecode/PhotoPicker/Introduction/Intro.html%23//apple_ref/doc/uid/DTS40010196 All you have to do is hit the photo library button and hit cancel over and over; you'll see the memory keep growing. Any ideas?

    Update 2: I only see this problem when viewing the photo library. I can choose take photo, and open and close that one over and over, without a leak.

    Read the article

  • java memory management

    - by pavlos
    I have the following code snippet:

        public void getResults( String expression, String collection ){
            ReferenceList list;
            Vector lists = new Vector();
            list = invertedIndex.get( 1 ); // invertedIndex is a class member
            lists.add(list);
        }

    When the method is finished, I suppose that the local objects (list, lists) are "destroyed". Can you tell me whether the memory occupied by the list stored in invertedIndex is released as well? Or does Java allocate new memory for list when assigning list = invertedIndex.get( 1 )?
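    A small self-contained illustration of what happens (assuming invertedIndex is an ordinary collection such as an ArrayList): get() returns a reference to the element that is already stored, so no new memory is allocated for the assignment, and the element is not freed when the locals go out of scope because the index still refers to it.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Vector;

        public class AliasDemo {
            static List<int[]> invertedIndex = new ArrayList<int[]>();

            public static void main(String[] args) {
                invertedIndex.add(new int[] { 1, 2, 3 });

                // get(0) hands back a reference to the stored element; nothing is copied.
                int[] list = invertedIndex.get(0);
                Vector<int[]> lists = new Vector<int[]>();
                lists.add(list);

                // The locals "list" and "lists" become unreachable when they go out of
                // scope, but the int[] itself stays alive: invertedIndex still references it.
                System.out.println(list == invertedIndex.get(0));   // true: same object
            }
        }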

    Read the article

  • Can immutable be a memory hog?

    - by ciscoheat
    Let's say we have a memory-intensive class like an Image, with chainable methods like Resize() and ConvertTo(). If this class is immutable, won't it take a huge amount of memory when I start doing things like i.Resize(500, 800).Rotate(90).ConvertTo(Gif), compared to a mutable one which modifies itself? How to handle a situation like this in a functional language?

    Read the article

  • Reopen error in bdb 4.7 memory

    - by user207634
    I create a memory pool with the flag DB_CREATE after opening a Db_Env with the flags DB_CREATE | DB_INIT_MPOOL | DB_SYSTEM_MEM. When I run the program the first time it works and creates some db files such as _db.001, _db.002 and mpool, but after I close the program and run it again, it raises an error saying the system cannot find the specified file ("Mpool: PANIC: No such file or directory"). If I delete all files in the memory pool folder and run it again, it works. How can I fix this problem?

    Read the article

  • Memory Mapped Files .NET

    - by CSharpAtl
    I have a project that needs to access a large amount of proprietary data in ASP.NET. On Linux/PHP this was done by loading the data into shared memory. I was wondering whether Memory Mapped Files would be the way to go, or if there is a better approach with better .NET support. I was also thinking of using the data cache, but I'm not sure of all the pitfalls around the size of the data being saved in the cache.
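    If the project can target .NET 4.0, memory-mapped files are supported directly in System.IO.MemoryMappedFiles, which makes them a reasonable counterpart to the Linux shared-memory setup. A minimal sketch (the file path and map name are placeholders):

        using System;
        using System.IO;
        using System.IO.MemoryMappedFiles;

        class MmfDemo
        {
            static void Main()
            {
                // Map an existing data file into the process; other processes on the
                // same machine can open the same mapping by name instead of loading
                // their own copy of the data.
                using (var mmf = MemoryMappedFile.CreateFromFile(
                           @"C:\data\large.bin", FileMode.Open, "SharedData"))
                using (var accessor = mmf.CreateViewAccessor())
                {
                    int firstValue = accessor.ReadInt32(0);   // random access, no full read
                    Console.WriteLine(firstValue);
                }
            }
        }

    For data that changes rarely, this avoids keeping a separate copy per worker process, which is roughly what the shared-memory approach on Linux was achieving.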

    Read the article

  • quick question about memory management in AS3

    - by TheDarkIn1978
    The following method will be called many times. I'm concerned that continuously creating new rectangles will add potentially unnecessary memory consumption, or is the memory used for the previous rectangle released/overwritten to accommodate the new one, since it is assigned to the same variable?

        private function onDrag(evt:MouseEvent):void {
            this.startDrag(false, dragBounds());
        }

        private function dragBounds():Rectangle {
            var stagebounds = new Rectangle(0 - swatchRect.x, 0 - swatchRect.y,
                stage.stageWidth - swatchRect.width, stage.stageHeight - swatchRect.height);
            return stagebounds;
        }
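    Each call does allocate a new Rectangle; the old one becomes eligible for garbage collection once nothing references it, but the collector decides when that actually happens. If the allocation churn is a concern, one option (a sketch; _dragBounds is an invented member name) is to reuse a single instance:

        private var _dragBounds:Rectangle = new Rectangle();

        private function dragBounds():Rectangle {
            // update the same Rectangle in place instead of allocating a new one per drag
            _dragBounds.x = 0 - swatchRect.x;
            _dragBounds.y = 0 - swatchRect.y;
            _dragBounds.width = stage.stageWidth - swatchRect.width;
            _dragBounds.height = stage.stageHeight - swatchRect.height;
            return _dragBounds;
        }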

    Read the article

  • How does memory protection in a SASOS work?

    - by chris
    I'd like to know how it works: does it check whether a process can read/write/execute memory on every access, or does it do so only once? And if it checks only once, with all processes sharing a single address space, how are hostile processes prevented from accessing memory outside their own areas?

    Read the article

  • AS3 Memory Leak Example

    - by Skawful
    Can someone post an example of as3 code (specifically event listener included) that would be a simple example of something that could leak memory... also hopefully could you post a solution to the problem shown? The question is: What is a simple example of leaking memory in an AS3 event listener and how can you solve it?
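    A classic pattern that fits the question (names like Panel are invented for the example): a display object adds a listener to the stage, so the stage holds a strong reference back to it and it can never be collected, even after it has been removed from the display list.

        package {
            import flash.display.Sprite;
            import flash.events.Event;

            public class Panel extends Sprite {
                public function Panel() {
                    addEventListener(Event.ADDED_TO_STAGE, onAdded);
                    addEventListener(Event.REMOVED_FROM_STAGE, onRemoved);
                }

                private function onAdded(e:Event):void {
                    // LEAK (if never removed): the stage now strongly references this
                    // Panel through the listener, so removeChild(panel) alone never
                    // frees it. Registering weakly is one fix:
                    //   stage.addEventListener(Event.ENTER_FRAME, onEnterFrame, false, 0, true);
                    stage.addEventListener(Event.ENTER_FRAME, onEnterFrame);
                }

                private function onRemoved(e:Event):void {
                    // The other fix: remove the listener symmetrically when leaving the stage.
                    stage.removeEventListener(Event.ENTER_FRAME, onEnterFrame);
                }

                private function onEnterFrame(e:Event):void {
                    // animate...
                }
            }
        }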

    Read the article

  • Memory Problem - Release whole ViewController ?

    - by Sebastian
    Hi, I'm using a TabBarController with a few Tabs and I have memory problems when switching through the tabs and the contents. Is there a way to release and dealloc everything when I go to another ViewController ? So when I am in Tab#1 with ViewController #1 and I go to Tab#2 with ViewController #2, how can I free all the memory ViewController #1 took ? Thx ! Sebastian
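    The tab bar controller keeps all of its child view controllers alive by design, so deallocating ViewController #1 completely on every switch isn't really how UIKit works. What can be done (a pre-ARC sketch; _cachedImages and the outlet names are invented) is to let each controller drop whatever it can rebuild when its view is unloaded or a memory warning arrives:

        // ViewController1.m
        - (void)viewDidUnload {
            [super viewDidUnload];
            // drop outlets so the off-screen view hierarchy can actually be freed
            self.headerImageView = nil;
            self.resultsTableView = nil;
        }

        - (void)didReceiveMemoryWarning {
            [super didReceiveMemoryWarning];   // releases the view if it is off-screen
            // release caches and data that viewDidLoad can re-create later
            [_cachedImages release];
            _cachedImages = nil;
        }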

    Read the article

  • modalViewController use very much memory

    - by burki
    Hi! I'm presenting a modalViewController that, of course, uses a certain amount of memory. But when I call the method

        - (void)dismissModalViewControllerAnimated:(BOOL)animated

    it seems that the modalViewController remains in memory. How can I solve this problem? Thanks.

    Read the article

  • iPhone memory leaks with store kit

    - by Nareshkumar
    Hello, I am trying to develop an application which uses storekit api. The document (Store Kit guide) suggests that the api will not work on a simulator. I found out that memory leaks will not be able to work on a device. I was wondering if any one can tell me how to check for memory leaks while using a store kit api on a project? How is it possible?

    Read the article

  • Drawable advantage over bitmap for memory in android

    - by Tabish
    This question is linked with the answers in the following question: Error removing Bitmaps [Android]. Is there any advantage of using Drawable over Bitmap in Android in terms of memory de-allocation? I was looking at Romain Guy's project Shelves, where he uses SoftReference for the image cache, but I'm unable to find the code that de-allocates these Drawables when the SoftReference automatically reclaims the memory for the Bitmap. As far as I know, .recycle() has to be explicitly called on the Bitmap for it to be de-allocated.
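    For context, the SoftReference pattern in Shelves-style caches looks roughly like the sketch below (a simplified, hypothetical cache, not the actual Shelves code): the GC is allowed to clear the reference under memory pressure, which releases the Bitmap without anyone calling recycle(); recycle() just frees the pixel memory (native memory on pre-Honeycomb devices) sooner and more predictably.

        import android.graphics.Bitmap;
        import java.lang.ref.SoftReference;
        import java.util.HashMap;

        public class BitmapCache {
            private final HashMap<String, SoftReference<Bitmap>> cache =
                    new HashMap<String, SoftReference<Bitmap>>();

            public void put(String key, Bitmap bitmap) {
                cache.put(key, new SoftReference<Bitmap>(bitmap));
            }

            public Bitmap get(String key) {
                SoftReference<Bitmap> ref = cache.get(key);
                if (ref == null) return null;
                Bitmap bitmap = ref.get();     // null if the GC already reclaimed it
                if (bitmap == null) cache.remove(key);
                return bitmap;
            }
        }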

    Read the article
