Search Results

Search found 870 results on 35 pages for 'allocation'.

Page 8/35

  • Malloc inside another function (ANSI C)

    - by Casper
    Hi I'll go straight to it. I'm working on an assignment, where I suddenly ran into trouble. I have to allocate a struct from within another function, obviously using pointers. I've been staring at this problem for hours and tried in a million different ways to solve it. This is some sample code (very simplified): ... some_struct s; printf("Before: %d\n", &s); allocate(&s); printf("After: %d\n", &s); ... /* The allocation function */ int allocate(some_struct *arg) { arg = malloc(sizeof(some_struct)); printf("In function: %d\n", &arg); return 0; } This does give me the same address before and after the allocate-call: Before: -1079752900 In function: -1079752928 After: -1079752900 I know it's probably because it makes a copy in the function, but I don't know how to actually work on the pointer I gave as argument. I tried defining some_struct *s instead of some_struct s, but no luck. I tried with: int allocate(some_struct **arg) which works just fine (the allocate-function needs to be changed as well), BUT according to the assignment I may NOT change the declaration, and it HAS to be *arg.. And it would be most correct if I just have to declare some_struct s.. Not some_struct *s. I hope I make sense and some of you out there can help me :P Thanks in advance
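
    A minimal C-style sketch of the two usual ways out (hypothetical names, not the poster's assignment code): either the function receives the address of the caller's pointer, or the caller declares the struct itself and the function only fills it in, in which case no malloc is needed at all.

      #include <stdio.h>
      #include <stdlib.h>

      typedef struct { int value; } some_struct;

      /* Variant 1: pass the address of the caller's pointer (needs **). */
      int allocate_pp(some_struct **arg)
      {
          *arg = (some_struct *) malloc(sizeof(some_struct));
          return (*arg != NULL) ? 0 : -1;
      }

      /* Variant 2: keep the required (some_struct *) signature; the caller
         owns the storage and the function only initializes it. */
      int allocate(some_struct *arg)
      {
          arg->value = 42;              /* works on the caller's struct */
          return 0;
      }

      int main(void)
      {
          some_struct s;                /* declared by value, no malloc */
          allocate(&s);
          printf("value: %d\n", s.value);

          some_struct *p = NULL;
          allocate_pp(&p);              /* p now points to heap memory */
          free(p);
          return 0;
      }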

    Read the article

  • Problems Using memset and memcpy

    - by user306557
    So I am trying to create a Memory Management System. In order to do this I have a set amount of space (allocated by malloc) and then I have a function myMalloc which will essentially return a pointer to the space allocated. Since we will then try and free it, we are trying to set a header of the allocated space to be the size of the allocated space, using memset. memset(memPtr,sizeBytes,sizeof(int)); We then need to be able to read this so we can see the size of it. We are attempting to do this by using memcpy and getting the first sizeof(int) bytes into a variable. For testing purposes we are just trying to do memset and then immediately get the size back. I've included the entire method below so that you can see all declarations. Any help would be greatly appreciated! Thanks! void* FirstFit::memMalloc(int sizeBytes){ node* listPtr = freelist; void* memPtr; // Cycle through each node in freelist while(listPtr != NULL) { if(listPtr->size >= sizeBytes) { // We found our space // This is where the new memory allocation begins memPtr = listPtr->head; memset(memPtr,sizeBytes,sizeof(int)); void *size; memcpy(size, memPtr, sizeof(memPtr)); // Now let's shrink freelist listPtr->size = listPtr->size - sizeBytes; int *temp = (int*)listPtr->head + (sizeBytes*sizeof(int)); listPtr->head = (int*) temp; return memPtr; } listPtr = listPtr->next; }
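
    For what it's worth, memset(ptr, value, n) fills n bytes each with the same byte value, so it cannot store a multi-byte size, and the memcpy in the posted code copies into an uninitialized pointer. A small illustrative sketch (hypothetical helper names, not the poster's allocator) of writing and reading an int-sized header with memcpy:

      #include <cstring>

      // Write the block size into the first sizeof(int) bytes of the block.
      void write_header(void *memPtr, int sizeBytes)
      {
          std::memcpy(memPtr, &sizeBytes, sizeof(int));   // copy the int's bytes
      }

      // Read the size back out of the header.
      int read_header(const void *memPtr)
      {
          int size = 0;
          std::memcpy(&size, memPtr, sizeof(int));        // copy into a real int
          return size;
      }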

    Read the article

  • Recycle Freed Objects

    - by uray
    Suppose I need to allocate and delete objects on the heap frequently (of arbitrary size). Is there any performance benefit if, instead of deleting those objects, I return them to some "pool" to be reused later? Would it help by reducing heap allocation/deallocation, or will it be slower than the memory allocator, since the "pool" needs to manage a dynamic collection of pointers? My use case: suppose I create a queue container based on a linked list, and each node of that list is allocated on the heap, so every call to push() and pop() will allocate and deallocate a node: template <typename T> struct QueueNode { QueueNode<T>* next; T object; } template <typename T> class Queue { void push(T object) { QueueNode<T>* newNode = QueueNodePool<T>::get(); //get recycled node if(!newNode) { newNode = new QueueNode<T>(object); } // push newNode routine here.. } T pop() { //pop routine here... QueueNodePool<T>::store(unusedNode); //recycle node return unusedNode->object; } }
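
    A minimal free-list sketch of the idea (hypothetical, not benchmarked): recycled nodes are chained through their own next pointers, so the pool needs no extra dynamic bookkeeping, and push/pop only hit the heap when the pool is empty.

      template <typename T>
      struct QueueNode {
          QueueNode<T>* next;
          T object;
      };

      template <typename T>
      class QueueNodePool {
      public:
          // Hand back a recycled node if one exists; the caller news one otherwise.
          static QueueNode<T>* get() {
              if (free_head_ == nullptr)
                  return nullptr;
              QueueNode<T>* n = free_head_;
              free_head_ = free_head_->next;      // unlink from the free list
              return n;
          }
          // Recycle an unused node by pushing it onto the free list.
          static void store(QueueNode<T>* n) {
              n->next = free_head_;
              free_head_ = n;
          }
      private:
          static QueueNode<T>* free_head_;
      };

      template <typename T>
      QueueNode<T>* QueueNodePool<T>::free_head_ = nullptr;

    Whether this actually beats the general-purpose allocator depends on the allocator and the allocation pattern, so it is worth profiling both before committing.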

    Read the article

  • Is extending a base class with non-virtual destructor dangerous in C++

    - by Akusete
    Take the following code class A { }; class B : public A { }; class C : public A { int x; }; int main (int argc, char** argv) { A* b = new B(); A* c = new C(); //in both cases, only ~A() is called, not ~B() or ~C() delete b; //is this ok? delete c; //does this line leak memory? return 0; } when calling delete on a class with a non-virtual destructor with member functions (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no member functions, and no explicit destructor behaviour (like class B), is everything ok? I ask this because I wanted to create a class to extend std::string, (which I know is not recommended, but for the sake of the discussion just bear with it), and overload the +=,+ operator. -Weffc++ gives me a warning because std::string has a non virtual destructor, but does it matter if the sub-class has no members and does not need to do anything in its destructor? -- FYI the += overload was to do proper file path formatting, so the path class could be used like class path : public std::string { //... overload, +=, + //... add last_path_component, remove_path_component, ext, etc... }; path foo = "/some/file/path"; foo = foo + "filename.txt"; //and so on... I just wanted to make sure someone doing this path* foo = new path(); std::string* bar = foo; delete bar; would not cause any problems with memory allocation
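
    Deleting a derived object through a base pointer whose destructor is not virtual is undefined behaviour regardless of whether the derived class adds members, so the usual options are a virtual destructor in a base you control, or composition rather than inheritance for classes like std::string. A rough sketch of both (illustrative names only):

      #include <string>

      // Own hierarchy: a virtual destructor in the base makes
      // "delete basePointer" run the derived destructor and free the
      // correct amount of memory.
      class A { public: virtual ~A() {} };
      class B : public A { };
      class C : public A { int x; };

      // Extending std::string: wrapping it sidesteps the question entirely,
      // because nobody can delete a path through a std::string*.
      class path {
      public:
          path(const std::string& s) : value_(s) {}
          path& operator+=(const std::string& component)
          {
              value_ += "/" + component;   // illustrative path join
              return *this;
          }
          const std::string& str() const { return value_; }
      private:
          std::string value_;
      };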

    Read the article

  • Initial capacity of collection types, i.e. Dictionary, List

    - by Neil N
    Certain collection types in .Net have an optional "Initial Capacity" constructor param, i.e. Dictionary<string, string> something = new Dictionary<string,string>(20); List<string> anything = new List<string>(50); I can't seem to find what the default initial capacity is for these objects on MSDN. If I know I will only be storing 12 or so items in a dictionary, doesn't it make sense to set the initial capacity to something like 20? My reasoning is, assuming the capacity grows like it does for a StringBuilder, which doubles each time the capacity is hit, and each re-allocation is costly, why not pre-set the size to something you know will hold your data, with some extra room just in case? If the initial capacity is 100, and I know I will only need a dozen or so, it seems as though the rest of that allocated RAM is allocated for nothing. Please spare me the "premature optimization" spiel for the O(n^n)th time. I know it won't make my apps any faster or save any meaningful amount of memory, this is mostly out of curiosity.

    Read the article

  • Titanium TableViewRow classname with custom rows

    - by pancake
    I would like to know in what way the 'className' property of a Ti.UI.TableViewRow helps when creating custom rows. For example, I populate a tableview with custom rows in the following way: function populateTableView(tableView, data) { var rows = []; var row; var title, image; var i; for (i = 0; i < data.length; i++) { title = Ti.UI.createLabel({ text : data[i].title, width : 100, height: 30, top: 5, left: 25 }); image = Ti.UI.createImage({ image : 'some_image.png', width: 30, height: 30, top: 5, left: 5 }); /* and, like, 5+ more views or whatever */ row = Ti.UI.createTableViewRow(); row.add(titleLabel); row.add(image); rows.push(row); } tableView.setData(rows); } Of course, this example of a "custom" row is easily created using the standard title and image properties of the TableViewRow, but that isn't the point. How is the allocation of new labels, image views and other child views of a table view prevented in favour of their reuse? I know in iOS this is achieved by using the method -[UITableView dequeueReusableCellWithIdentifier:] to fetch a row object from a 'reservoir' (so 'className' is 'identifier' here) that isn't currently being used for displaying data, but already has the needed child views laid out correctly in it, thus only requiring to update the data contained within (text, image data, etc). As this system is so unbelievably simple, I have a lot of trouble believing the method employed by the Titanium API does not support this. After reading through the API and searching the web, I do however suspect this is the case. The 'className' property is recommended as an easy way to make table views more efficient in Titanium, but its relation to custom table view rows is not explained in any way. If anyone could clarify this matter for me, I would be very grateful.

    Read the article

  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find more easily buffer overflows I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. Problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes: since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I would make a 64-bit executable (didn't try it yet). even when I only need a few hundreds of megabytes, the allocations fail at a certain moment. The second problem is the biggest one, and I think it's related to the maximum number of PTE's (page table entries, which store information on how Virtual Memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry-for-tips): Where can I find information about the maximum number of PTE's in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTE's be configured in the application or in Windows? Thanks, Patrick PS. note for those who will try to argument that you shouldn't write your own memory manager: My application is rather specific so I really want full control over memory management (can't give any more details) Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ run time (it only said "block corrupt" minutes after the actual corruption") We also tried standard Windows utilities (like GFLAGS, ...) but they slowed down the application by a factor of 100, and couldn't find the exact position of the overwrite either We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTE's)
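
    Not an answer to the PTE question, but for reference, a common shape for this kind of guard-page debugging allocator on Windows (a rough sketch only: 4 KB page size assumed, alignment ignored, error handling omitted, and real code must remember the base address for VirtualFree):

      #include <windows.h>

      void* debug_alloc(size_t bytes)
      {
          const size_t page = 4096;
          size_t total = ((bytes + page - 1) / page) * page + page; // + guard page

          // Reserve and commit the data pages plus one trailing guard page.
          char* base = (char*) VirtualAlloc(NULL, total,
                                            MEM_RESERVE | MEM_COMMIT,
                                            PAGE_READWRITE);
          if (base == NULL)
              return NULL;

          // Make the last page inaccessible so a write past the block faults.
          DWORD oldProtect;
          VirtualProtect(base + total - page, page, PAGE_NOACCESS, &oldProtect);

          // Place the user block so that its end touches the guard page.
          return base + total - page - bytes;
      }

    Note that every allocation then consumes at least two pages of address space and page-table entries, which is exactly the pressure described above.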

    Read the article

  • HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database

    - by Jane Story
    When working with a Hyperion Profitability and Cost Management (HPCM) Standard Costing application, there can often be a requirement to check data or allocated results using reporting tools, e.g. Smartview. To do this, you are retrieving data directly from the Essbase databases related to your HPCM model. For information, running reports is covered in Chapter 9 of the HPCM User documentation. The aim of this blog is to provide a quick guide to finding this data for reporting in the HPCM generated Essbase database in v11.1.2.2.x of HPCM. In order to retrieve data from an HPCM generated Essbase database, it is important to understand each of the following dimensions in the Essbase database and where data is located within them:
    - Measures dimension – identifies Measures
    - AllocationType dimension – identifies Direct Allocation data or Genealogy Allocation data
    - Point Of View (POV) dimensions – there must be at least one, maximum of four
    - Business dimensions: Stage Business dimensions (these will be identified by the Stage prefix) and Intra-Stage dimensions (these will be identified by the _Intra suffix)
    Essbase outlines and reporting are explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s02.html For additional details on reporting measures, please review this section of the documentation: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/apas03.html Reporting requirements in HPCM quite often start with identifying non-balanced items in the Stage Balancing report. The following documentation link provides help with identifying some of the items within the Stage Balancing report: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/generatestagebalancing.html The following are some types of data upon which you may want to report: Stage Data (Direct Input, Assigned Input Data, Assigned Output Data, Idle Cost/Revenue, Unassigned Cost/Revenue, Over Driven Cost/Revenue), Direct Allocation Data, and Genealogy Allocation Data.
    Stage Data. Stage Data consists of: Direct Input, i.e. input data, the starting point of your allocation, e.g. in Stage 1; Assigned Input Data, i.e. the cost/revenue received from a prior stage (i.e. stage 2 and higher); and Assigned Output Data, i.e. for each stage, the data that will be assigned forward (assigned post stage data). Reporting on this data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s03.html Select the following for each dimension:
    - Measures: Direct Input (CostInput, RevenueInput); Assigned Input from previous stages (CostReceivedPriorStage, RevenueReceivedPriorStage); Assigned Output to subsequent stages (CostAssignedPostStage, RevenueAssignedPostStage)
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Stage Business Dimensions: any members for the stage business dimensions for the stage you wish to see the Stage data for
    - All other Dimensions: NoMember
    Idle/Unassigned/OverDriven. To view Idle, Unassigned or Overdriven Costs/Revenue, first select the stage for which you want to view this data. If multiple Stages have unassigned/idle, resolve the earliest first and re-run the calculation, as differences in early stages will create unassigned/idle in later stages. Select the following for each dimension:
    - Measures: Idle (IdleCost, IdleRevenue); Unassigned (UnAssignedCost, UnAssignedRevenue); Overdriven (OverDrivenCost, OverDrivenRevenue)
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Dimensions in the Stage with Unassigned/Idle/OverDriven cost: all the Stage Business dimensions in the Stage with Unassigned/Idle/Overdriven. Zoom in on each dimension to find the individual members that have Unassigned/Idle/OverDriven data.
    - All other Dimensions: NoMember
    Direct Allocation Data. Direct allocation data shows the data received by a destination intersection from a source intersection where a direct assignment(s) exists. Reporting on direct allocation data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s04.html You would select the following to report direct allocation data:
    - Measures: CostReceivedPriorStage
    - AllocationType: DirectAllocation
    - POV: one member from each POV dimension
    - Stage Business Dimensions: any members for the SOURCE stage business dimensions and the DESTINATION stage business dimensions for the direct allocations for the stage you wish to report on
    - All other Dimensions: NoMember
    Genealogy Allocation Data. Genealogy allocation data shows the indirect data relationships between stages. Genealogy calculations run in the HPCM Reporting database only. Reporting on genealogy data is explained in the documentation here: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_user/ch09s05.html You would select the following to report genealogy data:
    - Measures: CostReceivedPriorStage
    - AllocationType: GenealogyAllocation (IndirectAllocation in 11.1.2.1 and prior versions)
    - POV: one member from each POV dimension
    - Stage Business Dimensions: any stage business dimension members from the STARTING stage, the INTERMEDIATE stage(s), and the ENDING stage in Genealogy
    - All other Dimensions: NoMember
    Notes. If you still don't see data after checking the above, check that the calculation has been run. A couple of indicators that might help with that: note the size of the Essbase cube before and after calculations to ensure that a calculation was run against the database you are examining; export the Essbase data to a text file to confirm that some data exists; examine the date and time on the task area to see when, if any, calculations were run and what choices were used (e.g. Genealogy choices). If data does not exist in places where it is expected, it could be that no calculations/genealogy were run, no calculations were successfully run, or the model/data at the feeder location were either absent or incompatible, resulting in no allocation, e.g. no driver data.
    Smartview Invocation from HPCM. From version 11.1.2.2.350 of HPCM (this version will be GA shortly), it is possible to directly invoke Smartview from HPCM. There is guided navigation before the Smartview invocation and it is then possible to see the selected value(s) in SmartView. Click to Download HPCM 11.1.2.2.x - How to find data in an HPCM Standard Costing database (right-click or option-click the link and choose "Save As..." to download this pdf file)

    Read the article

  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in cygwin Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3. 268439552 is 256MB. Cygwin's maximum memory size is set to 1024MB so I'm guessing that it has a different maximum memory size for perl? How can I increase the maximum memory size that perl programs can use? update: This is where the error occurs (in Git.pm): while (1) { my $bytesLeft = $size - $bytesRead; last unless $bytesLeft; my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024; my $read = read($in, $blob, $bytesToRead, $bytesRead); //line 898 unless (defined($read)) { $self->_close_cat_blob(); throw Error::Simple("in pipe went bad"); } $bytesRead += $read; } I've added a print before line 898 to print out $bytesToRead and $bytesRead and the result was 1024 for $bytesToRead, and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and it has already read 128MB. Perl's 'read' function must be out of memory and is trying to request double its memory size...is there a way to specify how much memory to request? or is that implementation dependent? UPDATE2: While testing memory allocation in cygwin: This C program's output was 1536MB int main() { unsigned int bit=0x40000000, sum=0; char *x; while (bit > 4096) { x = malloc(bit); if (x) sum += bit; bit >>= 1; } printf("%08x bytes (%.1fMb)\n", sum, sum/1024.0/1024.0); return 0; } While this Perl program crashed if the file size was greater than 384MB (but succeeded if the file size was less). open(F, "<400") or die("can't read\n"); $size = -s "400"; $read = read(F, $s, $size); The error is similar Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.

    Read the article

  • Strange behavior with gcc inline assembly

    - by Chris
    When inlining assembly in gcc, I find myself regularly having to add empty asm blocks in order to keep variables alive in earlier blocks, for example: asm("rcr $1,%[borrow];" "movq 0(%[b_],%[i],8),%%rax;" "adcq %%rax,0(%[r_top],%[i],8);" "rcl $1,%[borrow];" : [borrow]"+r"(borrow) : [i]"r"(i),[b_]"r"(b_.data),[r_top]"r"(r_top.data) : "%rax","%rdx"); asm("" : : "r"(borrow) : ); // work-around to keep borrow alive ... Another example of weirdness is that the code below works great without optimizations, but with -O3 it seg-faults: ulong carry = 0,hi = 0,qh = s.data[1],ql = s.data[0]; asm("movq 0(%[b]),%%rax;" "mulq %[ql];" "movq %%rax,0(%[sb]);" "movq %%rdx,%[hi];" : [hi]"=r"(hi) : [ql]"r"(ql),[b]"r"(b.data),[sb]"r"(sb.data) : "%rax","%rdx","memory"); for (long i = 1; i < b.size; i++) { asm("movq 0(%[b],%[i],8),%%rax;" "mulq %[ql];" "xorq %%r10,%%r10;" "addq %%rax,%[hi];" "adcq %%rdx,%[carry];" "adcq $0,%%r10;" "movq -8(%[b],%[i],8),%%rax;" "mulq %[qh];" "addq %%rax,%[hi];" "adcq %%rdx,%[carry];" "adcq $0,%%r10;" "movq %[hi],0(%[sb],%[i],8);" "movq %[carry],%[hi];" "movq %%r10,%[carry];" : [carry]"+r"(carry),[hi]"+r"(hi) : [i]"r"(i),[ql]"r"(ql),[qh]"r"(qh),[b]"r"(b.data),[sb]"r"(sb.data) : "%rax","%rdx","%r10","memory"); } asm("movq -8(%[b],%[i],8),%%rax;" "mulq %[qh];" "addq %%rax,%[hi];" "adcq %%rdx,%[carry];" "movq %[hi],0(%[sb],%[i],8);" "movq %[carry],8(%[sb],%[i],8);" : [hi]"+r"(hi),[carry]"+r"(carry) : [i]"r"(long(b.size)),[qh]"r"(qh),[b]"r"(b.data),[sb]"r"(sb.data) : "%rax","%rdx","memory"); I think it has to do with the fact that it's using so many registers. Is there something I'm missing here or is the register allocation just really buggy with gcc inline assembly?
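
    Without reproducing the crash it is hard to say, but one pattern worth checking when blocks break under -O3: every register the block modifies must appear as an output or clobber, values that must stay alive need "+r" outputs, and flag-modifying code should clobber "cc"; otherwise the compiler is free to reorder or discard things. A trimmed-down x86-64 add-with-carry sketch of those constraints (not the poster's routine):

      // result = a + b, and carry accumulates the carry-out of the addition.
      unsigned long add_with_carry(unsigned long a, unsigned long b,
                                   unsigned long& carry)
      {
          unsigned long result = a;
          asm("addq %[b], %[r];"      // r += b, sets CF
              "adcq $0, %[c];"        // c += CF
              : [r] "+r"(result), [c] "+r"(carry)   // both values stay live
              : [b] "r"(b)
              : "cc");                // flags are clobbered
          return result;
      }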

    Read the article

  • Linked List exercise, what am I doing wrong?

    - by Sean Ochoa
    Hey all. I'm doing a linked list exercise that involves dynamic memory allocation, pointers, classes, and exceptions. Would someone be willing to critique it and tell me what I did wrong and what I should have done better both with regards to style and to those subjects I listed above? /* Linked List exercise */ #include <iostream> #include <exception> #include <string> using namespace std; class node{ public: node * next; int * data; node(const int i){ data = new int; *data = i; } node& operator=(node n){ *data = *(n.data); } ~node(){ delete data; } }; class linkedList{ public: node * head; node * tail; int nodeCount; linkedList(){ head = NULL; tail = NULL; } ~linkedList(){ while (head){ node* t = head->next; delete head; if (t) head = t; } } void add(node * n){ if (!head) { head = n; head->next = NULL; tail = head; nodeCount = 0; }else { node * t = head; while (t->next) t = t->next; t->next = n; n->next = NULL; nodeCount++; } } node * operator[](const int &i){ if ((i >= 0) && (i < nodeCount)) throw new exception("ERROR: Invalid index on linked list.", -1); node *t = head; for (int x = i; x < nodeCount; x++) t = t->next; return t; } void print(){ if (!head) return; node * t = head; string collection; cout << "["; int c = 0; if (!t->next) cout << *(t->data); else while (t->next){ cout << *(t->data); c++; if (t->next) t = t->next; if (c < nodeCount) cout << ", "; } cout << "]" << endl; } }; int main (const int & argc, const char * argv[]){ try{ linkedList * myList = new linkedList; for (int x = 0; x < 10; x++) myList->add(new node(x)); myList->print(); }catch(exception &ex){ cout << ex.what() << endl; return -1; } return 0; }
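
    One concrete issue worth a look: the bounds check in operator[] throws for every valid index (the condition is inverted), and the walk advances nodeCount - i times instead of i. A simplified skeleton just to show the corrected operator[] (using std::out_of_range, since standard std::exception has no (message, code) constructor):

      #include <stdexcept>

      struct node { node* next; int* data; };

      struct linkedList {
          node* head;
          int nodeCount;

          node* operator[](int i) {
              if (i < 0 || i >= nodeCount)          // reject invalid indices
                  throw std::out_of_range("invalid index on linked list");
              node* t = head;
              for (int x = 0; x < i; x++)           // advance exactly i links
                  t = t->next;
              return t;
          }
      };

    Two other points worth raising with the poster: nodeCount starts at 0 when the first node is added, so it stays one behind the real length, and node owns raw memory in its destructor without a copy constructor (rule of three).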

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector< #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector<, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds. 
The actual Access Violation exception occurs within the std::vector< code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • What is the point of dynamic allocation in C++?

    - by Aerovistae
    I really have never understood it at all. I can do it, but I just don't get why I would want to. For instance, I was programming a game yesterday, and I set up an array of pointers to dynamically allocated little enemies in the game, then passed it to a function which updates their positions. When I ran the game, I got one of those nondescript assertion errors, something about a memory block not existing, I don't know. It was a run-time error, so it didn't say where the problem was. So I just said screw it and rewrote it with static instantiation, i.e.: while(n<4) { Enemy tempEnemy = Enemy(3, 4); enemyVector.push_back(tempEnemy); n++; } updatePositions(&enemyVector); And it immediately worked perfectly. Now sure, some of you may be thinking something to the effect of "Maybe if you knew what you were doing," or perhaps "n00b can't use pointers L0L," but frankly, you really can't deny that they make things way overcomplicated, hence most modern languages have done away with them entirely. But please-- someone -- What IS the point of dynamic allocation? What advantage does it afford? Why would I ever not do what I just did in the above example?
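
    For contrast, a few situations where the heap is genuinely needed: the number of objects is only known at run time, the objects must outlive the scope that created them, or you want polymorphism through a base pointer. A small hedged sketch (modern C++ with smart pointers, so there is still no manual delete; names are illustrative):

      #include <memory>
      #include <vector>

      struct Enemy {
          virtual ~Enemy() {}
          virtual void update() {}
      };
      struct Boss : Enemy {
          void update() override { /* boss-specific behaviour */ }
      };

      std::vector<std::unique_ptr<Enemy>> spawn(int count, bool withBoss)
      {
          // Size decided at run time, objects outlive this function, and the
          // vector can hold different derived types through the base pointer.
          std::vector<std::unique_ptr<Enemy>> enemies;
          for (int i = 0; i < count; ++i)
              enemies.push_back(std::make_unique<Enemy>());
          if (withBoss)
              enemies.push_back(std::make_unique<Boss>());
          return enemies;   // smart pointers free the objects automatically
      }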

    Read the article

  • Silverlight 4. Activator.CreateInstance uses a huge amount of memory

    - by Marco
    Hi, I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built up some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50MB to 400MB and sometimes to 1.5 GB. If the process doesn't take that much, the layout is rendered properly and the memory falls back to 50 MB. Otherwise everything crashes due to an out of memory exception. Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue? Thanks in advance Marco

    Read the article

  • Java Runtime.getRuntime().exec() alternatives

    - by twilbrand
    I have a collection of webapps that are running under tomcat. Tomcat is configured to have as much as 2 GB of memory using the -Xmx argument. Many of the webapps need to perform a task that ends up making use of the following code: Runtime runtime = Runtime.getRuntime(); Process process = runtime.exec(command); process.waitFor(); ... The issue we are having is related to the way that this "child-process" is getting created on Linux (Redhat 4.4 and Centos 5.4). It's my understanding that an amount of memory equal to the amount tomcat is using needs to be free in the pool of physical (non-swap) system memory initially for this child process to be created. When we don't have enough free physical memory, we are getting this: java.io.IOException: error=12, Cannot allocate memory at java.lang.UNIXProcess.<init>(UNIXProcess.java:148) at java.lang.ProcessImpl.start(ProcessImpl.java:65) at java.lang.ProcessBuilder.start(ProcessBuilder.java:452) ... 28 more My questions are: 1) Is it possible to remove the requirement for an amount of memory equal to the parent process being free in the physical memory? I'm looking for an answer that allows me to specify how much memory the child process gets or to allow java on linux to access swap memory. 2) What are the alternatives to Runtime.getRuntime().exec() if no solution to #1 exists? I could only think of two, neither of which is very desirable. JNI (very un-desirable) or rewriting the program we are calling in java and making it it's own process that the webapp communicates with somehow. There has to be others. 3) Is there another side to this problem that I'm not seeing that could potentially fix it? Lowering the amount of memory used by tomcat is not an option. Increasing the memory on the server is always an option, but seems like more a band-aid. Servers are running java 6.

    Read the article

  • iPhone development: pointer being freed was not allocated

    - by w4nderlust
    Hello, i got this message from the debugger: Pixture(1257,0xa0610500) malloc: *** error for object 0x21a8000: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug so i did a bit of tracing and got: (gdb) shell malloc_history 1257 0x21a8000 ALLOC 0x2196a00-0x21a89ff [size=73728]: thread_a0610500 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerDisplayIfNeeded | -[CALayer _display] | CABackingStoreUpdate | backing_callback(CGContext*, void*) | -[CALayer drawInContext:] | -[UIView(CALayerDelegate) drawLayer:inContext:] | -[AvatarView drawRect:] | -[AvatarView overlayPNG:] | +[UIImageUtility createMaskOf:] | UIGraphicsGetImageFromCurrentImageContext | CGBitmapContextCreateImage | create_bitmap_data_provider | malloc | malloc_zone_malloc and i really can't understand what i am doing wrong. here's the code of the [UIImageUtility createMaskOf:] function: + (UIImage *)createMaskOf:(UIImage *)source { CGRect rect = CGRectMake(0, 0, source.size.width, source.size.height); UIGraphicsBeginImageContext(CGSizeMake(source.size.width, source.size.height)); CGContextRef context = UIGraphicsGetCurrentContext(); CGContextTranslateCTM(context, 0, source.size.height); CGContextScaleCTM(context, 1.0, -1.0); UIImage *original = [self createGrayCopy:source]; CGContextRef context2 = CGBitmapContextCreate(NULL, source.size.width, source.size.height, 8, 4 * source.size.width, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipLast); CGContextDrawImage(context2, CGRectMake(0, 0, source.size.width, source.size.height), original.CGImage); CGImageRef unmasked = CGBitmapContextCreateImage(context2); const float myMaskingColorsFrameColor[6] = { 1,256,1,256,1,256 }; CGImageRef mask = CGImageCreateWithMaskingColors(unmasked, myMaskingColorsFrameColor); CGContextSetRGBFillColor (context, 256,256,256, 1); CGContextFillRect(context, rect); CGContextDrawImage(context, rect, mask); UIImage *whiteMasked = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return whiteMasked; } the other custom function called before that is the following: - (UIImage *)overlayPNG:(SinglePart *)sp { NSLog([sp description]); // Rect and context setup CGRect rect = CGRectMake(0, 0, sp.image.size.width, sp.image.size.height); NSLog(@"%f x %f", sp.image.size.width, sp.image.size.height); // Create an image of a color filled rectangle UIImage *baseColor = nil; if (sp.hasOwnColor) { baseColor = [UIImageUtility imageWithRect:rect ofColor:sp.color]; } else { SinglePart *facePart = [editingAvatar.face.partList objectAtIndex:0]; baseColor = [UIImageUtility imageWithRect:rect ofColor:facePart.color]; } // Crete the mask of the layer UIImage *mask = [UIImageUtility createMaskOf:sp.image]; mask = [UIImageUtility createGrayCopy:mask]; // Create a new context for merging the overlay and a mask of the layer UIGraphicsBeginImageContext(CGSizeMake(sp.image.size.width, sp.image.size.height)); CGContextRef context2 = UIGraphicsGetCurrentContext(); // Adjust the coordinate system so that the origin // is in the lower left corner of the view and the // y axis points up CGContextTranslateCTM(context2, 0, sp.image.size.height); CGContextScaleCTM(context2, 1.0, -1.0); // Create masked overlay color layer CGImageRef MaskedImage = CGImageCreateWithMask 
(baseColor.CGImage, mask.CGImage); // Draw the base color layer CGContextDrawImage(context2, rect, MaskedImage); // Get the result of the masking UIImage* overlayMasked = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); UIGraphicsBeginImageContext(CGSizeMake(sp.image.size.width, sp.image.size.height)); CGContextRef context = UIGraphicsGetCurrentContext(); // Adjust the coordinate system so that the origin // is in the lower left corner of the view and the // y axis points up CGContextTranslateCTM(context, 0, sp.image.size.height); CGContextScaleCTM(context, 1.0, -1.0); // Get the result of the blending of the masked overlay and the base image CGContextDrawImage(context, rect, overlayMasked.CGImage); // Set the blend mode for the next drawn image CGContextSetBlendMode(context, kCGBlendModeOverlay); // Component image drawn CGContextDrawImage(context, rect, sp.image.CGImage); UIImage* blendedImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); CGImageRelease(MaskedImage); return blendedImage; }

    Read the article

  • How to avoid reallocation using the STL (C++)

    - by Tue Christensen
    This question is derived from the topic: http://stackoverflow.com/questions/2280655/vector-reserve-c I am using a data structure of the type vector<vector<vector<double> > >. It is not possible to know the size of each of these vectors (except the outer one) before items (doubles) are added. I can get an approximate size (upper bound) on the number of items in each "dimension". A solution with shared pointers might be the way to go, but I would like to try a solution where the vector<vector<vector<double> > > simply has .reserve()'ed enough space (or in some other way has allocated enough memory). Will A.reserve(500) (assuming 500 is the size or, alternatively, an upper bound on the size) be enough to hold "2D" vectors of large size, say [1000][10000]? The reason for my question is mainly that I cannot see any way of reasonably estimating the size of the interior of A at the time of .reserve(500). An example of my question: vector<vector<vector<double> > > A; A.reserve(500+1); vector<vector<double> > temp2; vector<double> temp1(666,666); for(int i=0;i<500;i++) { A.push_back(temp2); for(int j=0; j< 10000;j++) { A.back().push_back(temp1); } } Will this ensure that no reallocation is done for A? If temp2.reserve(100000) and temp1.reserve(1000) were added at creation, will this ensure that no reallocation at all will occur? In the above, please disregard the fact that memory could be wasted due to conservative .reserve() calls. Thank you all in advance!
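
    As far as I understand it, reserve on the outer vector only sets aside space for the inner vector objects themselves, not for their elements, and a temporary's reserved capacity is not carried over when it is copied into the container, so each level has to be reserved where it actually lives. A sketch of that pattern (illustrative sizes passed as parameters):

      #include <vector>
      using std::vector;

      void build(vector<vector<vector<double> > >& A,
                 size_t outer, size_t middle, size_t inner_hint)
      {
          A.reserve(outer);                        // avoids reallocating A itself
          for (size_t i = 0; i < outer; i++) {
              A.push_back(vector<vector<double> >());
              A.back().reserve(middle);            // reserve each 2D vector in place
              for (size_t j = 0; j < middle; j++) {
                  A.back().push_back(vector<double>());
                  A.back().back().reserve(inner_hint);  // upper bound on the doubles
              }
          }
      }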

    Read the article

  • Objective-C (iPhone) Memory Management

    - by Steven
    I'm sorry to ask such a simple question, but it's a specific question I've not been able to find an answer for. I'm not a native objective-c programmer, so I apologise if I use any C# terms! If I define an object in test.h @interface test : something { NSString *_testString; } Then initialise it in test.m -(id)init { _testString = [[NSString alloc] initWithString:@"hello"]; } Then I understand that I would release it in dealloc, as every init should have a release -(void)dealloc { [_testString release]; } However, what I need clarification on is what happens if in init, I use one of the shortcut methods for object creation, do I still release it in dealloc? Doesn't this break the "one release for one init" rule? e.g. -(id)init { _testString = [NSString stringWithString:@"hello"]; } Thanks for your helps, and if this has been answered somewhere else, I apologise!! Steven

    Read the article

  • Declaring variables with New DataSet vs DataSet

    - by eych
    What is the impact of creating variables using: Dim ds as New DataSet ds = GetActualData() where GetActualData() also creates a New DataSet and returns it? Does the original empty DataSet that was 'New'ed just get left in the Heap? What if this kind of code was in many places? Would that affect the ASP.NET process and cause it to recycle sooner?

    Read the article

  • Help with malloc and free: Glibc detected: free(): invalid pointer

    - by nunos
    I need help with debugging this piece of code. I know the problem is in malloc and free but can't find exactly where, why and how to fix it. Please don't answer: "Use gdb" and that's it. I would use gdb to debug it, but I still don't know much about it and am still learning it, and would like to have, in the meanwhile, another solution. Thanks. #include <stdio.h> #include <stdlib.h> #include <ctype.h> #include <unistd.h> #include <string.h> #include <sys/wait.h> #include <sys/types.h> #define MAX_COMMAND_LENGTH 256 #define MAX_ARGS_NUMBER 128 #define MAX_HISTORY_NUMBER 100 #define PROMPT ">>> " int num_elems; typedef enum {false, true} bool; typedef struct { char **arg; char *infile; char *outfile; int background; } Command_Info; int parse_cmd(char *cmd_line, Command_Info *cmd_info) { char *arg; char *args[MAX_ARGS_NUMBER]; int i = 0; arg = strtok(cmd_line, " "); while (arg != NULL) { args[i] = arg; arg = strtok(NULL, " "); i++; } num_elems = i;precisa em free_mem if (num_elems == 0) return 0; cmd_info->arg = (char **) ( malloc(num_elems * sizeof(char *)) ); cmd_info->infile = NULL; cmd_info->outfile = NULL; cmd_info->background = 0; bool b_infile = false; bool b_outfile = false; int iarg = 0; for (i = 0; i < num_elems; i++) { if ( !strcmp(args[i], "<") ) { if ( b_infile || i == num_elems-1 || !strcmp(args[i+1], "<") || !strcmp(args[i+1], ">") || !strcmp(args[i+1], "&") ) return -1; i++; cmd_info->infile = malloc(strlen(args[i]) * sizeof(char)); strcpy(cmd_info->infile, args[i]); b_infile = true; } else if (!strcmp(args[i], ">")) { if ( b_outfile || i == num_elems-1 || !strcmp(args[i+1], ">") || !strcmp(args[i+1], "<") || !strcmp(args[i+1], "&") ) return -1; i++; cmd_info->outfile = malloc(strlen(args[i]) * sizeof(char)); strcpy(cmd_info->outfile, args[i]); b_outfile = true; } else if (!strcmp(args[i], "&")) { if ( i == 0 || i != num_elems-1 || cmd_info->background ) return -1; cmd_info->background = true; } else { cmd_info->arg[iarg] = malloc(strlen(args[i]) * sizeof(char)); strcpy(cmd_info->arg[iarg], args[i]); iarg++; } } cmd_info->arg[iarg] = NULL; return 0; } void print_cmd(Command_Info *cmd_info) { int i; for (i = 0; cmd_info->arg[i] != NULL; i++) printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]); printf("arg[%d]=\"%s\"\n", i, cmd_info->arg[i]); printf("infile=\"%s\"\n", cmd_info->infile); printf("outfile=\"%s\"\n", cmd_info->outfile); printf("background=\"%d\"\n", cmd_info->background); } void get_cmd(char* str) { fgets(str, MAX_COMMAND_LENGTH, stdin); str[strlen(str)-1] = '\0'; } pid_t exec_simple(Command_Info *cmd_info) { pid_t pid = fork(); if (pid < 0) { perror("Fork Error"); return -1; } if (pid == 0) { if ( (execvp(cmd_info->arg[0], cmd_info->arg)) == -1) { perror(cmd_info->arg[0]); exit(1); } } return pid; } void type_prompt(void) { printf("%s", PROMPT); } void syntax_error(void) { printf("msh syntax error\n"); } void free_mem(Command_Info *cmd_info) { int i; for (i = 0; cmd_info->arg[i] != NULL; i++) free(cmd_info->arg[i]); free(cmd_info->arg); free(cmd_info->infile); free(cmd_info->outfile); } int main(int argc, char* argv[]) { char cmd_line[MAX_COMMAND_LENGTH]; Command_Info cmd_info; //char* history[MAX_HISTORY_NUMBER]; while (true) { type_prompt(); get_cmd(cmd_line); if ( parse_cmd(cmd_line, &cmd_info) == -1) { syntax_error(); continue; } if (!strcmp(cmd_line, "")) continue; if (!strcmp(cmd_info.arg[0], "exit")) exit(0); pid_t pid = exec_simple(&cmd_info); waitpid(pid, NULL, 0); free_mem(&cmd_info); } return 0; }
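
    One thing that commonly produces exactly this glibc abort is writing one byte past a malloc'd block: strlen() does not count the terminating '\0', and the arg/infile/outfile buffers in the posted parser are allocated with strlen(...) bytes and then strcpy'd into. A tiny illustrative fix pattern (hypothetical helper, not the full shell):

      #include <stdlib.h>
      #include <string.h>

      char* copy_string(const char* src)
      {
          char* dst = (char*) malloc(strlen(src) + 1);  /* +1 for the '\0' */
          if (dst != NULL)
              strcpy(dst, src);                         /* room for terminator */
          return dst;
      }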

    Read the article

  • Does boost::asio makes excessive small heap allocations or am I wrong?

    - by Poni
    #include <cstdlib> #include <iostream> #include <boost/bind.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; class session { public: session(boost::asio::io_service& io_service) : socket_(io_service) { } tcp::socket& socket() { return socket_; } void start() { socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void handle_read(const boost::system::error_code& error, size_t bytes_transferred) { if (!error) { data_[bytes_transferred] = '\0'; if(NULL != strstr(data_, "quit")) { this->socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both); this->socket().close(); // how to make this dispatch "handle_read()" with a "disconnected" flag? } else { boost::asio::async_write(socket_, boost::asio::buffer(data_, bytes_transferred), boost::bind(&session::handle_write, this, boost::asio::placeholders::error)); socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } } else { delete this; } } void handle_write(const boost::system::error_code& error) { if (!error) { // } else { delete this; } } private: tcp::socket socket_; enum { max_length = 1024 }; char data_[max_length]; }; class server { public: server(boost::asio::io_service& io_service, short port) : io_service_(io_service), acceptor_(io_service, tcp::endpoint(tcp::v4(), port)) { session* new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } void handle_accept(session* new_session, const boost::system::error_code& error) { if (!error) { new_session->start(); new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } else { delete new_session; } } private: boost::asio::io_service& io_service_; tcp::acceptor acceptor_; }; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: async_tcp_echo_server <port>\n"; return 1; } boost::asio::io_service io_service; using namespace std; // For atoi. server s(io_service, atoi(argv[1])); io_service.run(); } catch (std::exception& e) { std::cerr << "Exception: " << e.what() << "\n"; } return 0; } While experimenting with boost::asio I've noticed that within the calls to async_write()/async_read_some() there is a usage of the C++ "new" keyword. Also, when stressing this echo server with a client (1 connection) that sends for example 100,000 times some data, the memory usage of this program is getting higher and higher. What's going on? Will it allocate memory for every call? Or am I wrong? Asking because it doesn't seem right that a server app will allocate, anything. Can I handle it, say with a memory pool? Another side-question: See the "this-socket().close();" ? I want it, as the comment right to it says, to dispatch that same function one last time, with a disconnection error. Need that to do some clean-up. How do I do that? Thank you all gurus (:

    Read the article

  • NSMutableDictionary, alloc, init and reiniting...

    - by Marcos Issler
    In the following code: //anArray is an Array of Dictionary with 5 objs. //here we init with the first NSMutableDictionary *anMutableDict = [[NSMutableDictionary alloc] initWithDictionary:[anArray objectAtIndex:0]]; ... use of anMutableDict ... //then I want to clear the MutableDict and assign the other dicts that were in the array of dicts for (int i=1;i<5;i++) { [anMutableDict removeAllObjects]; [anMutableDict initWithDictionary:[anArray objectAtIndex:i]]; } Why does this crash? What is the right way to clear an NSMutableDictionary and then assign a new dict? Thanks guys. Marcos.

    Read the article
