Search Results

Search found 18119 results on 725 pages for 'shared memory'.

Page 162/725

  • Class members allocation on heap/stack? C++

    - by simplebutperfect
    If a class is declared as follows: class MyClass { char * MyMember; MyClass() { MyMember = new char[250]; } ~MyClass() { delete[] MyMember; } }; And it could also be written like this: class MyClass { char MyMember[250]; }; How does the class get allocated on the heap if I do MyClass * Mine = new MyClass();? In the second example, does the allocation also include the 250 bytes along with the class instantiation, and will the member stay valid for the whole lifetime of the MyClass object? As for the first example, is it practical to allocate class members on the heap?
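
    A minimal sketch contrasting the two layouts (the class and variable names here are illustrative, not taken from the question):

      #include <cstddef>

      // Version 1: the object holds only a pointer; the 250 bytes live in a
      // separate heap block obtained in the constructor.
      class HeapMember {
      public:
          HeapMember() : data(new char[250]) {}
          ~HeapMember() { delete[] data; }
      private:
          char* data;          // sizeof(HeapMember) is essentially one pointer
      };

      // Version 2: the 250 bytes are part of the object itself.
      class InlineMember {
      private:
          char data[250];      // sizeof(InlineMember) >= 250
      };

      int main() {
          // On the stack, the whole InlineMember (including its array) lives
          // in the current stack frame.
          InlineMember onStack;

          // new InlineMember allocates sizeof(InlineMember) in one heap block,
          // so the 250-byte array is allocated together with the object and
          // remains valid until delete.
          InlineMember* onHeap = new InlineMember;
          delete onHeap;

          // new HeapMember performs two allocations: one for the object and
          // one (inside the constructor) for the array.
          HeapMember* twoBlocks = new HeapMember;
          delete twoBlocks;
          return 0;
      }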

    Read the article

  • Configuring a library to be included with C++ test

    - by vrish88
    Hello, I would like to utilize the UnitTest++ library in a testing file. However, I am having some difficulty getting the library to be included at compile time. So here is my current directory structure: tests/ UnitTests++/ libUnitTest++.a src/ UnitTests++.h unit/ test.cpp I have just followed the UnitTest++ getting started guide to get the library set up. Here is test.cpp: // test.cpp #include <UnitTest++.h> TEST(FailSpectacularly) { CHECK(false); } int main() { return UnitTest::RunAllTests(); } And I am currently trying to compile with: gcc -lUnitTest++ -L../UnitTest++/ -I../UnitTest++/src/ test.cpp I am currently getting a bunch of output with ld: symbol(s) not found at the end. So how would I be able to get the UnitTest++ library properly included when this program is compiled? I am on a Mac and I'd also like there to be an easy way for people on a Linux machine to run these same tests. Whew, I hope this provides enough information; if not, please let me know.

    Read the article

  • How to pass an arbitrary signature to a certificate

    - by eskoba
    I am trying to sign a certificate (X.509) using secret sharing; that is, shareholders combine their partial signatures to produce the final signature, which in this case is the signed certificate. However, as far as I understand, in practice only one entity can sign a certificate. Therefore I want to know: which entities or data of the X.509 certificate are actually taken as input to the signing algorithm? Ideally I want this data to be signed by the shareholders, and then the final combination to be passed to the X.509 certificate as a valid signature. Is this possible? How could it be done? If not, are there other alternatives?

    Read the article

  • Are there any modern platforms with non-IEEE C/C++ float formats?

    - by Patrick Niedzielski
    Hi all, I am writing a video game, Humm and Strumm, which requires a network component in its game engine. I can deal with differences in endianness easily, but I have hit a wall in attempting to deal with possible float memory formats. I know that modern computers all have a standard integer format, but I have heard that they may not all use the IEEE standard for floating-point numbers. Is this true? While I could certainly just output each value as a character string into each packet, I would still have to convert to a "well-known format" for each client, regardless of the platform. The standard printf() and atof() would be inadequate. Please note, because this game is a Free/Open Source Software program that will run on GNU/Linux, *BSD, and Microsoft Windows, I cannot use any proprietary solutions, nor any single-platform solutions. Cheers, Patrick
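
    One common sketch, assuming every target platform really does use IEEE 754 binary32 for float (true of typical x86/ARM GNU/Linux, *BSD, and Windows systems) so that only byte order differs, and assuming a 32-bit unsigned integer type is available:

      #include <cstdint>   // or boost/cstdint.hpp on pre-C++11 toolchains
      #include <cstring>

      // Pack a float into a 4-byte big-endian ("network order") buffer.
      void pack_float(float value, unsigned char out[4]) {
          uint32_t bits;
          std::memcpy(&bits, &value, sizeof bits);   // reinterpret the bits only
          out[0] = static_cast<unsigned char>(bits >> 24);
          out[1] = static_cast<unsigned char>(bits >> 16);
          out[2] = static_cast<unsigned char>(bits >> 8);
          out[3] = static_cast<unsigned char>(bits);
      }

      // Reverse operation on the receiving side.
      float unpack_float(const unsigned char in[4]) {
          uint32_t bits = (static_cast<uint32_t>(in[0]) << 24) |
                          (static_cast<uint32_t>(in[1]) << 16) |
                          (static_cast<uint32_t>(in[2]) << 8)  |
                           static_cast<uint32_t>(in[3]);
          float value;
          std::memcpy(&value, &bits, sizeof value);
          return value;
      }

    If a non-IEEE platform ever has to be supported, the textual route (or a portable encoding such as XDR) remains the fallback.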

    Read the article

  • Any sense to set obj = null(Nothing) in Dispose()?

    - by serhio
    Is there any sense in setting a custom object to null (Nothing in VB.NET) in the Dispose() method? Could this prevent memory leaks, or is it useless? Let's consider two examples: public class Foo : IDisposable { private Bar bar; // standard custom .NET object public Foo(Bar bar) { this.bar = bar; } public void Dispose() { bar = null; // any sense? } } public class Foo : RichTextBox { // this could also be: GDI+, TCP socket, SQL Connection, other "heavy" object private Bitmap backImage; public Foo(Bitmap backImage) { this.backImage = backImage; } protected override void Dispose(bool disposing) { if (disposing) { backImage = null; // any sense? } } }

    Read the article

  • Conceptual question about NSAutoreleasePools

    - by ryyst
    In my Cocoa program, wouldn't a really simple way of dealing with autoreleased objects be to just create a timer object inside the app delegate that calls the following method, e.g. every 10 seconds: if (pool) { // Release & drain the current pool to free the memory. [pool release]; } // Create a new pool. pool = [[NSAutoreleasePool alloc] init]; The only problems I can imagine are: 1) If the above code runs in a separate thread, an object might get autoreleased between the release call to the old pool and the creation of the new pool - that seems highly unlikely though. 2) It's obviously not that efficient, because the pool might get released even if there's nothing in it. Likewise, in the 10-second gap, very many objects might be autoreleased, causing the pool to grow a lot. Still, the above solution seems pretty suitable for small and simple projects. Why doesn't anybody use it? What's the best practice for using NSAutoreleasePools?

    Read the article

  • How should platform specific lib files be named?

    - by Scott Langham
    Hello, I'm working on a C++ project that produces a lib that other teams use. It's being produced in a few different flavours: Win32 Debug Win32 Debug Static Win32 Release Win32 Release Static x64 Debug x64 Debug Static x64 Release x64 Release Static I'm wondering what the accepted wisdom is on how to name the libs, and what the arguments are for different naming conventions. Do I output the libs into different directories, or do I append some letters to the end of the lib name to differentiate them, or something else? One concern is that if I use directories, but don't give all the libs different names, users of the library will have problems where they accidentally use the wrong lib. Are these concerns valid? Thanks very much.

    Read the article

  • WeakReferences are not freed in embedded OS

    - by Carsten König
    I've got strange behavior here: I get a massive memory leak in production running a WPF application on a DLOG terminal (Windows Embedded Standard SP1) that behaves perfectly fine if I run it locally on a normal desktop (Win7 prof.). After many unsuccessful attempts to find any problem, I put one of those terminals directly beside my monitor, installed the ANTS Memory Profiler and did a one-hour test run simulating user operations on both the terminal and my development PC. The result is that, for some strange reason, the embedded system piles up a huge amount of WeakReference and EffectiveValueEntry[] objects. Here are some pictures: Development (PC): And the terminal: Just look at the class list... Has anyone seen something like this before, and are there known solutions to this? Where can I get help? (PS: the terminals were installed with images prepared for .NET 4)

    Read the article

  • NSAutoreleasePool carrying across methods?

    - by Tim
    I'm building an iPhone application where I detach some threads to do long-running work in the background so as not to hang the UI. I understand that threads need NSAutoreleasePool instances for memory management. What I'm not sure about is if the threaded method calls another method - does that method also need an NSAutoreleasePool? Example code: - (void)primaryMethod { [self performSelectorInBackground:@selector(threadedMethod) withObject:nil]; } - (void)threadedMethod { NSAutoreleasePool *aPool = [[NSAutoreleasePool alloc] init]; // Some code here [self anotherMethod]; // Maybe more code here [aPool drain]; } - (void)anotherMethod { // More code here } The reason I ask is I'm receiving errors that objects are being autoreleased with no pool in place, and are "just leaking." I've seen other questions where people didn't have autorelease pools in place at all, and I understand why an autorelease pool is needed. I'm specifically interested in finding out whether an autorelease pool created in (in this example) threadedMethod applies to objects created in anotherMethod.

    Read the article

  • Share ASP.Net membership info between two applications

    - by bill
    Hi all, I have an existing web app and I'm attempting to set up BlogEngine.NET to share its membership tables. Everything seems to work, except I can see that the Membership.ValidateUser call in BlogEngine returns false, while the same call in the other app returns true. I'm at a loss. Membership.GetUser called from both apps returns the correct user. Any ideas? Thanks!

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (so as to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases: 1) inside the function a copy is made of the argument, like in ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp) { ... m_sp_member=sp; //This will copy the object, incrementing refcount ... } 2) inside the function the argument is only used, like in Class::only_work_with_sp(boost::shared_ptr<foo> &sp) //Again, no copy here { ... sp->do_something(); ... } In neither case can I see a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something? Andrea. EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first profile, then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.
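
    A sketch of the two signatures under discussion, written with const references (the function and type names come from the question; the bodies are filled in only for illustration):

      #include <boost/shared_ptr.hpp>

      struct foo { void do_something() {} };

      class ClassA {
      public:
          // Case 1: the callee stores its own copy, so one refcount increment
          // happens on the assignment no matter how the argument was passed.
          void take_copy_of_sp(const boost::shared_ptr<foo>& sp) {
              m_sp_member = sp;
          }

          // Case 2: the callee only uses the pointee; a const reference avoids
          // the extra increment/decrement that a by-value parameter would do.
          void only_work_with_sp(const boost::shared_ptr<foo>& sp) {
              sp->do_something();
          }

      private:
          boost::shared_ptr<foo> m_sp_member;
      };

      int main() {
          boost::shared_ptr<foo> p(new foo);
          ClassA a;
          a.take_copy_of_sp(p);
          a.only_work_with_sp(p);
          return 0;
      }

    The usual caveat with the reference version is that the caller must keep its own shared_ptr alive for the duration of the call, which is normally the case anyway.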

    Read the article

  • typedef boost::shared_ptr<MyJob> Ptr; or #define Ptr boost::shared_ptr

    - by danio
    I've just started working on a new codebase where each class contains a shared_ptr typedef (similar to this) like: typedef boost::shared_ptr<MyClass> Ptr; Is the only purpose to save typing boost::shared_ptr? If that is the case, why not do #define Ptr boost::shared_ptr in one common header? Then you can do: Ptr<MyClass> myClass(new MyClass); which is no more typing than MyClass::Ptr myClass(new MyClass); and saves the Ptr definition in each class.
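
    A sketch of why the per-class typedef is usually preferred over the macro (the make_default helper is hypothetical, just to show the nested name being used generically):

      #include <boost/shared_ptr.hpp>

      class MyClass {
      public:
          typedef boost::shared_ptr<MyClass> Ptr;   // nested, scoped to the class
      };

      // Generic code can rely on the convention without naming boost at all:
      template <typename T>
      typename T::Ptr make_default() {
          return typename T::Ptr(new T());
      }

      int main() {
          MyClass::Ptr a(new MyClass);
          MyClass::Ptr b = make_default<MyClass>();
          // A "#define Ptr boost::shared_ptr" would instead rewrite every Ptr
          // token in every file that includes the header, including unrelated
          // identifiers, and cannot be scoped to a class or namespace.
          return (a && b) ? 0 : 1;
      }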

    Read the article

  • Using custom dll in Qt Application

    - by Donotalo
    First, my compiler and OS: Qt Creator 1.3 Qt 4.6 (32 bit) Windows 7 Ultimate I want to learn how to create and import a dll in Qt. I've created a *.dll file using Qt Creator, called Shared1.dll which contains nothing but an empty class named Shared1. Now I'd like to use Shared1 class in another Qt project. How can I do that? Thanks in advance.
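
    A sketch of the usual Qt export/import pattern, assuming the library project defines SHARED1_LIBRARY in its .pro file (the macro and file names here are a guess at the conventional layout, not taken from the question):

      // shared1_global.h -- picks dllexport when building the DLL,
      // dllimport when consuming it from another project.
      #include <QtGlobal>

      #if defined(SHARED1_LIBRARY)
      #  define SHARED1_EXPORT Q_DECL_EXPORT
      #else
      #  define SHARED1_EXPORT Q_DECL_IMPORT
      #endif

      // shared1.h -- the class exported from Shared1.dll
      #include "shared1_global.h"

      class SHARED1_EXPORT Shared1 {
      public:
          Shared1();
      };

    The client project then includes shared1.h and adds something like INCLUDEPATH += path/to/Shared1 and LIBS += -Lpath/to/Shared1/debug -lShared1 to its own .pro file (the paths are placeholders); after that the class is used like any locally defined one.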

    Read the article

  • Displaying/scrolling through heaps of pictures in the browser

    - by user347256
    I want to be able to browse through heaps of images in the browser, fast. The easy way (just load 2000 images and scroll) slows down the scrolling a lot, presumably because there are too many images being kept in memory. I'd love to hear thoughts on strategies for quickly scrolling through tens of thousands of images (as if you were on your desktop) in the browser. What would the expected bottlenecks be? How do I address them? How do I fake things so that the user experience is still good? Examples in the wild?

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector: #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with n=512 and LocalReAlloc() succeeds.
    The actual Access Violation exception occurs within the std::vector code after the LocalAllocator::allocate call: 0x03f9e3c8 0x03f9ff04 > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++ MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++ MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++ If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • Unicorn: Which number of worker processes to use?

    - by blackbird07
    I am running a Ruby on Rails app on a virtual Linux server that is capped at 1GB RAM. Currently, I am constantly hitting the limit and would like to optimize memory utilization. One option I am looking at is reducing the number of unicorn workers. So what is the best way to determine the number of unicorn workers to use? The current setting is 10 workers, but the maximum number of requests per second I have seen on Google Analytics Real-Time is 3 (only reached once at a peak time; 99% of the time it does not go above 1 request per second). So is it a safe assumption that I can - for now - go with 4 workers, leaving room for unexpected bursts of requests? What are the metrics I should look at to determine the number of workers, and what are the tools I can use for that on my Ubuntu machine?

    Read the article

  • Where is shared_ptr?

    - by Jake
    I am so frustrated right now after several hours of trying to find where shared_ptr is located. None of the examples I see show complete, working code that includes the headers needed for shared_ptr. Simply stating "std", "tr1" and "<memory>" is not helping at all! I have downloaded Boost and all, but it still doesn't show up! Can someone help me by telling me exactly where to find it? Thanks for letting me vent my frustrations!
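
    For what it's worth, a sketch of the three places a shared_ptr typically comes from; which one is available depends on the compiler and on whether Boost is on the include path:

      // 1) Boost -- works with old compilers once the Boost headers are found:
      //      #include <boost/shared_ptr.hpp>
      //      boost::shared_ptr<int> p(new int(42));
      //
      // 2) TR1 -- GCC 4.x spells the header <tr1/memory>; MSVC 2008 SP1 puts
      //    std::tr1::shared_ptr in plain <memory>:
      //      #include <tr1/memory>
      //      std::tr1::shared_ptr<int> p(new int(42));
      //
      // 3) C++11 and later -- the standard header, shown live below:
      #include <memory>

      int main() {
          std::shared_ptr<int> p(new int(42));
          return (*p == 42) ? 0 : 1;
      }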

    Read the article

  • Is there a way to increase the efficiency of shared_ptr by storing the reference count inside the co

    - by BillyONeal
    Hello everyone :) This is becoming a common pattern in my code, for when I need to manage an object that needs to be noncopyable because either A. it is "heavy" or B. it is an operating system resource, such as a critical section: class Resource; class Implementation : public boost::noncopyable { friend class Resource; HANDLE someData; Implementation(HANDLE input) : someData(input) {}; void SomeMethodThatActsOnHandle() { //Do stuff }; public: ~Implementation() { FreeHandle(someData); }; }; class Resource { boost::shared_ptr<Implementation> impl; public: explicit Resource(int argA) { HANDLE handle = SomeLegacyCApiThatMakesSomething(argA); if (handle == INVALID_HANDLE_VALUE) throw SomeTypeOfException(); impl.reset(new Implementation(handle)); }; void SomeMethodThatActsOnTheResource() { impl->SomeMethodThatActsOnTheHandle(); }; }; This way, shared_ptr takes care of the reference counting headaches, allowing Resource to be copyable, even though the underlying handle should only be closed once all references to it are destroyed. However, it seems like we could save the overhead of allocating shared_ptr's reference counts and such separately if we could move that data inside Implementation somehow, like boost's intrusive containers do. If this is raising premature-optimization hackles for some people, I actually agree that I don't need this for my current project. But I'm curious whether it is possible.
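
    Two related options exist in Boost itself; here is a sketch of the simpler one, boost::make_shared, which puts the object and its reference count in a single allocation (boost::intrusive_ptr is the other route, storing the count as a member of the object). The Widget class is purely illustrative:

      #include <boost/make_shared.hpp>
      #include <boost/shared_ptr.hpp>

      class Widget {
      public:
          explicit Widget(int v) : value(v) {}
          int value;
      };

      int main() {
          // One heap allocation holds both the Widget and the shared count,
          // instead of the two done by shared_ptr<Widget>(new Widget(7)).
          boost::shared_ptr<Widget> w = boost::make_shared<Widget>(7);
          return (w->value == 7) ? 0 : 1;
      }

    This does not make the count literally a member of the class, but it removes the separate allocation that motivates the question (boost::make_shared requires a reasonably recent Boost, roughly 1.39 or later).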

    Read the article

  • How to accomplish covariant return types when returning a shared_ptr?

    - by Kyle
    using namespace boost; class A {}; class B : public A {}; class X { virtual shared_ptr<A> foo(); }; class Y : public X { virtual shared_ptr<B> foo(); }; The return types aren't covariant (and therefore this isn't legal), but they would be if I were using raw pointers instead. What's the commonly accepted idiom to work around this, if there is one?
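
    One commonly used workaround, shown here as a sketch (the foo_impl name is made up): keep the virtual function's return type fixed at shared_ptr<A>, and give the derived class a separate non-virtual member that returns the tighter type:

      #include <boost/shared_ptr.hpp>

      class A { public: virtual ~A() {} };
      class B : public A {};

      class X {
      public:
          virtual ~X() {}
          virtual boost::shared_ptr<A> foo() { return boost::shared_ptr<A>(new A); }
      };

      class Y : public X {
      public:
          // The override keeps the base return type, so it is legal...
          virtual boost::shared_ptr<A> foo() { return foo_impl(); }
          // ...and callers that know they have a Y get the derived type here.
          boost::shared_ptr<B> foo_impl() { return boost::shared_ptr<B>(new B); }
      };

      int main() {
          Y y;
          boost::shared_ptr<B> b = y.foo_impl();   // "covariant" use
          X& base = y;
          boost::shared_ptr<A> a = base.foo();     // polymorphic use
          return (a && b) ? 0 : 1;
      }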

    Read the article

  • boost::shared_ptr<const T> to boost::shared_ptr<T>

    - by Flevine
    I want to cast the const-ness out of a boost::shared_ptr, but boost::const_pointer_cast is not the answer. boost::const_pointer_cast wants a const boost::shared_ptr, not a boost::shared_ptr. Let's forego the obligatory 'you shouldn't be doing that'. I know... but I need to do it... so what's the best/easiest way to do it? For clarity's sake: boost::shared_ptr<const T> orig_ptr( new T() ); boost::shared_ptr<T> new_ptr = magic_incantation(orig_ptr); I need to know the magic_incantation() Thanks!
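
    For what it's worth, a minimal sketch of the spelling that does compile here (boost::const_pointer_cast accepts the shared_ptr<const T> and returns a shared_ptr<T> that shares ownership with it); the struct T below is just a stand-in:

      #include <boost/shared_ptr.hpp>

      struct T { int x; T() : x(0) {} };

      int main() {
          boost::shared_ptr<const T> orig_ptr(new T());

          // The "magic incantation": drops const, shares the same refcount.
          boost::shared_ptr<T> new_ptr = boost::const_pointer_cast<T>(orig_ptr);

          new_ptr->x = 42;                     // writable through new_ptr
          return (orig_ptr->x == 42) ? 0 : 1;  // same underlying object
      }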

    Read the article

  • Release an object without a pointer?

    - by Kai Friis
    I’ve just started developing for iPhone and am trying to get my head around memory management. I made a small program that shows a map and an annotation on the map. For the annotation I made a simple class that implements the MKAnnotation protocol. To create and add the annotation I wrote this: [self.myMapView addAnnotation:[[MyAnnotation alloc] init]]; It worked fine until I tried to release the object. Nothing to release. This is what I would have done in C#, I guess it doesn’t work without garbage collection? So is this the only way to do it? MyAnnotation *myAnnotation = [[MyAnnotation alloc] init]; [self.myMapView addAnnotation: myAnnotation]; [myAnnotation release];

    Read the article

  • How return a std::string from C's "getcwd" function

    - by rubenvb
    Sorry to keep hammering on this, but I'm trying to learn :). Is this any good? And yes, I care about memory leaks. I can't find a decent way of preallocating the char*, because there simply seems to be no cross-platform way. const string getcwd() { char* a_cwd = getcwd(NULL,0); string s_cwd(a_cwd); free(a_cwd); return s_cwd; } UPDATE2: without Boost or Qt, the most common stuff can get long-winded (see accepted answer)
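
    A portable sketch that avoids the glibc-specific getcwd(NULL, 0) extension by growing a locally managed buffer until the call succeeds; the function name current_directory is arbitrary, and error handling is reduced to throwing:

      #include <cerrno>
      #include <stdexcept>
      #include <string>
      #include <vector>
      #include <unistd.h>   // getcwd(); on Windows, <direct.h> and _getcwd()

      std::string current_directory() {
          std::vector<char> buf(256);
          // POSIX getcwd() fails with ERANGE while the buffer is too small,
          // so double the size and retry until it fits.
          while (::getcwd(&buf[0], buf.size()) == NULL) {
              if (errno != ERANGE)
                  throw std::runtime_error("getcwd failed");
              buf.resize(buf.size() * 2);
          }
          return std::string(&buf[0]);   // the vector frees itself, no leak
      }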

    Read the article

  • How to handle 'this' pointer in constructor?

    - by Kyle
    I have objects which create other child objects within their constructors, passing 'this' so the child can save a pointer back to its parent. I use boost::shared_ptr extensively in my programming as a safer alternative to std::auto_ptr or raw pointers. So the child would have code such as shared_ptr<Parent>, and boost provides the shared_from_this() method which the parent can give to the child. My problem is that shared_from_this() cannot be used in a constructor, which isn't really a crime because 'this' should not be used in a constructor anyways unless you know what you're doing and don't mind the limitations. Google's C++ Style Guide states that constructors should merely set member variables to their initial values. Any complex initialization should go in an explicit Init() method. This solves the 'this-in-constructor' problem as well as a few others as well. What bothers me is that people using your code now must remember to call Init() every time they construct one of your objects. The only way I can think of to enforce this is by having an assertion that Init() has already been called at the top of every member function, but this is tedious to write and cumbersome to execute. Are there any idioms out there that solve this problem at any step along the way?
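
    One idiom that keeps Init() from being forgotten, sketched below with made-up names (Create, Init, Parent, Child): make the constructor private and hand out instances only through a factory function, which also guarantees that a shared_ptr already owns the object by the time shared_from_this() is needed:

      #include <boost/enable_shared_from_this.hpp>
      #include <boost/shared_ptr.hpp>
      #include <boost/weak_ptr.hpp>

      class Parent;

      class Child {
      public:
          explicit Child(const boost::shared_ptr<Parent>& p) : parent(p) {}
      private:
          // weak_ptr avoids a Parent<->Child ownership cycle; a shared_ptr
          // back-pointer (as in the question) would also compile here.
          boost::weak_ptr<Parent> parent;
      };

      class Parent : public boost::enable_shared_from_this<Parent> {
      public:
          // The only way to obtain a Parent: construction and Init() always
          // happen together, so callers cannot forget the second phase.
          static boost::shared_ptr<Parent> Create() {
              boost::shared_ptr<Parent> p(new Parent);
              p->Init();   // safe: *p is already owned by a shared_ptr
              return p;
          }
          boost::shared_ptr<Child> GetChild() const { return child; }

      private:
          Parent() {}      // private, so Create() cannot be bypassed
          void Init() { child.reset(new Child(shared_from_this())); }
          boost::shared_ptr<Child> child;
      };

      int main() {
          boost::shared_ptr<Parent> parent = Parent::Create();
          return parent->GetChild() ? 0 : 1;
      }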

    Read the article

  • PostGres - run a query in batches?

    - by CaffeineIV
    Is it possible to loop through a query so that if (for example) 500,000 rows are found, it'll return results for the first 10,000 and then rerun the query again? So, what I want to do is run a query and build an array, like this: $result = pg_query("SELECT * FROM myTable"); $i = 0; while($row = pg_fetch_array($result) ) { $myArray[$i]['id'] = $row['id']; $myArray[$i]['name'] = $row['name']; $i++; } But, I know that there will be several hundred thousand rows, so I wanted to do it in batches of like 10,000... 1- 9,999 and then 10,000 - 10,999 etc... The reason why is because I keep getting this error: Fatal error: Allowed memory size of 536870912 bytes exhausted (tried to allocate 3 bytes) Which, incidentally, I don't understand how 3 bytes could exhaust 512M... So, if that's something that I can just change, that'd be great, although, still might be better to do this in batches?

    Read the article
