Search Results

Search found 870 results on 35 pages for 'allocation'.


  • iPhone - dequeueReusableCellWithIdentifier usage

    - by Jukurrpa
    Hi, I'm working on an iPhone app which has a pretty large UITableView with data taken from the web, so I'm trying to optimize its creation and usage. I found out that dequeueReusableCellWithIdentifier is pretty useful, but after seeing many source codes using it, I'm wondering whether the way I use this function is the right one. Here is what people usually do: UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:@"Cell"]; } // Add elements to the cell return cell; And here is the way I did it: NSString* identifier = [NSString stringWithFormat:@"Cell %d", indexPath.row]; // The cell row UITableViewCell* cell = [tableView dequeueReusableCellWithIdentifier:identifier]; if (cell != nil) return cell; cell = [[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:identifier]; // Add elements to the cell return cell; The difference is that people use the same identifier for every cell, so dequeuing one only avoids allocating a new one. For me, the point of queuing was to give each cell a unique identifier, so that when the app asks for a cell it has already displayed, neither the allocation nor the adding of elements has to be done. In the end I don't know which is best: the "common" method caps the table's memory usage at the exact number of cells it displays, whilst the method I use seems to favor speed as it keeps all calculated cells, but can cause large memory consumption (unless there's an inner limit to the queue). Am I wrong to use it this way? Or is it just up to the developer, depending on his needs?
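
    Conceptually, the reuse queue behaves like an object pool keyed by the reuse identifier. The sketch below (plain C++, only a conceptual model and not UIKit's actual implementation) shows why the two identifier strategies behave so differently:

        #include <cstddef>
        #include <cstdio>
        #include <map>
        #include <string>
        #include <vector>

        struct Cell { std::string identifier; /* subviews, labels, ... */ };

        // Conceptual reuse pool, keyed by identifier (not UIKit's real data structure).
        class ReusePool {
            std::map<std::string, std::vector<Cell*> > pool;   // off-screen cells per identifier
        public:
            Cell* dequeue(const std::string& identifier) {
                std::vector<Cell*>& bucket = pool[identifier];
                if (bucket.empty()) return NULL;               // caller has to alloc a new cell
                Cell* c = bucket.back();
                bucket.pop_back();
                return c;
            }
            void enqueue(Cell* c) {                            // a cell scrolled off-screen
                pool[c->identifier].push_back(c);
            }
        };

        int main() {
            ReusePool pool;
            Cell* c = pool.dequeue("Cell");                    // NULL: nothing to reuse yet
            if (c == NULL) { c = new Cell(); c->identifier = "Cell"; }
            pool.enqueue(c);                                   // the cell scrolls out of view
            std::printf("reused: %s\n", pool.dequeue("Cell") == c ? "yes" : "no");
            delete c;
            return 0;
        }

    With one shared identifier the pool never holds more cells than can scroll off-screen at once; with an identifier per row, a cell is only reused when its exact row comes back into view, so the pool grows with every row the user has ever seen, which is the memory/speed trade-off described above.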

    Read the article

  • stack and heap issue for iPhone memory management

    - by Forrest
    From this post I got to know that the Objective-C runtime does not allow objects to be instantiated on the stack, but only on the heap; this means that you don’t have “automatic objects”, nor things like auto_ptr objects to help you manage memory. Someone gave an example in the post Objective C: Memory Allocation on stack vs. heap: NSString* str = @"hello"; but this NSString is also not allocated on the stack. It feels odd that this str is static. (Who can explain this?) The question here is: why is there no heap, even when mixing C++ together with Objective-C? /////////////////////////////// Clearing up my question /////////////////////////////// I am confused, so my questions were not clear. Let me put it this way. 1) Should all Objective-C objects be allocated on the stack? (I think yes) 2) In C++ there is a stack for memory, so does an iOS app also have a stack? (I think yes) 3) For an iOS app that only uses Objective-C, what is the usage of the stack? What kind of objects should use the stack then?
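
    For comparison, a minimal C++ sketch of the stack/heap distinction being asked about; the Objective-C parallel is that the object itself always comes from +alloc (heap), and only the pointer to it is an automatic (stack) variable:

        #include <string>

        int main() {
            // Automatic (stack) object: destroyed automatically at the end of its scope.
            std::string onStack("hello");

            // Dynamic (heap) object: lives until it is explicitly deleted.
            std::string* onHeap = new std::string("hello");
            delete onHeap;

            // In Objective-C only the second form exists for objects: the object itself
            // always comes from +alloc (heap); the pointer to it is what sits on the stack.
            return 0;
        }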

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, in rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be ok, if it wasn't for the fact that these problematic memory allocations come in bursts, and can cause OutOfMemoryErrors due to the fact that the jvm is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my jvm's memory allocation myself (or insert memory management into my application's logic); it would be nice if there was a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release very early the memory I'm going to need. Long story short: I need a way (a command line option?) to configure the jvm in order to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold, I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions, Silvio P.S. I'm working on a way to avoid large allocations, but it could require a long time and meanwhile my app needs a little stability

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs which saves on the allocation). I think this makes sense if we're talking about a server with many client connections where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them. But in my app, I'm the client and I just need to listen to one server sending market tick data over one tcp connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues: (1) The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread which has Highest priority. (2) I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different threadpool threads notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads because there's no guarantee that they represent N unique market data messages because TCP is stream based. I'll have to lock and put the bytes into an array anyway and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me. Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair due to the following problem: I am following the example given in boost.interprocess documentation to instantiate a fixed-size ring buffer buffer class that I wrote in shared memory. The skeleton constructor for my class is: template<typename ItemType, class Allocator > SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer( unsigned long capacity ){ m_capacity = capacity; // Create the buffer nodes. m_start_ptr = this->allocator->allocate(); // allocate first buffer node BufferNode* ptr = m_start_ptr; for( int i = 0 ; i < this->capacity()-1; i++ ) { BufferNode* p = this->allocator->allocate(); // allocate a buffer node } } My first question: Does this sort of allocation guarantee that the buffer nodes are allocated in contiguous memory locations, i.e. when I try to access the n'th node from address m_start_ptr + n*sizeof(BufferNode) in my Read() method would it work? If not, what's a better way to keep the nodes, creating a linked list? My test harness is the following: // Define an STL compatible allocator of ints that allocates from the managed_shared_memory. // This allocator will allow placing containers in the segment typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator; //Alias a vector that uses the previous STL-like allocator so that allocates //its values from the segment typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf; int main(int argc, char *argv[]) { shared_memory_object::remove("MySharedMemory"); //Create a new segment with given name and size managed_shared_memory segment(create_only, "MySharedMemory", 65536); //Initialize shared memory STL-compatible allocator const ShmemAllocator alloc_inst (segment.get_segment_manager()); //Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst); } This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
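
    On the contiguity question: separate calls to allocate() make no guarantee that successive nodes end up adjacent, so computing addresses as m_start_ptr + n*sizeof(BufferNode) is not safe; a single allocate(capacity) call for one contiguous block, or an explicit linked list of nodes, would be the usual alternatives. On the compile errors: construct<MyBuf>("MyBuffer")(100, alloc_inst) forwards both arguments to MyBuf's constructor, so the class needs a constructor taking (capacity, allocator). For reference, a minimal sketch of that pattern, closely following the Boost.Interprocess documentation example with its ready-made vector:

        #include <boost/interprocess/managed_shared_memory.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>
        #include <boost/interprocess/containers/vector.hpp>

        using namespace boost::interprocess;

        // STL-compatible allocator handing out ints from the managed segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
        typedef vector<int, ShmemAllocator> MyVector;

        int main() {
            shared_memory_object::remove("MySharedMemory");
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            const ShmemAllocator alloc_inst(segment.get_segment_manager());

            // construct<> forwards these arguments to MyVector's constructor,
            // so the constructor must accept exactly (const ShmemAllocator&).
            MyVector* v = segment.construct<MyVector>("MyVector")(alloc_inst);
            v->push_back(42);

            segment.destroy<MyVector>("MyVector");
            shared_memory_object::remove("MySharedMemory");
            return 0;
        }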

    Read the article

  • calloc v/s malloc and time efficiency

    - by yCalleecharan
    Hi, I've read with interest the post "c difference between malloc and calloc". I'm using malloc in my code and would like to know what difference it would make to use calloc instead. My present (pseudo)code with malloc: Scenario 1 int main() { allocate large arrays with malloc INITIALIZE ALL ARRAY ELEMENTS TO ZERO for loop //say 1000 times do something and write results to arrays end for loop FREE ARRAYS with free command } //end main If I use calloc instead of malloc, then I'll have: Scenario 2 int main() { for loop //say 1000 times ALLOCATION OF ARRAYS WITH CALLOC do something and write results to arrays FREE ARRAYS with free command end for loop } //end main I have three questions: Which of the scenarios is more efficient if the arrays are very large? Which of the scenarios will be more time efficient if the arrays are very large? In both scenarios, I'm just writing to arrays in the sense that for any given iteration in the for loop, I'm writing each array sequentially from the first element to the last element. The important question: If I'm using malloc as in scenario 1, then is it necessary that I initialize the elements to zero? Say with malloc I have array z = [garbage1, garbage2, garbage3]. For each iteration, I'm writing elements sequentially, i.e. in the first iteration I get z = [some_result, garbage2, garbage3], in the second iteration I get z = [some_result, another_result, garbage3] and so on. Do I specifically need to initialize my arrays after malloc?
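
    A small sketch of the underlying difference: calloc hands back zero-filled memory, while malloc's contents are indeterminate, so malloc plus explicit zeroing is roughly what calloc does in one step (with the caveat that all-bits-zero reads back as 0.0 for doubles only on the usual IEEE-754 platforms). If, as in question 3, every element is overwritten before it is ever read, the zeroing is unnecessary either way:

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main() {
            const size_t n = 1000000;

            // malloc: contents are indeterminate until written.
            double* a = static_cast<double*>(std::malloc(n * sizeof(double)));
            // calloc: the same amount of memory, but already zero-filled.
            double* b = static_cast<double*>(std::calloc(n, sizeof(double)));
            if (a == NULL || b == NULL) return 1;

            std::memset(a, 0, n * sizeof(double));   // explicit zeroing, only if it is needed

            std::printf("%f %f\n", a[0], b[0]);      // both print 0.000000 here
            std::free(a);
            std::free(b);
            return 0;
        }

    Note also that Scenario 2 moves the allocation and free inside the loop, so it pays the allocator cost 1000 times instead of once, independently of the malloc/calloc choice.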

    Read the article

  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32bit Windows PC with 3GB of RAM. During the startup of the application I see that a large amount of memory is allocated. So far so good, but when I add more and more vision processing operations the memory allocation increases and a part of the application (only the Cognex OCX) stops working well. The rest of the application still works (worker threads, COM on sockets...). I did whatever I could to save memory, but when the memory allocated is about 700MB I begin to have problems. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported. Anyway, I'm thinking of trying to migrate my app to win64, but what do I have to do? Can I simply use a 64bit processor and 64bit Windows without recompiling my application, so that it remains a 32bit application, and still take advantage of 64bit? Or should I recompile my application? If I don't need to recompile my application, can I link it with the 64bit Cognex library? If I have to recompile my application, is it possible to cross compile it so that my development suite stays on a 32bit PC? Any help will be very much appreciated!! Thanks in advance

    Read the article

  • how to manage a "resource" array efficiently

    - by Haiyuan Zhang
    The scenario of my question is that one needs to use a fixed-size array to keep track of a certain number of "objects". The object here can be as simple as an integer or as complex as a very fancy data structure. And "keep track" here means to allocate one object when another part of the app needs an instance of the object, and to recycle it for future allocation when an instance of the object is returned. Finally, let me use C++ to put my problem in a more descriptive way. #define MAX 65535 /* 65535 just indicates that many items should be handled. Performance demanding! */ typedef struct { int item; } Item_t; Item_t items[MAX]; class itemManager { private: /* up to you.... */ public: int get(); /* get one index to a free Item_t in items */ bool put(int index); /* recycle one Item_t indicated by an index into items */ }; How would you implement the two public functions of itemManager? It's up to you to add any private members.
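
    One possible implementation (just a sketch; double-put protection and thread safety are left out): thread a free list through an auxiliary index array so that both get() and put() run in O(1) with no searching:

        #include <cstdio>

        #define MAX 65535

        typedef struct { int item; } Item_t;
        Item_t items[MAX];

        // A free list threaded through an index array: get() pops the head,
        // put() pushes a slot back, both in O(1).
        class itemManager {
        private:
            int freeHead;      // index of the first free slot, -1 when exhausted
            int next[MAX];     // next[i] = index of the free slot after i
        public:
            itemManager() : freeHead(0) {
                for (int i = 0; i < MAX - 1; ++i) next[i] = i + 1;
                next[MAX - 1] = -1;
            }
            int get() {
                if (freeHead == -1) return -1;     // pool exhausted
                int idx = freeHead;
                freeHead = next[idx];
                return idx;
            }
            bool put(int index) {
                if (index < 0 || index >= MAX) return false;
                next[index] = freeHead;            // no double-put protection here
                freeHead = index;
                return true;
            }
        };

        int main() {
            static itemManager mgr;                // static: the index array is large
            int a = mgr.get();
            int b = mgr.get();
            items[a].item = 1;
            items[b].item = 2;
            mgr.put(a);                            // slot a can now be handed out again
            std::printf("%d %d %d\n", a, b, mgr.get());
            return 0;
        }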

    Read the article

  • malloc works, cudaHostAlloc segfaults?

    - by Mikhail
    I am new to CUDA and I want to use cudaHostAlloc. I was able to isolate my problem to this following code. Using malloc for host allocation works, using cudaHostAlloc results in a segfault, possibly because the area allocated is invalid? When I dump the pointer in both cases it is not null, so cudaHostAlloc returns something... works in_h = (int*) malloc(length*sizeof(int)); //works for (int i = 0;i<length;i++) in_h[i]=2; doesn't work cudaHostAlloc((void**)&in_h,length*sizeof(int),cudaHostAllocDefault); for (int i = 0;i<length;i++) in_h[i]=2; //segfaults Standalone Code #include <stdio.h> void checkDevice() { cudaDeviceProp info; int deviceName; cudaGetDevice(&deviceName); cudaGetDeviceProperties(&info,deviceName); if (!info.deviceOverlap) { printf("Compute device can't use streams and should be discared."); exit(EXIT_FAILURE); } } int main() { checkDevice(); int *in_h; const int length = 10000; cudaHostAlloc((void**)&in_h,length*sizeof(int),cudaHostAllocDefault); printf("segfault comming %d\n",in_h); for (int i = 0;i<length;i++) { in_h[i]=2; } free(in_h); return EXIT_SUCCESS; } ~ Invocation [id129]$ nvcc fun.cu [id129]$ ./a.out segfault comming 327641824 Segmentation fault (core dumped) Details Program is run in interactive mode on a cluster. I was told that an invocation of the program from the compute node pushes it to the cluster. Have not had any trouble with other home made toy cuda codes.
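
    A hedged guess at where to look, since the allocation itself looks reasonable: cudaHostAlloc reports failure through its cudaError_t return value (for example when the runtime or device could not be initialised on the node the job actually runs on), and on failure the pointer is left unusable, which would produce exactly this kind of segfault; also, pinned memory should be released with cudaFreeHost rather than free(). A sketch with the return code checked:

        #include <cstdio>
        #include <cstdlib>
        #include <cuda_runtime.h>

        int main() {
            int* in_h = NULL;
            const int length = 10000;

            // cudaHostAlloc signals failure through its return code; the pointer is
            // only meaningful when the call returns cudaSuccess.
            cudaError_t err = cudaHostAlloc((void**)&in_h,
                                            length * sizeof(int),
                                            cudaHostAllocDefault);
            if (err != cudaSuccess) {
                std::fprintf(stderr, "cudaHostAlloc failed: %s\n",
                             cudaGetErrorString(err));
                return EXIT_FAILURE;
            }

            for (int i = 0; i < length; ++i) in_h[i] = 2;

            cudaFreeHost(in_h);   // pinned memory is released with cudaFreeHost, not free()
            return EXIT_SUCCESS;
        }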

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example—it's an approximation of the second derivative of lgamma: double trigamma(double x) { double p; int i; x=x+6; p=1/(x*x); p=(((((0.075757575757576*p-0.033333333333333)*p+0.0238095238095238) *p-0.033333333333333)*p+0.166666666666667)*p+1)/x+0.5*p; for (i=0; i<6 ;i++) { x=x-1; p=1/(x*x)+p; } return(p); } I've translated this into more or less idiomatic Haskell as follows: trigamma :: Double -> Double trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p') where x' = x + 6 p = 1 / x' ^ 2 p' = p / 2 + c / x' c = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66] next (x, p) = (x - 1, 1 / x ^ 2 + p) The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious about whether there's low-hanging fruit that I'm missing—some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.

    Read the article

  • Error while compiling Hello world program for CUDA

    - by footy
    I am using Ubuntu 12.10 and have sucessfully installed CUDA 5.0 and its sample kits too. I have also run sudo apt-get install nvidia-cuda-toolkit Below is my hello world program for CUDA: #include <stdio.h> /* Core input/output operations */ #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */ #include <math.h> /* Common mathematical functions */ #include <time.h> /* Converting between various date/time formats */ #include <cuda.h> /* CUDA related stuff */ __global__ void kernel(void) { } /* MAIN PROGRAM BEGINS */ int main(void) { /* Dg = 1; Db = 1; Ns = 0; S = 0 */ kernel<<<1,1>>>(); /* PRINT 'HELLO, WORLD!' TO THE SCREEN */ printf("\n Hello, World!\n\n"); /* INDICATE THE TERMINATION OF THE PROGRAM */ return 0; } /* MAIN PROGRAM ENDS */ The following error occurs when I compile it with nvcc -g hello_world_cuda.cu -o hello_world_cuda.x /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `main': /home/adarshakb/Documents/hello_world_cuda.cu:16: undefined reference to `cudaConfigureCall' /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__cudaUnregisterBinaryUtil': /usr/include/crt/host_runtime.h:172: undefined reference to `__cudaUnregisterFatBinary' /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `__sti____cudaRegisterAll_51_tmpxft_000033f1_00000000_4_hello_world_cuda_cpp1_ii_b81a68a1': /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFatBinary' /tmp/tmpxft_000033f1_00000000-1_hello_world_cuda.cudafe1.stub.c:1: undefined reference to `__cudaRegisterFunction' /tmp/tmpxft_000033f1_00000000-13_hello_world_cuda.o: In function `cudaError cudaLaunch<char>(char*)': /usr/lib/nvidia-cuda-toolkit/include/cuda_runtime.h:958: undefined reference to `cudaLaunch' collect2: ld returned 1 exit status I am also making sure that I use gcc and g++ version 4.4 ( As 4.7 there is some problem with CUDA)

    Read the article

  • iPhone - Memory Management - Using Leaks tool and getting some bizarre readings.

    - by Robert
    Hey all, putting the finishing touches on a project of mine so I figured I would run through it and see if and where I had any memory leaks. Found and fixed most of them but there are a couple of things regarding the memory leaks and object alloc that I am confused about. 1) There are 2 memory leaks that do not show me as responsible. There are 8 leaks attributed to AudioToolbox with the function being RegisterEmbeddedAudioCodecs(). This accounts for about 1.5 kb of leaks. The other one is detected immediately when the app begins. Core Graphics is responsible with the extra info being open_handle_to_dylib_path. For the audio leak I have looked over my audio code and to me it seems ok. self.musicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:songFilePath] error:NULL]; [musicPlayer prepareToPlay]; [musicPlayer play] is called later on in a function. 2) Is it normal for there to be a spike in Object Allocation whenever a new view or controller is presented? My total memory usage is very, very low except for whenever I present a view controller. It spikes then immediately goes back down. I am guessing that this is just the phone handling all the information for switching or something. Blegh. Wall of text. Thanks in advance to anyone who helps! =)

    Read the article

  • x86 .net application with system.OutOfMemoryException

    - by Allen
    Hi guys, I got an OutOfMemoryException after the app had been running for 1 day. The app uses 1.5G of memory in total, all consumed by the managed heap: gen 2 used 200mb, and the LOB used 1.3mb. However, the weird thing is that 900mb of space are Free. From the perf counters I saw that a number of gen 2 GC collections had happened, so why can't the GC collector collect those 900mb of free space in gen2 and the LOB? I really appreciate your help. The following info is from windbg: 0:000> !eeheap -gc Number of GC Heaps: 1 generation 0 starts at 0x183153f0 generation 1 starts at 0x182aa834 generation 2 starts at 0x02131000 ephemeral segment allocation context: none segment begin allocated size 02130000 02131000 0312f284 0xffe284(16769668) 07750000 07751000 0874fc5c 0xffec5c(16772188) 09e30000 09e31000 0ae2fc2c 0xffec2c(16772140) 0b230000 0b231000 0c22ffec 0xffefec(16773100) 0c230000 0c231000 0d22f6f0 0xffe6f0(16770800) 0d230000 0d231000 0e22ea10 0xffda10(16767504) 0e230000 0e231000 0f22c1c4 0xffb1c4(16757188) 10390000 10391000 1138ddf4 0xffcdf4(16764404) 154e0000 154e1000 164da90c 0xff990c(16750860) 34aa0000 34aa1000 35a9dbfc 0xffcbfc(16763900) 7aca0000 7aca1000 7bc9edfc 0xffddfc(16768508) 49760000 49761000 4a75ef64 0xffdf64(16768868) 7bca0000 7bca1000 7cc99bac 0xff8bac(16747436) 17a70000 17a71000 183313fc 0x8c03fc(9176060) Large object heap starts at 0x03131000 segment begin allocated size 03130000 03131000 041250c8 0xff40c8(16728264) 08920000 08921000 099102f8 0xfef2f8(16708344) .... .... 4c760000 4c761000 4d71d578 0xfbc578(16500088) 1bb10000 1bb11000 1ca110d0 0xf000d0(15728848) 57760000 57761000 5862d7f8 0xecc7f8(15517688) Total Size: Size: 0x5ab13450 (1521562704) bytes. ------------------------------ GC Heap Size: Size: 0x5ab13450 (1521562704) bytes. 0:000> !dumpheap -stat total 0 objects Statistics: MT Count TotalSize Class Name 73037c78 1 12 System.Configuration.GenericEnumConverter 73036da0 1 12 System.Configuration.InfiniteIntConverter .... .... 69161c3c 35025 6809420 System.Windows.EffectiveValueEntry[] 69164748 54 12471072 MS.Internal.WeakEventTable+EventKey[] 710e2228 9540 190389260 System.Byte[] 710dd2b8 1317031 339257932 System.String 0035a670 6427 902224056 Free Total 3615631 objects

    Read the article

  • OpenGL particle system

    - by allan
    I'm really new with OpenGL, so bear with me. I'm trying to simulate a particle system using OpenGl but I can't get it to work, this is what I have so far: #include <GL/glut.h> int main (int argc, char **argv){ // data allocation, various non opengl stuff ............ glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE ); glutInitWindowPosition(100,100); glutInitWindowSize(size, size); glPointSize (4); glutCreateWindow("test gl"); ............ // initial state, not opengl ............ glViewport(0,0,size,size); glutDisplayFunc(display); glutIdleFunc(compute); glutMainLoop(); } void compute (void) { // change state not opengl glutPostRedisplay(); } void display (void) { glClear(GL_COLOR_BUFFER_BIT); glBegin(GL_POINTS); for(i = 0; i<nparticles; i++) { // two types of particles if (TYPE(particle[i]) == 1) glColor3f(1,0,0); else glColor3f(0,0,1); glVertex2f(X(particle[i]),Y(particle[i])); } glEnd(); glFlush(); glutSwapBuffers(); } I get a black window after a couple of seconds (the window has just the title bar before that). Where do I go wrong? Any help would be very much appreciated. Thanks. LE: the x and y coordinates of each particle are within the interval (0,size)
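
    Two likely culprits, stated as assumptions since only part of the code is shown: glPointSize is called before glutCreateWindow, i.e. before any GL context exists, and no projection is ever set up, so with the default [-1, 1] clip volume particles spread over (0, size) are almost all invisible. A sketch of an init routine (a hypothetical helper, here called initGL, meant to run once right after glutCreateWindow):

        #include <GL/glut.h>

        // To be called once, right after glutCreateWindow(), when a GL context exists.
        void initGL(int size) {
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glPointSize(4.0f);                    // valid only once a context is current

            // Map particle coordinates in (0, size) onto the window; the default
            // projection would clip everything outside [-1, 1].
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            gluOrtho2D(0.0, (double)size, 0.0, (double)size);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
        }

    With GLUT_DOUBLE, glutSwapBuffers() at the end of display() is sufficient on its own; the preceding glFlush() is redundant but harmless.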

    Read the article

  • Win32 DLL importing issues (DllMain)

    - by brady
    I have a native DLL that is a plug-in to a different application (one that I have essentially zero control of). Everything works just great until I link with an additional .lib file (links my DLL to another DLL named ABQSMABasCoreUtils.dll). This file contains some additional API from the parent application that I would like to utilize. I haven't even written any code to use any of the functions exported but just linking in this new DLL is causing problems. Specifically I get the following error when I attempt to run the program: The application failed to initialize properly (0xc0000025). Clock on OK to terminate the application. I believe I have read somewhere that this is typically due to a DllMain function returning FALSE. Also, the following message is written to the standard output: ERROR: Memory allocation attempted before component initialization I am almost 100% sure this error message is coming from the application and is not some type of Windows error. Looking into this a little more (aka flailing around and flipping every switch I know of) I linked with /MAP turned on and found this in the resulting .map file: 0001:000af220 ??3@YAXPEAX@Z 00000001800b0220 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af226 ??2@YAPEAX_K@Z 00000001800b0226 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af22c ??_U@YAPEAX_K@Z 00000001800b022c f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll 0001:000af232 ??_V@YAXPEAX@Z 00000001800b0232 f ABQSMABasCoreUtils_import:ABQSMABasCoreUtils.dll If I undecorate those names using "undname" they give the following (same order): void __cdecl operator delete(void * __ptr64) void * __ptr64 __cdecl operator new(unsigned __int64) void * __ptr64 __cdecl operator new[](unsigned __int64) void __cdecl operator delete[](void * __ptr64) I am not sure I understand how anything from ABQSMABasCoreUtils.dll can exist within this .map file or why my DLL is even attempting to load ABQSMABasCoreUtils.dll if I don't have any code that references this DLL. Can anyone help me put this information together and find out why this isn't working? For what it's worth I have confirmed via "dumpbin" that the parent application imports the same DLL (ABQSMABasCoreUtils.dll), so it is being loaded no matter what. I have also tried delay loading this DLL in my DLL but that did not change the results.

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like: public double[] runMyAlgorithm(double[] inputData); or alternatively a reference could be passed to the array to store the output data: public void runMyAlgorithm(double[] inputData, double[] outputData); Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are: Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but will produce a lot of garbage. Pre-allocate temporary arrays and store them as final fields in the algorithm object - big downside would be that this would mean that only one thread could run the algorithm at any one time. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.... Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges. Any thoughts on which approach would be best (and why)?

    Read the article

  • How to do cleanup reliably in python?

    - by Cheery
    I have some ctypes bindings, and for each body.New I should call body.Free. The library I'm binding doesn't have its allocation routines insulated from the rest of the code (they can be called just about anywhere), and to use a couple of useful features I need to make cyclic references. I think it would be solved if I could find a reliable way to hook a destructor to an object (weakrefs would help if they gave me the callback just before the data is dropped). So obviously this code megafails when I put in velocity_func: class Body(object): def __init__(self, mass, inertia): self._body = body.New(mass, inertia) def __del__(self): print '__del__ %r' % self if body: body.Free(self._body) ... def set_velocity_func(self, func): self._body.contents.velocity_func = ctypes_wrapping(func) I also tried to solve it through weakrefs; with those, things just seem to get worse and largely more unpredictable. Even if I don't put in the velocity_func, cycles will appear at least when I do this: class Toy(object): def __init__(self, body): self.body.owner = self ... def collision(a, b, contacts): whatever(a.body.owner) So how do I make sure the Structures will get garbage collected, even if they are allocated/freed by the shared library? There's a repository if you are interested in more details: http://bitbucket.org/cheery/ctypes-chipmunk/

    Read the article

  • How to migrate primary key generation from "increment" to "hi-lo"?

    - by Bevan
    I'm working with a moderately sized SQL Server 2008 database (around 120 tables, backups are around 4GB compressed) where all the table primary keys are declared as simple int columns. At present, primary key values are generated by NHibernate with the increment identity generator, which has worked well thus far, but precludes moving to a multiprocessing environment. Load on the system is growing, so I'm evaluating the work required to allow the use of multiple servers accessing a common database backend. Transitioning to the hi-lo generator seems to be the best way forward, but I can't find a lot of detail about how such a migration would work. Will NHibernate automatically create rows in the hi-lo table for me, or do I need to script these manually? If NHibernate does insert rows automatically, does it properly take account of existing key values? If NHibernate does take care of things automatically, that's great. If not, are there any tools to help? Update NHibernate's increment identifier generator works entirely in-memory. It's seeded by selecting the maximum value of used identifiers from the table, but from that point on allocates new values by a simple increment, without reference back to the underlying database table. If any other process adds rows to the table, you end up with primary key collisions. You can run multiple threads within the one process just fine, but you can't run multiple processes. For comparison, the NHibernate identity generator works by configuring the database tables with identity columns, putting control over primary key generation in the hands of the database. This works well, but compromises the unit of work pattern. The hi-lo algorithm sits in between these - generation of primary keys is coordinated through the database, allowing for multiprocessing, but actual allocation can occur entirely in memory, avoiding problems with the unit of work pattern.
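
    To illustrate the last paragraph, a sketch of the hi-lo idea itself, not NHibernate's implementation and with the database access stubbed out: the database hands out a fresh "hi" block in a single round-trip, and every identifier inside that block is then allocated purely in memory:

        #include <cstdio>
        #include <mutex>

        // Stand-in for the real round-trip (something like "UPDATE hi_table SET next_hi = next_hi + 1").
        long fetchNextHiFromDatabase() {
            static long nextHi = 1;        // in reality this value lives in the hi-lo table
            return nextHi++;
        }

        class HiLoGenerator {
            long hi;                       // current block, fetched from the database
            long lo;                       // position inside the block
            const long maxLo;              // block size
            std::mutex m;
        public:
            explicit HiLoGenerator(long blockSize) : hi(-1), lo(0), maxLo(blockSize) {}

            long next() {
                std::lock_guard<std::mutex> lock(m);
                if (hi < 0 || lo >= maxLo) {           // block exhausted: one database round-trip
                    hi = fetchNextHiFromDatabase();
                    lo = 0;
                }
                return hi * maxLo + lo++;              // everything else happens in memory
            }
        };

        int main() {
            HiLoGenerator gen(100);        // block size 100: one round-trip per 100 identifiers
            for (int i = 0; i < 5; ++i) std::printf("%ld\n", gen.next());
            return 0;
        }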

    Read the article

  • Scrum - Responding to traditional RFPs

    - by Todd Charron
    Hi all, I've seen many articles about how to put together Agile RFP's and negotiating agile contracts, but how about if you're responding to a more traditional RFP? Any advice on how to meet the requirements of the RFP while still presenting an agile approach? A lot of these traditional RFP's request specific technical implementations, timelines, and costs, while also requesting exact details about milestones and how the technical solutions will be implemented. While I'm sure in traditional waterfall it's normal to pretend that these things are facts, it seems wrong to commit to something like this if you're an agile organization just to get through the initial screening process. What methods have you used to respond to more traditional RFP's? Here's a sample one grabbed from google, http://www.investtoronto.ca/documents/rfp-web-development.pdf Particularly, "3. A detailed work plan outlining how they expect to achieve the four deliverables within the timeframe outlined. Plan for additional phases of development." and "8. The detailed cost structure, including per diem rates for team members, allocation of hours between team members, expenses and other out of pocket disbursements, and a total upset price."

    Read the article

  • C++ interpreter conceptual problem

    - by Jan Wilkins
    I've built an interpreter in C++ for a language created by me. One main problem in the design was that I had two different types in the language: number and string. So I have to pass around a struct like: class myInterpreterValue { myInterpreterType type; int intValue; string strValue; } Objects of this class are passed around million times a second during e.g.: a countdown loop in my language. Profiling pointed out: 85% of the performance is eaten by the allocation function of the string template. This is pretty clear to me: My interpreter has bad design and doesn't use pointers enough. Yet, I don't have an option: I can't use pointers in most cases as I just have to make copies. How to do something against this? Is a class like this a better idea? vector<string> strTable; vector<int> intTable; class myInterpreterValue { myInterpreterType type; int locationInTable; } So the class only knows what type it represents and the position in the table This however again has disadvantages: I'd have to add temporary values to the string/int vector table and then remove them again, this would eat a lot of performance again. Help, how do interpreters of languages like Python or Ruby do that? They somehow need a struct that represents a value in the language like something that can either be int or string.
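
    A minimal sketch of one common layout, not the only one: a type tag plus a payload in which the string half is shared and immutable rather than copied, so passing a value around copies an integer and bumps a reference count instead of calling the string allocator. CPython passes reference-counted object pointers and Ruby passes tagged VALUEs in much the same spirit, rather than copying the data itself:

        #include <cstdio>
        #include <memory>
        #include <string>

        class Value {
            enum Type { Int, Str };
            Type type;
            long i;                                // valid when type == Int
            std::shared_ptr<const std::string> s;  // valid when type == Str, shared not copied
        public:
            explicit Value(long v) : type(Int), i(v) {}
            explicit Value(const std::string& v)
                : type(Str), i(0), s(std::make_shared<const std::string>(v)) {}

            bool isInt() const { return type == Int; }
            long asInt() const { return i; }
            const std::string& asStr() const { return *s; }
        };

        int main() {
            Value a(42L);
            Value b(std::string("hello"));
            Value c = b;   // copies a tag, a long and a refcount bump; no string allocation
            std::printf("%ld %s %s\n", a.asInt(), b.asStr().c_str(), c.asStr().c_str());
            return 0;
        }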

    Read the article

  • iPhone: Leak with UIWebView loading Office documents. Any ideas how to avoid it?

    - by Thomas Tempelmann
    While there are already quite a few posts about leaks around UIWebView, mine is a bit more special, I believe, and thus deserves its own post here. I see a reproducible large leak every time I load an Office document such as a Word or Excel file. For instance, every time I display a 180KB .doc file, I get a 100KB leak. And that happens with both the simulator and an actual device, running OS 3.1.3. The leak is not visible with the Leaks instrument but only by looking at the malloc instances via the ObjectAlloc instrument. Here's a picture from the instruments trace: I've also made a demo project, UIWebView-Leak.zip, so you can verify this yourself. To see the leak, use the ObjectAlloc instrument, switch to the view where you see individual allocation objects, and sort by size so that you see the large ones in a group, just like in my picture above. Then view an Office document a few times and find the Malloc objects that keep staying "Live" even after the actual UIWebView has been freed. Is this a known bug? Or is there any way I can avoid these leaks? I.e., have you successfully shown Office documents on an iPhone without getting such leaks? Note: I've reported this as a bug to Apple now, too (ID 7950594). I am still waiting for someone (including Apple) to confirm this as a true leak or show why it isn't (i.e. that I'm doing something wrong or making wrong assumptions).

    Read the article

  • Very weird C file-handling anomaly

    - by KáGé
    Hello, I got a very weird issue that I can't figure out in my school project, which is the simulation of a simple filesystem in a human-readable textfile. Unfortunately I don't yet have enough time to translate the comments in my code or make it less gibberish, so if you are bothered by that, you don't have to help, I understand. See the code HERE. Now in drive.h, at line 574 is this part: i = getline(); #ifdef DEBUG printf("Free space in all found at %d.\n\n", i); if(drive.disk != NULL){ printf("Disk OK\n\n"); } #endif //write in data state = seekline(i); Before this it finds a place for the allocation database entry in the ALL sector (see the "image files" in the mounts folder, this issue was tested on mount_30.efs-dbf), then gets the line with i = getline() fine (getline is in lglobal.h, line 39), but after that any file manipulation (in this case seekline's fseek, but if I comment that out, then the first fprintf after that) crashes the program straight away. I think the file somehow gets corrupted (though the Disk OK message appears) but I can't figure out how. I've tried commenting out the i = getline(); line, but it didn't make any difference. I've also tried asking at local programming forums but they didn't really help either. The last few lines of the output before it crashes: Dir written. (drive.h line 562) Seekline entered: 268 (called at drive.h line 564) Getline entered. (called at drive.h line 574) Line got: 268. Free space in all found at 268. (drive.h line 576) Seekline entered: 268 (called at drive.h line 582, note that this exact call was run successfully less than 20 lines back. This one should set the pointer to the beginning of the line it is currently in) After this it crashes. Does anyone have any idea what causes this and how I could fix it? Thank you.

    Read the article

  • Using Python to get a CSV output for the following example.

    - by Az
    Hi there, I'm back again with my ongoing saga of Student-Project Allocation questions. Thanks to Moron (who does not match his namesake) I've got a bit of direction for an evaluation portion of my project. Going with the idea of the Assignment Problem and Hungarian Algorithm I would like to express my data in the form of a .csv file which would end up looking like this in spreadsheet form. This is based on the structure I saw here.

    |          | Project 1 | Project 2 | Project 3 |
    |----------|-----------|-----------|-----------|
    | Student1 |           | 2         | 1         |
    |----------|-----------|-----------|-----------|
    | Student2 | 1         | 2         | 3         |
    |----------|-----------|-----------|-----------|
    | Student3 | 1         | 3         | 2         |
    |----------|-----------|-----------|-----------|

    To make it less cryptic: the rows are the Students/Agents and the columns represent Projects/Tasks. Obviously ONE project can be assigned to ONE student. That, in short, is what my project is about. The fields represent the preference weights the students have placed upon the projects (ranging from 1 to 10). If blank, that student does not want that project and there's no chance of him/her being assigned such. Anyway, my data is stored within dictionaries. Specifically the students and projects dictionaries such that: students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences) where preferences is in the form of a dictionary such that preferences[rank] = {project_id} and projects[project_id] = Project(project_id, project_name) I'm aware that sorted(students.keys()) will give me a sorted list of all the student IDs which will populate the row labels and sorted(projects.keys()) will give me the list I need to populate the column labels. Thus for each student, I'd go into their preferences dictionary and match the applicable projects to ranks. I can do that much. Where I'm failing is understanding how to create a .csv file. Any help, pointers or good tutorials will be highly appreciated.

    Read the article

  • C Programming: malloc() inside another function

    - by vikramtheone
    Hi Guys, I need help with malloc() inside another function. I'm passing a pointer and a size to the function from my main() and I would like to allocate memory for that pointer dynamically using malloc() from inside that called function, but what I see is that.... the memory which is getting allocated is for the pointer declared within my called function and not for the pointer which is inside the main(). How should I pass a pointer to a function and allocate memory for the passed pointer from inside the called function? Can anyone throw light on this? Help!!! Vikram I have written the following code and I get the output shown below. SOURCE: main() { unsigned char *input_image; unsigned int bmp_image_size = 262144; if(alloc_pixels(input_image, bmp_image_size)==NULL) printf("\nPoint2: Memory allocated: %d bytes",_msize(input_image)); else printf("\nPoint3: Memory not allocated"); } signed char alloc_pixels(unsigned char *ptr, unsigned int size) { signed char status = NO_ERROR; ptr = NULL; ptr = (unsigned char*)malloc(size); if(ptr== NULL) { status = ERROR; free(ptr); printf("\nERROR: Memory allocation did not complete successfully!"); } printf("\nPoint1: Memory allocated: %d bytes",_msize(ptr)); return status; } PROGRAM OUTPUT: Point1: Memory allocated ptr: 262144 bytes Point2: Memory allocated input_image: 0 bytes
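
    The usual fix is to pass the address of the caller's pointer (an unsigned char **), so that the result of malloc lands in main()'s variable rather than in the called function's local copy of the pointer. A sketch along those lines (the _msize and status details from the original are left out):

        #include <stdio.h>
        #include <stdlib.h>

        /* Pass the address of the caller's pointer so the function can fill it in. */
        int alloc_pixels(unsigned char **ptr, unsigned int size)
        {
            *ptr = (unsigned char *)malloc(size);
            if (*ptr == NULL) {
                printf("ERROR: Memory allocation did not complete successfully!\n");
                return -1;
            }
            return 0;
        }

        int main(void)
        {
            unsigned char *input_image = NULL;
            unsigned int bmp_image_size = 262144;

            if (alloc_pixels(&input_image, bmp_image_size) == 0)
                printf("Memory allocated: %u bytes\n", bmp_image_size);
            else
                printf("Memory not allocated\n");

            free(input_image);    /* free(NULL) is a no-op, so this is safe either way */
            return 0;
        }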

    Read the article

  • Memory Leak in Windows Page file when calling a shell command

    - by Arno
    I have an issue on our Windows 2003 x64 Build Server when invoking shell commands from a script. Each call causes a "memory leak" in the page file so it grows quite rapidly until it reaches the maximum and the machine stops working. I can reproduce the problem very nicely by running a perl script like for ($count=1; $count<5000; $count++) { system "echo huhu"; } It is independent of the scripting language, as the same happens with lua: for i=1,5000 do os.execute("echo huhu") end I found somebody describing the same issue with php at http://www.issociate.de/board/post/454835/Memory_leak_occurs_when_exec%28%29_function_is_used_on_Windows_platform.html His solution (firewall/virus scanner) does not apply, as neither is running on the machine. We can also reproduce the issue on other developer machines running XP 64, but not on XP 32 Bit. I also found an article describing a leak situation in the page file at http://www.programfragment.com/ The process responsible for the allocation is C:\WINDOWS\System32\svchost.exe -k netsvcs which runs all the basic Windows services. Does anybody know this issue and how to resolve it?

    Read the article
