Search Results

Search found 15939 results on 638 pages for 'low memory'.

Page 254/638

  • Inserting rows while fetching (from another table) in SQLite

    - by Samuel
    I'm getting this error no matter what with Python and SQLite:

        File "addbooks.py", line 77, in saveBook
            conn.commit()
        sqlite3.OperationalError: cannot commit transaction - SQL statements in progress

    The code looks like this:

        conn = sqlite3.connect(fname)
        cread = conn.cursor()
        cread.execute('''select book_text from table''')
        while True:
            row = cread.fetchone()
            if row is None:
                break
            ....
            for entry in getEntries(doc):
                saveBook(entry, conn)

    I can't do a fetchall() because the table and columns are big and memory is scarce. What can be done without resorting to dirty tricks (such as reading the rowids into memory, which would probably fit, and then selecting the rows one by one)?
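
    One common workaround, sketched here with placeholder table and column names since the real schema and the getEntries/saveBook helpers aren't shown: page through the table by rowid so that every SELECT is fully consumed before commit() runs, keeping only one small batch in memory at a time.

        import sqlite3

        def copy_books(fname, batch=256):
            conn = sqlite3.connect(fname)
            cur = conn.cursor()
            last_rowid = 0
            while True:
                # each SELECT is exhausted before any commit, so no
                # statement is "in progress" when commit() is called
                rows = cur.execute(
                    "select rowid, book_text from books"      # hypothetical table name
                    " where rowid > ? order by rowid limit ?",
                    (last_rowid, batch)).fetchall()
                if not rows:
                    break
                for rowid, book_text in rows:
                    # this insert stands in for the original saveBook() processing
                    cur.execute("insert into excerpts(text) values (?)", (book_text,))
                last_rowid = rows[-1][0]
                conn.commit()
            conn.close()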


  • Why doesn't this work?

    - by user146780
    I've tried to solve a memory leak in the GLU tessellation combine callback by creating a global variable, but now it does not draw anything:

        GLdouble *gluptr = NULL;

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex;
            if (gluptr == NULL)
            {
                gluptr = (GLdouble *) malloc(6 * sizeof(GLdouble));
            }
            vertex = (GLdouble *) gluptr;
            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];
            for (int i = 3; i < 6; i++)
            {
                vertex[i] = weight[0] * vertex_data[0][i] +
                            weight[1] * vertex_data[0][i] +
                            weight[2] * vertex_data[0][i] +
                            weight[3] * vertex_data[0][i];
            }
            *dataOut = vertex;
        }

    Basically, instead of doing a malloc each time the callback runs (thus the memory leak) I'm using a global pointer, but now nothing is drawn to the screen. Why would using malloc with a pointer created inside the function work any differently than with a global variable? Thanks
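
    For context, a common pattern (a sketch, not the asker's code) gives every combine call its own vertex and frees the whole pool only after tessellation is finished, because GLU keeps referring to each combined vertex it was handed back; reusing a single global buffer makes every combined vertex alias the same memory.

        #include <GL/glu.h>      // on Windows, <windows.h> must be included first
        #include <vector>
        #include <cstdlib>
        #include <cstddef>

        #ifndef CALLBACK
        #define CALLBACK         // CALLBACK is a Windows calling-convention macro; empty elsewhere
        #endif

        static std::vector<GLdouble*> combinedVertices;   // every allocation made by the callback

        void CALLBACK combineCallback(GLdouble coords[3], GLdouble *vertex_data[4],
                                      GLfloat weight[4], GLdouble **dataOut)
        {
            GLdouble *vertex = (GLdouble *) std::malloc(6 * sizeof(GLdouble));
            vertex[0] = coords[0];
            vertex[1] = coords[1];
            vertex[2] = coords[2];
            for (int i = 3; i < 6; i++)
            {
                vertex[i] = 0.0;
                for (int j = 0; j < 4; j++)
                {
                    if (vertex_data[j])                    // GLU may pass fewer than four sources
                        vertex[i] += weight[j] * vertex_data[j][i];
                }
            }
            combinedVertices.push_back(vertex);            // remember it so it can be freed later
            *dataOut = vertex;
        }

        void freeCombinedVertices()                        // call after gluTessEndPolygon()
        {
            for (std::size_t i = 0; i < combinedVertices.size(); i++)
                std::free(combinedVertices[i]);
            combinedVertices.clear();
        }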


  • How to debug macruby?

    - by Dan
    Hi, I've encountered an inconsistent bug with MacRuby and have no idea how to go about debugging it. Any help would be great. I don't know whether this is due to my own code or a bug in the MacRuby framework. I have a feeling it's my own code: something about over-retaining a piece of memory, so that garbage collection fails. This is the error from Xcode. Thanks.

        CSV Wizard(30245,0x7fff704f7ca0) malloc: resurrection error for object 0x20199da20 while assigning {conservative-block}[196608](0x302360060)[117616] = Array[64](0x20199da20)
        garbage pointer stored into reachable memory, break on auto_zone_resurrection_error to debug
        CSV Wizard(30245,0x103781000) malloc: garbage block 0x20199da20(Array[64]) was over-retained during finalization, refcount = 1
        This could be an unbalanced CFRetain(), or CFRetain() balanced with -release. Break on auto_zone_resurrection_error() to debug.
        CSV Wizard(30245,0x103781000) malloc: fatal resurrection error for garbage block 0x20199da20(Array[64]): over-retained during finalization, refcount = 1


  • Stack recommendations for small/medium-sized web application in Python

    - by reto
    I'm looking for some recommendations for a Python web application. We have some memory restrictions and we are trying to keep it small and lean. We are thinking about using WSGI (and a Python web server) and building the rest ourselves. We already have a template engine we'd like to use, but we are open to suggestions regarding the whole request handling (the controller). The application has to run in a single process and the requests have to be processed by multiple threads. We've looked at Django, but we are not sure it fits into our memory budget. Your feedback is very welcome! Cheers, Reto
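
    As a point of reference, the "WSGI plus a small Python web server" setup described above can be sketched with nothing but the standard library (Python 3 here; the app body and the template engine are placeholders): a single process, one handler thread per request.

        from wsgiref.simple_server import make_server, WSGIServer
        from socketserver import ThreadingMixIn

        class ThreadingWSGIServer(ThreadingMixIn, WSGIServer):
            """Single process, one handler thread per request."""
            daemon_threads = True

        def app(environ, start_response):
            # the controller/dispatch layer goes here; the template engine
            # mentioned above would render the response body
            start_response('200 OK', [('Content-Type', 'text/plain')])
            return [b'hello']

        if __name__ == '__main__':
            make_server('', 8000, app, server_class=ThreadingWSGIServer).serve_forever()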


  • segmented reduction with scattered segments

    - by Christian Rau
    I have to solve a pretty standard problem on the GPU, but I'm quite new to practical GPGPU, so I'm looking for ideas on how to approach it.

    I have many points in 3-space which are assigned to a very small number of groups (each point belongs to one group), specifically 15 in this case (it doesn't ever change). Now I want to compute the mean and covariance matrix of every group. On the CPU it's roughly:

        for each point p
        {
            mean[p.group] += p.pos;
            covariance[p.group] += p.pos * p.pos;
            ++count[p.group];
        }
        for each group g
        {
            mean[g] /= count[g];
            covariance[g] = covariance[g]/count[g] - mean[g]*mean[g];
        }

    Since the number of groups is extremely small, the last step can be done on the CPU (I need those values on the CPU anyway). The first step is actually just a segmented reduction, but with the segments scattered around.

    So the first idea I came up with was to sort the points by their groups. I thought about a simple bucket sort using atomic_inc to compute bucket sizes and per-point relocation indices (got a better idea for sorting? atomics may not be the best idea). After that they're sorted by groups and I could possibly come up with an adaptation of the segmented scan algorithms presented here.

    But in this special case I have a very large amount of data per point (9-10 floats, maybe even doubles if the need arises), so the standard algorithms using one shared memory element per thread and one thread per point might cause problems with per-multiprocessor resources such as shared memory or registers (OK, much more so on compute capability 1.x than 2.x, but still).

    Due to the very small and constant number of groups I thought there might be better approaches. Maybe there are already existing ideas suited to these specific properties of such a standard problem. Or maybe my general approach isn't that bad and you have ideas for improving the individual steps, like a good sorting algorithm suited to a very small number of keys, or a segmented reduction algorithm that minimizes shared memory/register usage.

    I'm looking for general approaches and don't want to use external libraries. FWIW I'm using OpenCL, but it shouldn't really matter, since the general concepts of GPU computing don't really differ across the major frameworks.


  • Reading from a file not line-by-line

    - by MadH
    Assigning a QTextStream to a QFile and reading it line-by-line is easy and works fine, but I wonder whether performance can be increased by first storing the file in memory and then processing it line-by-line. Using FileMon from Sysinternals, I noticed that the file is read in chunks of 16KB, and since the files I have to process are not that big (~2MB, but many!), loading them into memory would be a nice thing to try. Any ideas how I can do so? QFile inherits from QIODevice, which lets me readAll() it into a QByteArray, but how do I then proceed and divide it into lines?
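
    One way to sketch it, assuming a Qt 4-style API: QTextStream can parse straight out of an in-memory QByteArray, so the file can be read once with readAll() and then split into lines from memory.

        #include <QFile>
        #include <QByteArray>
        #include <QTextStream>
        #include <QString>

        void processFile(const QString &path)
        {
            QFile file(path);
            if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
                return;

            QByteArray data = file.readAll();            // whole ~2 MB file in one read
            file.close();

            QTextStream in(&data, QIODevice::ReadOnly);  // parses from memory, not from disk
            while (!in.atEnd()) {
                QString line = in.readLine();
                // process line here
            }
        }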


  • Visual Studio 2010 "Not enough storage is available to process this command"

    - by Daniel Perez
    I'm fighting with VS 2010 and this error, which seems to have been very common in previous versions, but it looks like not everyone is having it in the latest version. I've got VS 2010 SP1 and I'm getting this error quite often. The problem is that it's not even enough to restart VS to make it go away; I usually have to restart my PC, and I'm losing a lot of time doing this (it's quite frequent). I've got Windows 7 32-bit (I can't upgrade to 64-bit, the company doesn't allow it), and I can't do things like creating another solution (please don't suggest that :) ). I've used the command to make devenv.exe LARGEADDRESSAWARE, but the error keeps happening. My virtual memory size is set to automatic, and the weird thing is that VS doesn't even use 2GB of RAM, so I don't know whether the error is really because it's lacking memory or whether it's a bug in the program. Any ideas, things to try, anything?


  • Windows Service suddenly doing nothing

    - by TB
    Hi, my Windows service uses a Thread (not a timer) which loops forever and sleeps for one second on every iteration using:

        evet.WaitOne(interval);

    When I start the service it works fine, and I can see in Task Manager that it is running, consuming and releasing memory, using the processor, and so on; all normal. But after a while (a random amount of time) the service simply stops! It is still there in Task Manager, but it no longer uses any processor time and its memory usage stops changing. It has simply died, yet it is still in Task Manager like a zombie. I know that many exceptions might happen while the service runs (it really does many things), but all those exceptions are handled in try/catch blocks, so why does my "always looping" thread stop? The thread also logs on every iteration; when it freezes like this it is, of course, not logging anything.
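
    A sketch of a diagnostic wrapping that often pins this down (a fragment of the service class; the stopEvent, interval, DoWork and log names are placeholders, not the original code): catch and log per-iteration failures inside the loop, and log separately if the loop itself ever exits, so the last log line tells you how the thread died.

        private readonly ManualResetEvent stopEvent = new ManualResetEvent(false);
        private readonly TimeSpan interval = TimeSpan.FromSeconds(1);

        private void WorkerLoop()
        {
            try
            {
                while (!stopEvent.WaitOne(interval))   // sleeps one second, wakes early on stop
                {
                    try
                    {
                        DoWork();                      // placeholder for the real work
                    }
                    catch (Exception ex)               // a per-iteration failure must not kill the loop
                    {
                        log.Error("iteration failed", ex);
                    }
                }
                log.Info("worker loop asked to stop");
            }
            catch (Exception ex)
            {
                // reaching this means the loop itself died; this line explains the "zombie"
                log.Fatal("worker thread exiting unexpectedly", ex);
                throw;
            }
        }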


  • What wording in the C++ standard allows static_cast<non-void-type*>(malloc(N)); to work?

    - by ben
    As far as I understand the wording in 5.2.9 Static cast, the only case in which the result of a void*-to-object-pointer conversion is allowed is when the void* was itself the result of the inverse conversion in the first place. Throughout the standard there are many references to the representation of a pointer, and to the representation of a void pointer being the same as that of a char pointer, and so on, but it never seems to say explicitly that casting an arbitrary void pointer yields a pointer to the same location in memory with a different type, much as type punning is undefined when not punning back to an object's actual type. So while malloc clearly returns the address of suitable memory and so on, there does not seem to be any portable way to actually make use of it, as far as I have seen.


  • Page allocation failures on iSCSI storage

    - by Dave
    We have a CentOS 6.3 iscsi server (16GB RAM) running on Infiniband bus (ipoib). When the load is high I can see multiple errors: Sep 3 23:22:20 stor4 kernel: tgtd: page allocation failure. order:2, mode:0x20 Sep 3 23:22:20 stor4 kernel: Pid: 3637, comm: tgtd Not tainted 2.6.32 #1 Sep 3 23:22:20 stor4 kernel: Call Trace: Sep 3 23:22:20 stor4 kernel: [] ? __alloc_pages_nodemask+0x77f/0x940 Sep 3 23:22:20 stor4 kernel: [] ? kmem_getpages+0x62/0x170 Sep 3 23:22:20 stor4 kernel: [] ? fallback_alloc+0x1ba/0x270 Sep 3 23:22:20 stor4 kernel: [] ? cache_grow+0x2cf/0x320 Sep 3 23:22:20 stor4 kernel: [] ? ____cache_alloc_node+0x99/0x160 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __kmalloc+0x189/0x220 Sep 3 23:22:20 stor4 kernel: [] ? pskb_expand_head+0x64/0x270 Sep 3 23:22:20 stor4 kernel: [] ? __pskb_pull_tail+0x2aa/0x360 Sep 3 23:22:20 stor4 kernel: [] ? tcp_init_tso_segs+0x37/0x50 Sep 3 23:22:20 stor4 kernel: [] ? dev_queue_xmit+0x4bb/0x6f0 Sep 3 23:22:20 stor4 kernel: [] ? neigh_connected_output+0xbd/0x100 Sep 3 23:22:20 stor4 kernel: [] ? ip_finish_output+0x237/0x310 Sep 3 23:22:20 stor4 kernel: [] ? ip_output+0xb8/0xc0 Sep 3 23:22:20 stor4 kernel: [] ? __ip_local_out+0x9f/0xb0 Sep 3 23:22:20 stor4 kernel: [] ? ip_local_out+0x25/0x30 Sep 3 23:22:20 stor4 kernel: [] ? ip_queue_xmit+0x190/0x420 Sep 3 23:22:20 stor4 kernel: [] ? sock_aio_write+0x167/0x180 Sep 3 23:22:20 stor4 kernel: [] ? tcp_transmit_skb+0x3fe/0x7b0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_write_xmit+0x1fb/0xa20 Sep 3 23:22:20 stor4 kernel: [] ? __tcp_push_pending_frames+0x30/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? tcp_push_pending_frames+0x33/0x40 Sep 3 23:22:20 stor4 kernel: [] ? do_tcp_setsockopt+0x3d6/0x480 Sep 3 23:22:20 stor4 kernel: [] ? tcp_setsockopt+0x2a/0x30 Sep 3 23:22:20 stor4 kernel: [] ? sock_common_setsockopt+0x14/0x20 Sep 3 23:22:20 stor4 kernel: [] ? sys_setsockopt+0x7f/0xe0 Sep 3 23:22:20 stor4 kernel: [] ? 
system_call_fastpath+0x16/0x1b Sep 3 23:22:20 stor4 kernel: Mem-Info: Sep 3 23:22:20 stor4 kernel: Node 0 DMA per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 0, btch: 1 usd: 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 23 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 183 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 181 Sep 3 23:22:20 stor4 kernel: Node 0 Normal per-cpu: Sep 3 23:22:20 stor4 kernel: CPU 0: hi: 186, btch: 31 usd: 171 Sep 3 23:22:20 stor4 kernel: CPU 1: hi: 186, btch: 31 usd: 29 Sep 3 23:22:20 stor4 kernel: CPU 2: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: CPU 3: hi: 186, btch: 31 usd: 32 Sep 3 23:22:20 stor4 kernel: active_anon:1875 inactive_anon:2473 isolated_anon:0 Sep 3 23:22:20 stor4 kernel: active_file:1243637 inactive_file:2505055 isolated_file:0 Sep 3 23:22:20 stor4 kernel: unevictable:0 dirty:268338 writeback:0 unstable:0 Sep 3 23:22:20 stor4 kernel: free:86050 slab_reclaimable:132377 slab_unreclaimable:23744 Sep 3 23:22:20 stor4 kernel: mapped:1293 shmem:222 pagetables:720 bounce:0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA free:15732kB min:124kB low:152kB high:184kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15332kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 2172 16060 16060 Sep 3 23:22:20 stor4 kernel: Node 0 DMA32 free:107544kB min:18268kB low:22832kB high:27400kB active_anon:468kB inactive_anon:2364kB active_file:566208kB inactive_file:976112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:2224900kB mlocked:0kB dirty:96816kB writeback:0kB mapped:908kB shmem:12kB slab_reclaimable:176940kB slab_unreclaimable:968kB kernel_stack:64kB pagetables:192kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 13887 13887 Sep 3 23:22:20 stor4 kernel: Node 0 Normal free:220924kB min:116772kB low:145964kB high:175156kB active_anon:7032kB inactive_anon:7528kB active_file:4408340kB inactive_file:9044108kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:14220800kB mlocked:0kB dirty:976536kB writeback:0kB mapped:4264kB shmem:876kB slab_reclaimable:352568kB slab_unreclaimable:94008kB kernel_stack:2048kB pagetables:2688kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Sep 3 23:22:20 stor4 kernel: lowmem_reserve[]: 0 0 0 0 Sep 3 23:22:20 stor4 kernel: Node 0 DMA: 1*4kB 0*8kB 1*16kB 1*32kB 1*64kB 0*128kB 1*256kB 0*512kB 1*1024kB 1*2048kB 3*4096kB = 15732kB Sep 3 23:22:20 stor4 kernel: Node 0 DMA32: 16305*4kB 4381*8kB 353*16kB 8*32kB 1*64kB 1*128kB 0*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 107900kB Sep 3 23:22:20 stor4 kernel: Node 0 Normal: 14548*4kB 14808*8kB 2420*16kB 31*32kB 5*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 220784kB Sep 3 23:22:20 stor4 kernel: 3748822 total pagecache pages Sep 3 23:22:20 stor4 kernel: 0 pages in swap cache Sep 3 23:22:20 stor4 kernel: Swap cache stats: add 0, delete 0, find 0/0 Sep 3 23:22:20 stor4 kernel: Free swap = 975864kB Sep 3 23:22:20 stor4 kernel: Total swap = 975864kB Sep 3 23:22:20 stor4 kernel: 4194303 pages RAM Sep 3 23:22:20 stor4 kernel: 126915 pages reserved Sep 3 23:22:20 stor4 kernel: 3753534 pages shared Sep 3 23:22:20 stor4 kernel: 213500 pages non-shared TCP stack and VM config: net.core.rmem_max = 83886080 net.core.wmem_max = 83886080 net.core.rmem_default = 65536 net.core.wmem_default = 65536 net.ipv4.tcp_rmem = 40960 1048560 4194304 net.ipv4.tcp_wmem = 40960 196608 4194304 net.ipv4.tcp_mem = 16388608 16388608 16388608 vm.min_free_kbytes=135168 Additional tweaks: /sbin/blockdev --setra 16384 /dev/sdb echo 2048 /sys/block/sdb/queue/nr_requests Where might the problem be? Thank you.


  • Algorithm for non-contiguous netmask match

    - by Gianluca
    Hi, I have to write a really, really fast algorithm to match an IP address against a list of groups, where each group is defined using a notation like 192.168.0.0/252.255.0.255. As you can see, the bitmask can contain zeros even in the middle, so the traditional "longest prefix match" algorithms won't work. If an IP matches two groups, it is assigned to the group with the most 1s in its netmask. I'm not working with many entries (let's say < 1000) and I don't want a data structure with a large memory footprint (let's say 1-2 MB), but it really has to be fast (of course I can't afford a linear search). Do you have any suggestions? Thanks guys. UPDATE: I found something quite interesting at http://www.cse.usf.edu/~ligatti/papers/grouper-conf.pdf, but it's still too memory-hungry for my utopian use case
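
    For reference, the membership test itself is just a mask-and-compare, with ties broken by the popcount of the mask; any faster indexing scheme still has to reproduce this predicate for each candidate group. A baseline sketch (the struct layout and names are mine, not from the question; this is the linear scan the asker wants to beat, shown only to pin down the matching rule):

        #include <stdint.h>
        #include <stddef.h>

        struct group {
            uint32_t addr;   /* group address, host byte order   */
            uint32_t mask;   /* possibly non-contiguous netmask  */
            int      id;
        };

        /* groups[] is assumed to be pre-sorted by the number of 1-bits in
           mask, descending, so the first hit is also the best match */
        int match_group(uint32_t ip, const struct group *groups, size_t n)
        {
            for (size_t i = 0; i < n; i++) {
                if ((ip & groups[i].mask) == (groups[i].addr & groups[i].mask))
                    return groups[i].id;
            }
            return -1;   /* no group matched */
        }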


  • Tuning JVM (GC) for high responsive server application

    - by elgcom
    I am running an application server on 64-bit Linux with 8 CPU cores and 6 GB of memory. The server must be highly responsive. After some inspection I found that the application running on the server creates a rather huge number of short-lived objects and has only about 200~400 MB of long-lived objects (as long as there is no memory leak).

    After reading http://java.sun.com/javase/technologies/hotspot/gc/gc_tuning_6.html I use these JVM options:

        -Xms2g -Xmx2g -XX:MaxPermSize=256m -XX:NewRatio=1 -XX:+UseConcMarkSweepGC

    Result: a minor GC takes 0.01~0.02 sec, a major GC takes 1~3 sec, and minor GCs happen constantly.

    How can I further improve or tune the JVM? A larger heap size, but will GC then take more time? A larger NewSize and MaxNewSize (for the young generation)? A different collector, such as the parallel GC? Is it a good idea to let major GCs take place more often, and if so, how?


  • Bad_alloc exception when using new for a struct c++

    - by bsg
    Hi, I am writing a query processor which allocates large amounts of memory and tries to find matching documents. Whenever I find a match, I create a structure to hold two variables describing the document and add it to a priority queue. Since there is no way of knowing how many times I will do this, I tried creating my structs dynamically using new. When I pop a struct off the priority queue, the queue (the STL priority_queue implementation) is supposed to call the object's destructor. My struct has no destructor, so I assume a default destructor is called in that case. However, the very first time I try to create a DOC struct, I get the following error:

        Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012f5dc..

    I don't understand what's happening: have I used up so much memory that the heap is full? It doesn't seem likely. And it's not as if I've even used that pointer before. So: first of all, what am I doing that's causing the error, and secondly, will the following code work more than once? Do I need a separate pointer for each struct created, or can I re-use the same temporary pointer and assume that the queue will keep a pointer to each struct? Here is my code:

        struct DOC {
            int docid;
            double rank;
        public:
            DOC()
            {
                docid = 0;
                rank = 0.0;
            }
            DOC(int num, double ranking)
            {
                docid = num;
                rank = ranking;
            }
            bool operator>(const DOC & d) const { return rank > d.rank; }
            bool operator<(const DOC & d) const { return rank < d.rank; }
        };

        // a lot of processing goes on here; when a matching document is found, I do this:
        rank = calculateRanking(table, num);

        // if the heap is not full, create a DOC struct with the docid and rank and add it to the heap
        if(q.size() < 20)
        {
            doc = new DOC(num, rank);
            q.push(*doc);
            doc = NULL;
        }
        // if the heap is full, but the new rank is greater than the
        // smallest element in the min heap, remove the current smallest element
        // and add the new one to the heap
        else if(rank > q.top().rank)
        {
            q.pop();
            cout << "pushing doc on to queue" << endl;
            doc = new DOC(num, rank);
            q.push(*doc);
        }

    Thank you very much, bsg.
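
    As an aside, since q.push(*doc) copies the struct into the queue anyway, the by-value variant below (a sketch assuming a min-heap of DOC values capped at 20 entries) does the same job with no new at all, so there is nothing to leak and no pointer to keep around:

        #include <queue>
        #include <vector>
        #include <functional>

        std::priority_queue<DOC, std::vector<DOC>, std::greater<DOC> > q;  // min-heap via operator>

        void offer(int num, double rank)
        {
            if (q.size() < 20) {
                q.push(DOC(num, rank));          // copied into the queue's own storage
            } else if (rank > q.top().rank) {
                q.pop();                         // drop the current smallest element
                q.push(DOC(num, rank));
            }
        }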


  • Lifetime of javascript variables.

    - by The Machine
    What is the lifetime of a variable in JavaScript, declared with "var"? I am sure it is definitely not according to expectation.

        <script>
        (function(){
            var a;
            var fun = function(){
                // a is accessed and modified here
            };
        })();
        </script>

    How and when does JavaScript garbage-collect the variable a here? Since 'a' is part of the closure of the inner function, it ideally should never get garbage collected, because the inner function 'fun' may be passed as a reference to an external context, and 'fun' should still be able to access 'a' from that external context. If my understanding is correct, how does garbage collection happen then, and how does it ensure there is enough memory, since keeping all variables in memory until the program finishes might not be tenable?


  • internet explorer, google chrome injection

    - by Volim Te
    I wrote code that injects a function into Internet Explorer/Chrome, but it doesn't work with these processes. Basically, it fills one big structure with all the APIs my function needs, strings, and other data, then it opens the process to get a handle, calls VirtualAllocEx to allocate enough memory to store the function and the structure, and writes the function and the structure into the allocated memory. It then calls CreateRemoteThread with the function as the starting address and the structure as the parameter. It all works great with calc/notepad/winamp processes, but I have problems with browser injection and I'm wondering why. I'm using these APIs:

        x.xCreateFile
        x.xWriteFile
        x.xCloseHandle
        x.xSleep
        x.xVirtualAlloc
        x.xVirtualFree
        x.xMessageBox
        x.xLoadLibrary
        x.xShellExecute

    Is it because browsers are protected now and run with the lowest privileges?


  • Leak caused by fread

    - by Jack
    I'm profiling the code of a game I wrote and I'm wondering how it is possible that the following snippet causes a heap increase of 4KB (I'm profiling with the Heapshot analysis in Xcode) every time it is executed:

        u8 WorldManager::versionOfMap(FILE *file)
        {
            char magic[4];
            u8 version;

            fread(magic, 4, 1, file);    // <-- this is the line
            fread(&version, 1, 1, file);
            fseek(file, 0, SEEK_SET);

            return version;
        }

    According to the profiler, the highlighted line allocates 4.00KB of memory with a malloc every time the function is called, and that memory is never released. The same thing seems to happen with other calls to fread around the code, but this was the most blatant one. Is there anything trivial I'm missing? Is it something internal I shouldn't care about? Just as a note: I'm profiling it on an iPhone and it's compiled as release (-O2).


  • Python fCGI + sqlAlchemy = malformed header from script. Bad header=FROM tags : index.py

    - by crgwbr
    I'm writing a FastCGI application that makes use of SQLAlchemy and MySQL for persistent data storage. I have no problem connecting to the DB and setting up the ORM (so that tables get mapped to classes); I can even add data to tables (in memory). But as soon as I query the DB (and push any changes from memory to storage) I get a 500 Internal Server Error, and my error.log records malformed header from script. Bad header=FROM tags : index.py, where tags is the table name. Any idea what could be causing this? Also, I don't think it matters, but it's a Linux development server talking to an off-site (across the country) MySQL server.
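
    One frequent culprit in this situation (an assumption here, since the engine setup isn't shown) is SQLAlchemy echoing its SQL to stdout; under CGI/FastCGI, stdout is the response stream, so the first echoed line ("SELECT ... FROM tags ...") arrives where the server expects a header. A sketch of keeping stdout clean, with a placeholder connection URL:

        import logging
        import sys
        from sqlalchemy import create_engine

        # option 1: make sure nothing is echoed to stdout at all
        engine = create_engine("mysql://user:password@dbhost/dbname", echo=False)

        # option 2: keep SQL logging, but send it to stderr instead of stdout
        logging.basicConfig(stream=sys.stderr)
        logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)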


  • GlassFish JDO and global object

    - by bach
    Hi, I'm thinking about the GlassFish platform for my new app. My app environment doesn't have a big volume of data to handle, but it does have a lot of users writing/reading the same data, and a very volatile portion of the data is updated every 200 ms by different users. Therefore I'd like that type of data to be in memory only and accessible to the whole app.

    My questions:

    1. How do I use a global object in memory with GlassFish?
       a. Use a static variable/object? For that I guess I need to make sure GlassFish is running on only one JVM. How do I configure GlassFish to run on one JVM?
       b. Use HttpContext? Same as a.
    2. How do I persist to the DB? Can I use the JDO interface?
    3. How do I schedule tasks to be performed in the future (something like the task queue in GAE)?

    Thanks, J.S. Bach


  • Deciphering a queer compiler warning about unsigned decimal constant

    - by Artagnon
    This large application has a memory pool library which uses a treap internally to store nodes of memory. The treap is implemented using C preprocessor macros, and the complete file trp.h can be found here. I get the following compiler warning when I attempt to compile the application:

        warning: this decimal constant is unsigned only in ISO C90

    By deleting portions of the macro code and using trial and error, I finally found the culprit:

        #define trp_prio_get(a_type, a_field, a_node) \
            (2654435761*(uint32_t)(uintptr_t)(a_node))

    I'm not sure what that strange number is doing there, but I assume it's there for a good reason, so I just want to leave it alone. I do want to fix the warning though. Any idea why the compiler says the constant is unsigned only in ISO C90? EDIT: I'm using gcc 4.1
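
    For what it's worth, 2654435761 looks like the Knuth multiplicative-hash constant (0x9E3779B1, roughly 2^32 divided by the golden ratio). It is larger than INT_MAX, so C90 gives the unsuffixed constant an unsigned type while C99 would make it a signed long long, which is exactly what the warning is about. A sketch of the usual fix, assuming the value itself should stay untouched, is to make the constant explicitly unsigned:

        #include <stdint.h>

        /* the u suffix keeps the same bit pattern and the same unsigned
           arithmetic in both C90 and C99, so the macro behaves as before */
        #define trp_prio_get(a_type, a_field, a_node) \
            (2654435761u * (uint32_t)(uintptr_t)(a_node))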


  • hidden forms and their handles

    - by shraddha
    I am hiding the form by setting SetVisibleCore to false, but now I can't get the handle of this form in another application, where I retrieve it using EnumWindow. It gives me the exception "Attempted to read or write protected memory. This is often an indication that other memory is corrupt.", an access violation exception. So I am looking at two options:

    1. Find another way to get the handle. We can't use FindWindow here because it doesn't return any handle.
    2. Find another way to hide the form, but this has to be done before the form loads.

    So please help me if anybody can.


  • Performance of Managed C++ Vs UnManaged/native C++

    - by bsobaid
    I am writing a very high performance application that handles and processes hundreds of events every millisecond. Is unmanaged C++ faster than managed C++? And why? Managed C++ targets the CLR instead of the OS, and the CLR takes care of memory management, which simplifies the code and is probably also more efficient than code written "by a programmer" in unmanaged C++. Or is there some other reason? When using managed code, how can one avoid dynamic memory allocation, which causes a performance hit, if it is all transparent to the programmer and handled by the CLR? So, coming back to my question: is managed C++ more efficient in terms of speed than unmanaged C++, and why?


  • Pointers, links, object and reference count

    - by EugeneP
    String a = "a"; // allocate memory and write address of a memory block to a variable String b = "b"; // in a and b hold addresses b = a; // copy a address into b. // Now what? b value is completely lost and will be garbage collected //* next step a = null; // now a does not hold a valid address to any data, // still data of a object exist somewhere, yet we cannot get access to it. Correct me if there's a mistake somewhere in my reflexions. My question is: suppose anInstance object of type Instance has a property ' surname ' anInstance.getSurname() returns "MySurname". now String s = anInstance.getSurname(); anInstance = null; question is - is it true that getSurname value, namely MySurname will not be garbage collected because and only because it has active reference counter 0, and if other properties of anInstance have a zero reference counter, they'll be garbage collected?


  • java servlet: generate zip file from BLOBs

    - by Zack
    I'm trying to zip a large number of PDF files (stored as BLOBs in the DB) and then return the zip as an attachment to the user. What's the best way to do this without running into memory issues? Another note: I actually need to merge some PDFs prior to adding them to the ZipOutputStream, therefore a couple of PDFs will need to be held in memory at a time. I assume it would then be best to store those as temporary files on the server before zipping them all?
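
    A sketch of the streaming part (assuming a JDBC ResultSet of PDF BLOBs inside a servlet; the column and file names are placeholders): write the ZipOutputStream directly over the response output stream and copy each BLOB through a small buffer, so only one buffer's worth of data is in memory at a time.

        // inside the servlet, e.g. doGet(HttpServletRequest request, HttpServletResponse response)
        // imports: java.io.InputStream, java.sql.ResultSet,
        //          java.util.zip.ZipEntry, java.util.zip.ZipOutputStream
        response.setContentType("application/zip");
        response.setHeader("Content-Disposition", "attachment; filename=\"documents.zip\"");

        ZipOutputStream zip = new ZipOutputStream(response.getOutputStream());
        byte[] buf = new byte[8192];
        while (rs.next()) {                                    // rs: the query over the BLOB table
            zip.putNextEntry(new ZipEntry(rs.getString("file_name")));
            InputStream in = rs.getBlob("pdf_data").getBinaryStream();
            int n;
            while ((n = in.read(buf)) != -1) {
                zip.write(buf, 0, n);                          // streamed, never fully in memory
            }
            in.close();
            zip.closeEntry();
        }
        zip.finish();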

