Search Results

Search found 960 results on 39 pages for 'heap'.

Page 33/39 | < Previous Page | 29 30 31 32 33 34 35 36 37 38 39  | Next Page >

  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read 100s of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(...). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget
        (Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();

            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];

            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }

            // clean up
            istream.close();
            connection.disconnect();

            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?
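
    On older Android versions, the "Bitmap Size" figure in this error generally reports the cumulative pixel memory of all live bitmaps in the process, not the size of the one being decoded, so the 442-byte tile is likely just the allocation that tipped an already-exhausted budget. A sketch of one common mitigation, with a hypothetical cache-eviction hook (not from the question's code):

        import android.graphics.Bitmap;

        // Hypothetical tile-cache helper: on older Android versions bitmap
        // pixel memory counts against a process-wide budget, so tiles evicted
        // from the tile cache should release their pixel data explicitly.
        final class TileCacheHelper {
            static void onTileEvicted(Bitmap tile) {
                if (tile != null && !tile.isRecycled()) {
                    tile.recycle(); // frees the native pixel buffer immediately
                }
            }
        }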

    Read the article

  • Information about PTEs (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes.

    Problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    - Since every allocation needs 4KB, we probably reach the 2GB limit very soon. This problem could be solved if I made a 64-bit executable (didn't try it yet).
    - Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process.

    My questions (or a cry for tips):

    - Where can I find information about the maximum number of PTEs in a process?
    - Is this different (higher) for 64-bit systems/applications or not?
    - Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS: a note for those who will try to argue that you shouldn't write your own memory manager:

    - My application is rather specific, so I really want full control over memory management (can't give any more details).
    - Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ run time (it only said "block corrupt" minutes after the actual corruption).
    - We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either.
    - We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).
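
    For reference, a minimal sketch of the guard-page scheme being described, using the Win32 virtual memory API (illustrative, not the actual allocator in question):

        #include <windows.h>

        // Place the allocation at the end of committed pages and follow it
        // with a PAGE_NOACCESS page so writes past the end fault immediately.
        // Note: each VirtualAlloc reservation is rounded up to 64KB of address
        // space, one plausible reason address space runs out long before
        // physical memory does.
        void* GuardAlloc(SIZE_T size)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            SIZE_T page = si.dwPageSize;
            SIZE_T pages = (size + page - 1) / page;

            BYTE* base = (BYTE*)VirtualAlloc(NULL, (pages + 1) * page,
                                             MEM_RESERVE | MEM_COMMIT,
                                             PAGE_READWRITE);
            if (!base) return NULL;

            DWORD oldProtect;
            // the last page becomes the inaccessible guard
            VirtualProtect(base + pages * page, page, PAGE_NOACCESS, &oldProtect);

            // align the end of the user block with the guard page;
            // freeing later needs the original 'base' (VirtualFree).
            return base + pages * page - size;
        }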

    Read the article

  • Why does this code sample produce a memory leak?

    - by citronas
    In the university we were given the following code sample, and we were told that there is a memory leak when running this code. The sample should demonstrate a situation where the garbage collector can't work.

    As far as my object-oriented programming goes, the only code line able to create a memory leak would be

        items = Arrays.copyOf(items, 2 * size + 1);

    The documentation says that the elements are copied. Does that mean the references are copied (and therefore another entry on the heap is created) or the objects themselves? As far as I know, Object and therefore Object[] are implemented as reference types. So assigning a new value to 'items' would allow the garbage collector to find that the old 'items' is no longer referenced and can therefore be collected. In my eyes, this code sample does not produce a memory leak. Could somebody prove me wrong? =)

        import java.util.Arrays;

        public class Foo {
            private Object[] items;
            private int size = 0;
            private static final int ISIZE = 10;

            public Foo() {
                items = new Object[ISIZE];
            }

            public void push(final Object o) {
                checkSize();
                items[size++] = o;
            }

            public Object pop() {
                if (size == 0)
                    throw new ///...
                return items[--size];
            }

            private void checkSize() {
                if (items.length == size) {
                    items = Arrays.copyOf(items, 2 * size + 1);
                }
            }
        }
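
    One reading worth checking (hedged, since the assignment's intended answer isn't shown): pop() decrements size but leaves the popped reference in the array, so objects stay reachable from 'items' until a later push overwrites that slot - the textbook "loitering reference" leak. A sketch of the usual fix, as a drop-in replacement for pop() above (the exception choice is illustrative):

        public Object pop() {
            if (size == 0)
                throw new IllegalStateException("empty");
            Object result = items[--size];
            items[size] = null; // drop the stale reference so the GC can reclaim it
            return result;
        }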

    Read the article

  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, dlls) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines, and the solution failed to build on my machine. The build on my machine encountered two problems:

    - During a simple build, creation of the biggest static library (about 522MB in debug mode) would fail with the message: "13libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879"
    - A full solution rebuild creates this library; however, when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80MB of physical memory and 250MB of virtual memory and then spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500MB physical).

    There is plenty of hard drive space on my machine and the file system is NTFS. All three of our systems are similar - Core2Quad processors, 4GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except for maybe "out of heap space", however I am not sure how I should fight this. The only idea I have left is reinstalling the OS, however I am not sure that it would help, and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks

    Read the article

  • How come the Actionscript 3 ENTER_FRAME event is crazy nuts?

    - by nstory
    So, I've been toying around with Flash, browsing through the documentation, and all that, and noticed that the ENTER_FRAME event seems to defy my expectation of a deterministic universe. Take the following example:

        (new MovieClip()).addEventListener(Event.ENTER_FRAME, function(ev) { trace("Test"); });

    Notice this anonymous MovieClip is not added to the display hierarchy, and any reference to it is immediately lost. It will actually print "Test" once a frame until it is garbage collected. How insane is that? The behavior of this is actually determined by when the garbage collector feels like coming around in all its unpredictable insanity! Is there a better way to create intermittent failures? Seriously.

    My two theories are that either the DisplayObject class stores weak references to all its instances for the purpose of dispatching ENTER_FRAME events, or, and much wilder, the Flash player actually scans the heap each frame looking for ENTER_FRAME listeners to pull on. Can any hardened ActionScript developer clue me in on how this works? (And maybe a why-the-f**k they thought this was a good idea?)
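
    For what it's worth, addEventListener's optional fifth parameter controls exactly this lifetime coupling: it decides whether the dispatcher references the listener strongly. A sketch of registering a weakly-referenced listener (illustrative frame-script style):

        import flash.display.MovieClip;
        import flash.events.Event;

        var clip:MovieClip = new MovieClip();

        function onFrame(ev:Event):void {
            trace("Test");
        }

        // useCapture = false, priority = 0, useWeakReference = true:
        // the dispatcher holds the listener weakly, so an otherwise
        // unreferenced clip and its listener can be collected promptly
        // instead of firing until some arbitrary GC pass.
        clip.addEventListener(Event.ENTER_FRAME, onFrame, false, 0, true);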

    Read the article

  • C++ and Dependency Injection in unit testing

    - by lhumongous
    Suppose I have a C++ class like so:

        class A {
        public:
            A() { }

            void SetNewB(const B& _b) {
                m_B = _b;
            }
        private:
            B m_B;
        };

    In order to unit test something like this, I would have to break A's dependency on B. Since class A holds onto an actual object and not a pointer, I would have to refactor this code to take a pointer. Additionally, I would need to create a parent interface class for B so I can pass in my own fake of B when I test SetNewB.

    In this case, doesn't unit testing with dependency injection further complicate the existing code? If I make B a pointer, I'm now introducing heap allocation, and some piece of code is now responsible for cleaning it up (unless I use ref-counted pointers). Additionally, if B is a rather trivial class with only a couple of member variables and functions, why introduce a whole new interface for it instead of just testing with an instance of B? I suppose you could make the argument that it would be easier to refactor A by using an interface. But are there some cases where two classes might need to be tightly coupled?
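
    One middle ground sometimes used here (a sketch, not necessarily the course answer): make the collaborator a template parameter, so a test can substitute a fake without introducing heap allocation or a virtual interface:

        // B's interface is checked structurally at compile time, so a test
        // can instantiate BasicA<FakeB> while production code uses BasicA<B>.
        template <class TB>
        class BasicA {
        public:
            void SetNewB(const TB& b) { m_B = b; }
        private:
            TB m_B;
        };

        struct B { };                    // production collaborator
        struct FakeB { };                // hypothetical test double

        typedef BasicA<B> A;             // production alias
        typedef BasicA<FakeB> TestableA; // used only in tests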

    Read the article

  • Sorted queue with dropping out elements

    - by ffriend
    I have a list of jobs and a queue of workers waiting for these jobs. All the jobs are the same, but workers are different and sorted by their ability to perform the job. That is, the first person does this job best of all, the second does it just a little bit worse, and so on. A job is always assigned to the person with the highest skills among those who are free at that moment. When a person is assigned a job, he drops out of the queue for some time. But when he is done, he gets back to his position. So, for example, at some moment in time the worker queue looks like:

        [x, x, .83, x, .7, .63, .55, .54, .48, ...]

    where x's stand for missing workers and numbers show the skill level of the remaining workers. When there's a new job, it is assigned to the 3rd worker as the one with the highest skill among available workers. So the next moment the queue looks like:

        [x, x, x, x, .7, .63, .55, .54, .48, ...]

    Let's say that at this moment worker #2 finishes his job and gets back to the list:

        [x, .91, x, x, .7, .63, .55, .54, .48, ...]

    I hope the process is completely clear now. My question is what algorithm and data structure to use to implement quick search and deletion of a worker and insertion back to his position. For the moment the best approach I can see is to use a Fibonacci heap, which has amortized O(log n) for deleting the minimal element (assigning a job and deleting a worker from the queue) and O(1) for inserting him back, which is pretty good. But is there an even better algorithm / data structure that possibly takes into account the fact that the elements are already sorted and only drop out of the queue from time to time?
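
    Since the ranking itself never changes, one simple structure (a sketch under that assumption) is an ordered set holding the ranks of the currently free workers: taking the best free worker and re-inserting a finished one are both O(log n), and a plain balanced tree tends to beat a Fibonacci heap's constant factors at this scale:

        #include <cassert>
        #include <set>

        // Free workers identified by rank (0 = most skilled).
        std::set<int> freeWorkers;

        int assignJob() {                       // take the best available worker
            assert(!freeWorkers.empty());
            int rank = *freeWorkers.begin();
            freeWorkers.erase(freeWorkers.begin());
            return rank;
        }

        void finishJob(int rank) {              // worker returns to his slot
            freeWorkers.insert(rank);
        }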

    Read the article

  • Asynchronous URL connection objective C

    - by tweety
    I created an asynchronous URL connection to call a web service using the HTTP POST method. After pinging the web service, I set an NSTimeInterval in the completion handler. My problem is that when I'm trying to display the time on the view controller, it is not done promptly. I know the block is stored on the heap and gets executed later at any time, and that's probably why I'm not getting a prompt answer. I was wondering, is there any other way to do this? Thanks in advance. My code:

        __block NSDate *start = [NSDate date];
        __block NSDate *end;
        __block double miliseconds;
        __block NSTimeInterval time;

        [NSURLConnection sendAsynchronousRequest:urlRequest
                                           queue:queue
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
            if ([data length] == 0 && error == nil) {
                end = [NSDate date];
                time = [end timeIntervalSinceDate:start];
                NSLog(@"Successfully Pinged");
                miliseconds = time;
                // calling a method to display ping time
                [self label:miliseconds];
            }
        }];

        - (void)label:(double)mili {
            double miliseconds = mili * 1000;
            self.timeDisplay.text = [NSString stringWithFormat:@"Time: %.3f ms", miliseconds];
        }
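
    If `queue` here is not the main operation queue, the completion handler runs on a background thread, and UIKit properties such as a label's text should only be touched on the main thread; a sketch of the usual fix inside the handler above:

        // Inside the completion handler: hop to the main queue before
        // updating any UI so the label refreshes promptly and safely.
        dispatch_async(dispatch_get_main_queue(), ^{
            [self label:miliseconds];
        });

    Alternatively, passing [NSOperationQueue mainQueue] as the queue argument delivers the handler on the main thread in the first place.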

    Read the article

  • How to repair Java in Ubuntu after trying to switch to Java 6 using update-java-alternatives

    - by Kau-Boy
    I tried to switch from Java 5 to Java 6 using the "update-java-alternatives" command as explained on this page: https://help.ubuntu.com/community/Java

    But afterwards I get the following error when I try to execute java:

        root@webserver:~# java
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I also tried to reinstall the Java binaries using "apt-get", but I didn't succeed in reinstalling them. I would like to post the "apt-get" errors, but unfortunately I don't know how to print out the error messages in English and not in German. My system is a Ubuntu 8.04 root server. Here is the (Google-translated) English text when trying to install Java 6 again:

        root@server:~# apt-get install sun-java6-jdk
        Reading package lists ... Ready
        Dependency tree Reading state information ... Ready
        sun-java6-jdk is already the newest version.
        sun-java6-jdk set to manually installed.
        0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Set up a sun-java6-bin (6-03-0ubuntu2) ...
        Could not create the Java virtual machine.
        dpkg: error processing sun-java6-bin (- configure):
        Subprocess post-installation script returned error exit status 1
        Errors were encountered while processing:
        sun-java6-bin
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I hope that this might help you helping me.
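
    "Could not reserve enough space for object heap" is often a memory-limit problem rather than a broken package, especially on virtualized root servers where the JVM cannot reserve its default maximum heap. A quick hedged check is to ask for a small heap explicitly:

        # If this prints a version banner, the install is fine and only the
        # default heap size is too large for the memory actually available.
        java -Xmx64m -version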

    Read the article

  • How string accepting interface should look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:

        void f(const char* str); // (1)

    The other way would be to use an std::string:

        void f(const string& str); // (2)

    It's also possible to write an overload and accept both:

        void f(const char* str); // (3)
        void f(const string& str);

    Or even a template in conjunction with Boost string algorithms:

        template<class Range>
        void f(const Range& str); // (4)

    My thoughts are:

    - (1) is not C++ish and may be less efficient when subsequent operations need to know the string length.
    - (2) is bad because now f("long very long C string") invokes construction of a std::string, which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources.
    - (3) causes code duplication, although one f can call the other depending on which is the most efficient implementation. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*.
    - (4) doesn't work with separate compilation and may cause even larger code bloat.

    Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
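
    One pattern that later entered the standard as std::string_view is a non-owning pointer-plus-length view that accepts both literals and std::strings without allocating; a minimal sketch of the idea (the names are illustrative, not a library API):

        #include <cstring>
        #include <string>

        // Non-owning view: cheap to pass by value, carries the length,
        // and converts implicitly from both C strings and std::string.
        class StringView {
        public:
            StringView(const char* s) : data_(s), size_(std::strlen(s)) {}
            StringView(const std::string& s) : data_(s.c_str()), size_(s.size()) {}
            const char* data() const { return data_; }
            std::size_t size() const { return size_; }
        private:
            const char* data_;
            std::size_t size_;
        };

        void f(StringView str); // one signature, no heap allocation either way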

    Read the article

  • !gcroot output leads nowhere

    - by Jeff Costa
    I am troubleshooting memory fragmentation in an app pool, as evidenced by a small number of Free objects consuming the most space on the heap:

        0x000007ff00256728     6,543     3,890,208 System.Collections.Hashtable+bucket[]
        0x000007ff002649a8     7,297    22,979,560 System.Byte[]
        0x000007ff001e0d90   251,347    30,374,304 System.String
        0x0000000001d0c830       373    48,036,816 Free

    Running the !dumpgen 3 command reveals the fragmentation; there is a repeating pattern of Free and System.Object[] objects of the same size:

        000000017feb7350      24 **** FREE ****
        000000017feb7368    8192 System.Object[]
        000000017feb9368      24 **** FREE ****
        000000017feb9380    8192 System.Object[]
        000000017febb380      24 **** FREE ****
        000000017febb398    8192 System.Object[]
        000000017febd398      24 **** FREE ****
        000000017febd3b0    8192 System.Object[]
        000000017febf3b0      24 **** FREE ****
        000000017febf3c8    8192 System.Object[]
        000000017fec13c8      24 **** FREE ****
        000000017fec13e0    8192 System.Object[]
        000000017fec33e0      24 **** FREE ****
        000000017fec33f8    8192 System.Object[]
        000000017fec53f8      24 **** FREE ****
        000000017fec5410   14024 System.Object[]
        000000017fec8ad8      24 **** FREE ****
        000000017fec8af0    8192 System.Object[]
        000000017fecaaf0      24 **** FREE ****
        000000017fecab08    8192 System.Object[]
        000000017feccb08      24 **** FREE ****
        000000017feccb20    8192 System.Object[]
        000000017feceb20      24 **** FREE ****
        000000017feceb38    8192 System.Object[]
        000000017fed0b38      24 **** FREE ****
        000000017fed0b50    8192 System.Object[]
        000000017fed2b50      24 **** FREE ****
        000000017fed2b68    8192 System.Object[]

    When I try to obtain the root of one of the System.Object[] instances with !gcroot, I get a pinned handle but no additional stack data:

        Scan Thread 41 OSThread 1044
        DOMAIN(0000000001D51330):HANDLE(Pinned):15217e8:Root: 000000017fe60fe8(System.Object[])

    As you can see, there is no additional data to go on. Running a !handle command also yields nothing:

        0:041> !handle 000000017fe7a068 ff
        Handle 000000017fe7a068
          Type <Error retrieving type>
        unable to query object information
        unable to query object information
        No object specific information available

    How can I trace out this memory leak when I cannot find what is rooting the System.Object[]?
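
    For context (a hedged note, since the dump alone cannot confirm it): pinned System.Object[] instances of exactly this shape are commonly the CLR's own pinned handle arrays that back static fields, so !gcroot stopping at a pinned handle may be expected rather than a dead end caused by the leak itself. Two SOS commands that sometimes get further:

        !gchandles
        !da -details 000000017fe60fe8

    The first summarizes GC handles by type; the second (DumpArray) dumps the array's elements, and following individual elements with !do or !gcroot may reveal which statics they belong to.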

    Read the article

  • Program always returns a "binary '>>' : no operator found which takes a left-hand operand of type ..." error

    - by Tom Ward
    So I've been set a task to create a temperature converter in C++ using this equation:

        Celsius = (5/9) * (Fahrenheit - 32)

    and so far I've come up with this (I've cut out the 10 lines' worth of comments from the start, so the code posted begins on line 11, if that makes any sense):

        #include <iostream>
        #include <string>
        #include <iomanip>
        #include <cmath>

        using namespace std;

        int main()
        {
            float celsius;
            float farenheit;

            std::cout << "**************************" << endl;
            std::cout << "*4001COMP-Lab5-Question 1*" << endl;
            std::cout << "**************************" << endl << endl;

            std::cout << "Please enter a temperature in farenheit: ";
            std::cin >> farenheit >> endl;

            std::cout << "Temperature (farenheit): " << endl;
            std::cout << "Temperature (celsius): " << celsius << endl;

            std::cin.get();
            return 0;
        }

    Every time I try to run this program I get a heap of errors, with this one appearing every time:

        m:\visual studio 2010\projects\week 5\week 5\main.cpp(26): error C2678: binary '>>' : no operator found which takes a left-hand operand of type 'std::basic_istream<_Elem,_Traits>' (or there is no acceptable conversion)

    I've tried everything I can think of to get rid of this error, but it reappears every time. Any idea on how to fix this?
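
    The compiler is pointing at `std::cin >> farenheit >> endl;` - endl is an output-stream manipulator, so it cannot appear in an input chain. Two further pitfalls: celsius is printed but never computed, and 5/9 in integer arithmetic is 0. A sketch of the corrected core, assuming the assignment's formula:

        #include <iostream>

        int main()
        {
            float fahrenheit = 0.0f;
            std::cout << "Please enter a temperature in fahrenheit: ";
            std::cin >> fahrenheit;                    // no endl on input

            // use floating-point literals so 5/9 is not truncated to 0
            float celsius = (5.0f / 9.0f) * (fahrenheit - 32.0f);

            std::cout << "Temperature (fahrenheit): " << fahrenheit << std::endl;
            std::cout << "Temperature (celsius): " << celsius << std::endl;
            return 0;
        }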

    Read the article

  • Tracking down origin of I/O in multi-process server

    - by Craig Ringer
    I'm currently trying to track down some phantom I/O in a PostgreSQL build I'm testing. It's a multi-process server, and it isn't simple to associate disk I/O back to a particular backend and query. I thought Linux's perf tool would be ideal for this, but I'm struggling to capture block I/O performance-counter metrics and associate them with user-space activity.

    It's easy to record block I/O requests and completions with, e.g.:

        sudo perf record -g -T -u postgres -e 'block:block_rq_*'

    and the user-space pid is recorded, but there's no kernel or user-space stack captured, nor any ability to snapshot bits of the user-space process's heap (say, query text), etc. So while you have the pid, you don't know what the process was doing at that point. Just perf script output like:

        postgres  7462 [002] 301125.113632: block:block_rq_issue: 8,0 W 0 () 208078848 + 1024 [postgres]

    If I add the -g flag to perf record, it'll take snapshots of the kernel stack but doesn't capture user-space state for perf events captured in the kernel. The user-space stack only goes up to the entry point from userspace, like LWLockRelease, LWLockAcquire, memcpy (mmap'd I/O), __GI___libc_write, etc.

    So: any tips? Being able to capture a snapshot of the user-space stack in response to kernel events would be ideal.
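
    One avenue worth trying (hedged, since it depends on the perf version and on the binary having frame pointers or debug info): newer perf builds can unwind user-space stacks from kernel tracepoints by recording a stack snapshot per event and unwinding it offline with DWARF:

        # --call-graph dwarf copies a user stack snapshot with each event,
        # so kernel tracepoints get user-space frames in the report
        sudo perf record -e 'block:block_rq_issue' --call-graph dwarf -u postgres
        sudo perf report --stdio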

    Read the article

  • Any workarounds for non-static member array initialization?

    - by TomiJ
    In C++, it's not possible to initialize array members in the initialization list, thus member objects should have default constructors and they should be properly initialized in the constructor. Is there any (reasonable) workaround for this, apart from not using arrays?

    [Anything that can be initialized using only the initialization list is, in our application, far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts, even before main. However, it is not always possible to have a default constructor for every class, and besides, reinitializing the data again in the constructor rather defeats the purpose anyway.]

    E.g. I'd like to have something like this (but this one doesn't work):

        class OtherClass {
        private:
            int data;
        public:
            OtherClass(int i) : data(i) {}; // No default constructor!
        };

        class Foo {
        private:
            OtherClass inst[3]; // Array size fixed and known ahead of time.
        public:
            Foo(...) : inst[0](0), inst[1](1), inst[2](2) {};
        };

    The only workaround I'm aware of is the non-array one:

        class Foo {
        private:
            OtherClass inst0;
            OtherClass inst1;
            OtherClass inst2;
            OtherClass *inst[3];
        public:
            Foo(...) : inst0(0), inst1(1), inst2(2) {
                inst[0] = &inst0;
                inst[1] = &inst1;
                inst[2] = &inst2;
            };
        };

    Edit: It should be stressed that OtherClass has no default constructor, and that it is very desirable to have the linker be able to allocate any memory needed (one or more static instances of Foo will be created); using the heap is essentially verboten. I've updated the examples above to highlight the first point.
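
    For later readers: if a C++11 compiler is available, array members can be list-initialized directly in the member initializer list, which is exactly the syntax wished for above (a sketch reusing OtherClass from the question):

        class OtherClass {
            int data;
        public:
            OtherClass(int i) : data(i) {}  // still no default constructor
        };

        class Foo {
            OtherClass inst[3];
        public:
            // each element is constructed in place; no default ctor needed
            Foo() : inst{0, 1, 2} {}
        };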

    Read the article

  • Declaring two large 2d arrays gives segmentation fault.

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816] I get a segmentation fault. I can't understand it, since I have 2GB of RAM and am only using 19MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns)
        {
            double **array;
            int i;

            array = (double**) malloc(rows * sizeof(double *));
            if (array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }

            for (i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if (array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }

            return array;
        }

        void populateUserFeatureP(double **userFeatureP)
        {
            int x, y;
            for (x = 0; x < CUSTOMERS; x++) {
                for (y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ)
        {
            int x, y;
            for (x = 0; x < FEATURES; x++) {
                for (y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[])
        {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }
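
    Two details in reserveMemory look suspect (a hedged reading, but consistent with a fault partway through a row): the row allocation asks for columns * sizeof(double *) rather than sizeof(double), which under-allocates each row on platforms where pointers are 4 bytes and doubles are 8, and the check inside the loop tests array instead of array[i]. A corrected sketch:

        #include <stdio.h>
        #include <stdlib.h>

        double **reserveMemory(int rows, int columns)
        {
            double **array = malloc(rows * sizeof(double *));
            if (array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }
            for (int i = 0; i < rows; i++) {
                array[i] = malloc(columns * sizeof(double)); /* element size, not pointer size */
                if (array[i] == NULL) {                      /* check the row, not the table */
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }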

    Read the article

  • Learning C++ from Java, trying to make a linked list

    - by kyeana
    I just started learning C++ (coming from Java) and am having some serious problems with doing anything :P Currently I am attempting to make a linked list, but I must be doing something stupid because I keep getting "void value not ignored as it ought to be" compile errors (I have marked where they are thrown below). If anyone could help me with what I'm doing wrong, I would be very grateful :)

    Also, I am not used to having the choice of passing by reference, address, or value, and memory management in general (currently I have all my nodes and the data declared on the heap). If anyone has any general advice for me, I also wouldn't complain :P

    Key code from LinkedListNode.cpp:

        LinkedListNode::LinkedListNode()
        {
            // set next and prev to null
            pData = 0; // data needs to be a pointer so we can set it to null
                       // for the tail and head.
            pNext = 0;
            pPrev = 0;
        }

        /*
         * Sets the 'next' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetNext(LinkedListNode& _next)
        {
            pNext = &_next;
        }

        /*
         * Sets the 'prev' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetPrev(LinkedListNode& _prev)
        {
            pPrev = &_prev;
        }

        // rest of class

    Key code from LinkedList.cpp:

        #include "LinkedList.h"

        LinkedList::LinkedList()
        {
            // Set head and tail of linked list.
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();

            /*
             * THIS IS WHERE THE ERRORS ARE.
             */
            *pHead->SetNext(*pTail);
            *pTail->SetPrev(*pHead);
        }

        // rest of class
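
    A hedged reading of that error: SetNext and SetPrev return void, and the leading * tries to dereference that void result, which is exactly what "void value not ignored" complains about. Since pHead is a pointer, -> already performs the member call; a sketch of the corrected constructor:

        LinkedList::LinkedList()
        {
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();

            // pHead is a LinkedListNode*, so -> calls SetNext directly;
            // *pTail passes the node by reference, as SetNext expects.
            pHead->SetNext(*pTail);
            pTail->SetPrev(*pHead);
        }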

    Read the article

  • Slowing process creation under Java?

    - by oconnor0
    I have a single JVM [1] with a large heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and then the data created by those executables has to be loaded back into the JVM. Each executable produces about half a megabyte of data (on disk), which when read right in after the process finishes is, of course, larger.

    Our first implementation was to have each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] continually slows. While starting at sub-second times, it would eventually grow to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds.

    Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (~1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds.

    Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

    Read the article

  • My HTML5 web app crashes and I have no clue how to debug

    - by Shouvik
    Hi all, I have written a word game using the HTML5 canvas tag and a little bit of audio. I developed the application in the Chrome web browser on a Linux system. Recently, during the testing phase, it was tried on Safari 5.0.3 on a Mac and the webpage froze. Not just the canvas element - every interactive element on the page froze. I have at times experienced this problem in Google Chrome while developing, but since the console did not throw any error before it happened, I did not give it much credence.

    Now, as per requirements, I am supposed to support both Chrome and Safari, but this dismal performance on Safari has left me shocked, and I cannot see what error could be thrown that might lead to such a situation. Worse yet, the CPU usage of this application peaks at 70-80 percent on my 2-year-old MacBook running Ubuntu... I can only pity the person who uses a Mac to run this app, which is undoubtedly a heavier OS.

    Could someone help me out with a place I can start to find out what exactly is causing this issue? I have run profiles on this web app in Google Chrome's console and noticed that values in the heap snapshot increase steadily while playing the game; specifically, the (root) value jumps up by 900 counts. Any help would be very appreciated! Thanks

    Edit: I don't know if this helps, but I have noticed that even on refreshing the page after the app becomes unresponsive, the page reloads and I am still not able to interact with the page elements, but the tab scroll bar continues to work and I can see my application window completely. So to summarize: the tab stops accepting any sort of user interaction inside the page.

    Edit2: Nope, it still doesn't work... The app crashes on double click on the canvas element. The console is not throwing any errors either! =/ I have noticed this problem is isolated to Safari!

    Read the article

  • localhost yes but phpmyadmin blank

    - by Giskin Leow
    WAMP users having a problem with both localhost and phpMyAdmin loading blank usually have a port problem. Mine is only phpMyAdmin blank; sqlbuddy and phpinfo are no problem. I tried uninstalling and reinstalling WAMP. I tried XAMPP: same problem, everything works well except phpMyAdmin.

    MySQL log:

        120905  8:03:08 [Note] Plugin 'FEDERATED' is disabled.
        120905  8:03:08 InnoDB: The InnoDB memory heap is disabled
        120905  8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions
        120905  8:03:08 InnoDB: Compressed tables use zlib 1.2.3
        120905  8:03:09 InnoDB: Initializing buffer pool, size = 128.0M
        120905  8:03:09 InnoDB: Completed initialization of buffer pool
        120905  8:03:09 InnoDB: highest supported file format is Barracuda.
        120905  8:03:09 InnoDB: Waiting for the background threads to start
        120905  8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675
        120905  8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306
        120905  8:03:11 [Note] - '(null)' resolves to '::';
        120905  8:03:11 [Note] - '(null)' resolves to '0.0.0.0';
        120905  8:03:11 [Note] Server socket created on IP: '0.0.0.0'.
        120905  8:03:13 [Note] Event Scheduler: Loaded 0 events
        120905  8:03:13 [Note] wampmysqld: ready for connections.

    Apache log:

        [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations
        [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42
        [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80.
        [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico
        [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin
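
    A blank page with nothing in the Apache error log usually means a PHP fatal error is being swallowed; a hedged first step is to surface it by enabling error display in php.ini (the WAMP tray menu exposes the same switches) and then reloading phpMyAdmin:

        ; php.ini - development diagnostics only, revert afterwards
        display_errors = On
        error_reporting = E_ALL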

    Read the article

  • How to synchronize access to many objects

    - by vividos
    I have a thread pool with some threads (e.g. as many as the number of cores) that work on many objects - say, thousands of objects. Normally I would give each object a mutex to protect access to its internals, lock it when I'm doing work, then release it. When two threads try to access the same object, one of the threads has to wait.

    Now I want to save some resources and be scalable, as there may be thousands of objects and still only a handful of threads. I'm thinking about a class design where each thread has some sort of mutex or lock object and assigns the lock to the object when the object should be accessed. This would save resources, as I only have as many lock objects as I have threads.

    Now comes the programming part, where I want to transfer this design into code but don't know quite where to start. I'm programming in C++ and want to use Boost classes where possible, but self-written classes that handle these special requirements are OK. How would I implement this?

    My first idea was to have a boost::mutex object per thread, and each object has a boost::shared_ptr that initially is unset (or NULL). Now when I want to access the object, I lock it by creating a scoped_lock object and assigning it to the shared_ptr. When the shared_ptr is already set, I wait on the present lock. This idea sounds like a heap full of race conditions, so I sort of abandoned it. Is there another way to accomplish this design? A completely different way?
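
    One standard compromise between one-mutex-per-object and one-per-thread is lock striping (a sketch, not necessarily the best fit here): a fixed pool of mutexes, with each object hashed to a stripe by its address, so memory cost is constant while contention stays low. Two objects occasionally sharing a stripe costs a little false contention but never correctness:

        #include <boost/thread/mutex.hpp>
        #include <cstddef>

        class LockStripes {
        public:
            // Pick the stripe guarding a given object by hashing its address.
            boost::mutex& lockFor(const void* object) {
                std::size_t h = reinterpret_cast<std::size_t>(object);
                return stripes_[(h >> 4) % kStripes]; // >>4 discards alignment bits
            }
        private:
            static const std::size_t kStripes = 64;  // tune to thread count
            boost::mutex stripes_[kStripes];
        };

        // usage: boost::mutex::scoped_lock guard(stripes.lockFor(&obj));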

    Read the article

  • [C] Dynamic memory allocation of a structure, related to GTK

    - by MakeItWork
    Hello, I have the following structure:

        typedef struct {
            GtkWidget* PoziomaLinijka;
            GtkWidget* PionowaLinijka;
            GtkWidget* Label1;
            GtkWidget* Label2;
            gint x, y;
        } StrukturaDrawing;

    I need to allocate it on the heap, because later I have functions which use that structure and I don't want to use global variables. So I allocate it like this:

        StrukturaDrawing* Wsk;
        Wsk = (StrukturaDrawing*)malloc(sizeof(StrukturaDrawing));
        if (!Wsk) {
            printf("Error\n");
        }

    It doesn't return an error, and it works great with other functions - it works the way I wanted it to. So finally I wanted to free that memory, and here is the problem, because in debug mode it crashes with:

        First-chance exception at 0x102d12b4 in GTK.exe: 0xC0000005: Access violation reading location 0xfffffffc.
        Unhandled exception at 0x102d12b4 in GTK.exe: 0xC0000005: Access violation reading location 0xfffffffc.

    I connect the callback to my function like this:

        g_signal_connect(G_OBJECT(Okno), "destroy", G_CALLBACK(Wyjscie), Wsk);

    The function which is supposed to free the memory and close the program:

        void Wyjscie(GtkWindow* window, GdkEvent* event, StrukturaDrawing* data)
        {
            gtk_main_quit();
            free(data);
            data = NULL;
        }

    Any help really appreciated.
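
    One hedged observation: the "destroy" signal's handler takes only (widget, user_data), while the three-argument shape above matches "delete-event". Connected to "destroy", the user data arrives in the second parameter and the third one holds garbage, so free(data) frees a junk pointer. A sketch of the matching signature:

        // "destroy" handlers receive (widget, user_data) - the pointer
        // passed to g_signal_connect() arrives as the second argument.
        static void Wyjscie(GtkWidget* widget, gpointer user_data)
        {
            StrukturaDrawing* data = (StrukturaDrawing*)user_data;
            free(data);
            gtk_main_quit();
        }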

    Read the article

  • C# language questions

    - by Water Cooler v2
    1) What is int? Is it any different from the struct System.Int32? I understand that the former is a C# alias (typedef or #define equivalent) for the CLR type System.Int32. Is this understanding correct?

    2) When we say:

        IComparable x = 10;

    Is that like saying:

        IComparable x = new System.Int32();

    But we can't new a struct, right? Or, in C-like syntax:

        struct System.Int32 *x;
        x->someThing = 10;

    3) What is String with a capitalized S? I see in Reflector that it is the sealed String class, which, of course, is a reference type, unlike the System.Int32 above, which is a value type. What is string, with an uncapitalized s, though? Is that also the C# alias for this class? Why can I not see the alias definitions in Reflector?

    4) Try to follow me down this subtle train of thought, if you please. We know that a storage location of a particular type can only access properties and members on its interface. That means:

        Person p = new Customer();
        p.Name = "Water Cooler v2"; // legal, because Name is defined on Person.

        // illegal without an explicit cast: even though the backing
        // store is a Customer, the storage location is of type
        // Person, which doesn't support the member/method being
        // accessed/called.
        p.GetTotalValueOfOrdersMade();

    Now, with that inference, consider this scenario:

        int i = 10;
        // obvious: System.Object defines no member to
        // store an integer value or any other value in.
        // So, my question really is, when the integer is
        // boxed, what is the *type* it is actually boxed to?
        // In other words, what is the type that forms the
        // backing store on the heap for this operation?
        object x = i;
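
    On question 4, a quick check one can run: the boxed object reports itself as System.Int32, i.e. boxing copies the value into a heap object whose runtime type is the value type itself, not some separate wrapper:

        using System;

        class BoxingDemo
        {
            static void Main()
            {
                int i = 10;
                object x = i;                        // boxing: heap copy of the Int32
                Console.WriteLine(x.GetType());      // prints System.Int32
                Console.WriteLine(x is IComparable); // True: the box exposes Int32's interfaces
            }
        }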

    Read the article

  • Variable Scoping in a method and its persistence in C++

    - by de costo
    Consider the following public method that adds an integer variable to a vector of ints (a private member) in a class in C++:

        void KoolMethod()
        {
            int x;
            x = 10;
            KoolList.Add(x);
        }
        Vector<int> KoolList;

    But is this a valid addition to a vector??? Upon calling the method, it creates a local variable. The scope of this local variable ends the moment execution control leaves the method. And since this local variable is allocated on a stack (on the method call), any member of KoolList points to an invalid memory location in a deallocated stack frame, which may or may not contain the expected value of x. Is this an accurate description of the above mechanism?? Is there a need for creating an int in heap storage using the "new" operator every time a value needs to be added to the vector, as described below????:

        void KoolMethod()
        {
            int *x = new int();
            *x = 10;
            KoolList.Add(x);
        }
        Vector<int*> KoolList;
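
    A hedged note in standard C++ terms (the question's Vector/Add spelling looks Java-flavored): a std::vector<int> stores copies, so pushing x copies its value into storage owned by the vector, and the local's lifetime does not matter afterwards - no caller-side heap allocation is needed:

        #include <cassert>
        #include <vector>

        std::vector<int> koolList;

        void koolMethod()
        {
            int x = 10;
            koolList.push_back(x); // copies the value 10 into the vector
        } // x dies here; the vector's copy is unaffected

        int main()
        {
            koolMethod();
            assert(koolList[0] == 10); // still valid after the call returns
        }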

    Read the article

  • nothrow or exception ?

    - by Muggen
    I am a student and I have little knowledge of C++, which I am trying to expand. This is more of a philosophical question. I am not trying to implement something. Since

        #include <new>

        //...
        T* t = new (std::nothrow) T();
        if (t) {
            //...
        }
        //...

    will hide the exception, and since dealing with exceptions is heavier compared to a simple if (t), why isn't the normal new T() considered less good practice, considering we will have to use try-catch() to check whether a simple allocation succeeded (and if we don't, just watch the program die)?? What are the benefits (if any) of normal new allocation compared to using nothrow new? Is the exception's overhead in that case insignificant?

    Also, assume that an allocation fails (e.g. no memory exists in the system). Is there anything the program can do in that situation, or does it just fail gracefully? There is no way to find free memory on the heap when all is reserved, is there?

    In case an allocation fails and a std::bad_alloc is thrown, how can we assume that, since there is not enough memory to allocate an object (e.g. a new int), there will be enough memory to store an exception??

    Thanks for your time. I hope the question is in line with the rules.

    Read the article

  • How can I get size in bytes of an object sent using RMI?

    - by Lucas Batistussi
    I'm implementing a cache server with MongoDB and the ConcurrentHashMap Java class. When there is available space to put an object in memory, it will be put there. Otherwise, the object will be saved in a MongoDB database. The user is allowed to specify a size limit in memory for the memory cache (this should not exceed the heap size limit, obviously!). Clients can use the cache service by connecting through RMI. I need to know the size of each object to verify whether a new incoming object can be put into memory. I searched over the internet and got this solution to get the size:

        public long getObjectSize(Object o) {
            try {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                oos.writeObject(o);
                oos.close();
                return bos.size();
            } catch (Exception e) {
                return Long.MAX_VALUE;
            }
        }

    This solution works very well, but in terms of memory use it doesn't solve my problem. :( If many clients are verifying the object size at the same time, this will cause a stack overflow, right? Well... some people may say: why don't you get the specific object size and store it in memory, and when another object needs to be put in memory, check the stored object size? This is not possible because the objects are variable in size. :( Can someone help me? I was thinking of getting the socket from the RMI communication, but I don't know how to do this...
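
    One way to measure the serialized size without buffering the whole byte array on every call (a sketch; it still pays the serialization CPU cost, but allocates almost nothing even under many concurrent clients):

        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.io.OutputStream;

        // OutputStream that discards the bytes and only counts them.
        final class CountingOutputStream extends OutputStream {
            long count = 0;
            @Override public void write(int b) { count++; }
            @Override public void write(byte[] b, int off, int len) { count += len; }
        }

        final class ObjectSizer {
            static long serializedSize(Object o) {
                CountingOutputStream counter = new CountingOutputStream();
                try (ObjectOutputStream oos = new ObjectOutputStream(counter)) {
                    oos.writeObject(o);
                } catch (IOException e) {
                    return Long.MAX_VALUE;
                }
                return counter.count;
            }
        }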

    Read the article
