Search Results

Search found 1188 results on 48 pages for 'heap fragmentation'.

Page 41/48

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files, and I repeatedly remove the top line from each of these files. This pushes the limits of Java's heap. I would like a more scalable method of doing this without losing speed to a bunch of constructor calls. One solution is to only open files when they are needed, then read the first line and then delete it, but I am afraid that this will be significantly slower. So, using the Java libraries, what is the most efficient method of doing this?

    --Edit-- For external sort, the usual method is to break a large file up into several chunk files and sort each of the chunks. Then treat the sorted files as buffers: pop the top item from each file; the smallest of all those is the global minimum. Continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because using many BufferedReader objects takes up too much space.
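
    The refilling k-way merge described above is usually driven by a priority queue holding one head line per chunk, as in this minimal C++ sketch (the chunk file names and plain lexicographic comparison are assumptions for illustration):

        #include <fstream>
        #include <iostream>
        #include <memory>
        #include <queue>
        #include <string>
        #include <vector>

        // One entry per chunk file: its current smallest line plus its stream.
        struct Head {
            std::string line;
            std::ifstream* in;
        };
        struct HeadGreater {            // min-heap ordered by line contents
            bool operator()(const Head& a, const Head& b) const { return a.line > b.line; }
        };

        int main() {
            std::vector<std::string> names = {"chunk0.txt", "chunk1.txt", "chunk2.txt"};
            std::vector<std::unique_ptr<std::ifstream>> files;
            std::priority_queue<Head, std::vector<Head>, HeadGreater> heads;

            for (const auto& n : names) {           // prime the queue with each head
                files.push_back(std::make_unique<std::ifstream>(n));
                std::string first;
                if (std::getline(*files.back(), first))
                    heads.push(Head{first, files.back().get()});
            }
            while (!heads.empty()) {                // pop the global minimum...
                Head h = heads.top();
                heads.pop();
                std::cout << h.line << '\n';
                std::string next;                   // ...and refill from its file
                if (std::getline(*h.in, next))
                    heads.push(Head{next, h.in});
            }
        }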


  • C++: is it safe to work with std::vectors as if they were arrays?

    - by peoro
    I need to have a fixed-size array of elements and to call on them functions that need to know how they're placed in memory, in particular: functions like glVertexPointer, which needs to know where the vertices are, how far apart they are from one another, and so on. In my case the vertices would be members of the elements to store. To get the index of an element within this array, I'd prefer to avoid having an index field within my elements, and would rather play with pointer arithmetic (i.e. the index of Element *x will be x - &array[0]) -- by the way, this sounds dirty to me: is it good practice or should I do something else? Is it safe to use std::vector for this? Something makes me think that an std::array would be more appropriate, but:

    - Constructor and destructor for my structure will rarely be called: I don't mind such overhead.
    - I'm going to set the std::vector capacity to the size I need (the size I would use for an std::array), so it won't incur any overhead due to sporadic reallocation.
    - I don't mind a little space overhead for std::vector's internal structure.
    - I could use the ability to resize the vector (or better: to have a size chosen during setup), and I think there's no way to do this with std::array, since its size is a template parameter (which is too bad: I could do that even with an old C-like array, just by dynamically allocating it on the heap).

    If std::vector is fine for my purpose, I'd like to know in detail whether it will have any runtime overhead with respect to std::array (or to a plain C array): I know that it'll call the default constructor for any element once I increase its size (but I guess this won't cost anything if my data has an empty default constructor?), same for the destructor. Anything else?
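
    For what it's worth, std::vector's storage is guaranteed to be contiguous, so the pointer-arithmetic idiom above is well defined as long as the vector is not resized while the pointers are in use. A small sketch (the glVertexPointer call is shown only schematically in a comment):

        #include <cassert>
        #include <cstddef>
        #include <vector>

        struct Element {
            float vertex[3];
            // ... other members ...
        };

        int main() {
            std::vector<Element> elems(100);   // size fixed at setup

            // Recover an element's index from a pointer, as described above.
            Element* x = &elems[42];
            std::ptrdiff_t index = x - &elems[0];
            assert(index == 42);

            // Stride/offset arithmetic of the kind glVertexPointer expects:
            // glVertexPointer(3, GL_FLOAT, sizeof(Element), &elems[0].vertex);
            return 0;
        }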


  • I get this error: "glibc detected"

    - by AKGMA
    Hi, I just wrote a piece of C++ code and compiled it using g++ on Ubuntu. When I run my code everything is fine; it runs well and gives output, but it doesn't exit, and it gives this error:

        *** glibc detected *** ./a.out: free(): invalid next size (fast): 0x09f931f0 ***
        ======= Backtrace: =========
        /lib/libc.so.6(+0x6c501)[0x3de501]
        /lib/libc.so.6(+0x6dd70)[0x3dfd70]
        /lib/libc.so.6(cfree+0x6d)[0x3e2e5d]
        /usr/lib/libstdc++.so.6(_ZdlPv+0x21)[0x6e2441]
        ./a.out[0x8049ce6]
        /lib/libc.so.6(+0x2f69e)[0x3a169e]
        /lib/libc.so.6(+0x2f70f)[0x3a170f]
        /lib/libc.so.6(__libc_start_main+0xef)[0x388cef]
        ./a.out[0x8048a61]
        ======= Memory map: ========
        00219000-0021a000 r-xp 00000000 00:00 0 [vdso]
        00354000-00370000 r-xp 00000000 08:01 8781845 /lib/ld-2.12.1.so
        00370000-00371000 r--p 0001b000 08:01 8781845 /lib/ld-2.12.1.so
        00371000-00372000 rw-p 0001c000 08:01 8781845 /lib/ld-2.12.1.so
        00372000-004c9000 r-xp 00000000 08:01 8781869 /lib/libc-2.12.1.so
        004c9000-004ca000 ---p 00157000 08:01 8781869 /lib/libc-2.12.1.so
        004ca000-004cc000 r--p 00157000 08:01 8781869 /lib/libc-2.12.1.so
        004cc000-004cd000 rw-p 00159000 08:01 8781869 /lib/libc-2.12.1.so
        004cd000-004d0000 rw-p 00000000 00:00 0
        00638000-00717000 r-xp 00000000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14
        00717000-0071b000 r--p 000de000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14
        0071b000-0071c000 rw-p 000e2000 08:01 3935829 /usr/lib/libstdc++.so.6.0.14
        0071c000-00723000 rw-p 00000000 00:00 0
        00909000-0092d000 r-xp 00000000 08:01 8781918 /lib/libm-2.12.1.so
        0092d000-0092e000 r--p 00023000 08:01 8781918 /lib/libm-2.12.1.so
        0092e000-0092f000 rw-p 00024000 08:01 8781918 /lib/libm-2.12.1.so
        00fdb000-00ff5000 r-xp 00000000 08:01 8781903 /lib/libgcc_s.so.1
        00ff5000-00ff6000 r--p 00019000 08:01 8781903 /lib/libgcc_s.so.1
        00ff6000-00ff7000 rw-p 0001a000 08:01 8781903 /lib/libgcc_s.so.1
        08048000-0804b000 r-xp 00000000 08:01 8652645 /home/akg/Desktop/contest/a.out
        0804b000-0804c000 r--p 00002000 08:01 8652645 /home/akg/Desktop/contest/a.out
        0804c000-0804d000 rw-p 00003000 08:01 8652645 /home/akg/Desktop/contest/a.out
        09f93000-09fb4000 rw-p 00000000 00:00 0 [heap]
        b7600000-b7621000 rw-p 00000000 00:00 0
        b7621000-b7700000 ---p 00000000 00:00 0
        b7765000-b7768000 rw-p 00000000 00:00 0
        b7775000-b7779000 rw-p 00000000 00:00 0
        bf9a7000-bf9c8000 rw-p 00000000 00:00 0 [stack]
        Aborted

    What does this mean? How can I get rid of it? I'm not using malloc or free; I'm just using vector!
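
    For context on that message: glibc aborts when free() finds the allocator's own bookkeeping (the chunk header after the freed block) overwritten, which means a heap buffer overflow happened somewhere earlier; vector misuse can trigger it without any direct malloc/free call. A minimal hypothetical example of the usual culprit, not taken from the question:

        #include <vector>

        int main() {
            std::vector<int> v(10);
            // operator[] is not bounds-checked; writing past the end can
            // corrupt the allocator metadata behind the buffer. The abort
            // then surfaces later, when the memory is freed.
            for (int i = 0; i <= 10; ++i)   // off-by-one: valid indices are 0..9
                v[i] = i;
        }   // vector's destructor frees the buffer; glibc may abort here

    Running such a program under valgrind typically points at the exact out-of-bounds write.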


  • Preventing a heavy process from sinking in the swap file

    - by eran
    Our service tends to fall asleep during the night on our client's server, and then has a hard time waking up. What seems to happen is that the process heap, which is sometimes several hundred MB, is moved to the swap file. This happens at night, when our service is not used and other jobs are scheduled to run (DB backups, AV scans etc). When this happens, after a few hours of inactivity the first call to the service takes up to a few minutes (subsequent calls take seconds).

    I'm quite certain it's an issue of virtual memory management, and I really hate the idea of forcing the OS to keep our service in physical memory. I know doing that will hurt other processes on the server and decrease the overall server throughput. That said, our clients just want our app to be responsive. They don't care if nightly jobs take longer. I vaguely remember there's a way to force Windows to keep pages in physical memory, but I really hate that idea. I'm leaning more towards some internal or external watchdog that will initiate higher-level functionality (there is already an internal scheduler that does very little and makes no difference). If there were a 3rd-party tool that provided that kind of service, it would be just as good. I'd love to hear any comments, recommendations and common solutions to this kind of problem. The service is written in VC2005 and runs on Windows servers.
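
    The watchdog variant can be as small as a background thread that touches the service's hot pages at a low rate, so they never look cold to the working-set manager; pinning APIs are then unnecessary. A hedged sketch of the idea (the 4 KB page size and the notion of a registered hot region are assumptions, and std::thread stands in for whatever threading API the codebase uses):

        #include <atomic>
        #include <chrono>
        #include <cstddef>
        #include <thread>

        // Read one byte per page of a region every few minutes so the pages
        // keep looking recently used. volatile keeps the reads from being
        // optimized away.
        void keepWarm(const volatile char* base, std::size_t bytes,
                      std::atomic<bool>& stop) {
            const std::size_t kPage = 4096;
            while (!stop) {
                for (std::size_t off = 0; off < bytes; off += kPage)
                    (void)base[off];
                std::this_thread::sleep_for(std::chrono::minutes(5));
            }
        }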


  • MemoryFailPoint fires too early on WinXP 64

    - by msedi
    Hello, I have created a volume class (called VoxelVolume) with self-organizing memory management, since the GC in C# didn't provide a good mechanism for managing the contents of the volume for mapping, unmapping and remapping. Although I could have used the mechanisms of virtual memory, the problem is that the files are often too large to fit into the page file, and I don't want to force users to increase the page-file size. Currently this system works quite well, and there is no problem with lacking resources or OutOfMemoryExceptions, since InsufficientMemoryException with MemoryFailPoint works quite well. This was all tested on a 32-bit WinXP system with 2GB of main memory.

    Running the same mechanism on a 64-bit system with 32GB of main memory also works well, but while the application is running, the MemoryFailPoint suddenly throws an exception although 24GB of main memory is still free. Another point: once the MemoryFailPoint has fired, it fires every time, and there is no chance to get rid of it. From what I have read so far, there is a small object heap and a large object heap (SOH and LOH), but the GC takes real care only of the SOH, which I can free of unused objects by calling GC.Collect() and GC.WaitForPendingFinalizers(). The MemoryFailPoint is obviously the only way to get a little bit of control over the LOH, but since there is enough memory left on the system, I see no reason why the MemoryFailPoint should fire. Is there any experience around here with using MemoryFailPoint? Thank you for your help. Martin


  • Using an ActiveX control without a form/dialog/window in C++, VS 2008

    - by younevertell
    I have an ActiveX control generated by Visual Basic 6, very old stuff. It has a couple of methods and one event. Now I have to use the control in C++ with Visual Studio 2008, but I don't want to generate a form/dialog/window as its container. I added an MFC wrapper class from the ActiveX control, so the wrapper class now has all the methods defined in the ActiveX control. I definitely need to add an event handler to take care of the event fired by the ActiveX control. With a form/dialog/window, this could be done with the BEGIN_EVENTSINK_MAP / ON_EVENT macros described below. My questions are:

    1. How do I add the event handler without a form/dialog/window? Is this impossible?
    2. Could I manually add a constructor to the ActiveX control wrapper class, so I could use the new operator to get an instance on the heap? Thanks.

    Add the event sink map near the start of the definition (.cpp) file. The BEGIN_EVENTSINK_MAP macro takes the container's class name, then the base class name, as parameters. The container's class name is used again in the AFX_EVENTSINK_MAP macro and the ON_EVENT macro.
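
    One approach that fits question 1 is to make the container a minimal invisible CWnd: create the control programmatically into a hidden window and put the event sink map on that window class. A rough sketch under stated assumptions: the wrapper class name, control ID and event DISPID below are placeholders, and the wizard-generated wrapper is assumed to expose the usual Create overload.

        #define IDC_MYCTRL 1001

        // Hypothetical hidden container window hosting the VB6 control.
        class CHiddenHost : public CWnd
        {
        public:
            BOOL Create()
            {
                // An invisible popup window is enough to act as the container.
                return CreateEx(0, AfxRegisterWndClass(0), _T("host"),
                                WS_POPUP, 0, 0, 0, 0, NULL, 0) &&
                       m_ctrl.Create(_T(""), WS_CHILD, CRect(0, 0, 0, 0),
                                     this, IDC_MYCTRL);
            }
            void OnMyEvent();              // handler for the control's event

        protected:
            CMyActiveXWrapper m_ctrl;      // the wizard-generated wrapper class
            DECLARE_EVENTSINK_MAP()
        };

        BEGIN_EVENTSINK_MAP(CHiddenHost, CWnd)
            // 1 is a placeholder DISPID; take the real one from the control's IDL.
            ON_EVENT(CHiddenHost, IDC_MYCTRL, 1, OnMyEvent, VTS_NONE)
        END_EVENTSINK_MAP()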


  • maximum memory which malloc can allocate!

    - by Vikas
    I was trying to figure out the maximum amount of memory I can malloc on my machine (1GB RAM, 160GB HD, Windows platform). I read that the maximum memory malloc can allocate is limited to physical memory (on the heap). Also, when a program's memory consumption exceeds a certain level, the computer stops working because other applications do not get the memory they require. So to confirm this, I wrote a small program in C:

        int main(){
            int *p;
            while(1){
                p=(int *)malloc(4);
                if(!p)break;
            }
        }

    I was hoping that at some point the allocation would fail and the loop would break, but my computer hung as if it were an infinite loop. I waited for about an hour, and finally I had to forcibly shut down my computer. Some questions: Does malloc allocate memory from the HD as well? What was the reason for the above behaviour? Why didn't the loop break at any point? Why wasn't there any allocation failure?
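
    One note on why the loop never broke: each small malloc succeeds as long as address space and page-file-backed commit remain, so the machine thrashes the page file long before malloc returns NULL. A quicker way to probe the limit is to search for the largest single block malloc will grant (a sketch; the exact figure depends on the OS and address-space layout):

        #include <cstdio>
        #include <cstdlib>

        int main() {
            std::size_t lo = 0, hi = (std::size_t)1 << 31;  // probe up to 2 GB

            // Binary search for the largest size granted in one allocation.
            while (lo + 1 < hi) {
                std::size_t mid = lo + (hi - lo) / 2;
                void* p = std::malloc(mid);
                if (p) { std::free(p); lo = mid; }
                else   { hi = mid; }
            }
            std::printf("largest single malloc: about %lu bytes\n",
                        (unsigned long)lo);
            return 0;
        }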


  • How are two-dimensional arrays formatted in memory?

    - by Chris Cooper
    In C, I know I can dynamically allocate a two-dimensional array on the heap using the following code:

        int** someNumbers = malloc(arrayRows*sizeof(int*));

        for (i = 0; i < arrayRows; i++) {
            someNumbers[i] = malloc(arrayColumns*sizeof(int));
        }

    Clearly, this actually creates a one-dimensional array of pointers to a bunch of separate one-dimensional arrays of integers, and "The System" can figure out what I mean when I ask for:

        someNumbers[4][2];

    But when I statically declare a 2D array, as in the following line...

        int someNumbers[ARRAY_ROWS][ARRAY_COLUMNS];

    ...does a similar structure get created on the stack, or is it of another form completely? (i.e. is it a 1D array of pointers? If not, what is it, and how do references to it get figured out?) Also, when I said "The System", what is actually responsible for figuring that out? The kernel? Or does the C compiler sort it out while compiling?
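
    A sketch that makes the usual answer visible: a statically declared T a[R][C] is a single contiguous row-major block, and the compiler turns a[i][j] into the offset computation i*C + j at compile time; no pointer table is involved.

        #include <cassert>

        enum { ROWS = 3, COLS = 4 };

        int main() {
            int a[ROWS][COLS] = {};

            // Element (i, j) sits at flat offset i*COLS + j from the start.
            int* flat = &a[0][0];
            for (int i = 0; i < ROWS; ++i)
                for (int j = 0; j < COLS; ++j)
                    assert(&a[i][j] == flat + i * COLS + j);

            // sizeof sees the whole block, unlike the pointer-to-pointer form.
            assert(sizeof(a) == ROWS * COLS * sizeof(int));
            return 0;
        }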


  • how to make accessor for Dictionary in a way that returned Dictionary cannot be changed C# / 2.0

    - by matti
    I thought of the solution below because the collection is very, very small. But what if it were big?

        private Dictionary<string, OfTable> _folderData = new Dictionary<string, OfTable>();

        public Dictionary<string, OfTable> FolderData
        {
            get { return new Dictionary<string, OfTable>(_folderData); }
        }

    With List you can write:

        public class MyClass
        {
            private List<int> _items = new List<int>();

            public IList<int> Items
            {
                get { return _items.AsReadOnly(); }
            }
        }

    That would be nice! Thanks in advance, cheers & BR - Matti

    Edit: Now that I think about it, the objects in the collection live on the heap, so my solution does not prevent the caller from modifying them, because both Dictionary instances contain references to the same objects. Does this apply to the List example above?

        class OfTable
        {
            private string _wTableName;
            private int _table;
            private List<int> _classes;
            private string _label;

            public OfTable()
            {
                _classes = new List<int>();
            }

            public int Table
            {
                get { return _table; }
                set { _table = value; }
            }

            public List<int> Classes
            {
                get { return _classes; }
                set { _classes = value; }
            }

            public string Label
            {
                get { return _label; }
                set { _label = value; }
            }
        }

    So how do I make this immutable?


  • Android - BitmapFactory.decodeByteArray - OutOfMemoryError (OOM)

    - by Bob Keathley
    I have read hundreds of articles about the OOM problem. Most are in regard to large bitmaps. I am doing a mapping application where we download 256x256 weather overlay tiles. Most are totally transparent and very small. I just got a crash on a bitmap stream that was 442 bytes long while calling BitmapFactory.decodeByteArray(....). The exception states:

        java.lang.OutOfMemoryError: bitmap size exceeds VM budget(Heap Size=9415KB, Allocated=5192KB, Bitmap Size=23671KB)

    The code is:

        protected Bitmap retrieveImageData() throws IOException {
            URL url = new URL(imageUrl);
            InputStream in = null;
            OutputStream out = null;
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            // determine the image size and allocate a buffer
            int fileSize = connection.getContentLength();
            if (fileSize < 0) {
                return null;
            }
            byte[] imageData = new byte[fileSize];
            // download the file
            //Log.d(LOG_TAG, "fetching image " + imageUrl + " (" + fileSize + ")");
            BufferedInputStream istream = new BufferedInputStream(connection.getInputStream());
            int bytesRead = 0;
            int offset = 0;
            while (bytesRead != -1 && offset < fileSize) {
                bytesRead = istream.read(imageData, offset, fileSize - offset);
                offset += bytesRead;
            }
            // clean up
            istream.close();
            connection.disconnect();
            Bitmap bitmap = null;
            try {
                bitmap = BitmapFactory.decodeByteArray(imageData, 0, bytesRead);
            } catch (OutOfMemoryError e) {
                Log.e("Map", "Tile Loader (241) Out Of Memory Error " + e.getLocalizedMessage());
                System.gc();
            }
            return bitmap;
        }

    Here is what I see in the debugger: bytesRead = 442. So the bitmap data is 442 bytes. Why would it be trying to create a 23671KB bitmap and running out of memory?


  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes. The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    - Since every allocation needs 4KB, we probably reach the 2GB limit very soon. This problem could be solved if I made a 64-bit executable (I didn't try it yet).
    - Even when I only need a few hundred megabytes, the allocations fail at a certain moment.

    The second problem is the bigger one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process. My questions (or a cry for tips): Where can I find information about the maximum number of PTEs in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTEs be configured in the application or in Windows? Thanks, Patrick

    PS. A note for those who will argue that you shouldn't write your own memory manager: My application is rather specific, so I really want full control over memory management (can't give any more details). Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption). We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either. We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).
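
    For reference, the page-per-allocation scheme described above is typically built from VirtualAlloc plus an adjacent PAGE_NOACCESS guard page, which is also roughly what full page heap does, and it consumes address space and paging structures at exactly the rate the question observes. A minimal sketch of one such allocation (error handling elided):

        #include <windows.h>

        // Allocate 'bytes' so that any write past the end lands on a
        // no-access guard page and faults immediately. Costs two pages
        // of address space per allocation.
        void* guardedAlloc(size_t bytes)
        {
            SYSTEM_INFO si;
            GetSystemInfo(&si);
            const size_t page = si.dwPageSize;
            const size_t span = (bytes + page - 1) / page * page;

            char* base = (char*)VirtualAlloc(NULL, span + page,
                                             MEM_RESERVE | MEM_COMMIT,
                                             PAGE_READWRITE);
            if (!base) return NULL;

            DWORD old;
            VirtualProtect(base + span, page, PAGE_NOACCESS, &old);

            // Right-align the block so an overrun hits the guard page first.
            return base + span - bytes;
        }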


  • Why does this code sample produce a memory leak?

    - by citronas
    At university we were given the following code sample and were told that there is a memory leak when running this code. The sample should demonstrate a situation where the garbage collector can't work. As far as my object-oriented programming goes, the only code line able to create a memory leak would be

        items=Arrays.copyOf(items,2 * size+1);

    The documentation says that the elements are copied. Does that mean the references are copied (and therefore another entry on the heap is created), or are the objects themselves copied? As far as I know, Object and therefore Object[] are implemented as reference types. So assigning a new value to 'items' would allow the garbage collector to find that the old 'items' is no longer referenced and can therefore be collected. In my eyes, this code sample does not produce a memory leak. Could somebody prove me wrong? =)

        import java.util.Arrays;

        public class Foo {
            private Object[] items;
            private int size=0;
            private static final int ISIZE=10;

            public Foo() {
                items= new Object[ISIZE];
            }

            public void push(final Object o){
                checkSize();
                items[size++]=o;
            }

            public Object pop(){
                if (size==0)
                    throw new ///...
                return items[--size];
            }

            private void checkSize(){
                if (items.length==size){
                    items=Arrays.copyOf(items,2 * size+1);
                }
            }
        }


  • Link failure with either abnormal memory consumption or LNK1106 in Visual Studio 2005.

    - by Corvin
    Hello, I am trying to build a solution for Windows XP in Visual Studio 2005. This solution contains 81 projects (static libs, exes, DLLs) and is being successfully used by our partners. I copied the solution bundle from their repository and tried setting it up on 3 similar machines of people in our group. I was successful on two machines, and the solution failed to build on my machine. The build on my machine encountered two problems:

    - During a simple build, creation of the biggest static library (about 522MB in debug mode) would fail with the message "13>libd\ui1d.lib : fatal error LNK1106: invalid file or disk full: cannot seek to 0x20101879".
    - A full solution rebuild creates this library, but when it comes to linking the library to the main .exe file, devenv.exe spawns link.exe, which consumes about 80MB of physical memory and 250MB of virtual memory and spawns another link.exe, which does the same. This goes on until the system runs out of memory. On the PCs of my colleagues where a successful build could be performed, there is only one link.exe process, which uses all the memory required for linking (about 500MB physical).

    There is plenty of hard drive space on my machine, and the file system is NTFS. All three of our systems are similar: Core2Quad processors, 4GB of RAM, Windows XP SP3. We are using Visual Studio installed from the same source. I tried using different RAM and a different CPU, using a dedicated graphics adapter to eliminate the possibility of video memory sharing influencing the build, putting the solution files in a different location, using different versions of VS 2005 (Professional, Standard and Team Suite), changing the amount of available virtual memory, running memtest86, and building the project from scratch (i.e. a clean bundle). I have read what MSDN says about LNK1106; none of the cases apply to me except maybe "out of heap space", but I am not sure how I should fight this. The only idea I have left is reinstalling the OS, but I am not sure that it would help, and I am not sure that my situation wouldn't repeat itself on a different machine. Would anyone have any sort of advice for me? Thanks


  • How come the Actionscript 3 ENTER_FRAME event is crazy nuts?

    - by nstory
    So, I've been toying around with Flash, browsing through the documentation and all that, and noticed that the ENTER_FRAME event seems to defy my expectation of a deterministic universe. Take the following example:

        (new MovieClip()).addEventListener(Event.ENTER_FRAME, function(ev) {trace("Test");});

    Notice this anonymous MovieClip is not added to the display hierarchy, and any reference to it is immediately lost. It will actually print "Test" once a frame until it is garbage collected. How insane is that? The behavior of this is actually determined by when the garbage collector feels like coming around in all its unpredictable insanity! Is there a better way to create intermittent failures? Seriously. My two theories are that either the DisplayObject class stores weak references to all its instances for the purpose of dispatching ENTER_FRAME events, or, and much wilder, the Flash player actually scans the heap each frame looking for ENTER_FRAME listeners to pull on. Can any hardened ActionScript developer clue me in on how this works? (And maybe a why-the-f**k they thought this was a good idea?)


  • C++ and Dependency Injection in unit testing

    - by lhumongous
    Suppose I have a C++ class like so:

        class A {
        public:
            A() { }
            void SetNewB( const B& _b ) { m_B = _b; }
        private:
            B m_B;
        };

    In order to unit test something like this, I would have to break A's dependency on B. Since class A holds onto an actual object and not a pointer, I would have to refactor this code to take a pointer. Additionally, I would need to create a parent interface class for B so I can pass in my own fake of B when I test SetNewB. In this case, doesn't unit testing with dependency injection further complicate the existing code? If I make B a pointer, I'm now introducing heap allocation, and some piece of code is now responsible for cleaning it up (unless I use ref-counted pointers). Additionally, if B is a rather trivial class with only a couple of member variables and functions, why introduce a whole new interface for it instead of just testing with an instance of B? I suppose you could make the argument that it would be easier to refactor A by using an interface. But are there some cases where two classes might need to be tightly coupled?
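
    One middle ground worth mentioning here is a template seam: parameterizing A on the collaborator's type lets a test substitute a fake without introducing an interface class or heap allocation. A sketch under stated assumptions (FakeB and the default-parameter arrangement are illustrative, not from the question):

        // Production B stays a plain value type.
        struct B { int value; };

        // The dependency becomes a template parameter with a default, so
        // production code can keep using a typedef such as 'typedef BasicA<> A;'.
        template <class TB = B>
        class BasicA {
        public:
            void SetNewB(const TB& b) { m_B = b; }
            const TB& GetB() const { return m_B; }
        private:
            TB m_B;
        };

        // In the unit test, a fake B records interactions, still on the stack.
        struct FakeB {
            int copies;
            FakeB() : copies(0) {}
            FakeB& operator=(const FakeB&) { ++copies; return *this; }
        };

        int main() {
            BasicA<FakeB> a;    // test instantiation
            FakeB fake;
            a.SetNewB(fake);    // exercises the same code path as BasicA<B>
            return 0;
        }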


  • Sorted queue with dropping out elements

    - by ffriend
    I have a list of jobs and a queue of workers waiting for these jobs. All the jobs are the same, but the workers are different and sorted by their ability to perform the job. That is, the first person does this job best of all, the second does it just a little bit worse, and so on. A job is always assigned to the person with the highest skill among those who are free at that moment. When a person is assigned a job, he drops out of the queue for some time. But when he is done, he gets back to his position. So, for example, at some moment in time the worker queue looks like:

        [x, x, .83, x, .7, .63, .55, .54, .48, ...]

    where x's stand for missing workers and the numbers show the skill levels of the remaining workers. When there's a new job, it is assigned to the 3rd worker as the one with the highest skill among available workers. So the next moment the queue looks like:

        [x, x, x, x, .7, .63, .55, .54, .48, ...]

    Let's say that at this moment worker #2 finishes his job and gets back into the list:

        [x, .91, x, x, .7, .63, .55, .54, .48, ...]

    I hope the process is completely clear now. My question is what algorithm and data structure to use to implement quick search and deletion of a worker and insertion back at his position. For the moment, the best approach I can see is to use a Fibonacci heap, which has amortized O(log n) for deleting the minimal element (assigning a job and deleting the worker from the queue) and O(1) for inserting him back, which is pretty good. But is there an even better algorithm / data structure, possibly taking into account the fact that the elements are already sorted and only drop out of the queue from time to time?
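
    Since the skill ordering is fixed, one simplification is to store only the set of available worker indices: assigning a job removes the smallest index, and a returning worker re-inserts his index, with the position implicit. An ordered set gives O(log n) for both operations without Fibonacci-heap constant factors. A sketch:

        #include <cstddef>
        #include <iostream>
        #include <set>
        #include <vector>

        int main() {
            // skill[i] for worker i; index 0 is the most skilled.
            std::vector<double> skill;
            skill.push_back(0.95); skill.push_back(0.91); skill.push_back(0.83);
            skill.push_back(0.77); skill.push_back(0.70);

            std::set<std::size_t> available;
            for (std::size_t i = 0; i < skill.size(); ++i) available.insert(i);

            // New job: the best available worker is the smallest index.
            std::size_t w = *available.begin();
            available.erase(available.begin());           // O(log n)
            std::cout << "assigned worker " << w
                      << " with skill " << skill[w] << "\n";

            // Worker finishes: put his index back; his position is implicit.
            available.insert(w);                          // O(log n)
            return 0;
        }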


  • Asynchronous URL connection objective C

    - by tweety
    I created an asynchronous URL connection to call a web service using the HTTP POST method. After pinging the web service, I compute an NSTimeInterval in the completion handler. My problem is that when I try to display the time on the view controller, it is not displayed promptly. I know the block is stored on the heap and gets executed at some later time, and that's probably why I'm not getting a prompt answer. I was wondering, is there any other way to do this? Thanks in advance. My code:

        __block NSDate *start = [NSDate date];
        __block NSDate *end;
        __block double miliseconds;
        __block NSTimeInterval time;

        [NSURLConnection sendAsynchronousRequest:urlRequest queue:queue
            completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                if ([data length] == 0 && error == nil) {
                    end = [NSDate date];
                    time = [end timeIntervalSinceDate:start];
                    NSLog(@"Successfully Pinged");
                    miliseconds = time;
                    // calling a method to display ping time
                    [self label:miliseconds];
                }
        }];

        - (void)label:(double)mili {
            double miliseconds = mili * 1000;
            self.timeDisplay.text = [NSString stringWithFormat:@"Time: %.3f ms", miliseconds];
        }


  • How to repair Java in Ubuntu after trying to switch to Java 6 using update-java-alternatives

    - by Kau-Boy
    I tried to switch from Java 5 to Java 6 using the "update-java-alternatives" command as explained on this page: https://help.ubuntu.com/community/Java But afterwards I get the following error when I try to execute java:

        root@webserver:~# java
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I also tried to reinstall the Java binaries using apt-get, but I didn't succeed in reinstalling them. I would like to post the apt-get errors, but unfortunately I don't know how to print out the error messages in English instead of German. My system is a Ubuntu 8.04 root server. Here is the (Google-translated) English text from trying to install Java 6 again:

        root@server:~# apt-get install sun-java6-jdk
        Reading package lists ... Done
        Building dependency tree
        Reading state information ... Done
        sun-java6-jdk is already the newest version.
        sun-java6-jdk set to manually installed.
        0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Setting up sun-java6-bin (6-03-0ubuntu2) ...
        Could not create the Java virtual machine.
        dpkg: error processing sun-java6-bin (--configure):
         Subprocess post-installation script returned error exit status 1
        Errors were encountered while processing:
         sun-java6-bin
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I hope this might help you to help me.


  • How string accepting interface should look like?

    - by ybungalobill
    Hello, this is a follow-up to this question. Suppose I write a C++ interface that accepts or returns a const string. I can use a const char* zero-terminated string:

        void f(const char* str); // (1)

    The other way would be to use an std::string:

        void f(const string& str); // (2)

    It's also possible to write an overload and accept both:

        void f(const char* str); // (3)
        void f(const string& str);

    Or even a template in conjunction with the Boost string algorithms:

        template<class Range>
        void f(const Range& str); // (4)

    My thoughts are: (1) is not C++ish and may be less efficient when subsequent operations need to know the string length. (2) is bad because now f("long very long C string") invokes a construction of std::string, which involves a heap allocation. If f uses that string just to pass it to some low-level interface that expects a C string (like fopen), then it is just a waste of resources. (3) causes code duplication, although one f can call the other depending on which is the most efficient implementation. However, we can't overload based on return type, as in the case of std::exception::what(), which returns a const char*. (4) doesn't work with separate compilation and may cause even larger code bloat. Choosing between (1) and (2) based on what's needed by the implementation is, well, leaking an implementation detail into the interface. The question is: what is the preferred way? Is there any single guideline I can follow? What's your experience?
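
    One pattern that splits the difference between (1) and (2) is a small non-owning view that binds to both a literal and a std::string without allocating, essentially what C++17 later standardized as std::string_view. A stripped-down sketch of the idea:

        #include <cstring>
        #include <string>

        // Non-owning (pointer, length) pair: no heap allocation for either
        // f("literal") or f(someStdString), and the length travels along.
        // The referenced buffer must outlive the call.
        class StrRef {
        public:
            StrRef(const char* s) : data_(s), size_(std::strlen(s)) {}
            StrRef(const std::string& s) : data_(s.c_str()), size_(s.size()) {}
            const char* c_str() const { return data_; }  // both ctors keep it zero-terminated
            std::size_t size() const { return size_; }
        private:
            const char* data_;
            std::size_t size_;
        };

        void f(StrRef str);   // one signature serves both call styles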


  • Program always returns binary '>>' : no operator found which takes a left-hand operand of type error

    - by Tom Ward
    So I've been set a task to create a temperature converter in C++ using this equation: Celsius = (5/9)*(Fahrenheit - 32). So far I've come up with this (I've cut out the 10 lines' worth of comments from the start, so the code posted begins on line 11, if that makes any sense):

        #include <iostream>
        #include <string>
        #include <iomanip>
        #include <cmath>

        using namespace std;

        int main ()
        {
            float celsius;
            float farenheit;

            std::cout << "**************************" << endl;
            std::cout << "*4001COMP-Lab5-Question 1*" << endl;
            std::cout << "**************************" << endl << endl;

            std::cout << "Please enter a temperature in farenheit: ";
            std::cin >> farenheit >> endl;

            std::cout << "Temperature (farenheit): " << endl;
            std::cout << "Temperature (celsius): " << celsius << endl;

            std::cin.get();
            return 0;
        }

    Every time I try to run this program I get a heap of errors, with this one appearing every time:

        1>m:\visual studio 2010\projects\week 5\week 5\main.cpp(26): error C2678: binary '>>' : no operator found which takes a left-hand operand of type 'std::basic_istream<_Elem,_Traits>' (or there is no acceptable conversion)

    I've tried everything I can think of to get rid of this error, but it reappears every time. Any idea how to fix this?
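
    For comparison, here is a version of the conversion that compiles, assuming the assignment's formula. Two things in the original are worth noting: endl is an output manipulator, so std::cin >> farenheit >> endl is exactly what C2678 rejects, and in C++ the integer expression 5/9 evaluates to 0, so the formula needs floating-point literals.

        #include <iostream>

        int main()
        {
            float fahrenheit = 0.0f;
            std::cout << "Please enter a temperature in fahrenheit: ";
            std::cin >> fahrenheit;                 // no endl on an input stream

            float celsius = (5.0f / 9.0f) * (fahrenheit - 32.0f);

            std::cout << "Temperature (fahrenheit): " << fahrenheit << std::endl;
            std::cout << "Temperature (celsius): " << celsius << std::endl;

            std::cin.ignore();
            std::cin.get();
            return 0;
        }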


  • Tracking down origin of I/O in multi-process server

    - by Craig Ringer
    I'm currently trying to track down some phantom I/O in a PostgreSQL build I'm testing. It's a multi-process server, and it isn't simple to associate disk I/O back to a particular backend and query. I thought Linux's perf tool would be ideal for this, but I'm struggling to capture block I/O performance-counter metrics and associate them with user-space activity. It's easy to record block I/O requests and completions with, e.g.:

        sudo perf record -g -T -u postgres -e 'block:block_rq_*'

    and the user-space pid is recorded, but no kernel or user-space stack is captured, nor is there the ability to snapshot bits of the user-space process's heap (say, query text) etc. So while you have the pid, you don't know what the process was doing at that point. Just perf script output like:

        postgres 7462 [002] 301125.113632: block:block_rq_issue: 8,0 W 0 () 208078848 + 1024 [postgres]

    If I add the -g flag to perf record, it'll take snapshots of the kernel stack, but it doesn't capture user-space state for perf events captured in the kernel. The user-space stack only goes up to the entry point from userspace, like LWLockRelease, LWLockAcquire, memcpy (mmap'd I/O), __GI___libc_write, etc. So: any tips? Being able to capture a snapshot of the user-space stack in response to kernel events would be ideal.


  • Any workarounds for non-static member array initialization?

    - by TomiJ
    In C++, it's not possible to initialize array members in the initialization list; thus member objects should have default constructors, and they should be properly initialized in the constructor. Is there any (reasonable) workaround for this, apart from not using arrays?

    [Anything that can be initialized using only the initialization list is, in our application, far preferable to using the constructor, as that data can be allocated and initialized by the compiler and linker, and every CPU clock cycle counts, even before main. However, it is not always possible to have a default constructor for every class, and besides, reinitializing the data again in the constructor rather defeats the purpose anyway.]

    E.g. I'd like to have something like this (but this one doesn't work):

        class OtherClass {
        private:
            int data;
        public:
            OtherClass(int i) : data(i) {}; // No default constructor!
        };

        class Foo {
        private:
            OtherClass inst[3]; // Array size fixed and known ahead of time.
        public:
            Foo(...) : inst[0](0), inst[1](1), inst[2](2) {};
        };

    The only workaround I'm aware of is the non-array one:

        class Foo {
        private:
            OtherClass inst0;
            OtherClass inst1;
            OtherClass inst2;
            OtherClass *inst[3];
        public:
            Foo(...) : inst0(0), inst1(1), inst2(2) {
                inst[0]=&inst0;
                inst[1]=&inst1;
                inst[2]=&inst2;
            };
        };

    Edit: It should be stressed that OtherClass has no default constructor, and that it is very desirable to have the linker be able to allocate any memory needed (one or more static instances of Foo will be created); using the heap is essentially verboten. I've updated the examples above to highlight the first point.
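
    One further workaround that keeps the array by value (no heap, no pointer table) is to wrap it in a small aggregate struct and copy-initialize that struct from a helper in the initializer list; aggregate initialization may call OtherClass's one-argument constructor even in C++03. A sketch, assuming OtherClass is cheap to copy:

        class OtherClass {
        public:
            OtherClass(int i) : data(i) {}   // still no default constructor
        private:
            int data;
        };

        class Foo {
        private:
            struct Arr { OtherClass inst[3]; };   // aggregate holding the array

            static Arr makeArr()
            {
                Arr a = { { OtherClass(0), OtherClass(1), OtherClass(2) } };
                return a;
            }

            Arr arr;   // initialized in the list below, no default ctor needed

        public:
            Foo() : arr(makeArr()) {}
        };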


  • Declaring two large 2d arrays gives segmentation fault.

    - by pfdevil
    Hello, I'm trying to declare and allocate memory for two 2D arrays. However, when trying to assign values to itemFeatureQ[39][16816], I get a segmentation fault. I can't understand it, since I have 2GB of RAM and am only using 19MB on the heap. Here is the code:

        double** reserveMemory(int rows, int columns)
        {
            double **array;
            int i;

            array = (double**) malloc(rows * sizeof(double *));
            if(array == NULL) {
                fprintf(stderr, "out of memory\n");
                return NULL;
            }

            for(i = 0; i < rows; i++) {
                array[i] = (double*) malloc(columns * sizeof(double *));
                if(array == NULL) {
                    fprintf(stderr, "out of memory\n");
                    return NULL;
                }
            }
            return array;
        }

        void populateUserFeatureP(double **userFeatureP)
        {
            int x,y;
            for(x = 0; x < CUSTOMERS; x++) {
                for(y = 0; y < FEATURES; y++) {
                    userFeatureP[x][y] = 0;
                }
            }
        }

        void populateItemFeatureQ(double **itemFeatureQ)
        {
            int x,y;
            for(x = 0; x < FEATURES; x++) {
                for(y = 0; y < MOVIES; y++) {
                    printf("(%d,%d)\n", x, y);
                    itemFeatureQ[x][y] = 0;
                }
            }
        }

        int main(int argc, char *argv[])
        {
            double **userFeatureP = reserveMemory(480189, 40);
            double **itemFeatureQ = reserveMemory(40, 17770);
            populateItemFeatureQ(itemFeatureQ);
            populateUserFeatureP(userFeatureP);
            return 0;
        }
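
    Two details in the row loop are worth a second look: the rows are allocated with sizeof(double *) where the elements are double, and the NULL check inside the loop tests array rather than array[i]. A contiguous layout avoids the per-row bookkeeping altogether; a sketch (kept cast-compatible with the question's style):

        #include <cstdlib>

        // Two allocations total: one block of row pointers, one contiguous
        // block for all cells. Element size is sizeof(double), not
        // sizeof(double *).
        double** reserveMemory2(int rows, int columns)
        {
            double** array = (double**) std::malloc(rows * sizeof(double*));
            double* cells = (double*) std::malloc((size_t)rows * columns * sizeof(double));
            if (array == NULL || cells == NULL) {
                std::free(array);
                std::free(cells);
                return NULL;
            }
            for (int i = 0; i < rows; i++)
                array[i] = cells + (size_t)i * columns;
            return array;
        }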


  • learning C++ from java, trying to make a linked list.

    - by kyeana
    I just started learning C++ (coming from Java) and am having some serious problems doing anything :P Currently I am attempting to make a linked list, but I must be doing something stupid, because I keep getting "void value not ignored as it ought to be" compile errors (I have marked below where it is thrown). If anyone could help me with what I'm doing wrong, I would be very grateful :) Also, I am not used to having the choice of passing by reference, address, or value, or to memory management in general (currently I have all my nodes and the data declared on the heap). If anyone has any general advice for me, I also wouldn't complain :P

    Key code from LinkedListNode.cpp:

        LinkedListNode::LinkedListNode()
        {
            //set next and prev to null
            pData=0;  //data needs to be a pointer so we can set it to null
                      //for the tail and head.
            pNext=0;
            pPrev=0;
        }

        /*
         * Sets the 'next' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetNext(LinkedListNode& _next)
        {
            pNext=&_next;
        }

        /*
         * Sets the 'prev' pointer to the memory address of the inputed reference.
         */
        void LinkedListNode::SetPrev(LinkedListNode& _prev)
        {
            pPrev=&_prev;
        }

        //rest of class

    Key code from LinkedList.cpp:

        #include "LinkedList.h"

        LinkedList::LinkedList()
        {
            // Set head and tail of linked list.
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();

            /*
             * THIS IS WHERE THE ERRORS ARE.
             */
            *pHead->SetNext(*pTail);
            *pTail->SetPrev(*pHead);
        }

        //rest of class
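
    On the marked lines: SetNext returns void, and the leading * tries to dereference that void result, which is precisely what the compiler is rejecting; the -> already dereferences pHead. A sketch of the constructor without the stray asterisks:

        LinkedList::LinkedList()
        {
            pHead = new LinkedListNode();
            pTail = new LinkedListNode();

            // p->f(x) already dereferences p; no '*' in front of a void call.
            pHead->SetNext(*pTail);
            pTail->SetPrev(*pHead);
        }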


  • Slowing process creation under Java?

    - by oconnor0
    I have a single JVM [1] with a large heap (up to 240GB, though in the 20-40GB range for most of this phase of execution) running under Linux [2] on a server with 24 cores. We have tens of thousands of objects that have to be processed by an external executable, and the data created by those executables is then loaded back into the JVM. Each executable produces about half a megabyte of data (on disk) which, when read right in after the process finishes, is of course larger. Our first implementation had each executable handle only a single object. This involved spawning twice as many executables as we had objects (since we called a shell script that called the executable). Our CPU utilization would start off high, but not necessarily at 100%, and slowly worsen. As we began measuring to see what was happening, we noticed that the process creation time [3] slows down continually: while starting at sub-second times, it eventually grew to take a minute or more. The actual processing done by the executable usually takes less than 10 seconds. Next we changed the executable to take a list of objects to process, in an attempt to reduce the number of processes created. With batch sizes of a few hundred (about 1% of our current sample size), the process creation times start out around 2 seconds and grow to around 5-6 seconds. Basically, why is it taking so long to create these processes as execution continues?

    [1] Oracle JDK 1.6.0_22
    [2] Red Hat Enterprise Linux Advanced Platform 5.3, Linux kernel 2.6.18-194.26.1.el5 #1 SMP
    [3] Creation of the ProcessBuilder object, redirecting the error stream, and starting it.

