Search Results

Search found 1544 results on 62 pages for 'heap corruption'.

  • Handwritten linked list is segfaulting and I don't understand why

    - by Born2Smile
    Hi, I was working on a bit of fun, making an interface to run gnuplot from within C++, and for some reason my linked list implementation fails. The code below fails on the line plots->append(&plot). Stepping through the code I discovered that for some reason the destructor ~John() is called immediately after the constructor John(), and I cannot seem to figure out why. The code included below is a stripped-down version operating only on Plot*. Originally I made the linked list as a template class, and it worked fine as ll<int> and ll<char*>, but for some reason it fails as ll<Plot*>. Could you please help me figure out why it fails, and perhaps help me understand how to make it work? In advance: Thanks a heap! //B2S

    ```cpp
    #include <string.h>

    class Plot{
        char title[80];
    public:
        Plot(){ }
    };

    class Link{
        Plot* element;
        Link* next;
        Link* prev;
        friend class ll;
    };

    class ll{
        Link* head;
        Link* tail;
    public:
        ll(){
            head = tail = new Link();
            head->prev = tail->prev = head->next = tail->next = head;
        }
        ~ll(){
            while (head != tail){
                tail = tail->prev;
                delete tail->next;
            }
            delete head;
        }
        void append(Plot* element){
            tail->element = element;
            tail->next = new Link();
            tail->next->prev = tail;
            tail->next = tail;
        }
    };

    class John{
        ll* plots;
    public:
        John(){ plots = new ll(); }
        ~John(){ delete plots; }
        John(Plot* plot){
            John();
            plots->append(plot);
        }
    };

    int main(){
        Plot p;
        John k(&p);
    }
    ```
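
    The symptom the poster observed has a well-defined cause: inside John(Plot*), the statement John(); does not invoke the default constructor on *this; it constructs and immediately destroys an unnamed temporary John, which is exactly why the destructor appears to fire right after the constructor, and why plots is never initialized before append is called. A minimal runnable sketch of just that semantic (hypothetical class; the behavior itself is standard C++):

    ```cpp
    #include <iostream>

    struct John {
        John()    { std::cout << "default ctor\n"; }
        // "John();" below does NOT initialize *this: it creates a temporary
        // that is constructed and destroyed on the spot.
        John(int) { John(); }
        ~John()   { std::cout << "dtor\n"; }
    };

    int main() {
        John k(42); // prints "default ctor" then "dtor" (the temporary),
                    // then one more "dtor" for k at end of scope
    }
    ```

    The usual pre-C++11 fix is a private init() helper called from both constructors; with C++11, a delegating constructor, written John(Plot* plot) : John() { plots->append(plot); }, does what the original line was presumably meant to do.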

  • Seg Fault when using std::string on an embedded Linux platform

    - by Brad
    Hi, I have been working for a couple of days on a problem with my application running on an embedded ARM Linux platform. Unfortunately the platform precludes me from using any of the usual useful tools for finding the exact issue. When the same code is run on a PC running Linux, I get no such error.

    In the sample below, I can reliably reproduce the problem by uncommenting the string, list or vector lines. Leaving them commented results in the application running to completion. I expect that something is corrupting the heap, but I cannot see what. The program will run for a few seconds before giving a segmentation fault.

    The code is compiled using an arm-linux cross-compiler:

    ```
    arm-linux-g++ -Wall -otest fault.cpp -ldl -lpthread
    arm-linux-strip test
    ```

    Any ideas greatly appreciated.

    ```cpp
    #include <stdio.h>
    #include <pthread.h>   // added: the original snippet omits it but uses pthreads throughout
    #include <vector>
    #include <list>
    #include <string>

    using namespace std;

    /////////////////////////////////////////////////////////////////////////////

    class TestSeg
    {
        static pthread_mutex_t _logLock;
    public:
        TestSeg()
        {
        }

        ~TestSeg()
        {
        }

        static void* TestThread( void *arg )
        {
            int i = 0;
            while ( i++ < 10000 )
            {
                printf( "%d\n", i );
                WriteBad( "Function" );
            }
            pthread_exit( NULL );
        }

        static void WriteBad( const char* sFunction )
        {
            pthread_mutex_lock( &_logLock );
            printf( "%s\n", sFunction );
            //string sKiller;       // <---------------------------------- Bad
            //list<char> killer;    // <---------------------------------- Bad
            //vector<char> killer;  // <---------------------------------- Bad
            pthread_mutex_unlock( &_logLock );
            return;
        }

        void RunTest()
        {
            int threads = 100;
            pthread_t _rx_thread[threads];
            for ( int i = 0 ; i < threads ; i++ )
            {
                pthread_create( &_rx_thread[i], NULL, TestThread, NULL );
            }
            for ( int i = 0 ; i < threads ; i++ )
            {
                pthread_join( _rx_thread[i], NULL );
            }
        }
    };

    pthread_mutex_t TestSeg::_logLock = PTHREAD_MUTEX_INITIALIZER;

    int main( int argc, char *argv[] )
    {
        TestSeg seg;
        seg.RunTest();
        pthread_exit( NULL );
    }
    ```

  • JVM throws OutOfMemory during GC though there is plenty of memory left...

    - by Shu L.
    I have my Java application configured to use 5G of memory. I got an OutOfMemory out of the blue. I inspected the GC log and found plenty of memory left: the young generation occupies 4% of its allocated space, tenured generation occupancy is 5%, and the perm generation is at 43%. I am puzzled why the JVM throws an OutOfMemory at GC time. Does anyone know why this is happening? Your help is greatly appreciated.

    JVM memory and GC settings:

    ```
    -server -Xms5g -Xmx5g -Xss256k -XX:NewSize=2g -XX:MaxNewSize=2g
    -XX:+UseParallelOldGC -XX:+UseTLAB -XX:SurvivorRatio=8
    -XX:TargetSurvivorRatio=90 -XX:+DisableExplicitGC
    ```

    gc.log:

    ```
    2009-09-19T03:34:59.741+0000: 92836.778: [GC Desired survivor size 152567808 bytes, new threshold 1 (max 15) [PSYoungGen: 1941492K->144057K(1947072K)] 3138022K->1340830K(5092800K), 0.1947640 secs] [Times: user=0.61 sys=0.01, real=0.19 secs]
    2009-09-19T03:35:29.918+0000: 92866.954: [GC Desired survivor size 152109056 bytes, new threshold 1 (max 15) [PSYoungGen: 1941625K->144049K(1948608K)] 3138398K->1341080K(5094336K), 0.1942000 secs] [Times: user=0.61 sys=0.01, real=0.20 secs]
    2009-09-19T03:35:56.883+0000: 92893.920: [GC Desired survivor size 156565504 bytes, new threshold 1 (max 15) [PSYoungGen: 1567994K->115427K(1915072K)] 2765026K->1312820K(5060800K), 0.1586320 secs] [Times: user=0.50 sys=0.01, real=0.16 secs]
    2009-09-19T03:35:57.042+0000: 92894.079: [GC Desired survivor size 179961856 bytes, new threshold 1 (max 15) [PSYoungGen: 115427K->0K(1898560K)] 1312820K->1313987K(5044288K), 0.0775650 secs] [Times: user=0.42 sys=0.19, real=0.08 secs]
    2009-09-19T03:35:57.120+0000: 92894.157: [Full GC [PSYoungGen: 0K->0K(1898560K)] [ParOldGen: 1313987K->159522K(3145728K)] 1313987K->159522K(5044288K) [PSPermGen: 20025K->19942K(40256K)], 0.5692300 secs] [Times: user=2.18 sys=0.05, real=0.57 secs]
    2009-09-19T03:35:57.690+0000: 92894.726: [GC Desired survivor size 197066752 bytes, new threshold 1 (max 15) [PSYoungGen: 0K->0K(1745728K)] 159522K->159522K(4891456K), 0.0072590 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
    2009-09-19T03:35:57.698+0000: 92894.734: [Full GC [PSYoungGen: 0K->0K(1745728K)] [ParOldGen: 159522K->158627K(3145728K)] 159522K->158627K(4891456K) [PSPermGen: 19942K->19934K(45504K)], 0.3280480 secs] [Times: user=1.46 sys=0.00, real=0.33 secs]
    Heap
     PSYoungGen      total 1745728K, used 87233K [0x00002aab73650000, 0x00002aabf3650000, 0x00002aabf3650000)
      eden space 1745664K, 4% used [0x00002aab73650000,0x00002aab78b80778,0x00002aabddf10000)
      from space 64K, 0% used [0x00002aabddf10000,0x00002aabddf10000,0x00002aabddf20000)
      to   space 192448K, 0% used [0x00002aabe7a60000,0x00002aabe7a60000,0x00002aabf3650000)
     ParOldGen       total 3145728K, used 158627K [0x00002aaab3650000, 0x00002aab73650000, 0x00002aab73650000)
      object space 3145728K, 5% used [0x00002aaab3650000,0x00002aaabd138d28,0x00002aab73650000)
     PSPermGen       total 45504K, used 19965K [0x00002aaaae250000, 0x00002aaab0ec0000, 0x00002aaab3650000)
      object space 45504K, 43% used [0x00002aaaae250000,0x00002aaaaf5cf668,0x00002aaab0ec0000)
    ```

    I am on 64-bit Linux and JRE 1.6.0_10:

    ```
    $ uname -a
    Linux x 2.6.24-etchnhalf.1-amd64 #1 SMP Tue Oct 14 03:11:45 UTC 2008 x86_64 GNU/Linux
    $ java -version
    java version "1.6.0_10"
    Java(TM) SE Runtime Environment (build 1.6.0_10-b33)
    Java HotSpot(TM) 64-Bit Server VM (build 11.0-b15, mixed mode)
    ```

  • Freetype2 failing under WoW64

    - by Necrolis
    I built a TTF-to-D3D-texture function using freetype2 (2.3.9) to generate grayscale maps from the fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free. I know it should work, as games like WCIII, which to the best of my knowledge use freetype2, run fine. This is my code, stripped of the D3D code (which causes no problems on its own):

    ```cpp
    FT_Face pFace = NULL;
    FT_Error nError = 0;
    FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer,&nSize));
    if((nError = FT_New_Memory_Face(pLibrary,pFont,nSize,0,&pFace)) == 0)
    {
        FT_Set_Char_Size(pFace,nSize << 6,nSize << 6,96,96);
        for(unsigned char c = 0; c < 95; c++)
        {
            if(!FT_Load_Glyph(pFace,FT_Get_Char_Index(pFace,c + 32),FT_LOAD_RENDER))
            {
                FT_Glyph pGlyph;
                if(!FT_Get_Glyph(pFace->glyph,&pGlyph))
                {
                    LOG("GET: %c",c + 32);
                    FT_Glyph_To_Bitmap(&pGlyph,FT_RENDER_MODE_NORMAL,0,1);
                    FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                    FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                    const size_t nWidth = pBitmap->width;
                    const size_t nHeight = pBitmap->rows;
                    //add to texture atlas
                }
            }
        }
    }
    else
    {
        FT_Done_Face(pFace);
        delete pFont;
        return FALSE;
    }
    FT_Done_Face(pFace);
    delete pFont;
    return TRUE;
    }
    ```

    ARCHIVE_LoadFile returns blocks allocated with new.

    As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure as to whether this stretches the font to fit the size, or bounds it to a size. What I would like to do is render all the glyphs at, say, 24px (MS Word size here), then turn it into a signed distance field in a 32px area.

    Update: After much fiddling, I got a test app to work, which leads me to think the problems are arising from threading, as my code is running in a secondary thread. I have compiled freetype into a static lib using the multithreaded DLL runtime; my app uses the multithreaded libs. Gonna see if I can set up a multithreaded test. Also updated to 2.4.4 to see if the problem was a known but fixed bug; didn't help, however.

    Update 2: After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4 -.- After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the heap management of Windows. Is it possible that there is a bug in freetype2 that makes it blow up under user threads?
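
    Two things worth checking, offered only as guesses from the snippet: ARCHIVE_LoadFile is said to return blocks allocated with new (presumably new[]), yet the code releases pFont with scalar delete, a mismatch that is undefined behavior and a classic trigger for heap-manager asserts; and if the freetype static lib was built against a different C runtime than the EXE, FT_Free's HeapFree runs against a different heap than the one that allocated the block. A minimal illustration of the first mismatch, with a hypothetical loader standing in for ARCHIVE_LoadFile:

    ```cpp
    #include <cstddef>

    // Hypothetical stand-in for ARCHIVE_LoadFile: allocates with array new.
    unsigned char* LoadFileBlock(std::size_t n) {
        return new unsigned char[n];
    }

    int main() {
        unsigned char* pFont = LoadFileBlock(64);
        // delete pFont;  // mismatched with new[]: undefined behavior, can corrupt the heap
        delete[] pFont;   // matches the allocation
        return 0;
    }
    ```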

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

    ```
    =# \d nodes
                Table "public.nodes"
     Column |          Type          | Modifiers
    --------+------------------------+-----------
     id     | integer                | not null
     title  | character varying(256) |
     score  | double precision       |
    Indexes:
        "nodes_pkey" PRIMARY KEY, btree (id)
    ```

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting his input. So I used this query (here searching for all titles starting with "s"):

    ```
    =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
                                                           QUERY PLAN
    -----------------------------------------------------------------------------------------------------------------------
     Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
       Sort Key: score
       Sort Method:  external merge  Disk: 5712kB
       ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
             Filter: ((title)::text ~~* 's%'::text)
     Total runtime: 5260.791 ms
    (6 rows)
    ```

    This was much too slow for using it with autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index:

    ```
    =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
    =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
                                                                     QUERY PLAN
    ------------------------------------------------------------------------------------------------------------------------------------------
     Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
       ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
             Sort Key: score
             Sort Method:  top-N heapsort  Memory: 17kB
             ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                   Filter: (lower((title)::text) ~~ 's%'::text)
                   ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                         Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
     Total runtime: 1325.085 ms
    (9 rows)
    ```

    So this gave me a speedup of a factor of 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I rather try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?

  • Help me stabilize this jRun configuration (CF9/Win2k3/IIS6)

    - by jfrobishow
    Not sure if this would be better suited for ServerFault, but since I am not an admin but a developer I figured I would try SO.

    We've been struggling to keep our multi-server configuration stable for quite some time now. At the end of last month we were running under CF 7.0.2 on a two-server setup (one instance each). At that point we managed to get our uptime to around 1 week per instance before they would restart by themselves. Since the beginning of the month we upgraded to CF 9 and we're back to square one with multiple restarts a day. Our current configuration is 2 Win2k3 servers running a cluster of 4 instances, 2 instances per server.

    At this point we are pretty certain this is due to improper JVM settings. We've been toying with them, and while some are more stable than others, we never quite got it right. From the default:

    ```
    java.args=-server -Xmx512m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=192m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/
    ```

    To currently:

    ```
    java.args=-server -Xmx896m -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:SurvivorRatio=8 -XX:TargetSurvivorRatio=90 -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/ -verbose:gc -Xloggc:c:/Jrun4/logs/gc/gcInstance1b.log
    ```

    We have determined that we do need more than the default 512MB simply by monitoring with FusionReactor; on average the amount of memory consumed hovers in the mid-300MBs and can go up to the low 700MBs under heavy load. Most of the crashes are logged in jrun4/bin/hs_err_pid*.log, always an "Out of swap space". I've attached links to the hs_err and garbage collector log files from yesterday at the bottom of the post. The relevant part is (I think) this:

    ```
    Heap
     PSYoungGen      total 89856K, used 19025K [0x55490000, 0x5b6f0000, 0x5b810000)
      eden space 79232K, 16% used [0x55490000,0x561a64c0,0x5a1f0000)
      from space 10624K, 52% used [0x5ac90000,0x5b20e2f8,0x5b6f0000)
      to   space 10752K, 0% used [0x5a1f0000,0x5a1f0000,0x5ac70000)
     PSOldGen        total 460416K, used 308422K [0x23810000, 0x3f9b0000, 0x55490000)
      object space 460416K, 66% used [0x23810000,0x36541bb8,0x3f9b0000)
     PSPermGen       total 107520K, used 106079K [0x03810000, 0x0a110000, 0x23810000)
      object space 107520K, 98% used [0x03810000,0x09fa7e40,0x0a110000)
    ```

    From it, I gather that it's the PSPermGen that is full (most logs will show the same before a crash), which is why we increased MaxPermSize, but the total still shows as 107520K!?!

    No one here is a jRun expert, so any help or even ideas on what to try next would be greatly appreciated!!

    The log files: Sorry, I know sendspace isn't the friendliest of places; if you have another host suggestion for log files, let me know and I'll update the post (SO doesn't like them inline, it blows up the format of the post).

    The hs_err log file: http://www.sendspace.com/file/fgak8l
    The gc log: http://www.sendspace.com/file/w0r2ct

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

    ```
    ms = New IO.MemoryStream
    bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
    bin.Serialize(ms, largeGraphOfObjects)
    dataToSaveToDatabase = ms.ToArray()
    // put dataToSaveToDatabase in a SQL Server BLOB
    ```

    But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back.

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large object heap fragmentation when we deserialize the objects; I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible.

    We can have many active models at a time (in different processes, across many machines); each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:
      • How to Stream data from/to SQL Server BLOB fields?
      • Is there a SqlFileStream like class that works with Sql Server 2005?

  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy gives 6 rules that describe the CLR 2.0+ memory model (its actual implementation, not any ECMA standard). I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief.

      • Rule 1: Data dependence among loads and stores is never violated.
      • Rule 2: All stores have release semantics, i.e. no load or store may move after one.
      • Rule 3: All volatile loads are acquire, i.e. no load or store may move before one.
      • Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.).
      • Rule 5: Loads and stores to the heap may never be introduced.
      • Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location.

    I'm attempting to understand these rules.

    ```
    x = y
    y = 0 // Cannot move before the previous line according to Rule 1.
    ```

    ```
    x = y
    z = 0
    // equates to this sequence of loads and stores before possible re-ordering:
    load y
    store x
    load 0
    store z
    ```

    Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it also will see x == y.

    If y was volatile, then load 0 could not move before load y; otherwise it may. Volatile stores don't seem to have any special properties: no stores can be re-ordered with respect to each other (which is a very strong guarantee!).

    Full barriers are like a line in the sand which loads and stores cannot be moved over.

    No idea what rule 5 means.

    I guess rule 6 means that if you do:

    ```
    x = y
    x = z
    ```

    then it is possible for the CLR to delete both the load of y and the first store to x.

    ```
    x = y
    z = y
    // equates to this sequence of loads and stores before possible re-ordering:
    load y
    store x
    load y
    store z
    // could be re-ordered like this:
    load y
    load y
    store x
    store z
    // rule 6 applied means this is possible?
    load y
    store x // but don't pop y from stack (or first duplicate item on top of stack)
    store z
    ```

    What if y was volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to rule 6, that's the only time they can be eliminated.

    So I think I understand all but rule 5 here. Anyone want to enlighten me (or correct me, or add something to any of the above)?

  • Java map / nio / NFS issue causing a VM fault: "a fault occurred in a recent unsafe memory access operation in compiled Java code"

    - by Matthew Bloch
    I have written a parser class for a particular binary format (nfdump if anyone is interested) which uses java.nio's MappedByteBuffer to read through files of a few GB each. The binary format is just a series of headers and mostly fixed-size binary records, which are fed out to the caller by calling nextRecord(), which pushes on the state machine, returning null when it's done. It performs well. It works on a development machine.

    On my production host, it can run for a few minutes or hours, but always seems to throw "java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code", fingering one of the Map.getInt, getShort methods, i.e. a read operation in the map.

    The uncontroversial (?) code that sets up the map is this:

    ```java
    /** Set up the map from the given filename and position */
    protected void open() throws IOException {
        // Set up buffer, is this all the flexibility we'll need?
        channel = new FileInputStream(file).getChannel();
        MappedByteBuffer map1 = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        map1.load(); // we want the whole thing, plus seems to reduce frequency of crashes?
        map = map1;
        // assumes the host writing the files is little-endian (x86), ought to be configurable
        map.order(java.nio.ByteOrder.LITTLE_ENDIAN);
        map.position(position);
    }
    ```

    and then I use the various map.get* methods to read shorts, ints, longs and other sequences of bytes, before hitting the end of the file and closing the map.

    I've never seen the exception thrown on my development host. But the significant point of difference between my production host and development is that on the former, I am reading sequences of these files over NFS (probably 6-8TB eventually, still growing). On my dev machine, I have a smaller selection of these files locally (60GB), but when it blows up on the production host it's usually well before it gets to 60GB of data.

    Both machines are running java 1.6.0_20-b02, though the production host is running Debian/lenny, the dev host Ubuntu/karmic. I'm not convinced that will make any difference. Both machines have 16GB RAM and are running with the same java heap settings.

    I take the view that if there is a bug in my code, there is enough of a bug in the JVM not to throw me a proper exception! But I think it is just a particular JVM implementation bug due to interactions between NFS and mmap, possibly a recurrence of 6244515, which is officially fixed. I already tried adding in a "load" call to force the MappedByteBuffer to load its contents into RAM - this seemed to delay the error in the one test run I've done, but not prevent it. Or it could be coincidence that that was the longest it had gone before crashing!

    If you've read this far and have done this kind of thing with java.nio before, what would your instinct be? Right now mine is to rewrite it without nio :)

  • Android problem: BufferedReader won't read whole stream into a string

    - by Levara
    Hi all! I'm making an Android program that retrieves the content of a webpage using HttpURLConnection. I'm new to both Java and Android.

    The problem is: the reader reads the whole page source, but in the last while iteration it doesn't append the last part to stringBuffer. Using the debugger I have determined that, in the last loop iteration, the string buff is created, but stringBuffer just doesn't append it.

    I need to parse the retrieved content. Is there any better way to handle the content for parsing than using strings? I've read on numerous other sites that string size in Java is limited only by available heap size. Anyone know what could be the problem? Btw, feel free to suggest any improvements to the code. Thanks!

    ```java
    URL u;
    try {
        u = new URL("http://feeds.timesonline.co.uk/c/32313/f/440134/index.rss");
        HttpURLConnection c = (HttpURLConnection) u.openConnection();
        c.setRequestProperty("User-agent","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; InfoPath.1; .NET CLR 2.0.50727)");
        c.setRequestMethod("GET");
        c.setDoOutput(true);
        c.setReadTimeout(3000);
        c.connect();

        StringBuffer stringBuffer = new StringBuffer("");
        InputStream in = c.getInputStream();
        InputStreamReader inp = new InputStreamReader(in);
        BufferedReader reader = new BufferedReader(inp);

        char[] buffer = new char[3072];
        int len1 = 0;
        while ( (len1 = reader.read(buffer)) != -1 ) {
            String buff = new String(buffer,0,len1);
            stringBuffer.append(buff);
        }
        String stranica = new String(stringBuffer);

        c.disconnect();
        reader.close();
        inp.close();
        in.close();
    ```

  • PHP Processes reaching limit along with FastCGI timeouts

    - by Constant M
    I have a problem similar to http://stackoverflow.com/questions/1168384/how-to-troubleshoot-php-processes, but with a twist.

    We have a couple of managed servers that recently migrated to FastCGI. Since then we've been having problems. We have a content management system at a central place that we manage all our sites with. All the sites run on the same managed server, so basically it's one PHP script that generates pages on the same server. The catch comes in where it works perfectly on some, and gives FCGI timeout errors on others. At the same time our support is saying that the PHP processes pile up to their limit and then cause Apache to stop working. I'm convinced this is a fault on their side, but would like to get my side clean too.

    Does anyone have suggestions on where I can start looking for what's going wrong? I know it's a really broad question, but any help or pointers would be great. Thanks.

    Just to add to that, here's a ps printout from support:

    ```
    happyh   13853 21556  0 01:04 ?  00:00:00 /usr/bin/php5-cgi
    happyh   13869 13853  0 01:04 ?  00:00:00 /usr/bin/php5-cgi
    mamedc   13914 21556  0 05:14 ?  00:00:00 /usr/bin/php5-cgi
    wealthf  13947 21556  0 01:21 ?  00:00:00 /usr/bin/php5-cgi
    wealthf  13961 13947  0 01:21 ?  00:00:00 /usr/bin/php5-cgi
    mamedc   14032 13914  0 05:14 ?  00:00:00 /usr/bin/php5-cgi
    lookgrt  14157 21556  0 04:47 ?  00:00:00 /usr/bin/php5-cgi
    lookgrt  14178 14157  0 04:47 ?  00:00:00 /usr/bin/php5-cgi
    wolfie   14262 21556  0 01:08 ?  00:00:00 /usr/bin/php5-cgi
    wolfie   14276 14262  0 01:08 ?  00:00:00 /usr/bin/php5-cgi
    yaukrl   14352 21556  0 01:21 ?  00:00:00 /usr/bin/php5-cgi
    yaukrl   14361 14352  0 01:21 ?  00:00:00 /usr/bin/php5-cgi
    itpays2  14538 21556  0 01:33 ?  00:00:00 /usr/bin/php5-cgi
    itpays2  14547 14538  0 01:33 ?  00:00:00 /usr/bin/php5-cgi
    brichmbx 14732 21556  0 04:47 ?  00:00:00 /usr/bin/php5-cgi
    brichmbx 14803 14732  0 04:47 ?  00:00:00 /usr/bin/php5-cgi
    greatl   14969 21556  0 01:00 ?  00:00:00 /usr/bin/php5-cgi
    ```

  • How to optimize this SQL query for a rectangular region?

    - by Andrew B.
    I'm trying to optimize the following query, but it's not clear to me which index or indexes would be best. I'm storing tiles in a two-dimensional plane and querying for rectangular regions of that plane. The table has, for the purposes of this question, the following columns:

      • id: a primary key integer
      • world_id: an integer foreign key which acts as a namespace for a subset of tiles
      • tileY: the Y-coordinate integer
      • tileX: the X-coordinate integer
      • value: the contents of this tile, a varchar if it matters

    I have the following indexes:

    ```
    "ywot_tile_pkey" PRIMARY KEY, btree (id)
    "ywot_tile_world_id_key" UNIQUE, btree (world_id, "tileY", "tileX")
    "ywot_tile_world_id" btree (world_id)
    ```

    And this is the query I'm trying to optimize:

    ```
    ywot=> EXPLAIN ANALYZE SELECT * FROM "ywot_tile" WHERE ("world_id" = 27685 AND "tileY" <= 6 AND "tileX" <= 9 AND "tileX" >= -2 AND "tileY" >= -1 );
                                                                     QUERY PLAN
    -------------------------------------------------------------------------------------------------------------------------------------------
     Bitmap Heap Scan on ywot_tile  (cost=11384.13..149421.27 rows=65989 width=168) (actual time=79.646..80.075 rows=96 loops=1)
       Recheck Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
       ->  Bitmap Index Scan on ywot_tile_world_id_key  (cost=0.00..11367.63 rows=65989 width=0) (actual time=79.615..79.615 rows=125 loops=1)
             Index Cond: ((world_id = 27685) AND ("tileY" <= 6) AND ("tileY" >= (-1)) AND ("tileX" <= 9) AND ("tileX" >= (-2)))
     Total runtime: 80.194 ms
    ```

    So the world is fixed, and we are querying for a rectangular region of tiles. Some more information that might be relevant:

      • All the tiles for a queried region may or may not be present.
      • The height and width of a queried rectangle are typically about 10x10-20x20.
      • For any given (world, X) or (world, Y) pair, there may be an unbounded number of matching tiles, but the worst case is currently around 10,000, and typically there are far fewer.
      • New tiles are created far less frequently than existing ones are updated (changing the 'value'), and that itself is far less frequent than just reading, as in the query above.

    The only thing I can think of would be to index on (world, X) and (world, Y). My guess is that the database would be able to take those two sets and intersect them. The problem is that there is a potentially unbounded number of matches for either of those. Is there some other kind of index that would be more appropriate?

  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an HTML input box?

    I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which matches roughly with the set of wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience.

    The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It is currently running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32-bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option.

    I've been hesitant to start working on a disk-based solution, as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations?

    Edits:

    @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself.

    @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data.

    @Jason Day: My original implementation for this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
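
    For concreteness, the "sorted set + tailset" lookup the post describes translates directly to any ordered container; a small C++ sketch of the same idea, with hypothetical data, where std::set::lower_bound plays the role of Java's tailSet:

    ```cpp
    #include <iostream>
    #include <set>
    #include <string>
    #include <vector>

    // Returns up to 'limit' entries that start with 'prefix'.
    std::vector<std::string> complete(const std::set<std::string>& concepts,
                                      const std::string& prefix, std::size_t limit) {
        std::vector<std::string> out;
        for (std::set<std::string>::const_iterator it = concepts.lower_bound(prefix); // O(log n)
             it != concepts.end()
             && it->compare(0, prefix.size(), prefix) == 0 // still inside the prefix range
             && out.size() < limit;
             ++it) {
            out.push_back(*it);
        }
        return out;
    }

    int main() {
        std::set<std::string> concepts;
        concepts.insert("microscope");
        concepts.insert("microsoft");
        concepts.insert("microwave");
        concepts.insert("midware");
        std::vector<std::string> hits = complete(concepts, "micro", 10);
        for (std::size_t i = 0; i < hits.size(); ++i) std::cout << hits[i] << '\n';
    }
    ```

    The memory pressure in the question comes from per-object overhead rather than from this lookup itself, which is why compact disk- or array-backed layouts (or a ternary search tree, as suggested to the poster) are the usual next step.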

  • How can I pass managed objects from one AppDomain to another?

    - by Dennis P
    I have two assemblies that I'm trying to link together. One is a sort of background process that's built with WinForms and will be designed to run as a Windows service. I have a second project that will act as a UI for the background process whenever a user launches it. I've never tried attempting something like this with managed code before, so I've started trying to use Windows messages to communicate between the two processes. I'm struggling when it comes to passing more than just IntPtrs back and forth, however.

    Here's the code from a control in my UI project that registers itself with the background process:

    ```csharp
    public void Init()
    {
        IntPtr hwnd = IntPtr.Zero;
        Process[] ps = Process.GetProcessesByName("BGServiceApp");
        Process mainProcess = null;
        if(ps == null || ps.GetLength(0) == 0)
        {
            mainProcess = LaunchApp();
        }
        else
        {
            mainProcess = ps[0];
        }
        SendMessage(mainProcess.MainWindowHandle, INIT_CONNECTION, this.Handle, IntPtr.Zero);
    }

    protected override void WndProc(ref Message m)
    {
        if(m.Msg == INIT_CONFIRMED && InitComplete != null)
        {
            string message = Marshal.PtrToStringAuto(m.WParam);
            Marshal.FreeHGlobal(m.WParam);
            InitComplete(message, EventArgs.Empty);
        }
        base.WndProc(ref m);
    }
    ```

    This is the code from the background process that's supposed to receive a request from the UI process to register for status updates and send a confirmation message:

    ```csharp
    protected override void WndProc(ref Message m)
    {
        if(m.Msg == INIT_CONNECTION)
        {
            RegisterUIDispatcher(m.WParam);
            Respond(m.WParam);
        }
        if(m.Msg == UNINIT_CONNECTION)
        {
            UnregisterUIDispatcher(m.WParam);
            if(m_RegisteredDispatchers.Count == 0)
            {
                this.Close();
            }
        }
        base.WndProc(ref m);
    }

    private void Respond(IntPtr caller)
    {
        string test = "Registration confirmed!";
        IntPtr ptr = Marshal.StringToHGlobalAuto(test);
        SendMessage(caller, INIT_CONFIRMED, ptr, IntPtr.Zero);
    }
    ```

    The UI process receives the INIT_CONFIRMED message from my background process, but when I try to marshal the IntPtr back into a string, I get an empty string. Is the area of heap I'm using out of scope to the other process, or am I missing some security attribute maybe? Is there a better and cleaner way to go about something like this using an event-driven model?

  • Java Runtime.freeMemory() returning bizarre results when adding more objects

    - by Sotirios Delimanolis
    For whatever reason, I wanted to see how many objects I could create and populate a LinkedList with. I used Runtime.getRuntime().freeMemory() to get the approximation of free memory in my JVM. I wrote this:

    ```java
    public static void main(String[] arg) {
        Scanner kb = new Scanner(System.in);
        List<Long> mem = new LinkedList<Long>();
        while (true) {
            System.out.println("Max memory: " + Runtime.getRuntime().maxMemory()
                    + ". Available memory: " + Runtime.getRuntime().freeMemory()
                    + " bytes. Press enter to use more.");
            String s = kb.nextLine();
            if (s.equals("m"))
                for (int i = 0; i < 1000000; i++) {
                    mem.add(new Long((new Random()).nextLong()));
                }
        }
    }
    ```

    If I type m, the app adds a million Long objects to the list. You would think the more objects (to which we have references, so they can't be gc'ed), the less free memory. Running the code:

    ```
    Max memory: 1897725952. Available memory: 127257696 bytes.
    m
    Max memory: 1897725952. Available memory: 108426520 bytes.
    m
    Max memory: 1897725952. Available memory: 139873296 bytes.
    m
    Max memory: 1897725952. Available memory: 210632232 bytes.
    m
    Max memory: 1897725952. Available memory: 137268792 bytes.
    m
    Max memory: 1897725952. Available memory: 239504784 bytes.
    m
    Max memory: 1897725952. Available memory: 169507792 bytes.
    m
    Max memory: 1897725952. Available memory: 259686128 bytes.
    m
    Max memory: 1897725952. Available memory: 189293488 bytes.
    m
    Max memory: 1897725952. Available memory: 387686544 bytes.
    ```

    The available memory fluctuates. How does this happen? Is the GC cleaning up other things (what other things are there on the heap to really clean up?), or is the freeMemory() method returning an approximation that's way off? Am I missing something or am I crazy?

  • Parallel.For maintain input list order on output list

    - by romeozor
    I'd like some input on keeping the order of a list during heavy-duty operations that I decided to try to do in a parallel manner to see if it boosts performance. (It did!) I came up with a solution, but since this was my first attempt at anything parallel, I'd need someone to slap my hands if I did something very stupid.

    There's a query that returns a list of card owners, sorted by name, then by date of birth. This needs to be rendered in a table on a web page (ASP.NET WebForms). The original coder decided he would construct the table cell-by-cell (TableCell), add them to rows (TableRow), then each row to the table. So no GridView; allegedly its performance is bad, but the performance was very poor regardless :). The database query returns in no time; most of the time is spent on looping through the results and adding table cells etc.

    I made the following method to maintain the original order of the list:

    ```csharp
    private TableRow[] ComposeRows(List<CardHolder> queryResult)
    {
        int queryElementsCount = queryResult.Count();
        // array with the query's size
        var rowArray = new TableRow[queryElementsCount];
        Parallel.For(0, queryElementsCount, i =>
        {
            var row = new TableRow();
            var cell = new TableCell();
            // various operations, including simple ones such as:
            cell.Text = queryResult[i].Name;
            row.Cells.Add(cell);
            // here I'm adding the current item to its original index
            // to maintain order in the output list
            rowArray[i] = row;
        });
        return rowArray;
    }
    ```

    So as you can see, because I'm returning a very different type of data (List<CardHolder> -> TableRow[]), I can't just omit the ordering from the original query to do it after the operations.

    Also, I thought it would be a good idea to Dispose() the objects at the end of each loop, because the query can return a huge list and letting cell and row objects pile up in the heap could impact performance.(?)

    How badly did I do? Does anyone have a better solution in case mine is flawed?

  • Good style for handling constructor failure of critical object

    - by mtlphil
    I'm trying to decide between two ways of instantiating an object and handling any constructor exceptions for an object that is critical to my program, i.e. if construction fails the program can't continue.

    I have a class SimpleMIDIOut that wraps basic Win32 MIDI functions. It will open a MIDI device in the constructor and close it in the destructor. It will throw an exception inherited from std::exception in the constructor if the MIDI device cannot be opened. Which of the following ways of catching constructor exceptions for this object would be more in line with C++ best practices?

    Method 1 - Stack-allocated object, only in scope inside the try block:

    ```cpp
    #include <iostream>
    #include "simplemidiout.h"

    int main()
    {
        try
        {
            SimpleMIDIOut myOut;  //constructor will throw if MIDI device cannot be opened
            myOut.PlayNote(60,100);
            //.....
            //myOut goes out of scope outside this block
            //so basically the whole program has to be inside
            //this block.
            //On the plus side, it's on the stack so the
            //destructor that handles object cleanup
            //is called automatically, more in line with the RAII idiom?
        }
        catch(const std::exception& e)
        {
            std::cout << e.what() << std::endl;
            std::cin.ignore();
            return 1;
        }

        std::cin.ignore();
        return 0;
    }
    ```

    Method 2 - Pointer to object, heap-allocated, nicer structured code?

    ```cpp
    #include <iostream>
    #include "simplemidiout.h"

    int main()
    {
        SimpleMIDIOut *myOut;
        try
        {
            myOut = new SimpleMIDIOut();
        }
        catch(const std::exception& e)
        {
            std::cout << e.what() << std::endl;
            delete myOut;
            return 1;
        }

        myOut->PlayNote(60,100);
        std::cin.ignore();
        delete myOut;
        return 0;
    }
    ```

    I like the look of the code in Method 2 better; I don't have to jam my whole program into a try block. But Method 1 creates the object on the stack, so C++ manages the object's lifetime, which is more in tune with the RAII philosophy, isn't it?

    I'm still a novice at this, so any feedback on the above is much appreciated. If there's an even better way to check for/handle constructor failure in a situation like this, please let me know.
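
    One caveat on Method 2 worth flagging: if new throws, myOut is never assigned, so the delete myOut inside the catch block reads an uninitialized pointer, which is undefined behavior (and unnecessary, since nothing was allocated). A sketch of a third option, assuming a C++11 compiler is available and stubbing SimpleMIDIOut for illustration:

    ```cpp
    #include <exception>
    #include <iostream>
    #include <memory>

    // Stand-in for the question's class; the real one throws from its
    // constructor if the MIDI device cannot be opened.
    struct SimpleMIDIOut {
        void PlayNote(int note, int velocity) { std::cout << note << ' ' << velocity << '\n'; }
    };

    int main()
    {
        std::unique_ptr<SimpleMIDIOut> myOut;
        try
        {
            myOut.reset(new SimpleMIDIOut()); // std::make_unique in C++14
        }
        catch(const std::exception& e)
        {
            std::cout << e.what() << std::endl;
            return 1; // nothing to delete: the unique_ptr is still empty
        }

        myOut->PlayNote(60, 100);
        return 0;     // destructor runs automatically, as in Method 1
    }
    ```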

  • Who calls the destructor of the class when operator delete is used in multiple inheritance?

    - by dicaprio-leonard
    This question may sound too silly; however, I don't find a concrete answer anywhere else. I have little knowledge of how late binding works and of the virtual keyword used in inheritance.

    As in the code sample: in the case of inheritance, where a base class pointer points to a derived class object created on the heap and the delete operator is used to deallocate the memory, the destructors of the derived and base classes will be called in order only when the base destructor is declared a virtual function.

    Now my questions are:

    1) When the destructor of base is not virtual, why does the problem of not calling the derived dtor occur only in the case of using the "delete" operator, and not in the case given below?

    ```cpp
    derived drvd;
    base *bPtr;
    bPtr = &drvd; // DTORs called in proper order when drvd goes out of scope.
    ```

    2) When the "delete" operator is used, who is responsible for calling the destructor of the class? Does operator delete have an implementation that calls the DTOR, or does the compiler write some extra stuff? If the operator has the implementation, what would it look like? [I need sample code showing how this would have been implemented.]

    3) If the virtual keyword is used in this example, how does operator delete know which DTOR to call?

    Fundamentally, I want to know who calls the dtor of the class when delete is used.

    Sample code:

    ```cpp
    class base
    {
    public:
        base()
        {
            cout<<"Base CTOR called"<<endl;
        }

        virtual ~base()
        {
            cout<<"Base DTOR called"<<endl;
        }
    };

    class derived:public base
    {
    public:
        derived()
        {
            cout<<"Derived CTOR called"<<endl;
        }

        ~derived()
        {
            cout<<"Derived DTOR called"<<endl;
        }
    };
    ```

    I'm not sure if this is a duplicate; I couldn't find it in search.

    ```cpp
    int main()
    {
        base *bPtr = new derived();
        delete bPtr; // only when you explicitly try to delete an object
        return 0;
    }
    ```
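
    On the central question of who calls the destructor: the compiler generates the destructor call as part of the delete-expression itself; operator delete only releases storage and contains no destructor logic. A runnable sketch with the sequence annotated (my annotation of standard C++ semantics, not any particular compiler's literal output):

    ```cpp
    #include <iostream>

    struct base {
        virtual ~base() { std::cout << "~base\n"; }
    };
    struct derived : base {
        ~derived() { std::cout << "~derived\n"; }
    };

    int main() {
        base *bPtr = new derived();
        // "delete bPtr;" is conceptually two compiler-generated steps:
        //   1. bPtr->~base();       a virtual call, so ~derived() runs, then ~base()
        //   2. operator delete(p);  plain deallocation, no destructor logic
        // If ~base() were not virtual, step 1 would bind statically, ~derived()
        // would be skipped, and such a delete is undefined behavior.
        delete bPtr;
        return 0;
    }
    ```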

  • Memory Leak with Swing Drag and Drop

    - by tom
    I have a JFrame that accepts top-level drops of files. However, after a drop has occurred, references to the frame are held indefinitely inside some Swing internal classes. I believe that disposing of the frame should release all of its resources, so what am I doing wrong?

    Example:

    ```java
    import java.awt.datatransfer.DataFlavor;
    import java.io.File;
    import java.util.List;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.TransferHandler;

    public class DnDLeakTester extends JFrame {
        public static void main(String[] args) {
            new DnDLeakTester();
            // Prevent main from returning or the JVM will exit
            while (true) {
                try {
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                }
            }
        }

        public DnDLeakTester() {
            super("I'm leaky");
            add(new JLabel("Drop stuff here"));
            setTransferHandler(new TransferHandler() {
                @Override
                public boolean canImport(final TransferSupport support) {
                    return (support.isDrop() && support
                            .isDataFlavorSupported(DataFlavor.javaFileListFlavor));
                }

                @Override
                public boolean importData(final TransferSupport support) {
                    if (!canImport(support)) {
                        return false;
                    }
                    try {
                        final List<File> files = (List<File>) support.getTransferable()
                                .getTransferData(DataFlavor.javaFileListFlavor);
                        for (final File f : files) {
                            System.out.println(f.getName());
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                    return true;
                }
            });
            setDefaultCloseOperation(DISPOSE_ON_CLOSE);
            pack();
            setVisible(true);
        }
    }
    ```

    To reproduce, run the code and drop some files on the frame. Close the frame so it's disposed of.

    To verify the leak I take a heap dump using JConsole and analyse it with the Eclipse Memory Analysis tool. It shows that sun.awt.AppContext is holding a reference to the frame through its hashmap. It looks like TransferSupport is at fault.

    What am I doing wrong? Should I be asking the DnD support code to clean itself up somehow? I'm running JDK 1.6 update 19.

  • Basic Custom String Class for C++

    - by wdow88
    Hey all, I'm working on building my own string class with very basic functionality. I am having difficulty understanding what is going on with the basic class that I have defined, and believe there is some sort of error dealing with scope occurring. When I try to view the objects I created, all the fields are described as (obviously) bad pointers. Also, if I make the data fields public or build an accessor method, the program crashes. For some reason the pointer for the object is 0xcccccccc, which points to nowhere. How can I fix this? Any help/comments are much appreciated.

    ```cpp
    // This is a custom string class; so far the only functions are
    // constructing and appending
    #include <iostream>
    using namespace std;

    class MyString1 {
    public:
        MyString1() { // no-arg constructor
            char *string;
            string = new char[0];
            string[0] = '\0';
            std::cout << string;
            size = 1;
        }

        // constructor receives pointer to character array
        MyString1(char* chars) {
            int index = 0;
            // Determine the length of the array
            while (chars[index] != NULL)
                index++;
            // Allocate dynamic memory on the heap
            char *string;
            string = new char[index+1];
            // Copy the contents of the array pointed to by chars into string,
            // the char array of the object
            for (int ii = 0; ii < index; ii++)
                string[ii] = chars[ii];
            string[index+1] = '\0';
            size = index+1;
        }

        MyString1 append(MyString1 s) {
            // determine new size of the appended array and allocate memory
            int newsize = s.size + size;
            MyString1 MyString2;
            char *newstring;
            newstring = new char[newsize+1];
            int index = 0;
            // load the first string into the array
            while (string[index] != NULL) {
                newstring[index] = string[index];
                index++;
            }
            // load the second string
            while (s.string[index] != NULL) {
                newstring[index] = s.string[index];
                index++;
            }
            // null terminate
            newstring[newsize+1] = '\0';
            delete string;

            // generate the object for return
            MyString2.string = newstring;
            MyString2.size = newsize;
            return MyString2;
        }

    private:
        char *string;
        int size;
    };

    int main() {
        MyString1 string1;
        MyString1 string2("Hello There");
        MyString1 string3("Buddy");

        string2.append(string3);
        return 0;
    }
    ```
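
    Two concrete problems in the snippet, for anyone chasing the same crash: both constructors declare a local char *string that shadows the member of the same name, so the member is never initialized (hence the 0xcccccccc values the debugger shows), and the terminator is written at string[index+1], one element past the end of a block allocated as new char[index+1] (valid indices run 0..index). A corrected sketch of just the copy-and-terminate logic, as a hypothetical free function rather than the full class:

    ```cpp
    #include <cstddef>

    // Concatenates two counted char arrays into a freshly allocated,
    // correctly terminated buffer. Caller must delete[] the result.
    char* concat(const char* a, std::size_t alen, const char* b, std::size_t blen) {
        char* out = new char[alen + blen + 1];
        std::size_t i = 0;
        for (std::size_t j = 0; j < alen; ++j) out[i++] = a[j]; // first string
        for (std::size_t j = 0; j < blen; ++j) out[i++] = b[j]; // second string, indexed from 0
        out[i] = '\0'; // terminator lands at index alen+blen, inside the block
        return out;
    }
    ```

    Note that the second loop indexes its source from 0; the append in the question keeps using the running index into s.string, reading past the end of the second string.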

  • StringBuffer won't read whole stream into a string (Java/Android)

    - by Levara
    Hi all! I'm making an Android program that retrieves the content of a webpage using HttpURLConnection. I'm new to both Java and Android.

    The problem is: the reader reads the whole page source, but in the last while iteration it doesn't append the last part to stringBuffer. Using the debugger I have determined that, in the last loop iteration, the string buff is created, but stringBuffer just doesn't append it.

    I need to parse the retrieved content. Is there any better way to handle the content for parsing than using strings? I've read on numerous other sites that string size in Java is limited only by available heap size. I've tried with StringBuilder too. Anyone know what could be the problem? Btw, feel free to suggest any improvements to the code. Thanks!

    The code is identical to the snippet in the earlier question "Android problem: BufferedReader won't read whole stream into a string" above.

  • Learning... anything really

    - by WebDevHobo
    I'm particularly interested in Windows PowerShell, but here's a somewhat more general complaint: when asking for help on learning something new, be it a small subject in PHP or understanding a class in Java, what usually happens is that people direct me towards the documentation pages.

    What I'm looking for is somewhat of a course: a deep explanation of why something works the way it does. I know my basic programming, like Java and C#. I've never seen C or C++, though I have seen a bit of assembler. I know what the stack and heap are, how boxing and unboxing work, why you have to deep-copy an array instead of copying the pointer, and some other things. Windows PowerShell, on the other hand, I know nothing about. And I notice that when reading a small document or some code, I usually forget what it does or why it works.

    What I am looking for is, preferably, a nice tutorial that explains the beginnings, the concepts, and moves on to more difficult things at a steady pace. The only thing documentation can do is explain what a function does. That's no good to me, since I don't know what I want to do yet. I could read about a thousand functions and forget most of them, because I don't need to implement them right after reading. Randomly wandering through the documentation doesn't do me any good.

    So to conclude: what is a good tutorial on Windows PowerShell? One which explains in clear language what is happening, one which builds on previous things learned.

    I don't think googling this is a good idea. A Google search on this would turn up numerous tutorials, and experience tells me that you have to look long and hard to find the gem you're looking for. That's why I'm asking here: because this is the place where you can find more experienced people. Many of the PowerShell guys among you will know the good ones already, and by asking you, I avoid wasting time that could be spent learning. So to summarize: I will not google this!

  • How do I create a thread-safe write-once read-many value in Java?

    - by Software Monkey
    This is a problem I encounter frequently in working with more complex systems and which I have never figured out a good way to solve. It usually involves variations on the theme of a shared object whose construction and initialization are necessarily two distinct steps. This is generally because of architectural requirements, similar to applets, so answers that suggest I consolidate construction and initialization are not useful.

    By way of example, let's say I have a class that is structured to fit into an application framework like so:

    ```java
    public class MyClass
    {
        private /*ideally-final*/ SomeObject someObject;

        MyClass() {
            someObject = null;
        }

        public void startup() {
            someObject = new SomeObject(...arguments from environment which are not available until startup is called...);
        }

        public void shutdown() {
            someObject = null; // this is not necessary, I am just expressing the intended scope of someObject explicitly
        }
    }
    ```

    I can't make someObject final since it can't be set until startup() is invoked. But I would really like it to reflect its write-once semantics and be able to directly access it from multiple threads, preferably avoiding synchronization.

    The idea being to express and enforce a degree of finalness, I conjecture that I could create a generic container, like so:

    ```java
    public class WoRmObject<T>
    {
        private T object;

        WoRmObject() {
            object = null;
        }

        public WoRmObject set(T val) {
            object = val;
            return this;
        }

        public T get() {
            return object;
        }
    }
    ```

    and then in MyClass, above, do:

    ```java
    private final WoRmObject<SomeObject> someObject;

    MyClass() {
        someObject = new WoRmObject<SomeObject>();
    }

    public void startup() {
        someObject.set(new SomeObject(...arguments from environment which are not available until startup is called...));
    }
    ```

    Which raises some questions for me:

    1. Is there a better way, or an existing Java object (it would have to be available in Java 4)?
    2. Is this thread-safe, provided that no other thread accesses someObject.get() until after its set() has been called? The other threads will only invoke methods on MyClass between startup() and shutdown(); the framework guarantees this.
    3. Given the completely unsynchronized WoRmObject container, is it ever possible under either JMM to see a value of object which is neither null nor a reference to a SomeObject? In other words, does the JMM always guarantee that no thread can observe the memory of an object to be whatever values happened to be on the heap when the object was allocated?

  • C++: Trouble with Pointers, loop variables, and structs

    - by Rosarch
    Consider the following example:

    ```cpp
    #include <iostream>
    #include <sstream>
    #include <vector>
    #include <wchar.h>
    #include <stdlib.h>

    using namespace std;

    struct odp {
        int f;
        wchar_t* pstr;
    };

    int main() {
        vector<odp> vec;
        ostringstream ss;

        wchar_t base[5];
        wcscpy_s(base, L"1234");

        for (int i = 0; i < 4; i++) {
            odp foo;
            foo.f = i;

            wchar_t loopStr[1];
            foo.pstr = loopStr; // wchar_t* = wchar_t ? Why does this work?
            foo.pstr[0] = base[i];
            vec.push_back(foo);
        }

        for (vector<odp>::iterator iter = vec.begin(); iter != vec.end(); iter++) {
            cout << "Vec contains: " << iter->f << ", " << *(iter->pstr) << endl;
        }
    }
    ```

    This produces:

    ```
    Vec contains: 0, 52
    Vec contains: 1, 52
    Vec contains: 2, 52
    Vec contains: 3, 52
    ```

    I would hope that each time, iter->f and iter->pstr would yield a different result. Unfortunately, iter->pstr is always the same. My suspicion is that each time through the loop, a new loopStr is created. Instead of copying it into the struct, I'm only copying a pointer. The location that the pointer writes to is getting overwritten. How can I avoid this? Is it possible to solve this problem without allocating memory on the heap?
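
    Since each pstr here only ever holds a single character, the answer to the closing question is yes: store the character by value and no heap allocation (and no pointer) is needed. A minimal sketch of that fix:

    ```cpp
    #include <iostream>
    #include <vector>

    struct odp {
        int f;
        wchar_t c; // a value, not a wchar_t*: nothing to dangle
    };

    int main() {
        const wchar_t base[] = L"1234";
        std::vector<odp> vec;
        for (int i = 0; i < 4; ++i) {
            odp o = { i, base[i] }; // each element owns its own copy
            vec.push_back(o);
        }
        for (std::vector<odp>::const_iterator it = vec.begin(); it != vec.end(); ++it)
            std::wcout << it->f << L", " << it->c << L'\n';
    }
    ```

    For strings longer than one character the same principle applies: give the struct a member that owns its data (e.g. std::wstring) instead of a pointer into a buffer that dies with the loop iteration.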

  • Understanding C++ dynamic allocation

    - by kiokko89
    Consider the following code:

    ```cpp
    class CString
    {
    private:
        char* buff;
        size_t len;
    public:
        CString(const char* p) : len(0), buff(nullptr)
        {
            cout << "Constructor called!" << endl;
            if (p != nullptr)
            {
                len = strlen(p);
                if (len > 0)
                {
                    buff = new char[len+1];
                    strcpy_s(buff, len+1, p);
                }
            }
        }

        CString(const CString& s)
        {
            cout << "Copy constructor called!" << endl;
            len = s.len;
            buff = new char[len+1];
            strcpy_s(buff, len+1, s.buff);
        }

        CString& operator = (const CString& rhs)
        {
            cout << "Assignment operator called!" << endl;
            if (this != &rhs)
            {
                len = rhs.len;
                delete[] buff;
                buff = new char[len+1];
                strcpy_s(buff, len+1, rhs.buff);
            }
            return *this;
        }

        CString operator + (const CString& rhs) const
        {
            cout << "Addition operator called!" << endl;
            size_t lenght = len + rhs.len + 1;
            char* tmp = new char[lenght];
            strcpy_s(tmp, lenght, buff);
            strcat_s(tmp, lenght, rhs.buff);
            return CString(tmp);
        }

        ~CString()
        {
            cout << "Destructor called!" << endl;
            delete[] buff;
        }
    };

    int main()
    {
        CString s1("Hello");
        CString s2("World");
        CString s3 = s1 + s2;
    }
    ```

    My problem is that I don't know how to delete the memory allocated in the addition operator function (char* tmp = new char[lenght]). I couldn't do this in the constructor (I tried delete[] p) because it is also called from the main function with arrays of chars as parameters, which are not allocated on the heap... How can I get around this? (Sorry for my bad English...)
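
    One way out, sketched under the constraint of keeping the constructor as-is: let operator+ free its scratch buffer itself, once a CString has copied it. The body below is a drop-in rewrite of the question's operator+, not a standalone program:

    ```cpp
    CString operator + (const CString& rhs) const
    {
        size_t length = len + rhs.len + 1;
        char* tmp = new char[length];
        strcpy_s(tmp, length, buff);
        strcat_s(tmp, length, rhs.buff);
        CString result(tmp); // the constructor copies tmp into result's own buffer
        delete[] tmp;        // so the scratch block can be freed right here
        return result;       // returned by copy (or elided); its destructor frees buff
    }
    ```

    The underlying rule: whoever allocates a buffer frees it. The constructor should not delete a pointer it merely copied from, which is exactly why delete[] p inside the constructor broke the string-literal callers in main.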
