Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • How to use less memory while running a task in Symfony 1.4?

    - by Guillaume Flandre
    I'm using Symfony 1.4 and Doctrine. So far I had no problem running tasks with Symfony. But now that I have to import a fairly large amount of data and save it to the database, I get the infamous "Fatal error: Allowed memory size of XXXX bytes exhausted". During this import I'm only creating new objects, setting a few fields and saving them. I'm pretty sure it has something to do with the number of objects I'm creating when saving data. Unsetting those objects doesn't help, though. Are there any best practices for limiting memory usage in Symfony?
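
    A minimal sketch of the batch-import pattern that usually helps with Doctrine 1.x (the version bundled with Symfony 1.4); $rows and Product are placeholders for your own data and model class. Doctrine records hold circular references internally, so a plain unset() is not enough, but free(true) breaks them:

        // Hypothetical import loop; "Product" stands in for your model class.
        foreach ($rows as $i => $row) {
            $product = new Product();
            $product->fromArray($row);
            $product->save();
            $product->free(true);    // release Doctrine's internal references
            unset($product);
            if ($i % 100 === 0) {
                gc_collect_cycles(); // PHP 5.3+: reclaim circular references
            }
        }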

    Read the article

  • How does memory management in Java and C# differ?

    - by David Johnstone
    I was reading through 2010 CWE/SANS Top 25 Most Dangerous Programming Errors and one of the entries is for Buffer Copy without Checking Size of Input. It suggests using a language with features to prevent or mitigate this problem, and says: For example, many languages that perform their own memory management, such as Java and Perl, are not subject to buffer overflows. Other languages, such as Ada and C#, typically provide overflow protection, but the protection can be disabled by the programmer. I was not aware that Java and C# differed in any meaningful way with regard to memory management. How is it that Java is not subject to buffer overflows, while C# only protects against overflows? And how is it possible to disable this protection in C#?
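
    For illustration, a sketch of what "disabling the protection" can mean in C#: code compiled with /unsafe may use raw pointers, and writes through them are not bounds-checked, so the familiar C-style overflow comes back. Java has no comparable escape hatch in the language itself.

        // Compile with: csc /unsafe Overflow.cs
        class Overflow
        {
            static unsafe void Main()
            {
                int[] buffer = new int[4];
                fixed (int* p = buffer)
                {
                    // Pointer arithmetic is not range-checked: this loop
                    // writes past the end of the array, just as C would.
                    for (int i = 0; i < 8; i++)
                        p[i] = i;
                }
            }
        }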

    Read the article

  • Where are the static methods in gcc's dump file.c.135r.jump?

    - by Customizer
    When I run gcc with the parameter -fdump-rtl-jump, I get a dump file with the name file.c.135r.jump, where I can read some information about the intermediate representation of the methods in my C or C++ file. I just recently discovered that the static methods of a project are missing from this dump file. Do you know why they are missing from that representation, and whether there is a way to include the static methods in this file too? Update (some additional information): the test program I'm using here is the Hybrid OpenMP MPI Benchmark. Update 2: I just reproduced the problem with a serial application, so it has nothing to do with parallel sections.
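
    One plausible explanation (an assumption, not verified against this GCC version) is that static functions are inlined into their callers before the RTL passes run, so they never appear as standalone functions in the dump. A quick way to test that theory:

        /* test.c - check whether a static function shows up in the dump */
        static __attribute__((noinline)) int helper(int x)
        {
            return x * 2;  /* noinline keeps GCC from folding this away */
        }

        int caller(int x)
        {
            return helper(x) + 1;
        }

        /* Compile with optimization off so nothing is inlined or dropped:
               gcc -O0 -fdump-rtl-jump -c test.c
           then look for "helper" in test.c.135r.jump */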

    Read the article

  • "Thread-Safe Calls" with "Invoke" method to Winform control leads very heavy memory leak!!

    - by konnychen
    In the linked article "Make Thread-Safe Calls to Windows Forms Controls" (http://msdn.microsoft.com/en-us/library/ms171728.aspx) we can see an example that provides cross-thread access to a WinForms control. But if the thread is in a while loop, it causes a heavy memory leak: in Task Manager I can see the memory usage steadily increasing. Can anyone help me solve the problem? oThread2 = new Thread(new ThreadStart(Cyclic_Call)); oThread2.Start(); delegate void SetText_lab_Statubar(string text); private void m_SetText_lab_Statubar(string text) { if (this.lab_Statubar.InvokeRequired) { SetText_lab_Statubar d = new SetText_lab_Statubar(m_SetText_lab_Statubar); this.Invoke(d, new object[] { text }); } else { this.lab_Statubar.Text = text; } } private void Cyclic_Call() { do { this.m_SetText_lab_Statubar("This string is set from thread"); Thread.Sleep(100); } while (!b_AbortThraed); }
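
    Control.Invoke itself should not leak; what Task Manager shows may largely be garbage that has not yet been collected, since every call in this loop allocates a fresh delegate and argument array. A sketch that removes the per-iteration delegate allocation, assuming allocation churn is the cause:

        // Cache the delegate once, e.g. in the form's constructor:
        //     setTextDelegate = new SetText_lab_Statubar(m_SetText_lab_Statubar);
        private SetText_lab_Statubar setTextDelegate;

        private void Cyclic_Call()
        {
            do
            {
                // Reuse the cached delegate instead of constructing a new
                // one inside m_SetText_lab_Statubar on every call.
                this.Invoke(setTextDelegate,
                            new object[] { "This string is set from thread" });
                Thread.Sleep(100);
            } while (!b_AbortThraed);
        }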

    Read the article

  • Are fixtures loaded when using the SQL dump to create a test database?

    - by Josh Moore
    Because of some non-standard table creation options I am forced to use the SQL dump instead of the standard schema.rb (i.e. I have uncommented this line in environment.rb: config.active_record.schema_format = :sql). I have noticed that when I use the SQL dump, my fixtures do not seem to be loaded into the database. Some data is loaded into it, but I am not sure where it is coming from. Is this normal? And if it is normal, can anybody tell me where this other data is coming from?

    Read the article

  • How can C# help in drawing using a memory-layers concept?

    - by moon
    Hello all. I am facing a problem drawing dynamically in a picture box. It works very well when there are few drawing objects, but as the number of drawing objects increases, the response time of my GUI gets worse and worse. My GUI works very well with up to 90 drawing objects, but I have to support more than 1000, so this technique didn't work for me. Now I have decided to adopt a layers mechanism: I will draw different layers of the drawing in memory and then XOR them together to produce the final image for my display. The question is: can I work directly with memory to draw layers using C# (examples needed)? Other ideas are also appreciated. (Drawing objects means the shapes, lines, circles etc. that I have to draw on my GUI.) Thanks in advance
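
    A minimal sketch of the layered approach in C# (staticShapes, dynamicShapes and Draw are placeholders): render the shapes that rarely change into an off-screen Bitmap once, then have the Paint handler blit that bitmap and draw only the shapes that move. Blitting one bitmap stays cheap no matter how many shapes it contains.

        // Inside the form class, using System.Drawing / System.Windows.Forms.
        private Bitmap staticLayer;

        private void RebuildStaticLayer()
        {
            staticLayer = new Bitmap(pictureBox1.Width, pictureBox1.Height);
            using (Graphics g = Graphics.FromImage(staticLayer))
            {
                foreach (var shape in staticShapes)  // hypothetical shape list
                    shape.Draw(g);                   // hypothetical Draw method
            }
        }

        private void pictureBox1_Paint(object sender, PaintEventArgs e)
        {
            e.Graphics.DrawImageUnscaled(staticLayer, 0, 0);
            foreach (var shape in dynamicShapes)     // only the changing ones
                shape.Draw(e.Graphics);
        }

    Call RebuildStaticLayer() only when a static shape is added or removed, and pictureBox1.Invalidate() when a dynamic shape changes.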

    Read the article

  • How can I load a SQL "dump" file into SQLAlchemy?

    - by JudoWill
    I have a large SQL dump file ... with multiple CREATE TABLE and INSERT INTO statements. Is there any way to load these all into a SQLAlchemy sqlite database at once? I plan to use the introspected ORM from sqlsoup after I've created the tables. However, when I use the engine.execute() method it complains: sqlite3.Warning: You can only execute one statement at a time. Is there a way to work around this issue? Perhaps by splitting the file with a regexp or some kind of parser, but I don't know enough SQL to cover all of the cases for the regexp. Any help would be greatly appreciated. Will EDIT: Since this seems important ... the dump file was created with a MySQL database, so it has quite a few commands/syntax that sqlite3 does not understand correctly.
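
    One possible workaround, sketched under the assumption that the dump has already been converted to SQLite-compatible syntax: load it with the sqlite3 module directly, whose executescript() accepts a multi-statement script, then point SQLAlchemy at the resulting file.

        import sqlite3

        # Load the whole dump in one shot; executescript() runs multiple
        # statements, unlike execute().
        conn = sqlite3.connect('mydb.db')   # placeholder path
        with open('dump.sql') as f:         # placeholder filename
            conn.executescript(f.read())
        conn.commit()
        conn.close()

        # Afterwards, introspect the same file with SQLAlchemy/sqlsoup:
        # from sqlalchemy import create_engine
        # engine = create_engine('sqlite:///mydb.db')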

    Read the article

  • Are memory barriers necessary for atomic reference counting shared immutable data?

    - by Dietrich Epp
    I have some immutable data structures that I would like to manage using reference counts, sharing them across threads on an SMP system. Here's what the release code looks like: void avocado_release(struct avocado *p) { if (atomic_dec(p->refcount) == 0) { free(p->pit); free(p->juicy_innards); free(p); } } Does atomic_dec need a memory barrier in it? If so, what kind of memory barrier? Additional notes: the application must run on PowerPC and x86, so any processor-specific information is welcome. I already know about the GCC atomic builtins. As for immutability, the refcount is the only field that changes over the duration of the object.
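
    For what it's worth, a sketch using the GCC atomic builtins the question already mentions: __sync_sub_and_fetch implies a full memory barrier, which covers both directions needed here (a release, so writes made while the object was alive are published before the decrement, and an acquire, so the thread that drops the count to zero observes them before freeing).

        #include <stdlib.h>

        struct avocado {
            int refcount;
            void *pit;
            void *juicy_innards;
        };

        void avocado_release(struct avocado *p)
        {
            /* Full barrier: no prior write can move past the decrement,
               and the freeing thread sees every write made by others. */
            if (__sync_sub_and_fetch(&p->refcount, 1) == 0) {
                free(p->pit);
                free(p->juicy_innards);
                free(p);
            }
        }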

    Read the article

  • Will it use more and more memory if I keep drawing on the UIView?

    - by Tattat
    This is my drawRect: CGContextRef context = UIGraphicsGetCurrentContext(); CGContextSetLineWidth(context, 2.0); CGContextSetStrokeColorWithColor(context, [UIColor redColor].CGColor); CGContextMoveToPoint(context, x1, y1); CGContextAddLineToPoint(context, x2, y2); CGContextStrokePath(context); If I run this code a thousand times or more, my UIView will have many lines on it. Will it use more memory than having just one line on it? Er... I mean, will the program remember the lines I draw, or once it has drawn them will it keep no information about them in memory? Thanks.

    Read the article

  • How can I avoid "Your system is running low on virtual memory" pop-up?

    - by Xavier Nodet
    Our application sometimes uses a lot of memory, and this is expected. But when we test it under high load on Windows XP, we usually get the very annoying "Your system is running low on virtual memory" popup, and this prevents our automated, unattended tests from running through... Is it possible to prevent this popup from appearing, and just have the allocation fail? The app will handle it gracefully, and the tests will go on... We are using Windows XP, but if a solution only exists on later versions, I'd be happy to know anyway.

    Read the article

  • ASP.NET web services leak memory when (de)serializing disposable objects?

    - by Serilla
    In the following two cases, if Customer is disposable (implementing IDisposable), I believe it will not be disposed by ASP.NET, potentially being the cause of a memory leak: [WebMethod] public Customer FetchCustomer(int id) { return new Customer(id); } [WebMethod] public void SaveCustomer(Customer value) { // save it } This flaw applies to any IDisposable object. So returning a DataSet from an ASP.NET web service, for example, will also result in a memory leak - the DataSet will not be disposed. In my case, Customer opened a database connection which was cleaned up in Dispose - except Dispose was never called, resulting in loads of unclosed database connections. I realise there are a whole bunch of bad practices being followed here (it's only an example anyway), but the point is that ASP.NET - the (de)serializer - is responsible for disposing these objects, so why doesn't it? This is an issue I was aware of for a while, but never got to the bottom of. I'm hoping somebody can confirm what I have found, and perhaps explain if there is a way of dealing with it.
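
    If the serializer won't call Dispose, one defensive pattern (a sketch, not an official workaround; CustomerDto and its properties are hypothetical) is to keep the disposable's lifetime inside the web method and return a plain data-only snapshot instead:

        [WebMethod]
        public CustomerDto FetchCustomer(int id)
        {
            // Dispose the resource-holding object ourselves; only a plain
            // DTO with no connection to clean up reaches the serializer.
            using (Customer customer = new Customer(id))
            {
                return new CustomerDto
                {
                    Id = customer.Id,
                    Name = customer.Name
                };
            }
        }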

    Read the article

  • Stack dump when accessing a malloc'd char array

    - by robUK
    Hello, gcc 4.4.3, C89. I have the following source code, and I'm getting a stack dump on the printf. char **devices; devices = malloc(10 * sizeof(char*)); strcpy(devices[0], "smxxxx1"); printf("[ %s ]\n", devices[0]); /* Stack dump trying to print */ I am thinking that this should create a char array like this: devices[0] devices[1] devices[2] devices[3] etc. And in each element I can store my strings. Many thanks for any suggestions,
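
    The crash happens because malloc allocated space for ten pointers but nothing for them to point at: devices[0] is uninitialized, so strcpy writes through a garbage pointer. A sketch of the fix (the 32-byte string size is illustrative, and error checks are omitted):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char **devices;
        int i;

        devices = malloc(10 * sizeof(char *)); /* ten pointers...         */
        for (i = 0; i < 10; i++)
            devices[i] = malloc(32);           /* ...each needing storage */

        strcpy(devices[0], "smxxxx1");
        printf("[ %s ]\n", devices[0]);        /* prints [ smxxxx1 ]      */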

    Read the article

  • Need to allocate memory before a Delphi string copy?

    - by Duncan
    Do I need to allocate memory when performing a Delphi string copy? I have a function that posts a Windows message to another form in my application. It looks something like this: // Note: PThreadMessage = ^TThreadMessage; TThreadMessage = String; function PostMyMessage( aStr : string ); var gMsgPtr : PThreadMessage; gStrLen : Integer; begin New(gMsgPtr); gStrLen := StrLen(PWideChar(aMsg)); gMsgPtr^ := Copy(aMsg, 0, gStrLen); PostMessage(ParentHandle, WM_LOGFILE, aLevel, Integer(gMsgPtr)); // Prevent Delphi from freeing this memory before consumed. LParam(gMsgPtr) := 0; end;

    Read the article

  • What is the best way to replace the memory allocator in existing code?

    - by O. Askari
    During the last few days I've gathered some information about memory allocators other than the standard malloc(). There are some implementations that seem to be much better than malloc() for applications with many threads; for example, it seems that tcmalloc and ptmalloc have better performance. I have a C++ application that uses both malloc and the new operator in many places. I thought replacing them with something like ptmalloc might improve its performance. But I wonder: how does the new operator act when used in a C++ application that runs on Linux? Does it use the standard behavior of malloc, or something else? What is the best way to swap the new memory allocator in for the old one in the code? Is there any way to override the behavior of new and malloc, or do I need to replace all the calls to them one by one?
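
    On Linux, the default global operator new is normally implemented on top of malloc, so interposing malloc usually redirects new as well, with no source changes. A sketch of the two usual approaches; the library path and names are placeholders for your system:

        # 1) No rebuild needed: interpose the allocator at load time.
        LD_PRELOAD=/usr/lib/libtcmalloc.so ./my_app

        # 2) Or link it in permanently.
        g++ -o my_app my_app.o -ltcmalloc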

    Read the article

  • Does removing a DOM object (in JavaScript) cause a memory leak if it has an event listener attached?

    - by seatoskyhk
    So if, in JavaScript, I create a DOM object in the HTML page and attach an event listener to the DOM object, does the event listener still exist after I remove the DOM object from the HTML page, causing a memory leak? function myTest() { var obj = document.createElement('div'); obj.addEventListener('click', function() {alert('whatever'); }); var body = document.getElementById('body'); // assume there is a <div id='body'></div> already body.appendChild(obj); } // then after some user actions. I call this: function emptyPage() { var body = document.getElementById('body'); body.innerHTML = ''; //empty it. } So the DOM object, the <div> inside the body, is gone. But what about the event listener? I'm just afraid that it will cause a memory leak.
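
    Modern engines generally collect the listener once the node is unreachable; the classic leaks were in older browsers (notably IE6/7) with circular DOM/JS references. A defensive sketch that detaches explicitly, which requires keeping a reference to the handler (names are placeholders):

        function onObjClick() {
            alert('whatever');
        }

        function myTest() {
            var obj = document.createElement('div');
            obj.addEventListener('click', onObjClick, false);
            document.getElementById('body').appendChild(obj);
            return obj; // keep the reference so we can detach later
        }

        function emptyPage(obj) {
            // Detach first, then remove: nothing is left dangling even
            // in engines that don't clean up automatically.
            obj.removeEventListener('click', onObjClick, false);
            document.getElementById('body').innerHTML = '';
        }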

    Read the article

  • Edit a very large SQL dump/text file (on Linux)

    - by geo
    I have to import a large MySQL dump (up to 10G). However, the SQL dump comes predefined with a database structure that includes index definitions. I want to speed up the database insert by removing the index and table definitions. That means I have to remove/edit the first few lines of a 10G text file. What is the most efficient way to do this on Linux? Programs that require loading the entire file into RAM would be overkill for me.
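
    A sketch of two stream-based options, neither of which loads the file into RAM; the line number 40 is a placeholder for wherever your schema section ends:

        # Copy everything from line 41 onward into a new file:
        tail -n +41 dump.sql > data_only.sql

        # Or pipe straight into mysql, skipping the schema lines:
        sed '1,40d' dump.sql | mysql -u user -p database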

    Read the article

  • procdump on w3wp.exe: Only part of a ReadProcessMemory or WriteProcessMemory request was completed

    - by JakeS
    I'm having a problem with an IIS application that occasionally spikes up in CPU usage, and I am trying to use procdump to get a memory dump for examination. I'm running "procdump.exe -64 -mA 9999" where 9999 is the pid of the process. But every time I do it, I get an error: Only part of a ReadProcessMemory or WriteProcessMemory request was completed. Doing this also recycles the app pool, relieving the CPU spike, so I can't keep trying until I get it right. Does anyone know what is going wrong? EDIT WITH MORE INFO: So far I've failed to generate a debug dump no matter what tool I try. All of them seem to generate the same sort of error. This is Windows Server 2008 R2 Datacenter running IIS 7 with a 64-bit ASP.NET web site. My best guess is that something is getting blocked, causing some requests to remain open in IIS and gradually use up resources. If I monitor the worker process using the IIS Manager and view all requests, throughout the day I'll start to see some requests that "stick" and run forever. Some of these are for static files. Some are for aspx pages. I cannot see any common reason for them. Every once in a while the app pool starts taking up 100% CPU and the only remedy is to kill it.

    Read the article

  • How to simulate an inner join on very large files in Java (without running out of memory)

    - by Constantin
    I am trying to simulate SQL joins using java and very large text files (INNER, RIGHT OUTER and LEFT OUTER). The files have already been sorted using an external sort routine. The issue I have is I am trying to find the most efficient way to deal with the INNER join part of the algorithm. Right now I am using two Lists to store the lines that have the same key and iterate through the set of lines in the right file once for every line in the left file (provided the keys still match). In other words, the join key is not unique in each file so would need to account for the Cartesian product situations ... left_01, 1 left_02, 1 right_01, 1 right_02, 1 right_03, 1 left_01 joins to right_01 using key 1 left_01 joins to right_02 using key 1 left_01 joins to right_03 using key 1 left_02 joins to right_01 using key 1 left_02 joins to right_02 using key 1 left_02 joins to right_03 using key 1 My concern is one of memory. I will run out of memory if i use the approach below but still want the inner join part to work fairly quickly. What is the best approach to deal with the INNER join part keeping in mind that these files may potentially be huge public class Joiner { private void join(BufferedReader left, BufferedReader right, BufferedWriter output) throws Throwable { BufferedReader _left = left; BufferedReader _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _rightRecord = read(_right); } else { List<Record> leftList = new ArrayList<Record>(); List<Record> rightList = new ArrayList<Record>(); _leftRecord = readRecords(leftList, _leftRecord, _left); _rightRecord = readRecords(rightList, _rightRecord, _right); for( Record equalKeyLeftRecord : leftList ){ for( Record equalKeyRightRecord : rightList ){ write(_output, equalKeyLeftRecord, equalKeyRightRecord); } } } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } private Record read(BufferedReader reader) throws Throwable { Record record = null; String data = reader.readLine(); if( data != null ) { record = new Record(data.split("\t")); } return record; } private Record readRecords(List<Record> list, Record record, BufferedReader reader) throws Throwable { int key = record.getKey(); list.add(record); record = read(reader); while( record != null && record.getKey() == key) { list.add(record); record = read(reader); } return record; } private void write(BufferedWriter writer, Record left, Record right) throws Throwable { String leftKey = (left == null ? "null" : Integer.toString(left.getKey())); String leftData = (left == null ? "null" : left.getData()); String rightKey = (right == null ? "null" : Integer.toString(right.getKey())); String rightData = (right == null ? 
"null" : right.getData()); writer.write("[" + leftKey + "][" + leftData + "][" + rightKey + "][" + rightData + "]\n"); } public static void main(String[] args) { try { BufferedReader leftReader = new BufferedReader(new FileReader("LEFT.DAT")); BufferedReader rightReader = new BufferedReader(new FileReader("RIGHT.DAT")); BufferedWriter output = new BufferedWriter(new FileWriter("OUTPUT.DAT")); Joiner joiner = new Joiner(); joiner.join(leftReader, rightReader, output); } catch (Throwable e) { e.printStackTrace(); } } } After applying the ideas from the proposed answer, I changed the loop to this private void join(RandomAccessFile left, RandomAccessFile right, BufferedWriter output) throws Throwable { long _pointer = 0; RandomAccessFile _left = left; RandomAccessFile _right = right; BufferedWriter _output = output; Record _leftRecord; Record _rightRecord; _leftRecord = read(_left); _rightRecord = read(_right); while( _leftRecord != null && _rightRecord != null ) { if( _leftRecord.getKey() < _rightRecord.getKey() ) { write(_output, _leftRecord, null); _leftRecord = read(_left); } else if( _leftRecord.getKey() > _rightRecord.getKey() ) { write(_output, null, _rightRecord); _pointer = _right.getFilePointer(); _rightRecord = read(_right); } else { long _tempPointer = 0; int key = _leftRecord.getKey(); while( _leftRecord != null && _leftRecord.getKey() == key ) { _right.seek(_pointer); _rightRecord = read(_right); while( _rightRecord != null && _rightRecord.getKey() == key ) { write(_output, _leftRecord, _rightRecord ); _tempPointer = _right.getFilePointer(); _rightRecord = read(_right); } _leftRecord = read(_left); } _pointer = _tempPointer; } } if( _leftRecord != null ) { write(_output, _leftRecord, null); _leftRecord = read(_left); while(_leftRecord != null) { write(_output, _leftRecord, null); _leftRecord = read(_left); } } else { if( _rightRecord != null ) { write(_output, null, _rightRecord); _rightRecord = read(_right); while(_rightRecord != null) { write(_output, null, _rightRecord); _rightRecord = read(_right); } } } _left.close(); _right.close(); _output.flush(); _output.close(); } UPDATE While this approach worked, it was terribly slow and so I have modified this to create files as buffers and this works very well. Here is the update ... 
private long getMaxBufferedLines(File file) throws Throwable { long freeBytes = Runtime.getRuntime().freeMemory() / 2; return (freeBytes / (file.length() / getLineCount(file))); } private void join(File left, File right, File output, JoinType joinType) throws Throwable { BufferedReader leftFile = new BufferedReader(new FileReader(left)); BufferedReader rightFile = new BufferedReader(new FileReader(right)); BufferedWriter outputFile = new BufferedWriter(new FileWriter(output)); long maxBufferedLines = getMaxBufferedLines(right); Record leftRecord; Record rightRecord; leftRecord = read(leftFile); rightRecord = read(rightFile); while( leftRecord != null && rightRecord != null ) { if( leftRecord.getKey().compareTo(rightRecord.getKey()) < 0) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) > 0 ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } else if( leftRecord.getKey().compareTo(rightRecord.getKey()) == 0 ) { String key = leftRecord.getKey(); List<File> rightRecordFileList = new ArrayList<File>(); List<Record> rightRecordList = new ArrayList<Record>(); rightRecordList.add(rightRecord); rightRecord = consume(key, rightFile, rightRecordList, rightRecordFileList, maxBufferedLines); while( leftRecord != null && leftRecord.getKey().compareTo(key) == 0 ) { processRightRecords(outputFile, leftRecord, rightRecordFileList, rightRecordList, joinType); leftRecord = read(leftFile); } // need a dispose for deleting files in list } else { throw new Exception("DATA IS NOT SORTED"); } } if( leftRecord != null ) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); while(leftRecord != null) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.LeftExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, leftRecord, null); } leftRecord = read(leftFile); } } else { if( rightRecord != null ) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); while(rightRecord != null) { if( joinType == JoinType.RightOuterJoin || joinType == JoinType.RightExclusiveJoin || joinType == JoinType.FullExclusiveJoin || joinType == JoinType.FullOuterJoin ) { write(outputFile, null, rightRecord); } rightRecord = read(rightFile); } } } leftFile.close(); rightFile.close(); outputFile.flush(); outputFile.close(); } public void processRightRecords(BufferedWriter outputFile, Record leftRecord, List<File> rightFiles, List<Record> rightRecords, JoinType joinType) throws Throwable { for(File rightFile : rightFiles) { BufferedReader rightReader = new BufferedReader(new FileReader(rightFile)); Record rightRecord = read(rightReader); while(rightRecord != null){ if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || 
joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } rightRecord = read(rightReader); } rightReader.close(); } for(Record rightRecord : rightRecords) { if( joinType == JoinType.LeftOuterJoin || joinType == JoinType.RightOuterJoin || joinType == JoinType.FullOuterJoin || joinType == JoinType.InnerJoin ) { write(outputFile, leftRecord, rightRecord); } } } /** * consume all records having key (either to a single list or multiple files) each file will * store a buffer full of data. The right record returned represents the outside flow (key is * already positioned to next one or null) so we can't use this record in below while loop or * within this block in general when comparing current key. The trick is to keep consuming * from a List. When it becomes empty, re-fill it from the next file until all files have * been consumed (and the last node in the list is read). The next outside iteration will be * ready to be processed (either it will be null or it points to the next biggest key * @throws Throwable * */ private Record consume(String key, BufferedReader reader, List<Record> records, List<File> files, long bufferMaxRecordLines ) throws Throwable { boolean processComplete = false; Record record = records.get(records.size() - 1); while(!processComplete){ long recordCount = records.size(); if( record.getKey().compareTo(key) == 0 ){ record = read(reader); while( record != null && record.getKey().compareTo(key) == 0 && recordCount < bufferMaxRecordLines ) { records.add(record); recordCount++; record = read(reader); } } processComplete = true; // if record is null, we are done if( record != null ) { // if the key has changed, we are done if( record.getKey().compareTo(key) == 0 ) { // Same key means we have exhausted the buffer. // Dump entire buffer into a file. The list of file // pointers will keep track of the files ... processComplete = false; dumpBufferToFile(records, files); records.clear(); records.add(record); } } } return record; } /** * Dump all records in List of Record objects to a file. Then, add that * file to List of File objects * * NEED TO PLACE A LIMIT ON NUMBER OF FILE POINTERS (check size of file list) * * @param records * @param files * @throws Throwable */ private void dumpBufferToFile(List<Record> records, List<File> files) throws Throwable { String prefix = "joiner_" + files.size() + 1; String suffix = ".dat"; File file = File.createTempFile(prefix, suffix, new File("cache")); BufferedWriter writer = new BufferedWriter(new FileWriter(file)); for( Record record : records ) { writer.write( record.dump() ); } files.add(file); writer.flush(); writer.close(); }

    Read the article

  • Spin-off of "Project: Memory++" in Khan Academy [on hold]

    - by smraj
    This is the link to the program I am trying: https://www.khanacademy.org/cs/memory-tile-game/5966959895642112 When I place the mouse over a block it should change to red, and when the mouse is released the image should be displayed. My issue is that when I place the mouse over the block it changes its color, but on release the image is not displayed. I would be grateful for help solving this.

    Read the article
