Search Results

Search found 12281 results on 492 pages for 'memory fences'.

Page 67/492 | < Previous Page | 63 64 65 66 67 68 69 70 71 72 73 74  | Next Page >

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small “more normal” objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new ones.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large object heap fragmentation when we de-serialize the objects; I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the speed of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked: How to Stream data from/to SQL Server BLOB fields? Is there a SqlFileStream like class that works with Sql Server 2005?
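
    One way to avoid building the whole serialized graph in one buffer is to hand the serializer a write-through stream that forwards fixed-size chunks to the database as they are produced. The question is about .NET/VB, but here is a minimal analogous sketch in Python; ChunkedBlobWriter and the append_chunk callback are hypothetical names, and the comment about how chunks might be appended to the BLOB is an assumption, not a specific SQL Server API.

        import io
        import pickle

        class ChunkedBlobWriter(io.RawIOBase):
            """File-like object that forwards fixed-size chunks to a user-supplied
            sink (e.g. a statement that appends bytes to a database BLOB), so the
            serializer never needs one contiguous buffer for the whole graph."""

            def __init__(self, append_chunk, chunk_size=1 << 20):
                self.append_chunk = append_chunk   # hypothetical callback: bytes -> None
                self.chunk_size = chunk_size
                self.buffer = bytearray()

            def writable(self):
                return True

            def write(self, data):
                self.buffer.extend(data)
                # Flush whole chunks as soon as they are complete.
                while len(self.buffer) >= self.chunk_size:
                    self.append_chunk(bytes(self.buffer[:self.chunk_size]))
                    del self.buffer[:self.chunk_size]
                return len(data)

            def close(self):
                if self.buffer:
                    self.append_chunk(bytes(self.buffer))  # flush the remaining tail
                super().close()

        # Usage sketch: append_chunk is assumed to append its bytes to the BLOB,
        # e.g. via an UPDATE that writes at the end of the column (how exactly
        # depends on the database and its version; not shown here).
        def save_state(large_graph_of_objects, append_chunk):
            with ChunkedBlobWriter(append_chunk) as out:
                pickle.dump(large_graph_of_objects, out)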

    Read the article

  • In-Memory OLTP Sample for SQL Server 2014 RTM

    - by Damian
    I have just found a very good resource about Hekaton (the In-Memory OLTP feature in SQL Server 2014). On the Codeplex site you can find the newest Hekaton samples - https://msftdbprodsamples.codeplex.com/releases/view/114491. The latest samples we had were related to the CTP2 version, but the newest ones work with the RTM version. There are some issues fixed that you might have hit if you tried to run the previous samples on the RTM version: Update (Apr 28, 2014): Fixed an issue where the isolation level for sample stored procedures demonstrating integrity checks was too low. The transaction isolation level for the following stored procedures was updated: Sales.uspInsertSpecialOfferProductinmem, Sales.uspDeleteSpecialOfferinmem, Production.uspInsertProductinmem, and Production.uspDeleteProductinmem.

    Read the article

  • OpenGL Shading Program Object Memory Requirement

    - by Hans Wurst
    gDEbugger states that OpenGL's program objects only occupy an insignificant amount of memory. How much is this actually? I don't know if the stuff I looked up in Mesa is actually what I was looking for, but it requires 16 KB [Edit: false, confusing struct names, less than 1 KB immediate, some further behind pointers] per program object. Not quite insignificant. So is it recommended to create a unique program object for each object of the scene? Or to share a single program object and set the scene object's custom variables just before its draw call?
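
    For what it's worth, a common arrangement is the second option: cache and share compiled program objects and push per-object state through uniforms just before each draw call. A rough sketch of that caching pattern (in Python, with hypothetical GL helpers) follows; compile_program, use_program, set_uniform and draw_mesh are placeholders for whatever GL binding is actually used, and scene objects are assumed to expose vertex_src, fragment_src, a uniforms dict and a mesh.

        # Share one program object per unique shader source pair and set
        # per-object uniforms right before the draw call, instead of creating
        # a separate program object for every scene object.

        _program_cache = {}

        def get_program(vertex_src, fragment_src, compile_program):
            """compile_program stands in for whatever actually links a GL program;
            the cache guarantees each shader pair is compiled only once."""
            key = (vertex_src, fragment_src)
            if key not in _program_cache:
                _program_cache[key] = compile_program(vertex_src, fragment_src)
            return _program_cache[key]

        def draw_scene(objects, compile_program, use_program, set_uniform, draw_mesh):
            for obj in objects:
                program = get_program(obj.vertex_src, obj.fragment_src, compile_program)
                use_program(program)
                # Per-object state lives in uniforms, not in separate program objects.
                for name, value in obj.uniforms.items():
                    set_uniform(program, name, value)
                draw_mesh(obj.mesh)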

    Read the article

  • Grails application hogging too much memory

    - by RN
    Tomcat 5.5.x and 6.0.x
    Grails 1.6.x
    Java 1.6.x
    OS: CentOS 5.x (64-bit)
    VPS server with 384M of memory
    export JAVA_OPTS='-Xms128M -Xmx512M -XX:MaxPermSize=1024m'

    I have created a blank Grails application, i.e. simply by giving the command grails create-app, and then WARed it. I am running Tomcat on a VPS server. When I simply start the Tomcat server, with no apps deployed, the free memory is about 236M and used memory is about 156M. When I deploy my "blank" application, the memory consumption spikes to 360M, and finally the Tomcat instance is killed as soon as it takes up all free memory. As you have seen, my app is as light as it can be. Not sure why the memory consumption is as high as it is. I am actually troubleshooting a real application, but have narrowed down to this scenario, which is easier to share and explain.

    Read the article

  • How can I give Eclipse more memory than 512M?

    - by newbie
    I have the following setup, but when I replace all the 512 values with 1024, Eclipse won't start at all. How can I give my Eclipse JVM more than 512M of memory?

        -startup
        plugins/org.eclipse.equinox.launcher_1.0.201.R35x_v20090715.jar
        --launcher.library
        plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.0.200.v20090519
        -product
        com.springsource.sts.ide
        --launcher.XXMaxPermSize
        512M
        -vm
        C:\Program Files (x86)\Java\jdk1.6.0_18\bin\javaw
        -vmargs
        -Dosgi.requiredJavaVersion=1.5
        -Xms512m
        -Xmx512m
        -XX:MaxPermSize=512m

    Read the article

  • Strategies to Defeat Memory Editors for Cheating - Desktop Games

    - by ashes999
    I'm assuming we're talking about desktop games -- something the player downloads and runs on their local computer. There are many memory editors that allow you to find and freeze values, like your player's health. How do you prevent cheating? What strategies are effective to combat this kind of cheating? I'm looking for some good ones. Two I use that are mediocre are: displaying values as a percentage instead of the number (e.g. 46/50 = 92% health), and a low-level class that holds values in an array and moves them with each change.
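
    As a sketch of the second family of ideas (never holding the plain value a memory editor would search for), here is a small illustrative Python class. The masking and checksum scheme is an assumption of mine, not a standard technique, and a determined cheater can still defeat it; it only raises the bar against naive search-and-freeze tools.

        import random

        class MaskedValue:
            """Store the value XORed with a random mask, paired with a checksum,
            so scanning memory for the visible number (e.g. 46 health) finds
            nothing, and naive edits of the stored word are detectable."""

            def __init__(self, value=0):
                self._mask = random.getrandbits(32)
                self.set(value)

            def set(self, value):
                self._stored = (value ^ self._mask) & 0xFFFFFFFF
                self._check = (value * 2654435761) & 0xFFFFFFFF  # simple checksum

            def get(self):
                value = (self._stored ^ self._mask) & 0xFFFFFFFF
                if (value * 2654435761) & 0xFFFFFFFF != self._check:
                    raise RuntimeError("memory tampering detected")
                return value

        # Usage: health = MaskedValue(46); health.set(health.get() - 5); print(health.get())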

    Read the article

  • Best way to store motion changes to reduce memory

    - by Andrew Simpson
    I am comparing JPEG to JPEG in a constant 'video stream'. I am using EMGU/OpenCV to compare each pixel at the byte level. There are 3 channels to each image (RGB). I had heard that it is common practice to store only the pixels that have changed between frames as a way of conserving memory space. But if, for instance, I say EVERY pixel has changed (please note I am using an exaggerated example to make my point, and I would normally discard such large changes), then the resulting data saved is 3 times larger than the original JPEG. How can I store such motion changes efficiently? Thanks.
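
    One common way to store only the motion is delta encoding with a keyframe fallback: keep (index, value) pairs for the pixels that actually changed, and fall back to storing the full frame whenever too much of the image changed, since in that case the delta really would be larger than the JPEG. A minimal sketch in Python/NumPy (not EMGU/OpenCV, and with made-up threshold values) follows.

        import numpy as np

        def encode_delta(prev, curr, threshold=8, max_changed_fraction=0.25):
            """prev/curr: HxWx3 uint8 frames. Returns ('delta', indices, values) when
            only a small part of the image moved, otherwise ('key', frame) so a noisy
            or fully-changed frame is stored as an ordinary keyframe instead of a
            delta that would be larger than the source."""
            diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).max(axis=2)
            changed = np.flatnonzero(diff > threshold)          # flat pixel indices
            if changed.size > max_changed_fraction * diff.size:
                return ("key", curr.copy())
            values = curr.reshape(-1, 3)[changed]               # 3 bytes per changed pixel
            return ("delta", changed.astype(np.uint32), values)

        def apply_delta(prev, packet):
            if packet[0] == "key":
                return packet[1].copy()
            _, indices, values = packet
            frame = prev.copy()
            frame.reshape(-1, 3)[indices] = values              # patch only the changed pixels
            return frame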

    Read the article

  • memory map huge file with boost

    - by HaveF
    I want to handle huge files (TB). After several searches, I found that Boost could help: boost/interprocess/file_mapping.hpp, and I also found the demo code. Because the file that I read is too large (TB), I think I should create a fixed-size mapping (say 1 GB) and remap it when the data I need isn't in the mapped window. But I don't know how to write this part. I only found another web page, which uses "boost.iostreams" to handle this problem. Should I use boost.iostreams, or boost.interprocess.file_mapping? (If the latter, please show me some code.) Thanks!
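
    The fixed-window-and-remap idea looks roughly like the following. This is a Python sketch using the standard mmap module rather than boost.interprocess, purely to illustrate the control flow: map one window, process it, unmap, advance the offset (which must stay aligned to the allocation granularity), repeat. The 1 GB window size is just an assumption.

        import mmap
        import os

        WINDOW = 1 << 30   # 1 GB window; adjust to taste

        def process_huge_file(path, handle_window):
            """Map the file one fixed-size window at a time instead of all at once.
            handle_window(view, file_offset) is whatever processing you need; the
            mapping offset must be a multiple of mmap.ALLOCATIONGRANULARITY."""
            size = os.path.getsize(path)
            assert WINDOW % mmap.ALLOCATIONGRANULARITY == 0
            with open(path, "rb") as f:
                offset = 0
                while offset < size:
                    length = min(WINDOW, size - offset)
                    with mmap.mmap(f.fileno(), length, access=mmap.ACCESS_READ,
                                   offset=offset) as view:
                        handle_window(view, offset)   # window is unmapped on exit
                    offset += length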

    Read the article

  • Book about tcp, http, named pipe, shared memory, wcf and other inter-process communication protocol

    - by Samuel
    Recently, I had to create a program to send messages between two WinForms executables. I used a tool with simple built-in functionality to avoid having to figure out all the ins and outs of the vast quantity of protocols that exist. But now, I'm ready to learn more about the internal differences between each of these protocols. I googled a couple of them, but it would be greatly appreciated to have a good reference book that gives me a clear idea of how each protocol works and what the pros and cons are in a couple of contexts. Here is a list of nice protocols that I found: shared memory, TCP, named pipes, file mapping, mailslots, MSMQ (Microsoft Message Queuing), and WCF. I know that all of these protocols are not specific to a language; it would be nice if examples could be in .NET. Thank you very much.

    Read the article

  • Gradual memory leak in loop over contents of QTMovie

    - by Benji XVI
    I have a simple foundation tool that exports every frame of a movie as a .tiff file. Here is the relevant code:

        NSString* movieLoc = [NSString stringWithCString:argv[1]];
        QTMovie *sourceMovie = [QTMovie movieWithFile:movieLoc error:nil];
        int i = 0;
        while (QTTimeCompare([sourceMovie currentTime], [sourceMovie duration]) != NSOrderedSame) {
            // save image of movie to disk
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSData *currentImageData = [[sourceMovie currentFrameImage] TIFFRepresentation];
            [currentImageData writeToFile:filePath atomically:NO];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }
        [pool drain];
        return 0;

    As you can see, in order to prevent very large memory buildups with the various transparently-autoreleased variables in the loop, we create, and flush, an autoreleasepool with every run through the loop. However, over the course of stepping through a movie, the amount of memory used by the program still gradually increases. Instruments is not detecting any memory leaks per se, but the object trace shows certain General Data blocks to be increasing in size. [Edited out reference to slowdown as it doesn't seem to be as much of a problem as I thought.]

    Edit: let's knock out some parts of the code inside the loop & see what we find out...

    Test 1

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSString *filePath = [NSString stringWithFormat:@"/somelocation_%d.tiff", i++];
            NSLog(@"%@", filePath);
            [sourceMovie stepForward];
            [arp release];
        }

    Here we simply loop over the whole movie, creating the filename and logging it. Memory characteristics: remains at 15MB usage for the duration.

    Test 2

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            [sourceMovie stepForward];
            [arp release];
        }

    Here we add back in the creation of the NSImage from the current frame. Memory characteristics: gradually increasing memory usage. RSIZE is at 60MB by frame 200; 75MB by f300.

    Test 3

        while (banana) {
            NSAutoreleasePool *arp = [[NSAutoreleasePool alloc] init];
            NSImage *image = [sourceMovie currentFrameImage];
            NSData *imageData = [image TIFFRepresentation];
            [sourceMovie stepForward];
            [arp release];
        }

    We've added back in the creation of an NSData object from the NSImage. Memory characteristics: memory usage is again increasing: 62MB at f200; 75MB at f300. In other words, largely identical. It looks like a memory leak in the underlying system QTMovie uses to do currentFrameImage, to me.

    Read the article

  • in memory datastore in haskell

    - by Simon
    I want to implement an in-memory datastore for a web service in Haskell. I want to run transactions in the STM monad. When I google "hash table STM Haskell" I only get this: Data.BTree.HashTable.STM. The module name and complexities suggest that this is implemented as a tree. I would think that an array would be more efficient for mutable hash tables. Is there a reason to avoid using an array for an STM hash table? Do I gain anything with this STM hash table, or should I just use an STM ref (a TVar) to an IntMap?

    Read the article

  • What makes an application memory bandwidth bound?

    - by TheLQ
    This has been something that's been bothering me for a while: what makes an application memory bandwidth bound? For example, take this monstrosity of a computer that calculated the 5 trillionth digit of pi (and later the 10 trillionth digit). I was surprised that they chose the smaller but faster 98 GB of RAM at 1066 MHz instead of the larger but slower 144 GB at 800 MHz. This is especially surprising considering they are using a 22 TB HD array to store the results of the computation; more RAM means less need for hard drives. Maybe it's because I don't write applications for HPC servers, but how would RAM be the bottleneck? Are there any other non-HPC applications that usually run into this problem?
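
    A rough way to see bandwidth-boundedness is to look at how little arithmetic happens per byte loaded: a streaming sum does about one addition per 8-byte double, so once the data is much larger than the caches, RAM throughput rather than the CPU sets the speed. The short Python/NumPy sketch below measures that effective bandwidth on whatever machine it runs on; the array size is an arbitrary assumption, not a benchmark methodology.

        import time
        import numpy as np

        # ~1.6 GB of float64; shrink this if your machine has less free RAM.
        N = 200_000_000
        data = np.ones(N)

        start = time.perf_counter()
        total = data.sum()            # ~1 add per 8 bytes read: bandwidth bound
        elapsed = time.perf_counter() - start

        bytes_moved = data.nbytes
        print(f"summed {bytes_moved / 1e9:.1f} GB in {elapsed:.2f} s "
              f"=> {bytes_moved / elapsed / 1e9:.1f} GB/s effective bandwidth")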

    Read the article

  • memory and time intensive php task

    - by Goddard
    Sorry if this question has been asked before, but I couldn't find anything usable. I'm working on a project for a client, and currently I have to loop through the users table, which is about 3000 records and still growing. I have to do some calculations on a nightly basis, which I am going to do using cron/PHP. The calculation script uses about 3.5MB of memory and takes about 1 second to run. When loading individual users my current PHP setup handles this fine, but if I try to loop through the whole user list, my PHP script's execution time runs out. I've read, after doing some searching, that I can make the page reload itself after each user calculation and just keep my previous place in the loop; this sounds like a good idea, but I wanted to hear some opinions from others that have handled similar situations and how you handled these types of tasks. Thanks.
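
    The "keep my previous place and resume" approach usually amounts to processing a bounded batch per invocation and persisting a cursor between runs, so no single run exceeds the time or memory limit. The question is about PHP, but here is an analogous sketch in Python against SQLite; the table name, cursor file and calculate_for_user are hypothetical stand-ins for the real schema and calculation.

        import json
        import os
        import sqlite3

        BATCH = 200
        CURSOR_FILE = "nightly_cursor.json"   # hypothetical path for the saved position

        def load_cursor():
            if os.path.exists(CURSOR_FILE):
                with open(CURSOR_FILE) as f:
                    return json.load(f)["last_id"]
            return 0

        def save_cursor(last_id):
            with open(CURSOR_FILE, "w") as f:
                json.dump({"last_id": last_id}, f)

        def calculate_for_user(conn, user_id):
            pass  # placeholder for the real nightly calculation

        def run_batch(conn):
            """Process at most BATCH users per invocation; cron (or a self-reload)
            re-runs this until no rows remain, then the cursor resets for tomorrow."""
            last_id = load_cursor()
            rows = conn.execute(
                "SELECT id FROM users WHERE id > ? ORDER BY id LIMIT ?",
                (last_id, BATCH)).fetchall()
            if not rows:
                save_cursor(0)          # finished: start over on the next night
                return False
            for (user_id,) in rows:
                calculate_for_user(conn, user_id)
            save_cursor(rows[-1][0])    # remember where to resume
            return True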

    Read the article

  • How can I find out how much memory is physically installed in Windows?

    - by Randall
    I need to log information about how much RAM the user has. My first approach was to use GlobalMemoryStatusEx, but that only gives me how much memory is available to Windows, not how much is installed. I found the function GetPhysicallyInstalledSystemMemory, but it's only available on Vista and later, and I need this to work on XP. Is there a fairly simple way of querying the SMBIOS information that GetPhysicallyInstalledSystemMemory was using, or is there a registry value somewhere where I can find this out?
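
    One XP-compatible route, short of parsing SMBIOS yourself, is WMI's Win32_PhysicalMemory class, which reports the capacity of each installed module; summing those gives the installed amount rather than the OS-visible amount. Here is a small sketch in Python assuming the third-party wmi package, purely to show the query; the same class can be queried from C++ or .NET as well.

        import wmi  # third-party package; any WMI client issues the same query

        def installed_ram_bytes():
            """Sum the capacity of every physical module reported via WMI.
            Unlike GlobalMemoryStatusEx, this is the installed amount, not the
            amount the OS has decided to make available."""
            conn = wmi.WMI()
            return sum(int(module.Capacity) for module in conn.Win32_PhysicalMemory())

        if __name__ == "__main__":
            print(f"{installed_ram_bytes() / (1024 ** 3):.1f} GiB installed")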

    Read the article

  • Reusing a NSString variable - does it cause a memory leak?

    - by Chris S
    Coming from a .NET background I'm used to reusing string variables for storage, so is the code below likely to cause a memory leak? The code is targeting iPhone OS on the iPhone/iPod touch, so there is no automatic GC.

        -(NSString*) stringExample {
            NSString *result = @"example";
            result = [result stringByAppendingString:@" test"]; // where does "example" go?
            return result;
        }

    What confuses me is that NSStrings are immutable, yet you can reuse an 'immutable' variable with no problem.

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (the total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). The total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing, and most of the time there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to the memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose of it, then read from the FileStream)?

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access the disk when being written to. Any help would be appreciated.
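
    For what it's worth, the "memory until a threshold, then spill to disk" hybrid exists ready-made in some environments; Python's tempfile.SpooledTemporaryFile does exactly this, so the sketch below uses it only to illustrate the receive-then-process flow the question describes. The 500 MB threshold and the chunk size are just the figures from the question, not recommendations.

        import tempfile

        SPILL_THRESHOLD = 500 * 1024 * 1024   # keep up to ~500 MB in memory

        def receive_and_process(packets, process_chunk, chunk_size=64 * 1024):
            """packets: iterable of bytes as they arrive. Data stays in memory until
            SPILL_THRESHOLD is exceeded, after which SpooledTemporaryFile moves it to
            a temporary file on disk; the reading loop below doesn't need to care."""
            with tempfile.SpooledTemporaryFile(max_size=SPILL_THRESHOLD) as buf:
                for packet in packets:
                    buf.write(packet)

                buf.seek(0)                    # done receiving, rewind for processing
                while True:
                    chunk = buf.read(chunk_size)
                    if not chunk:
                        break
                    process_chunk(chunk)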

    Read the article

  • How can I run ARM code from external memory?

    - by samoz
    I am using an LPC2132 ARM chip to develop a program. However, my program has grown larger than the space on the chip. How can I connect my chip to some sort of external memory chip to hold additional executable code? Is this possible? If not, what do people normally do when they run out of chip space?

    Read the article
