Search Results

Search found 12267 results on 491 pages for 'in memory'.


  • C# WPF application is using too much memory while GC.GetTotalMemory() is low

    - by Dmitry
    I wrote a little WPF application with two threads: the main GUI thread and a worker. The app has one WPF form with some controls, including a button for selecting a directory. After a directory is selected, the application scans it for .jpg files and checks whether their thumbnails are in a hashtable. If they are, it does nothing; otherwise it adds their full filenames to a queue for the worker. The worker takes filenames from this queue, loads the JPEG images (using WPF's JpegBitmapDecoder and BitmapFrame), makes thumbnails of them (using WPF's TransformedBitmap), and adds them to the hashtable. Everything works fine, except for the application's memory consumption when making thumbnails for big images (like 5000x5000 pixels). I added textboxes to my form to show memory consumption (GC.GetTotalMemory() and Process.GetCurrentProcess().PrivateMemorySize64) and was very surprised: GC.GetTotalMemory() stays close to 1-2 MB, while private memory size constantly grows, especially when loading a new image (roughly +100 MB per image). Even after loading all images, making thumbnails of them, and freeing the original images, private memory size stays at ~700-800 MB. My VirtualBox is limited to 512 MB of physical memory, and Windows in VirtualBox starts to swap a lot to handle this huge memory consumption. I guess I'm doing something wrong, but I don't know how to investigate the problem, because according to the GC the allocated memory size is very low. Attaching the code of the thumbnail loader class:

        class ThumbnailLoader
        {
            Hashtable thumbnails;
            Queue<string> taskqueue;
            EventWaitHandle wh;
            Thread[] workers;
            bool stop;
            object locker;
            int width, height, processed, added;

            public ThumbnailLoader()
            {
                int workercount, i;
                wh = new AutoResetEvent(false);
                thumbnails = new Hashtable();
                taskqueue = new Queue<string>();
                stop = false;
                locker = new object();
                width = height = 64;
                processed = added = 0;
                workercount = Environment.ProcessorCount;
                workers = new Thread[workercount];
                for (i = 0; i < workercount; i++)
                {
                    workers[i] = new Thread(Worker);
                    workers[i].IsBackground = true;
                    workers[i].Priority = ThreadPriority.Highest;
                    workers[i].Start();
                }
            }

            public void SetThumbnailSize(int twidth, int theight)
            {
                width = twidth;
                height = theight;
                if (thumbnails.Count != 0) AddTask("#resethash");
            }

            public void GetProgress(out int Added, out int Processed)
            {
                Added = added;
                Processed = processed;
            }

            private void AddTask(string filename)
            {
                lock (locker)
                {
                    taskqueue.Enqueue(filename);
                    wh.Set();
                    added++;
                }
            }

            private string NextTask()
            {
                lock (locker)
                {
                    if (taskqueue.Count == 0) return null;
                    processed++;
                    return taskqueue.Dequeue();
                }
            }

            public static string FileNameToHash(string s)
            {
                return FormsAuthentication.HashPasswordForStoringInConfigFile(s, "MD5");
            }

            public bool GetThumbnail(string filename, out BitmapFrame thumbnail)
            {
                string hash = FileNameToHash(filename);
                if (thumbnails.ContainsKey(hash))
                {
                    thumbnail = (BitmapFrame)thumbnails[hash];
                    return true;
                }
                AddTask(filename);
                thumbnail = null;
                return false;
            }

            private BitmapFrame LoadThumbnail(string filename)
            {
                FileStream fs;
                JpegBitmapDecoder bd;
                BitmapFrame oldbf, bf;
                TransformedBitmap tb;
                double scale, dx, dy;
                fs = new FileStream(filename, FileMode.Open);
                bd = new JpegBitmapDecoder(fs, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
                oldbf = bd.Frames[0];
                dx = (double)oldbf.Width / width;
                dy = (double)oldbf.Height / height;
                if (dx > dy) scale = 1 / dx; else scale = 1 / dy;
                tb = new TransformedBitmap(oldbf, new ScaleTransform(scale, scale));
                bf = BitmapFrame.Create(tb);
                fs.Close();
                oldbf = null;
                bd = null;
                GC.Collect();
                return bf;
            }

            public void Dispose()
            {
                lock (locker) { stop = true; }
                AddTask(null);
                foreach (Thread worker in workers) worker.Join();
                wh.Close();
            }

            private void Worker()
            {
                string curtask, hash;
                while (!stop)
                {
                    curtask = NextTask();
                    if (curtask == null) wh.WaitOne();
                    else if (curtask == "#resethash") thumbnails.Clear();
                    else
                    {
                        hash = FileNameToHash(curtask);
                        try { thumbnails[hash] = LoadThumbnail(curtask); }
                        catch { thumbnails[hash] = null; }
                    }
                }
            }
        }

    Read the article

  • How to compare memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit-comparison function. I bought a LED matrix here with 4 x HT1632C chips and I'm using it on my Arduino Mega 2560. There is no code available for this chipset (it's not the same as the HT1632), so I'm writing my own. I have a plot function that gets x,y coordinates and a color and turns that pixel on; on its own, this works perfectly. But I need more performance from my display, so I tried to keep a shadowRam variable that is a "copy" of my device memory. Before plotting anything on the display, the code checks shadowRam to see whether it's really necessary to change that pixel. When I enable this (getShadowRam) in the plot function, my display gets some, just SOME (like 3 or 4 on the entire display), ghost pixels (pixels that are not supposed to be turned on). If I just comment out the prev_color ifs in my plot function, it works perfectly. Also, I'm clearing shadowRam by setting the whole array to zero.

    Variables:

        #define BLACK  0
        #define GREEN  1
        #define RED    2
        #define ORANGE 3
        #define CHIP_MAX 8

        byte shadowRam[63][CHIP_MAX-1] = {0};

    getShadowRam function:

        byte HT1632C::getShadowRam(byte x, byte y) {
            byte addr, bitval, nChip;
            if (x >= 32) {
                nChip = 3 + x/16 + (y > 7 ? 2 : 0);
            } else {
                nChip = 1 + x/16 + (y > 7 ? 2 : 0);
            }
            bitval = 8 >> (y & 3);
            x = x % 16;
            y = y % 8;
            addr = (x << 1) + (y >> 2);
            if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) {
                return ORANGE;
            } else if (shadowRam[addr][nChip-1] & bitval) {
                return GREEN;
            } else if (shadowRam[addr+32][nChip-1] & bitval) {
                return RED;
            } else {
                return BLACK;
            }
        }

    plot function:

        void HT1632C::plot(int x, int y, int color) {
            if (x < 0 || x > X_MAX || y < 0 || y > Y_MAX) return;
            if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return;
            char addr, bitval;
            byte nChip;
            byte prev_color = HT1632C::getShadowRam(x, y);
            bitval = 8 >> (y & 3);
            if (x >= 32) {
                nChip = 3 + x/16 + (y > 7 ? 2 : 0);
            } else {
                nChip = 1 + x/16 + (y > 7 ? 2 : 0);
            }
            x = x % 16;
            y = y % 8;
            addr = (x << 1) + (y >> 2);
            switch (color) {
            case BLACK:
                if (prev_color != BLACK) { // compare with memory: only write if the pixel has another color
                    // clear the bit in both planes
                    shadowRam[addr][nChip-1] &= ~bitval;
                    HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                    shadowRam[addr+32][nChip-1] &= ~bitval;
                    HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                }
                break;
            case GREEN:
                if (prev_color != GREEN) {
                    // set the bit in the green plane and clear the bit in the red plane
                    shadowRam[addr][nChip-1] |= bitval;
                    HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                    shadowRam[addr+32][nChip-1] &= ~bitval;
                    HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                }
                break;
            case RED:
                if (prev_color != RED) {
                    // clear the bit in the green plane and set the bit in the red plane
                    shadowRam[addr][nChip-1] &= ~bitval;
                    HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                    shadowRam[addr+32][nChip-1] |= bitval;
                    HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                }
                break;
            case ORANGE:
                if (prev_color != ORANGE) {
                    // set the bit in both the green and red planes
                    shadowRam[addr][nChip-1] |= bitval;
                    HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
                    shadowRam[addr+32][nChip-1] |= bitval;
                    HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
                }
                break;
            }
        }

    If it helps: page 7 of the datasheet of the board I'm using has the memory mapping I'm following. I also have a video of the display working.
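
    The chip/address/bit arithmetic above is duplicated between getShadowRam and plot, which makes it easy for the two to drift apart. A minimal sketch of factoring it into one helper (the struct and function names are mine; the formulas are copied unchanged from the question):

        struct PixelAddr {
            byte nChip;   // 1-based chip index, as in the question
            byte addr;    // RAM address within the chip (green plane; +32 for red)
            byte bitval;  // bit mask within the 4-bit RAM cell
        };

        static PixelAddr pixelAddr(byte x, byte y) {
            PixelAddr p;
            p.nChip  = (x >= 32 ? 3 : 1) + x/16 + (y > 7 ? 2 : 0);
            p.bitval = 8 >> (y & 3);
            p.addr   = ((x % 16) << 1) + ((y % 8) >> 2);
            return p;
        }

    With both functions calling pixelAddr(), any mismatch can only live in one place. It is also worth double-checking the array bounds: if x can reach 63 and y can reach 15, this math produces nChip == 8 and addr+32 == 63, while byte shadowRam[63][CHIP_MAX-1] only allows first indices 0..62 and second indices 0..6, so shadowRam[addr+32][nChip-1] can write out of bounds, which is exactly the kind of corruption that shows up as a few stray ghost pixels.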

    Read the article

  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reports memory usage in realtime on screen. I decided to use the Class.lua found at http://lua-users.org/wiki/SimpleLuaClasses. However, I noticed memory issues when purging objects created with it, using a simple Account class. Specifically, I start with, say, 146k of memory used and create 1000 objects of a class that just holds an integer instance variable, storing each object in a table. The memory used is now 300k. I then exit, iterating through the table and setting each element in the table to nil, but I never get back to 146k; I am usually left using 210k or something similar. If I run the load sequence again during the same session, it does not exceed 300k, so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition, I've tried a simpler class file (Account2.lua) that doesn't rely on Class.lua. This still leaves memory behind, but not as much as the one that uses Class.lua. Can anybody explain what is going on here? How can I purge these objects and get back the memory? Here is the code:

        -------- Class.lua --------
        -- class.lua
        -- Compatible with Lua 5.1 (not 5.0).
        -- http://lua-users.org/wiki/SimpleLuaClasses
        function class(base, ctor)
          local c = {}     -- a new class instance
          if not ctor and type(base) == 'function' then
            ctor = base
            base = nil
          elseif type(base) == 'table' then
            -- our new class is a shallow copy of the base class!
            for i, v in pairs(base) do c[i] = v end
            c._base = base
          end
          -- the class will be the metatable for all its objects,
          -- and they will look up their methods in it.
          c.__index = c
          -- expose a ctor which can be called by ()
          local mt = {}
          mt.__call = function(class_tbl, ...)
            local obj = {}
            setmetatable(obj, c)
            if ctor then
              ctor(obj, ...)
            else
              -- make sure that any stuff from the base class is initialized!
              if base and base.init then base.init(obj, ...) end
            end
            return obj
          end
          c.init = ctor
          c.instanceOf = function(self, klass)
            local m = getmetatable(self)
            while m do
              if m == klass then return true end
              m = m._base
            end
            return false
          end
          setmetatable(c, mt)
          return c
        end

        -------- Account.lua --------
        -- Import class template
        require 'class'
        local classname = "Account"
        -- Declare class constructor
        Account = class(function(acc, balance)
          -- Instance variables declared here.
          if balance ~= nil then
            acc.balance = balance
          else
            -- default value
            acc.balance = 2097
          end
          acc.classname = classname
        end)

        -------- Account2.lua --------
        local account2 = {}
        account2.classname = "unnamed"
        account2.balance = 2097
        -- Constructor
        do
          local metatable = {
            __index = account2;
          }
          function Account2()
            return setmetatable({}, metatable);
          end
        end

        -------- Main.lua --------
        require 'Account'
        require 'Account2'

        MAX_OBJ = 5000;
        test_value = 1000;
        Obj_Table = {};

        MODE_ACC0 = 0 -- integers
        MODE_ACC1 = 1 -- Account
        MODE_ACC2 = 2 -- Account2
        TEST_MODE = MODE_ACC0;
        Lua_mem = "";

        print("##1) collectgarbage('count'): " .. collectgarbage('count'));

        function Load()
          for i = 1, MAX_OBJ do
            if TEST_MODE == MODE_ACC0 then
              table.insert(Obj_Table, test_value);
            elseif TEST_MODE == MODE_ACC1 then
              table.insert(Obj_Table, Account(test_value));   -- Account.lua
            elseif TEST_MODE == MODE_ACC2 then
              table.insert(Obj_Table, Account2());            -- Account2.lua
              Obj_Table[i].balance = test_value;
            end
          end
          print("##2) collectgarbage('count'): " .. collectgarbage('count'));
        end

        function Purge()
          -- metatable purge
          if TEST_MODE ~= MODE_ACC0 then
            -- purge stage 0:
            print("set each element's metatable to nil")
            for i = 1, MAX_OBJ do
              setmetatable(Obj_Table[i], nil);
            end
          end
          -- purge stage 1:
          print("set table element to nil")
          for i = 1, MAX_OBJ do
            Obj_Table[i] = nil;
          end
          -- purge stage 2:
          print("start table.remove...");
          for i = 1, MAX_OBJ do
            table.remove(Obj_Table, i);
          end
          print("...end table.remove");
          -- purge stage 3:
          print("create new object_table {}");
          Obj_Table = {};
          -- purge stage 4:
          print("collectgarbage('collect')");
          collectgarbage('collect');
          print("##3) collectgarbage('count'): " .. collectgarbage('count'));
        end

        -- Loop callback
        function OnUpdate()
          collectgarbage('collect');
          Lua_mem = collectgarbage('count');
        end

        -------------------
        -- NOTE:
        -- On start the game runs Load(); another event runs Purge()

    Update: I've updated the code with suggestions from the comments below, and will post my findings later today.

    Read the article

  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article: "Quantifying the Performance of Garbage Collection vs. Explicit Memory Management" (http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf). The conclusion section reads:

        Comparing runtime, space consumption, and virtual memory footprints over a range
        of benchmarks, we show that the runtime performance of the best-performing
        garbage collector is competitive with explicit memory management when given
        enough memory. In particular, when garbage collection has five times as much
        memory as required, its runtime performance matches or slightly exceeds that of
        explicit memory management. However, garbage collection's performance degrades
        substantially when it must use smaller heaps. With three times as much memory,
        it runs 17% slower on average, and with twice as much memory, it runs 70%
        slower. Garbage collection also is more susceptible to paging when physical
        memory is scarce. In such conditions, all of the garbage collectors we examine
        here suffer order-of-magnitude performance penalties relative to explicit
        memory management.

    So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, then to achieve the same performance with a "managed" (i.e. garbage-collector-based) language (e.g. Java, C#), the app should require 5 * 100 MB = 500 MB? (And with 2 * 100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know whether current garbage collectors (i.e. the latest Java VMs and .NET 4.0) suffer from the same problems described in the article? Has the performance of modern garbage collectors improved? Thanks.

    Read the article

  • Stack, data and address space limits on an Ubuntu server

    - by PaulDaviesC
    I am running an Ubuntu server with around 5000 users, who are allowed to SSH into the system. To cap the memory used by any one process, I have capped the address space limit using limits.conf. So my question is: should I also be limiting data and stack? I feel that is not required, since I am capping address space. Are there any pitfalls if I do not cap the stack and data limits?
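
    For reference, the three caps limits.conf controls here map to distinct kernel resources (RLIMIT_AS, RLIMIT_DATA, RLIMIT_STACK), and a process can inspect or lower its own copies. A minimal C++ sketch; the 512 MiB figure is just an illustration:

        #include <sys/resource.h>
        #include <cstdio>

        // Print the soft/hard values for one resource.
        static void show(const char* name, int resource) {
            struct rlimit rl;
            if (getrlimit(resource, &rl) == 0)
                std::printf("%s: soft=%llu hard=%llu\n", name,
                            (unsigned long long)rl.rlim_cur,
                            (unsigned long long)rl.rlim_max);
        }

        int main() {
            show("RLIMIT_AS    (address space)", RLIMIT_AS);
            show("RLIMIT_DATA  (data segment) ", RLIMIT_DATA);
            show("RLIMIT_STACK (stack)        ", RLIMIT_STACK);

            // Lowering a limit, e.g. address space to 512 MiB:
            struct rlimit rl = { 512ull << 20, 512ull << 20 };
            if (setrlimit(RLIMIT_AS, &rl) != 0)
                std::perror("setrlimit");
            return 0;
        }

    In practice RLIMIT_AS bounds every mapping (heap, stack, mmap), so it effectively dominates RLIMIT_DATA; capping the stack separately mainly changes how a runaway process fails rather than how much memory it can take.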

    Read the article

  • High RAM usage, not seen in task manager

    - by r4dk0
    Hi! I am using Windows 7 64-bit (build 7600) with 4 GB of RAM. I have a serious problem: something uses a lot of RAM (3.94 GB) and I see "stairs" in Task Manager; usage rises to over 3 GB, drops to about 2 GB, rises slowly again, and suddenly drops. I tried installing this version again, and other, newer versions, but no effect. I've even tried disconnecting my other hard drives while I installed it, then installed NOD32 and updated it. How can I find out what is using that much RAM? P.S.: I suspected the Superfetch service; I disabled it and restarted the PC, but it didn't help. Memory is at its highest point when I log in with my password, which is really annoying since I need about a minute to see my desktop, let alone try anything else. After logging in it slowly drops, and after a random time it starts rising again. That didn't happen immediately after a fresh Windows install. As for drivers, I have tried older drivers for the GPU as well as the newest ones.

    Read the article

  • Pointer Implementation Details in C

    - by Will Bickford
    I would like to know of architectures that violate the assumptions I've listed below. Also, I would like to know whether any of the assumptions are false for all architectures (i.e. whether any of them are just completely wrong).

    1. sizeof(int *) == sizeof(char *) == sizeof(void *) == sizeof(func_ptr *)
    2. The in-memory representation of all pointers for a given architecture is the same, regardless of the data type pointed to.
    3. The in-memory representation of a pointer is the same as that of an integer of the same bit length as the architecture.
    4. Multiplication and division of pointer data types are forbidden only by the compiler. NOTE: Yes, I know this is nonsensical. What I mean is: is there hardware support to forbid this incorrect usage?
    5. All pointer values can be cast to a single integer. In other words, what architectures still make use of segments and offsets?
    6. Incrementing a pointer is equivalent to adding sizeof(the pointed-to data type) to the memory address stored by the pointer. If p is an int32* then p+1 is equal to the memory address 4 bytes after p.

    I'm most used to pointers being used in a contiguous, virtual memory space. For that usage, I can generally get by thinking of them as addresses on a number line. See http://stackoverflow.com/questions/1350471/pointer-comparison/1350488#1350488.
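
    A quick way to probe a few of these assumptions on a concrete platform is to let the compiler and a tiny program check them. A sketch (note the standard only guarantees that the uintptr_t round trip recovers the original pointer, not that the integer is a plain linear address):

        #include <cstdint>
        #include <cstdio>

        int main() {
            // Assumption 1: pointer sizes agree (common, but not guaranteed by C/C++).
            static_assert(sizeof(int*) == sizeof(char*), "object pointer sizes differ");
            static_assert(sizeof(void*) == sizeof(char*), "void* size differs");
            static_assert(sizeof(void (*)()) == sizeof(void*), "function pointer size differs");

            // Assumptions 3/5: a pointer survives a round trip through uintptr_t.
            int x = 42;
            int* p = &x;
            std::uintptr_t n = reinterpret_cast<std::uintptr_t>(p);
            int* q = reinterpret_cast<int*>(n);
            std::printf("round trip ok: %d\n", p == q);

            // Assumption 6: p+1 advances by sizeof(*p) bytes (true within an array).
            int arr[2];
            std::printf("stride: %td bytes\n",
                        reinterpret_cast<char*>(arr + 1) - reinterpret_cast<char*>(arr));
            return 0;
        }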

    Read the article

  • Read-only memory and heap memory

    - by benjamin button
    Hi, AFAIK string literals are stored in read-only memory in the case of C. Where is this actually located on the hardware? As far as I know, the heap is in RAM; correct me if I am wrong. How is the heap different from read-only memory? Is it OS-dependent?
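
    A small sketch of the distinction, assuming a typical hosted Linux/ELF toolchain. The point is that "read-only memory" here is ordinary RAM whose pages the loader maps without write permission (the .rodata section), while the heap is RAM mapped read-write:

        #include <cstdio>
        #include <cstring>

        int main() {
            const char* literal = "hello";   // typically placed in .rodata and
                                             // mapped as read-only pages
            char* heap = new char[6];        // heap: read-write pages in RAM
            std::strcpy(heap, "hello");

            heap[0] = 'H';                   // fine: heap pages are writable
            // ((char*)literal)[0] = 'H';    // undefined behavior; on Linux this
                                             // usually dies with SIGSEGV because
                                             // the page is mapped read-only

            std::printf("literal at %p, heap at %p\n", (void*)literal, (void*)heap);
            delete[] heap;
            return 0;
        }

    The protection is enforced by the MMU via page tables, so it is indeed OS- and platform-dependent: a small embedded target may keep literals in actual flash/ROM, and a system without memory protection may not fault at all.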

    Read the article

  • iPhone memory management

    - by Prazi
    I am a newbie to iPhone programming, and I am not using Interface Builder. I have some doubts about memory management and @property on the iPhone. Consider the following code:

        @interface LoadFlag : UIViewController {
            UIImage *flag;
            UIImageView *preview;
        }
        @property (nonatomic, retain) UIImageView *preview;
        @property (nonatomic, retain) UIImage *flag;
        @end

        @implementation LoadFlag
        @synthesize preview;
        @synthesize flag;

        - (void)viewDidLoad {
            flag = [UIImage imageNamed:@"myImage.png"];
            NSLog(@"Preview: %d\n", [preview retainCount]);  // Count: 0, but shouldn't it be 1,
                                                             // since I retain it via @property?
            preview = [[UIImageView alloc] init];
            NSLog(@"Count: %d\n", [preview retainCount]);    // Count: 1
            preview.frame = CGRectMake(0.0f, 0.0f, 100.0f, 100.0f);
            preview.image = flag;
            [self.view addSubview:preview];
            NSLog(@"Count: %d\n", [preview retainCount]);    // Count: 2
            [preview release];
            NSLog(@"Count: %d\n", [preview retainCount]);    // Count: 1
        }

    1. When and why do I have to declare a @property with retain (in the above case, for UIImage and UIImageView)? I saw this statement in many sample programs but never understood the need for it.
    2. When I declare @property (nonatomic, retain) UIImageView *preview; the retain count is 0. Why doesn't it increase by 1, in spite of my retaining it via the @property in the interface file?
    3. When I call [self.view addSubview:preview]; the retain count increments by 1 again. Does the autorelease pool release that reference for us later, or do we have to take care of releasing it? I am not sure, but I think autorelease should handle it, since we didn't explicitly retain it, so why should we worry about releasing it?
    4. After the [preview release]; statement, the count is 1. I don't need the UIImageView anymore in my program, so when and where should I release it so that the count becomes 0 and the memory gets deallocated? What will happen if I release it in the -(void)dealloc method?
    5. In the statement flag = [UIImage imageNamed:@"myImage.png"]; I haven't allocated any memory for flag, yet I can still use it in my program. Who allocates and deallocates its memory, or is flag just a reference pointing to the object returned by imageNamed:? If it is only a reference, do I need to release it?

    Thanks in advance.

    Read the article

  • ARC, worth it or not?

    - by MSK
    When I moved to Objective-C (iOS) from C++ (and a little Java) I had a hard time understanding memory management in iOS. But now all of this seems natural, and I know the retain, autorelease, copy and release conventions. After reading about ARC, I am wondering whether there are more benefits to using ARC, or whether it is just that you don't have to worry about memory management. Before moving to ARC I want to know how worthwhile the move is. Xcode has a "Convert to Objective-C ARC" menu item. Is the conversion really that simple (nothing to worry about)? Does it help in reducing my app's memory footprint, memory leaks, etc. (somehow)? Does it have much testing impact on my apps? What are the non-obvious advantages? Any disadvantages of moving to it?

    Read the article

  • Linux configurations that would affect Java memory usage?

    - by wmacura
    Hi, background: I have a set of Java background workers that I start as part of my webapp. I develop locally on Ubuntu 10.10 and deploy to an Ubuntu 10.04 LTS server (a Media Temple (ve) instance). Both are running the same JVM: Sun JVM 1.6.0_22-b04. As part of the initialization script, each worker is started with explicit Xmx, Xms, and XX:MaxPermGen settings. Yet somehow locally all 10 workers use 250 MB, while on the server they use more than 2.7 GB. I don't know how to begin to track this down. I thought the Ubuntu (and thus kernel) version might make a difference, but I tried an old 10.04 VM and it behaves as expected. I've noticed that the machine never seems to use memory for buffers or cache (according to htop), which seems a bit strange, but is perhaps normal for a server? Some info:

        (server)
        root@devel:/app/axir/target# uname -a
        Linux devel 2.6.18-028stab069.5 #1 SMP Tue May 18 17:26:16 MSD 2010 x86_64 GNU/Linux

        (local)
        wiktor@beastie:~$ uname -a
        Linux beastie 2.6.35-25-generic #44-Ubuntu SMP Fri Jan 21 17:40:44 UTC 2011 x86_64 GNU/Linux

    Comparing ps output (ps -eo "ppid,pid,cmd,rss,sz,vsz"):

        (local)
         PPID   PID CMD                           RSS     SZ    VSZ
         1588  1615 java -cp axir-distribution.  25484 234382 937528
         1615  1631 java -cp /home/wiktor/Code/  83472 163059 652236
         1615  1657 java -cp /home/wiktor/Code/  70624  89135 356540
         1615  1658 java -cp /home/wiktor/Code/  37652  77625 310500
         1615  1669 java -cp /home/wiktor/Code/  38096  77733 310932
         1615  1675 java -cp /home/wiktor/Code/  37420  61395 245580
         1615  1684 java -cp /home/wiktor/Code/  38000  77736 310944
         1615  1703 java -cp /home/wiktor/Code/  39180  78060 312240
         1615  1712 java -cp /home/wiktor/Code/  38488  93882 375528
         1615  1719 java -cp /home/wiktor/Code/  38312  77874 311496
         1615  1726 java -cp /home/wiktor/Code/  38656  77958 311832
         1615  1727 java -cp /home/wiktor/Code/  78016  89429 357716

        (server)
        22522 23560 java -cp axir-distribution.  24860 285196 1140784
        23560 23585 java -cp /app/axir/target/a 100764 161629  646516
        23560 23667 java -cp /app/axir/target/a  72408  92682  370728
        23560 23670 java -cp /app/axir/target/a  39948  97671  390684
        23560 23674 java -cp /app/axir/target/a  40140  81586  326344
        23560 23739 java -cp /app/axir/target/a  39688  81542  326168

    They look very similar. In fact, the question now is: why, if I add up the virtual memory usage on the server (3.2 GB), does it more closely reflect the 2.4 GB of memory used (according to free), yet locally the virtual memory usage adds up to a much more substantial 4.7 GB while only ~250 MB is actually used? It seems that perhaps memory isn't being shared as aggressively (if that's even possible). Thank you for your help, Wiktor

    Read the article

  • New 64-bit Linux system has regular processes (ps, grep, etc.) taking up way too much VIRT memory

    - by user42980
    We just moved from a 32-bit machine to a 64-bit machine, and we have quickly run out of memory, despite the new boxes having twice as much RAM as the old boxes. Running a simple ps command illustrates the problem.

    New machine:

        132 prod-Charlotte1-node1 ~/public_html/rearch/cgi-bin ps aux | grep ps
        root      293  0.0  0.0      0    0 ?      S<   May09   0:00 [kpsmoused]
        xamine   2267  1.0  0.0  63728  928 pts/3  R+   16:50   0:00 ps aux
        xamine   2268  0.0  0.0  61172  752 pts/3  S+   16:50   0:00 grep ps

    Old machine:

        132 prod-116431-node1:/home/xamine ps aux | grep ps
        xamine  23191  0.0  0.0   2332  768 pts/6  R+   15:41   0:00 ps aux
        xamine  23192  0.0  0.0   3668  692 pts/6  S+   15:41   0:00 grep ps

    Notice that the ps process is using 63 MB of virtual memory, vs. 2 MB on the old machine.

    New machine: Enterprise Linux Server release 5.4 (Carthage) / Red Hat Enterprise Linux Server release 5.4 (Tikanga)
    Old machine: Red Hat Enterprise Linux ES release 4 (Nahant Update 4)

    Thanks for any thoughts you have!
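
    One thing worth separating before treating this as a leak: VIRT/VSZ counts reserved address space, not RAM actually touched (that is RSS), and 64-bit builds routinely reserve far more address space up front. A tiny Linux-only C++ experiment that makes the distinction visible:

        #include <sys/mman.h>
        #include <unistd.h>
        #include <cstdio>

        int main() {
            // Reserve 1 GiB of address space without backing it with RAM.
            size_t len = 1ul << 30;
            void* p = mmap(nullptr, len, PROT_NONE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            // At this point VSZ has grown by ~1 GiB but RSS has not;
            // inspect with:  ps -o pid,vsz,rss -p <pid>
            std::printf("reserved 1 GiB at %p; pid=%d, check ps now\n", p, getpid());
            pause();               // keep the process alive for inspection
            munmap(p, len);
            return 0;
        }

    If the new boxes are actually running out of memory, RSS (or free) is the number to watch, not VIRT.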

    Read the article

  • Any dangers in using DDR memory with a higher frequency than the FSB?

    - by raw_noob
    I'm looking to upgrade the memory in an older motherboard. The processor is an AMD Sempron 2500+ with a maximum speed of 333/166 MHz. The motherboard is an MSI MS-7061 (KV3M-V), which accepts up to 2 GB of DDR memory (PC2700 maximum) in 2 slots and has a maximum FSB of 333 MHz. The board does not have dual-channel support. Existing memory includes a stick of 512 MB PC3200, which seems to be running OK (presumably at PC2700 speed) but is rated 200 MHz, which is below the FSB speed. The other stick is 256 MB PC2100/133 MHz, again below the FSB speed. (All figures from CPU-Z.) I have a chance to acquire a single used stick of PC3200/400 MHz memory very cheaply. Crucial's system scanner seems to suggest that this will be OK with my system, but other sites have suggested that running memory at a higher frequency than the FSB can cause instability. Is this true? Would I be better off waiting until I can buy the correct PC2700/333 MHz stick? I'm assuming that the mixed memory I have at present is running as 768 MB at 133 MHz. Is this a reasonable assumption? If so, would you expect the performance difference between 768 MB/133 MHz and 1 GB/333 MHz to be very noticeable? If I install the new 1 GB/400 or 333 MHz stick in slot 1, am I right in thinking that adding back the existing 512 MB/200 MHz stick in slot 2 would pull the whole 1.5 GB of system memory down to 200 MHz? If so, which would be better: 1.5 GB/200 MHz, or the single 1 GB stick at the full 333 MHz that the FSB permits? Is more headroom more important than extra speed? Any help, or even opinions, gratefully received. I can't find reliable information, and I can't afford to make expensive mistakes.

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process", etc. I'm slightly confused, since logs show that at the time the system has lots of free memory (around 26 GB in one case) and is not particularly stressed in any other way. After noting a JVM crash with a similar error, with the added query "Out of swap space?", I dug a little deeper. It turns out that someone has configured our zone with a 2 GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128 GB of RAM as it needs. Our SAs are planning to cap this at 32 GB when they get the chance. My current thinking is that while there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it reserves the swap space). Is this thinking right, or is there some other reason I get memory allocation errors with this large amount of memory free and a seemingly undersized swap space?

    Read the article

  • How much free memory should I have on my webserver?

    - by neanderslob
    I have a webserver that's currently hosting two Wordpress sites and some Java-based collaboration software. The server has 2 GB of memory and is currently using about 1.8 GB of it. Right now what's on here is pretty much a pilot project that's getting negligible traffic, so I think it's clear I'll be needing more memory. I was wondering how I might anticipate my memory needs based on the traffic the server gets, if I were to release it. I've poked around on Google, and what I've found has been a bit tenuous. Is there a good heuristic for calculating memory demands as a function of the base (no-traffic) load on the server? For reference, the output of free -m can be seen below:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1832        215          0          0          0
        -/+ buffers/cache:       1832        215
        Swap:            0          0          0

    To me this looks like actual memory used, not an illusion due to caching or anything else. I figure the demands of my collaboration software will have to be tested experimentally, so here's free -m without that software running:

                     total       used       free     shared    buffers     cached
        Mem:          2048       1109        938          0          0          0
        -/+ buffers/cache:       1109        938
        Swap:            0          0          0

    My plan B for figuring this out is to add a bunch of swap space to the server, give it some traffic, and adjust according to the amount of swap that gets used. I was just wondering if anyone has a good rule of thumb for estimating how much memory I should plan on in advance... or whether what I'm thinking is nuts. Many thanks in advance (I'm really quite new to this).

    Read the article

  • WPF 3.5 RenderTargetBitmap memory hog

    - by kingRauk
    I have a .NET 3.5 WPF application that uses RenderTargetBitmap. It eats memory like a big bear. It's a known problem in 3.5 that RenderTargetBitmap.Render leaks memory. I have found some proposed solutions, but they don't help:

    https://connect.microsoft.com/VisualStudio/feedback/details/489723/rendertargetbitmap-render-method-causes-a-memory-leak
    "Program takes too much memory"
    And more...

    Does anyone have any more ideas for solving it?

        static Image Method(FrameworkElement element, int width, int height)
        {
            const int dpi = 192;
            element.Width = width;
            element.Height = height;
            element.Arrange(new Rect(0, 0, width, height));
            element.UpdateLayout();
            if (element is Graph)
                (element as Graph).UpdateComponents();
            var bitmap = new RenderTargetBitmap((int)(width * dpi / 96.0),
                                                (int)(height * dpi / 96.0),
                                                dpi, dpi, PixelFormats.Pbgra32);
            bitmap.Render(element);
            var encoder = new PngBitmapEncoder();
            encoder.Frames.Add(BitmapFrame.Create(bitmap));
            using (var stream = new MemoryStream())
            {
                encoder.Save(stream);
                element.Clip = null;
                Dispose(element);
                bitmap.Freeze();
                DisposeRender(bitmap);
                bitmap.Clear();
                GC.Collect();
                GC.WaitForPendingFinalizers();
                return System.Drawing.Image.FromStream(stream);
            }
        }

        public static void Dispose(FrameworkElement element)
        {
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }

        public static void DisposeRender(RenderTargetBitmap bitmap)
        {
            if (bitmap != null)
                bitmap.Clear();
            bitmap = null;
            GC.Collect();
            GC.WaitForPendingFinalizers();
        }

    Read the article

  • Instantiating class with custom allocator in shared memory

    - by recipriversexclusion
    I'm pulling my hair out over the following problem: I am following the example given in the Boost.Interprocess documentation to instantiate, in shared memory, a fixed-size ring buffer class that I wrote. The skeleton constructor for my class is:

        template<typename ItemType, class Allocator>
        SharedMemoryBuffer<ItemType, Allocator>::SharedMemoryBuffer(unsigned long capacity) {
            m_capacity = capacity;
            // Create the buffer nodes.
            m_start_ptr = this->allocator->allocate();   // allocate first buffer node
            BufferNode* ptr = m_start_ptr;
            for (int i = 0; i < this->capacity() - 1; i++) {
                BufferNode* p = this->allocator->allocate();   // allocate a buffer node
            }
        }

    My first question: does this sort of allocation guarantee that the buffer nodes are allocated at contiguous memory locations, i.e. when I try to access the n-th node at address m_start_ptr + n*sizeof(BufferNode) in my Read() method, will that work? If not, what's a better way to keep the nodes? Creating a linked list?

    My test harness is the following:

        // Define an STL-compatible allocator of ints that allocates from the managed_shared_memory.
        // This allocator will allow placing containers in the segment.
        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;

        // Alias a buffer that uses the previous STL-like allocator so that it allocates
        // its values from the segment.
        typedef SharedMemoryBuffer<int, ShmemAllocator> MyBuf;

        int main(int argc, char *argv[])
        {
            shared_memory_object::remove("MySharedMemory");

            // Create a new segment with the given name and size
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            // Initialize a shared-memory STL-compatible allocator
            const ShmemAllocator alloc_inst(segment.get_segment_manager());

            // Construct a buffer named "MyBuffer" in shared memory with argument alloc_inst
            MyBuf *pBuf = segment.construct<MyBuf>("MyBuffer")(100, alloc_inst);
        }

    This gives me all kinds of compilation errors related to templates for the last statement. What am I doing wrong?
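
    For comparison, this is the canonical Boost.Interprocess pattern using the library's own vector; note that the argument list after construct<T>(name) must match a constructor that actually exists. Since the SharedMemoryBuffer constructor above takes only a capacity, passing (100, alloc_inst) has no matching constructor, which is one likely source of the template errors. A minimal sketch:

        #include <boost/interprocess/managed_shared_memory.hpp>
        #include <boost/interprocess/allocators/allocator.hpp>
        #include <boost/interprocess/containers/vector.hpp>

        using namespace boost::interprocess;

        typedef allocator<int, managed_shared_memory::segment_manager> ShmemAllocator;
        typedef vector<int, ShmemAllocator> ShmVector;

        int main() {
            shared_memory_object::remove("MySharedMemory");
            managed_shared_memory segment(create_only, "MySharedMemory", 65536);

            ShmemAllocator alloc_inst(segment.get_segment_manager());

            // The arguments after the name are forwarded to the constructor,
            // so they must match one that exists: vector(const allocator_type&).
            ShmVector* v = segment.construct<ShmVector>("MyVector")(alloc_inst);
            v->resize(100);   // elements live contiguously inside the shared segment
            return 0;
        }

    A container whose constructor should also take a capacity would need a matching (capacity, allocator) constructor in the class itself.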

    Read the article

  • Why is a pointer to a pointer needed to allocate memory in a function?

    - by skydoor
    Hi, I get a segmentation fault with the code below, but after I changed it to use a pointer to a pointer it was fine. Could anybody give me the reason?

        void memory(int * p, int size) {
            try {
                p = (int *) malloc(size*sizeof(int));
            } catch (exception& e) {
                cout << e.what() << endl;
            }
        }

    It does not work in the main function as below:

        int *p = 0;
        memory(p, 10);
        for (int i = 0; i < 10; i++)
            p[i] = i;

    However, it works like this:

        // pointer to pointer
        void memory(int ** p, int size) {
            try {
                *p = (int *) malloc(size*sizeof(int));
            } catch (exception& e) {
                cout << e.what() << endl;
            }
        }

        int main() {
            int *p = 0;
            memory(&p, 10);   // pass the address of the pointer
            for (int i = 0; i < 10; i++)
                p[i] = i;
            for (int i = 0; i < 10; i++)
                cout << *(p+i) << " ";
            return 0;
        }
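
    The reason is that arguments are passed by value: memory() receives a copy of p, stores the malloc result in that copy, and the copy dies when the function returns, so the caller's p is still 0 when it is dereferenced. Passing &p lets the function write through to the caller's pointer via *p. Since this code is effectively C++ (it uses try/cout), a reference to a pointer expresses the same thing more directly; a minimal sketch (note also that malloc reports failure by returning NULL rather than throwing, so the try/catch around it never fires):

        #include <cstdlib>
        #include <iostream>

        // 'p' is a reference to the caller's pointer, so assigning to it
        // is visible in the caller (same effect as the int** version).
        void memory(int*& p, int size) {
            p = static_cast<int*>(std::malloc(size * sizeof(int)));
        }

        int main() {
            int* p = nullptr;
            memory(p, 10);
            for (int i = 0; i < 10; i++) p[i] = i;
            for (int i = 0; i < 10; i++) std::cout << p[i] << " ";
            std::free(p);
            return 0;
        }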

    Read the article

  • When is memory actually freed?

    - by zhyk
    Hello all. I'm trying to understand memory management in Objective-C. If I look at the memory usage listed by Activity Monitor, it looks like memory is not being freed (I mean the rsize column). But in "Object Allocations" everything looks fine. Here is my simple code:

        #import <Foundation/Foundation.h>

        int main (int argc, const char * argv[]) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            NSInteger i, k = 10000;
            while (k > 0) {
                NSMutableArray *array = [[NSMutableArray alloc] init];
                for (i = 0; i < 1000 * k; i++) {
                    NSString *string = [[NSString alloc] initWithString:@"string...."];
                    [array addObject:string];
                    [string release];
                    string = nil;
                }
                [array release];
                array = nil;
                k -= 500;
            }
            [NSThread sleepForTimeInterval:5];
            [pool release];
            return 0;
        }

    As far as retain and release go, it's all fine; everything is balanced. But rsize decreases only after quitting this little program. Is it possible to "clean" memory somehow before quitting?

    Read the article

  • Shared Memory and Process Semaphores (IPC)

    - by fsdfa
    This is an extract from Advanced Linux Programming:

        Semaphores continue to exist even after all processes using them have
        terminated. The last process to use a semaphore set must explicitly remove
        it to ensure that the operating system does not run out of semaphores. To do
        so, invoke semctl with the semaphore identifier, the number of semaphores in
        the set, IPC_RMID as the third argument, and any union semun value as the
        fourth argument (which is ignored). The effective user ID of the calling
        process must match that of the semaphore's allocator (or the caller must be
        root). Unlike shared memory segments, removing a semaphore set causes Linux
        to deallocate immediately.

    If a process allocates shared memory, many processes use it, and none ever marks it for deletion (with shmctl), then when they all terminate the shared segment continues to be available (we can see this with ipcs). If some process did call shmctl, then when the last process detaches, the system deallocates the shared memory. So far so good (I guess; if not, correct me). What I don't understand from the quote is that first it says: "Semaphores continue to exist even after all processes using them have terminated," and then: "Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately."
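
    The two sentences describe different things: the first says a semaphore set persists as long as nobody removes it; the second says that when it is removed, removal takes effect immediately, with no "linger until the last user detaches" phase like a shared memory segment marked with shmctl(IPC_RMID) has. A minimal sketch of the calls involved (Linux, System V IPC):

        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>
        #include <cstdio>

        int main() {
            // Create a set containing 1 semaphore.
            int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
            if (semid < 0) { perror("semget"); return 1; }

            // ... use it via semop() ...

            // Explicit removal: takes effect immediately, unlike a shared memory
            // segment marked with shmctl(shmid, IPC_RMID, ...), which lingers
            // until the last attached process detaches.
            if (semctl(semid, 0, IPC_RMID) < 0) { perror("semctl"); return 1; }
            return 0;
        }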

    Read the article

  • Intel MKL memory management and exceptions

    - by Andrew
    Hello everyone, I am trying out Intel MKL, and it appears to have its own (C-style) memory management. The documentation suggests using its MKL_malloc/MKL_free pairs for vectors and matrices, and I do not know a good way to handle that. One of the reasons for the dedicated routines is that memory alignment of at least 16 bytes is recommended, and with these routines it is specified explicitly. I used to rely on auto_ptr and boost::smart_ptr a lot in order to forget about memory clean-up. How can I write an exception-safe program with MKL memory management, or should I just use regular auto_ptrs and not bother? Thanks in advance.

    EDIT: this link may explain why I brought up the question: http://software.intel.com/sites/products/documentation/hpc/mkl/win/index.htm

    UPDATE: I used an idea from the answer below for the allocator. This is what I have now:

        template <typename T, size_t TALIGN = 16, size_t TBLOCK = 4>
        class aligned_allocator : public std::allocator<T>
        {
        public:
            pointer allocate(size_type n, const void *hint)
            {
                pointer p = NULL;
                size_t count = sizeof(T) * n;
                size_t count_left = count % TBLOCK;
                if (count_left != 0)
                    count += TBLOCK - count_left;
                if (!hint)
                    p = reinterpret_cast<pointer>(MKL_malloc(count, TALIGN));
                else
                    p = reinterpret_cast<pointer>(MKL_realloc((void*)hint, count, TALIGN));
                return p;
            }

            void deallocate(pointer p, size_type n) { MKL_free(p); }
        };

    If anybody has any suggestions, feel free to make it better.
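
    For individual buffers (as opposed to STL containers), exception safety does not require a full allocator; a smart pointer whose deleter calls the MKL free routine is enough. A sketch using std::unique_ptr (C++11); the MKL_malloc/MKL_free declarations below follow the names and the 16-byte alignment used in the question and are assumptions, not copied from mkl.h (recent MKL spells them mkl_malloc/mkl_free):

        #include <memory>
        #include <new>
        #include <cstddef>

        // Assumed prototypes; in real code, include mkl.h instead.
        extern "C" {
            void* MKL_malloc(std::size_t size, int align);
            void  MKL_free(void* ptr);
        }

        struct MklDeleter {
            void operator()(void* p) const { if (p) MKL_free(p); }
        };

        template <typename T>
        std::unique_ptr<T[], MklDeleter> mkl_buffer(std::size_t n) {
            void* p = MKL_malloc(n * sizeof(T), 16);   // 16-byte alignment, per the question
            if (!p) throw std::bad_alloc();
            return std::unique_ptr<T[], MklDeleter>(static_cast<T*>(p));
        }

        int main() {
            auto v = mkl_buffer<double>(1024);   // freed via MKL_free on every exit
            v[0] = 3.14;                         // path, including exceptions
            return 0;
        }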

    Read the article

  • C: when to allocate and free memory (before the function call, after the function call, etc.)

    - by Keith P
    I am working on my first straight-C project, and it has been a while since I worked in C++ for that matter, so the whole memory-management picture is a bit fuzzy. I have a function that validates some input. In the simple sample below, it just ignores spaces:

        int validate_input(const char *input_line, char* out_value) {
            int ret_val = 0;  /* false */
            int length = strlen(input_line);
            cout << "length = " << length << "\n";
            out_value = (char*) malloc(sizeof(char) * length + 1);
            if (0 != length) {
                int number_found = 0;
                for (int x = 0; x < length; x++) {
                    if (input_line[x] != ' ') {  /* ignore space */
                        /* get the character */
                        out_value[number_found] = input_line[x];
                        number_found++;  /* increment counter */
                    }
                }
                out_value[number_found + 1] = '\0';
                ret_val = 1;
            }
            return ret_val;
        }

    Instead of allocating memory for out_value inside the function, should I do it before I call the function and always expect the caller to allocate memory before passing it in? As a rule of thumb, should any memory allocated inside a function always be freed before the function returns?
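
    Both conventions are common, as long as the ownership rule is explicit in the function's contract. One caveat with the sample above: assigning to out_value inside the function cannot be seen by the caller, because the pointer itself is passed by value (the same issue as in the pointer-to-pointer question earlier on this page). Two sketches of the usual shapes, with names of my own choosing:

        #include <cstdlib>
        #include <cstring>

        // Convention A: caller allocates, callee fills. The callee needs the
        // buffer size so it cannot overflow.
        int strip_spaces_into(const char* in, char* out, std::size_t out_size) {
            std::size_t n = 0;
            for (; *in; ++in)
                if (*in != ' ') {
                    if (n + 1 >= out_size) return 0;   // would not fit
                    out[n++] = *in;
                }
            out[n] = '\0';
            return 1;
        }

        // Convention B: callee allocates and returns ownership; the caller
        // must free() the result.
        char* strip_spaces_alloc(const char* in) {
            char* out = static_cast<char*>(std::malloc(std::strlen(in) + 1));
            if (!out) return nullptr;
            std::size_t n = 0;
            for (; *in; ++in)
                if (*in != ' ') out[n++] = *in;
            out[n] = '\0';
            return out;
        }

    As for the rule of thumb: memory allocated inside a function is freed before return only when it is private scratch space; memory handed back to the caller is, by definition, freed by the caller.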

    Read the article

  • Preallocating memory with C++ in a realtime environment

    - by Elazar Leibovich
    I have a function that gets an input buffer of n bytes and needs an auxiliary buffer of n bytes in order to process the given input buffer. (I know vector allocates memory at runtime; let's say I'm using a vector that uses statically preallocated memory. Imagine this is NOT an STL vector.) The usual approach is:

        void processData(vector<T> &vec) {
            vector<T> *aux = new vector<T>(vec.size());  // dynamically allocate memory
            // process data
        }

        // usage: processData(v)

    Since I'm working in a real-time environment, I wish to preallocate all the memory I'll ever need in advance; the buffer is allocated only once, at startup. I want that whenever I allocate a vector, the auxiliary buffer for my processData function is allocated automatically as well. I can do something similar with a template function:

        static void _processData(vector<T> &vec, vector<T> &aux) {
            // process data
        }

        template<size_t sz>
        void processData(vector<T> &vec) {
            static T aux_buffer[sz];
            vector<T> aux(vec.size(), aux_buffer);  // use aux_buffer for the vector
            _processData(vec, aux);
        }

        // usage: processData<V_MAX_SIZE>(v);

    However, working a lot with templates is not much fun (now let's recompile everything since I changed a comment!), and it forces me to do some bookkeeping whenever I use this function. Are there any nicer designs for this problem?
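
    One conventional alternative is to make the scratch space an explicit object that is allocated once at startup and passed (or stored) where it is needed, so no size template parameter leaks into every call site. A minimal sketch, assuming a worst-case size known at design time:

        #include <cstddef>
        #include <cassert>

        // A fixed arena, carved out once at startup; hands out scratch
        // space with no runtime allocation afterwards.
        class Scratch {
            unsigned char* base_;
            std::size_t cap_;
        public:
            Scratch(unsigned char* base, std::size_t cap) : base_(base), cap_(cap) {}
            template <typename T>
            T* get(std::size_t n) {
                assert(n * sizeof(T) <= cap_);   // worst case chosen up front
                return reinterpret_cast<T*>(base_);
            }
        };

        template <typename T>
        void processData(T* vec, std::size_t n, Scratch& scratch) {
            T* aux = scratch.get<T>(n);   // no allocation on the real-time path
            // ... process vec using aux ...
            (void)vec; (void)aux;
        }

        // Usage: allocate the worst case once, before real-time operation starts.
        alignas(16) static unsigned char pool[1 << 20];   // e.g. 1 MiB worst case
        int main() {
            Scratch scratch(pool, sizeof(pool));
            int data[256] = {};
            processData(data, 256, scratch);
            return 0;
        }

    The trade-off is that the scratch object has to be threaded through the call graph, but the worst-case size is then written down in exactly one place.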

    Read the article

  • How to optimize paging for a large in-memory database

    - by snakefoot
    I have an application where the entire database is implemented in memory, using an STL map for each table in the database. Each item in a map is a complex object with references to items in the other maps. The application works with a large amount of data, so it uses more than 500 MB of RAM. Clients can contact the application and get a filtered version of the entire database; this is done by running through the whole database and finding the items relevant to the client.

    When the application has been running for an hour or so, Windows 2003 SP2 starts to page out parts of its RAM, even though there is 16 GB of RAM on the machine. After the application has been partly paged out, a client logon takes a long time (10 minutes), because it now generates a page fault for each pointer lookup in the maps.

    I can see it is possible to tell Windows to lock memory in RAM, but this is generally only recommended for device drivers, and only for "small" amounts of memory. I guess a poor man's solution would be to loop through the entire in-memory database now and then, thereby telling Windows we are still interested in keeping the data model in RAM. Another poor man's solution would be to disable the page file completely. The expensive solution would be an SQL database, rewriting the entire application to use a database layer and hoping the database system implements means for fast access. Are there other, more elegant solutions?
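
    For the record, the "lock it in RAM" option mentioned above is VirtualLock on Windows, which pins pages into the process working set; it requires raising the working-set quota first and is indeed discouraged for regions this large, but the mechanism looks roughly like this (a sketch, with error handling trimmed):

        #include <windows.h>
        #include <cstdio>

        int main() {
            const SIZE_T size = 512ull << 20;   // the ~500 MB data model

            // VirtualLock can only pin what fits in the working set,
            // so grow the quota first (may require privileges).
            SetProcessWorkingSetSize(GetCurrentProcess(),
                                     size + (64 << 20), size + (128 << 20));

            void* p = VirtualAlloc(nullptr, size, MEM_COMMIT | MEM_RESERVE,
                                   PAGE_READWRITE);
            if (!p) { std::puts("VirtualAlloc failed"); return 1; }

            if (!VirtualLock(p, size))
                std::printf("VirtualLock failed: %lu\n", GetLastError());
            else
                std::puts("pages pinned; they will not be paged out");

            VirtualUnlock(p, size);
            VirtualFree(p, 0, MEM_RELEASE);
            return 0;
        }

    An allocator that places the maps' nodes inside one such locked region would be needed for this to help an STL-based data model; pinning memory the maps do not actually use achieves nothing.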

    Read the article
