Search Results

Search found 258446 results on 10338 pages for 'stack memory'.


  • free memory in Linux

    - by user32425
    Hi, I did free -tm on my system and got the output below. Is the buffers/cache amount part of the used memory, and can we therefore consider it free memory?

                     total       used       free     shared    buffers     cached
        Mem:          5721       5689         32          0        137       4664
        -/+ buffers/cache:        887       4834
        Swap:         6000         13       5987
        Total:       11722       5703       6019

    Thanks
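
    The -/+ buffers/cache row shows the arithmetic: used minus buffers minus cached (5689 - 137 - 4664 = 888, within rounding of the 887 shown) is what applications actually hold, and free plus buffers plus cached is what is effectively available, since the kernel reclaims cache on demand. A minimal C++ sketch of the same arithmetic, reading /proc/meminfo directly (field names as on a typical 2.6-era kernel; newer kernels also expose MemAvailable, which is more accurate):

        // Sketch: compute the "-/+ buffers/cache" numbers from /proc/meminfo.
        #include <fstream>
        #include <iostream>
        #include <map>
        #include <string>

        int main() {
            std::ifstream meminfo("/proc/meminfo");
            std::map<std::string, long> kb;
            std::string key, rest;
            long value;
            while (meminfo >> key >> value) {
                key.erase(key.find(':'));        // "MemTotal:" -> "MemTotal"
                kb[key] = value;                 // values are in kB
                std::getline(meminfo, rest);     // discard the trailing " kB"
            }
            long really_used = kb["MemTotal"] - kb["MemFree"]
                             - kb["Buffers"] - kb["Cached"];
            long really_free = kb["MemFree"] + kb["Buffers"] + kb["Cached"];
            std::cout << "-/+ buffers/cache: " << really_used / 1024
                      << " MB used, " << really_free / 1024 << " MB free\n";
        }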


  • Tomcat memory usage

    - by Adrian Mester
    I'm running Tomcat on an Ubuntu 10.04 VPS with 512 MB of RAM (1024 MB burstable). I'm using it for development, so performance isn't an issue, but memory is. Tomcat is currently using about 250 MB without any apps installed (I compared memory usage with Tomcat stopped and running), and I also need to run lighttpd and MySQL. Is there any way to get that number down? I don't need it to be able to handle a large number of requests at once.


  • How can I free memory on Linux

    - by user35153
    When I use top to see memory usage, I have 65 GB of RAM but only 1.3 GB of it is free; the rest is shown as used. When I run my program it gives an insufficient-memory error, even though no other program is using the remaining 63.7 GB; it is just held. How can I free the unused RAM?
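
    Most likely that 63.7 GB is page cache, which top counts as "used" but which the kernel reclaims automatically the moment a program asks for memory, so a genuine out-of-memory error usually points elsewhere (a ulimit, overcommit settings, or a 32-bit process). If you want to drop the caches anyway just to watch the free number rise, the kernel exposes a knob for it; a sketch (needs root, and is almost never necessary in practice):

        // Sketch: ask the kernel to drop clean page cache, dentries and inodes.
        // Equivalent to: sync; echo 3 > /proc/sys/vm/drop_caches   (as root)
        #include <fstream>
        #include <iostream>
        #include <unistd.h>

        int main() {
            sync();  // flush dirty pages first; drop_caches only drops clean ones
            std::ofstream knob("/proc/sys/vm/drop_caches");
            if (!knob) {
                std::cerr << "cannot open /proc/sys/vm/drop_caches (need root?)\n";
                return 1;
            }
            knob << 3 << "\n";  // 1 = page cache, 2 = dentries/inodes, 3 = both
        }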


  • Glassfish V3 using up all available memory

    - by Mannaz
    I have a virtual server with 1 GB of RAM. When I start GlassFish with asadmin start-domain it instantly allocates all available memory, although I defined -Xmx128m in my domain.xml. Am I missing an option here? How can I prevent GlassFish from using all free memory?


  • Do memory cards need to have the same capacity?

    - by balalakshmi
    I am not a hardware guy; I just heard this from a service engineer: "Memory cards of unequal capacities should not be used." That is, if there is a 1 GB card already in the slot, we need to add another 1 GB card only, not 512 MB or 2 GB. Is there a problem if we use memory cards of unequal capacities?


  • gdb stack strangeness

    - by aaa
    Hi, I get this weird backtrace (sometimes):

        (gdb) bt
        #0 0x00002b36465a5d4c in AY16_Loop_M16 () from /opt/intel/mkl/10.0.3.020/lib/em64t/libmkl_mc.so
        #1 0x00000000000021da in ?? ()
        #2 0x00000000000021da in ?? ()
        #3 0xbf3e9dec2f04aeff in ?? ()
        #4 0xbf480541bd29306a in ?? ()
        #5 0xbf3e6017955273e8 in ?? ()
        #6 0xbf442b937c2c1f37 in ?? ()
        #7 0x3f5580165832d744 in ?? ()
        ...

    Any ideas why I can't see the symbols? Compiled with debugging symbols, of course. The same session gives symbols at other points.


  • VMMap - awesome memory analysis tool

    VMMap is a process virtual and physical memory analysis utility. It shows a breakdown of a process's committed virtual memory types, as well as the amount of physical memory (working set) assigned by the operating system to those types. Besides graphical representations of memory usage, VMMap also shows summary information and a detailed process memory map. Powerful filtering and refresh capabilities allow you to identify the sources of process memory usage and the memory cost of application features. Besides flexible views for analyzing live processes, VMMap supports the export of data in multiple forms, including a native format that preserves all the information so that you can load it back in. It also includes command-line options that enable scripting scenarios. VMMap is the ideal tool for developers wanting to understand and optimize their application's memory resource usage.


  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes
    - Explain the characteristics of memory systems
    - Describe the memory hierarchy
    - Discuss cache memory principles
    - Discuss issues relevant to cache design
    - Describe the cache organization of the Pentium

    Computer Memory Systems

    The key characteristics of memory are:
    - Location – internal or external
    - Capacity – expressed in terms of bytes
    - Unit of transfer – the number of bits read out of or written into memory at a time
    - Access method – sequential, direct, random or associative

    From a user's perspective, the two most important characteristics of memory are:
    - Capacity
    - Performance – access time, memory cycle time, transfer rate

    The trade-off for memory happens along three axes:
    - Faster access time, greater cost per bit
    - Greater capacity, smaller cost per bit
    - Greater capacity, slower access time

    This leads to a tiered approach to memory. As one goes down the hierarchy, the following occur:
    1. Decreasing cost per bit
    2. Increasing capacity
    3. Increasing access time
    4. Decreasing frequency of access of the memory by the processor

    The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above.

    A portion of main memory can be used as a buffer to temporarily hold data that is to be read out to disk. This is sometimes referred to as a disk cache, and it improves performance in two ways:
    - Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement.
    - Some data designated for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk.

    Cache Memory Principles

    Cache memory is substantially faster than main memory. A caching system works as follows: when the processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied; if it is not, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache it is likely that there will be future references to that same memory location or to other words in the block.

    Elements of Cache Design

    While there are a large number of cache implementations, a few basic design elements serve to classify and differentiate cache architectures:
    - Cache addresses
    - Cache size
    - Mapping function
    - Replacement algorithm
    - Write policy
    - Line size
    - Number of caches

    Cache Addresses

    Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used, the designer may choose to place the cache between the MMU (memory management unit) and the processor, or between the MMU and main memory.
    The disadvantage of placing the cache on the processor (virtual-address) side of the MMU is that most virtual memory systems supply each application with the same virtual address space (each application sees virtual memory starting at address 0), which means the cache must either be completely flushed on each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to.

    Cache Size

    We would like the cache to be small enough that the overall average cost per bit is close to that of main memory alone, and large enough that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones.

    Mapping Function

    Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used:
    - Direct – the simplest technique; maps each block of main memory into only one possible cache line
    - Associative – permits each main memory block to be loaded into any line of the cache
    - Set associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages
    For detailed explanations of each approach, read the textbook (pages 148–154). A small worked example of the direct mapping appears at the end of this summary.

    Replacement Algorithm

    For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches:
    - LRU (least recently used)
    - FIFO (first in, first out)
    - LFU (least frequently used)
    - Random selection

    Write Policy

    When a block resident in the cache is to be replaced, there are two cases to consider:
    - If no writes to that block have happened in the cache, discard it.
    - If a write has occurred, the changes in the cache must be propagated back to main memory.
    There are several approaches to achieve this, including:
    - Write through – all writes to the cache are done to main memory as well, at the point of the change
    - Write back – when a block is replaced, all dirty lines are written back to main memory
    The problem is complicated when we have multiple caches; there are techniques to accommodate this, but I have not summarized them.

    Line Size

    When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease, however, as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information it replaced. Two specific effects come into play:
    - Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after it is fetched.
    - As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future.
    The relationship between block size and hit ratio is complex, and no single approach is judged to be the best in all circumstances.
    Pentium 4 and ARM cache organizations

    The Pentium 4 processor core consists of four major components:
    - Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes them into a series of micro-operations, and stores the results in the L1 instruction cache
    - Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future
    - Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers
    - Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss, and to access the system I/O resources
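
    To make the direct mapping concrete, here is a small illustrative sketch (my own, not from the textbook) that splits an address into tag, line and offset fields the way a direct-mapped cache would, using assumed parameters of a 16 KB cache with 32-byte lines:

        #include <cstdint>
        #include <cstdio>

        // Assumed parameters for illustration: 16 KB direct-mapped cache, 32-byte lines.
        constexpr uint32_t kLineSize   = 32;                     // bytes per line
        constexpr uint32_t kNumLines   = 16 * 1024 / kLineSize;  // 512 lines
        constexpr uint32_t kOffsetBits = 5;                      // log2(kLineSize)
        constexpr uint32_t kIndexBits  = 9;                      // log2(kNumLines)

        int main() {
            uint32_t addr   = 0x0001A2F4;
            uint32_t offset = addr & (kLineSize - 1);                   // byte within the line
            uint32_t index  = (addr >> kOffsetBits) & (kNumLines - 1);  // which cache line
            uint32_t tag    = addr >> (kOffsetBits + kIndexBits);       // identifies the block
            std::printf("addr 0x%08X -> tag 0x%X, line %u, offset %u\n",
                        addr, tag, index, offset);
            // A hit means the tag stored with line `index` equals tag(addr);
            // every block whose address has the same index competes for that one line.
        }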


  • Does 64bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) has a built-in limit of not being able to use the last 25% of RAM. You will get a low-memory warning when you get close to the limit, and even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but it is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you still have "merely" 2 GB left is absurd.

    NB: this has nothing to do with the page file. If you limit the page file to a sensible size, like 2 GB, you will still see this behavior: the system will cram the page file to the last byte while refusing to touch that quarter of the RAM.

    Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low-memory limit?

    Edit: Here is an older post which discusses this same behavior on Windows 7. There is a fixed 25% limit in Windows 7, and I'd like to know if it's still in Windows 8: Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory".

    Edit 2: Here is another link discussing the low-memory warning and how to disable it. Note he claims the limit for RAM usage is 80%, not 75%. That seems to be correct, as you can in fact allocate 6.4 GB of RAM on an 8 GB machine; anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/

    Edit 3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 will use the last of the memory only as the very last resort. I have the low-memory warning switched off here, so programs crashed at the allocation in the last screenshot; with the warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB.

    The question is not "is it OK that Windows does not want to allocate the last 20% of RAM because X", it's "does Windows 8 still behave this way". With 16 GB this really becomes dumb.
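
    One way to check the behaviour empirically on any given Windows build is to keep allocating and touching memory until allocation fails, then compare the total against physical RAM. A rough sketch (the chunk size is arbitrary, and the total includes page-file-backed commit, so run it with the page file disabled or read it alongside Task Manager's counters):

        #include <cstdio>
        #include <cstring>
        #include <new>
        #include <vector>

        int main() {
            const size_t chunk = 64 * 1024 * 1024;    // allocate in 64 MB steps
            std::vector<char*> blocks;
            size_t total = 0;
            for (;;) {
                char* p = new (std::nothrow) char[chunk];
                if (!p) break;                 // allocation refused by the OS
                std::memset(p, 1, chunk);      // touch the pages so they are really committed
                blocks.push_back(p);
                total += chunk;
                std::printf("committed %zu MB\r", total >> 20);
            }
            std::printf("\nallocation failed after %zu MB\n", total >> 20);
            for (char* p : blocks) delete[] p; // release everything before exit
        }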


  • Reducing memory for worker MPM in Apache

    - by ShyM
    I've moved from the prefork MPM to the worker MPM due to a process limit I was hitting on my VPS. However, memory usage increased after switching over, which is odd, since the worker MPM is supposed to have a smaller memory footprint. Most of the memory belongs to php-cgi processes. Is there something I'm doing wrong? I have around 20 sites on the server, each with a different FastCGI wrapper script. Could that be the reason?
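
    It plausibly could be: with FastCGI, the PHP interpreters live outside Apache, so the MPM choice barely affects them, and a separate wrapper per site means a separate pool of php-cgi children per site. A back-of-envelope estimate, with every number assumed purely for illustration (4 children per pool, 25 MB resident per php-cgi process):

        total PHP memory ≈ pools × children × RSS = 20 × 4 × 25 MB ≈ 2000 MB

    Counting the php-cgi processes per site and their actual resident sizes in top or ps would confirm whether the growth is there rather than in Apache itself.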


  • Where is the used memory in Task Manager & Resource Monitor coming from?

    - by Sam Adams
    On Windows 7, working set plus private memory does not add up to the total used memory shown in Task Manager and the Windows 7 Resource Monitor. How do you find out where the used memory is coming from? Cached memory can't account for the difference, because sometimes the total cache is greater than the total in use. Commit plus working set doesn't add up to the total in use either, and even if it did it wouldn't mean much, since commit is virtual.
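
    Part of the gap is memory that belongs to no process at all: kernel paged and nonpaged pools, driver allocations, and pages shared between processes (which per-process sums double-count or miss entirely). A sketch for pulling the system-wide counters programmatically and comparing them yourself (Win32; link with psapi.lib):

        #include <windows.h>
        #include <psapi.h>
        #include <cstdio>

        int main() {
            MEMORYSTATUSEX ms = { sizeof(ms) };   // dwLength must be set before the call
            GlobalMemoryStatusEx(&ms);
            std::printf("in use: %llu MB of %llu MB physical\n",
                        (unsigned long long)(ms.ullTotalPhys - ms.ullAvailPhys) >> 20,
                        (unsigned long long)ms.ullTotalPhys >> 20);

            PERFORMANCE_INFORMATION pi = { sizeof(pi) };
            if (GetPerformanceInfo(&pi, sizeof(pi))) {
                // these counters are in pages, so multiply by the page size
                std::printf("commit: %llu MB, system cache: %llu MB, kernel: %llu MB\n",
                            (unsigned long long)(pi.CommitTotal * pi.PageSize) >> 20,
                            (unsigned long long)(pi.SystemCache * pi.PageSize) >> 20,
                            (unsigned long long)(pi.KernelTotal * pi.PageSize) >> 20);
            }
        }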


  • Virtual Memory and SSD

    - by Zombian
    While studying for the A+ exam I was reading about SSDs, and I thought to myself that if you had a motherboard with a low RAM limit, you could use a dedicated SSD purely for virtual memory. I looked up some info online, and what I found said this was poor practice but didn't explain why. Why shouldn't SSDs be used for virtual memory, and what are your thoughts on a dedicated virtual-memory drive? Thank you!


  • SQL Server Memory Manager Changes in Denali

    - by SQLOS Team
    The next version of SQL Server will contain significant changes to the memory manager component, which has been rewritten for Denali. In previous versions of SQL Server there were two distinct memory managers: one handled allocation sizes of 8k or less, and another handled allocations greater than 8k. For Denali there will be one memory manager for all allocation sizes.

    The majority of the changes will be transparent to the end user. However, some changes will be visible, and these are listed below:
    - The 'max server memory' configuration option has new lower limits. Specifically, 32-bit versions of SQL Server will have a lower limit of 64 MB; the 64-bit versions will have a lower limit of 128 MB.
    - All memory allocations by SQL Server components will observe the 'max server memory' configuration option. In previous SQL versions only the 8k allocations were limited by the 'max server memory' configuration option; allocations larger than 8k weren't constrained.
    - DMVs which refer to memory manager internals have been modified. This includes adding or removing columns and changing column names.
    - The memory manager configuration messages in the error log have minor changes.
    - DBCC MEMORYSTATUS output has been changed.
    - Address Windowing Extensions (AWE) has been deprecated.

    In the next blog post I will discuss the changes to the memory manager DMVs in greater detail. In future blog posts I will discuss the other changes in greater detail.


  • Thread safety with heap-allocated memory

    - by incrediman
    I was reading this: http://en.wikipedia.org/wiki/Thread_safety

    Is the following function thread-safe?

        void foo(int y) {
            int* x = new int[50];
            /* ...do some stuff with the allocated memory... */
            delete[] x;   // note: new[] must be paired with delete[], not delete
        }

    In the article it says that to be thread-safe you can only use variables from the stack. Really? Why? Wouldn't subsequent calls of the above function allocate memory elsewhere?
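
    The allocation itself is safe in practice: operator new is thread-safe in conforming C++11 implementations, and each call returns a distinct block, so no other thread can reach x unless you publish the pointer. What the article is driving at is shared state, not heap versus stack. A small sketch of the distinction, using C++11 threads:

        #include <iostream>
        #include <thread>
        #include <vector>

        int shared_counter = 0;          // shared state: NOT safe to mutate unsynchronized

        void foo(int y) {
            int* x = new int[50];        // private to this call: thread-safe
            for (int i = 0; i < 50; ++i)
                x[i] = y + i;            // no other thread can see x
            // ++shared_counter;         // THIS would be the data race, heap or not
            delete[] x;
        }

        int main() {
            std::vector<std::thread> threads;
            for (int t = 0; t < 8; ++t)
                threads.emplace_back(foo, t);  // 8 concurrent calls, each with its own x
            for (auto& th : threads)
                th.join();
            std::cout << "done\n";
        }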


  • Android NDK Gaussian Blur radius stuck at 60

    - by rennoDeniro
    I implemented this NDK imeplementation of a Gaussian Blur, But I am having problems. I cannot increase the radius above 60, otherwise the activity just closes returning to a previous activity. No error message, nothing? Does anyone know why this could be? Note: This blur is based on the quasimondo implementation, here #include <jni.h> #include <string.h> #include <math.h> #include <stdio.h> #include <android/log.h> #include <android/bitmap.h> #define LOG_TAG "libbitmaputils" #define LOGI(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__) #define LOGE(...) __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__) typedef struct { uint8_t red; uint8_t green; uint8_t blue; uint8_t alpha; } rgba; JNIEXPORT void JNICALL Java_com_insert_your_package_ClassName_functionToBlur(JNIEnv* env, jobject obj, jobject bitmapIn, jobject bitmapOut, jint radius) { LOGI("Blurring bitmap..."); // Properties AndroidBitmapInfo infoIn; void* pixelsIn; AndroidBitmapInfo infoOut; void* pixelsOut; int ret; // Get image info if ((ret = AndroidBitmap_getInfo(env, bitmapIn, &infoIn)) < 0 || (ret = AndroidBitmap_getInfo(env, bitmapOut, &infoOut)) < 0) { LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret); return; } // Check image if (infoIn.format != ANDROID_BITMAP_FORMAT_RGBA_8888 || infoOut.format != ANDROID_BITMAP_FORMAT_RGBA_8888) { LOGE("Bitmap format is not RGBA_8888!"); LOGE("==> %d %d", infoIn.format, infoOut.format); return; } // Lock all images if ((ret = AndroidBitmap_lockPixels(env, bitmapIn, &pixelsIn)) < 0 || (ret = AndroidBitmap_lockPixels(env, bitmapOut, &pixelsOut)) < 0) { LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret); } int h = infoIn.height; int w = infoIn.width; LOGI("Image size is: %i %i", w, h); rgba* input = (rgba*) pixelsIn; rgba* output = (rgba*) pixelsOut; int wm = w - 1; int hm = h - 1; int wh = w * h; int whMax = max(w, h); int div = radius + radius + 1; int r[wh]; int g[wh]; int b[wh]; int rsum, gsum, bsum, x, y, i, yp, yi, yw; rgba p; int vmin[whMax]; int divsum = (div + 1) >> 1; divsum *= divsum; int dv[256 * divsum]; for (i = 0; i < 256 * divsum; i++) { dv[i] = (i / divsum); } yw = yi = 0; int stack[div][3]; int stackpointer; int stackstart; int rbs; int ir; int ip; int r1 = radius + 1; int routsum, goutsum, boutsum; int rinsum, ginsum, binsum; for (y = 0; y < h; y++) { rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0; for (i = -radius; i <= radius; i++) { p = input[yi + min(wm, max(i, 0))]; ir = i + radius; // same as sir stack[ir][0] = p.red; stack[ir][1] = p.green; stack[ir][2] = p.blue; rbs = r1 - abs(i); rsum += stack[ir][0] * rbs; gsum += stack[ir][1] * rbs; bsum += stack[ir][2] * rbs; if (i > 0) { rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; } else { routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; } } stackpointer = radius; for (x = 0; x < w; x++) { r[yi] = dv[rsum]; g[yi] = dv[gsum]; b[yi] = dv[bsum]; rsum -= routsum; gsum -= goutsum; bsum -= boutsum; stackstart = stackpointer - radius + div; ir = stackstart % div; // same as sir routsum -= stack[ir][0]; goutsum -= stack[ir][1]; boutsum -= stack[ir][2]; if (y == 0) { vmin[x] = min(x + radius + 1, wm); } p = input[yw + vmin[x]]; stack[ir][0] = p.red; stack[ir][1] = p.green; stack[ir][2] = p.blue; rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; rsum += rinsum; gsum += ginsum; bsum += binsum; stackpointer = (stackpointer + 1) % div; ir = (stackpointer) % div; // same as sir routsum += 
stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; rinsum -= stack[ir][0]; ginsum -= stack[ir][1]; binsum -= stack[ir][2]; yi++; } yw += w; } for (x = 0; x < w; x++) { rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0; yp = -radius * w; for (i = -radius; i <= radius; i++) { yi = max(0, yp) + x; ir = i + radius; // same as sir stack[ir][0] = r[yi]; stack[ir][1] = g[yi]; stack[ir][2] = b[yi]; rbs = r1 - abs(i); rsum += r[yi] * rbs; gsum += g[yi] * rbs; bsum += b[yi] * rbs; if (i > 0) { rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; } else { routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; } if (i < hm) { yp += w; } } yi = x; stackpointer = radius; for (y = 0; y < h; y++) { output[yi].red = dv[rsum]; output[yi].green = dv[gsum]; output[yi].blue = dv[bsum]; rsum -= routsum; gsum -= goutsum; bsum -= boutsum; stackstart = stackpointer - radius + div; ir = stackstart % div; // same as sir routsum -= stack[ir][0]; goutsum -= stack[ir][1]; boutsum -= stack[ir][2]; if (x == 0) vmin[y] = min(y + r1, hm) * w; ip = x + vmin[y]; stack[ir][0] = r[ip]; stack[ir][1] = g[ip]; stack[ir][2] = b[ip]; rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; rsum += rinsum; gsum += ginsum; bsum += binsum; stackpointer = (stackpointer + 1) % div; ir = stackpointer; // same as sir routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; rinsum -= stack[ir][0]; ginsum -= stack[ir][1]; binsum -= stack[ir][2]; yi += w; } } // Unlocks everything AndroidBitmap_unlockPixels(env, bitmapIn); AndroidBitmap_unlockPixels(env, bitmapOut); LOGI ("Bitmap blurred."); } int min(int a, int b) { return a > b ? b : a; } int max(int a, int b) { return a > b ? a : b; }
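
    An activity that dies with no exception usually means the native code crashed; logcat should show a "Fatal signal 11 (SIGSEGV)" line. A likely culprit here is stack overflow rather than the blur logic itself: the function builds its scratch buffers as variable-length arrays on the thread stack, and dv alone holds 256 * divsum ints, which grows quadratically with the radius. At radius 60, divsum = 61 * 61 = 3721, so dv is 256 * 3721 * 4 bytes, roughly 3.6 MB; r, g and b add wh ints each on top of that, which easily exceeds a typical Android thread stack. A hedged sketch of the fix, assuming that diagnosis, is to move the large buffers to the heap:

        // Sketch: heap-allocate the radius- and image-sized scratch buffers
        // instead of declaring them as VLAs (names follow the question's code;
        // error handling elided for brevity).
        #include <cstdlib>

        void blur_scratch(int w, int h, int radius) {
            int wh = w * h;
            int div = radius + radius + 1;
            int half = (div + 1) >> 1;
            int divsum = half * half;

            // was: int r[wh], g[wh], b[wh]; int dv[256 * divsum]; int stack[div][3];
            int* r  = (int*)malloc(sizeof(int) * wh);
            int* g  = (int*)malloc(sizeof(int) * wh);
            int* b  = (int*)malloc(sizeof(int) * wh);
            int* dv = (int*)malloc(sizeof(int) * 256 * divsum);  // ~3.6 MB at radius 60
            int (*stack)[3] = (int (*)[3])malloc(sizeof(int[3]) * div);

            /* ... run the two blur passes exactly as before ... */

            free(stack); free(dv); free(b); free(g); free(r);
        }

    The int vmin[whMax] array is comparatively small, but moving it to the heap as well costs nothing.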


  • Segment register, IP register and memory addressing issue!

    - by Zia ur Rahman
    In the following text I ask two questions, and I also describe what I already know about each, so that you can understand my thinking. Your comments on the text below are requested.

    Detail of the first question: As we know, if we have one megabyte of memory then we need 20 bits to address it; that is, each memory cell in a 1 MB memory has a 20-bit physical address. The IP register in the iAPX88 is 16 bits. Now, my point of view is that we cannot access the memory through the IP register at all, because the memory needs a 20-bit address but the IP register holds only 16 bits. If we had a memory of 64 KB, the IP register could access it, because that memory needs only 16 bits to be addressed; but in the case of 1 MB memory it can't. Tell me whether I am right, and if not, why. Suppose the physical address of a memory location is 11000000000000000101. How can we access this memory location with 16 bits?

    Detail of the next question: Suppose the IP register is pointing to a memory location, and the segment register is pointing to a memory location (the start of the segment), and the memory is 1 MB. How can we access a memory location through these two 16-bit registers? Tell me the sequence of steps by which the 20-bit memory address is accessed. If your answer is that we take the segment value, shift it left by 4 bits and then add the IP value to it to get the 20-bit address, then this raises another question about the address bus (which should be 20 bits wide): both the segment register and the IP register are 16 bits each, so if the address bus is 20 bits wide, does that mean the address bus is connected to both these registers? If that's not the case, another thing that comes to mind is that both these registers generate a 20-bit address, and there is a register which can store 20 bits, connected to both these registers and to the address bus as well.
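
    For what it's worth, the 8086/iAPX88 answer to the second question is exactly the shift-and-add: a dedicated 20-bit adder inside the bus interface unit computes physical = segment * 16 + offset, and only that adder's output drives the 20-bit address bus; neither 16-bit register is connected to the bus directly. A worked sketch of the arithmetic, using the very address from the first question:

        #include <cstdint>
        #include <cstdio>

        // 8086 real-mode address translation: physical = (segment << 4) + offset.
        uint32_t physical_address(uint16_t segment, uint16_t offset) {
            return (uint32_t(segment) << 4) + offset;  // 20-bit result
        }

        int main() {
            // CS = 0xC000, IP = 0x0005 -> 0xC0000 + 0x0005 = 0xC0005,
            // which is 1100 0000 0000 0000 0101 in binary: a 20-bit address
            // reached with two 16-bit registers.
            std::printf("0x%05X\n", physical_address(0xC000, 0x0005));
        }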


  • C# WPF application is using too much memory while GC.GetTotalMemory() is low

    - by Dmitry
    I wrote little WPF application with 2 threads - main thread is GUI thread and another thread is worker. App has one WPF form with some controls. There is a button, allowing to select directory. After selecting directory, application scans for .jpg files in that directory and checks if their thumbnails are in hashtable. if they are, it does nothing. else it's adding their full filenames to queue for worker. Worker is taking filenames from this queue, loading JPEG images (using WPF's JpegBitmapDecoder and BitmapFrame), making thumbnails of them (using WPF's TransformedBitmap) and adding them to hashtable. Everything works fine, except memory consumption by this application when making thumbnails for big images (like 5000x5000 pixels). I've added textboxes on my form to show memory consumption (GC.GetTotalMemory() and Process.GetCurrentProcess().PrivateMemorySize64) and was very surprised, cuz GC.GetTotalMemory() stays close to 1-2 Mbytes, while private memory size constantly grows, especially when loading new image (~ +100Mb per image). Even after loading all images, making thumbnails of them and freeing original images, private memory size stays at ~700-800Mbytes. My VirtualBox is limited to 512Mb of physical memory and Windows in VirtualBox starts to swap alot to handle this huge memory consumption. I guess I'm doing something wrong, but I don't know how to investigate this problem, cuz according to GC, allocated memory size is very low. Attaching code of thumbnail loader class: class ThumbnailLoader { Hashtable thumbnails; Queue<string> taskqueue; EventWaitHandle wh; Thread[] workers; bool stop; object locker; int width, height, processed, added; public ThumbnailLoader() { int workercount,i; wh = new AutoResetEvent(false); thumbnails = new Hashtable(); taskqueue = new Queue<string>(); stop = false; locker = new object(); width = height = 64; processed = added = 0; workercount = Environment.ProcessorCount; workers=new Thread[workercount]; for (i = 0; i < workercount; i++) { workers[i] = new Thread(Worker); workers[i].IsBackground = true; workers[i].Priority = ThreadPriority.Highest; workers[i].Start(); } } public void SetThumbnailSize(int twidth, int theight) { width = twidth; height = theight; if (thumbnails.Count!=0) AddTask("#resethash"); } public void GetProgress(out int Added, out int Processed) { Added = added; Processed = processed; } private void AddTask(string filename) { lock(locker) { taskqueue.Enqueue(filename); wh.Set(); added++; } } private string NextTask() { lock(locker) { if (taskqueue.Count == 0) return null; else { processed++; return taskqueue.Dequeue(); } } } public static string FileNameToHash(string s) { return FormsAuthentication.HashPasswordForStoringInConfigFile(s, "MD5"); } public bool GetThumbnail(string filename,out BitmapFrame thumbnail) { string hash; hash = FileNameToHash(filename); if (thumbnails.ContainsKey(hash)) { thumbnail=(BitmapFrame)thumbnails[hash]; return true; } AddTask(filename); thumbnail = null; return false; } private BitmapFrame LoadThumbnail(string filename) { FileStream fs; JpegBitmapDecoder bd; BitmapFrame oldbf, bf; TransformedBitmap tb; double scale, dx, dy; fs = new FileStream(filename, FileMode.Open); bd = new JpegBitmapDecoder(fs, BitmapCreateOptions.None, BitmapCacheOption.OnLoad); oldbf = bd.Frames[0]; dx = (double)oldbf.Width / width; dy = (double)oldbf.Height / height; if (dx > dy) scale = 1 / dx; else scale = 1 / dy; tb = new TransformedBitmap(oldbf, new ScaleTransform(scale, scale)); bf = BitmapFrame.Create(tb); fs.Close(); 
oldbf = null; bd = null; GC.Collect(); return bf; } public void Dispose() { lock(locker) { stop = true; } AddTask(null); foreach (Thread worker in workers) { worker.Join(); } wh.Close(); } private void Worker() { string curtask,hash; while (!stop) { curtask = NextTask(); if (curtask == null) wh.WaitOne(); else { if (curtask == "#resethash") thumbnails.Clear(); else { hash = FileNameToHash(curtask); try { thumbnails[hash] = LoadThumbnail(curtask); } catch { thumbnails[hash] = null; } } } } } }


  • iPhone - memory leaks in separate thread

    - by Brodie4598
    I create a second thread to call a method that downloads several images, using: [NSThread detachNewThreadSelector:@selector(downloadImages) toTarget:self withObject:nil]; It works fine, but I get a long list of leaks in the log similar to:

        2010-04-18 00:48:12.287 FS Companion[11074:650f] * _NSAutoreleaseNoPool(): Object 0xbec2640 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0xa58af 0xdb452 0x5e973 0x5e770 0x11d029 0x517fa 0x51708 0x85f2 0x3047d 0x30004 0x99481fbd 0x99481e42)
        2010-04-18 00:48:12.288 FS Companion[11074:650f] * _NSAutoreleaseNoPool(): Object 0xbe01510 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0xa58af 0xdb452 0x5e7a6 0x11d029 0x517fa 0x51708 0x85f2 0x3047d 0x30004 0x99481fbd 0x99481e42)
        2010-04-18 00:48:12.289 FS Companion[11074:650f] * _NSAutoreleaseNoPool(): Object 0xbde6720 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0xa58af 0xdb452 0x5ea73 0x5e7c2 0x11d029 0x517fa 0x51708 0x85f2 0x3047d 0x30004 0x99481fbd 0x99481e42)

    Can someone help me understand the problem?


  • How to compare memory bits in C++?

    - by Trunet
    Hi, I need help with a memory bit comparison function. I bought a LED Matrix here with 4 x HT1632C chips and I'm using it on my arduino mega2560. There're no code available for this chipset(it's not the same as HT1632) and I'm writing on my own. I have a plot function that get x,y coordinates and a color and that pixel turn on. Only this is working perfectly. But I need more performance on my display so I tried to make a shadowRam variable that is a "copy" of my device memory. Before I plot anything on display it checks on shadowRam to see if it's really necessary to change that pixel. When I enabled this(getShadowRam) on plot function my display has some, just SOME(like 3 or 4 on entire display) ghost pixels(pixels that is not supposed to be turned on). If I just comment the prev_color if's on my plot function it works perfectly. Also, I'm cleaning my shadowRam array setting all matrix to zero. variables: #define BLACK 0 #define GREEN 1 #define RED 2 #define ORANGE 3 #define CHIP_MAX 8 byte shadowRam[63][CHIP_MAX-1] = {0}; getShadowRam function: byte HT1632C::getShadowRam(byte x, byte y) { byte addr, bitval, nChip; if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } bitval = 8>>(y&3); x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) { return ORANGE; } else if (shadowRam[addr][nChip-1] & bitval) { return GREEN; } else if (shadowRam[addr+32][nChip-1] & bitval) { return RED; } else { return BLACK; } } plot function: void HT1632C::plot (int x, int y, int color) { if (x<0 || x>X_MAX || y<0 || y>Y_MAX) return; if (color != BLACK && color != GREEN && color != RED && color != ORANGE) return; char addr, bitval; byte nChip; byte prev_color = HT1632C::getShadowRam(x,y); bitval = 8>>(y&3); if (x>=32) { nChip = 3 + x/16 + (y>7?2:0); } else { nChip = 1 + x/16 + (y>7?2:0); } x = x % 16; y = y % 8; addr = (x<<1) + (y>>2); switch(color) { case BLACK: if (prev_color != BLACK) { // compare with memory to only set if pixel is other color // clear the bit in both planes; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case GREEN: if (prev_color != GREEN) { // compare with memory to only set if pixel is other color // set the bit in the green plane and clear the bit in the red plane; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case RED: if (prev_color != RED) { // compare with memory to only set if pixel is other color // clear the bit in green plane and set the bit in the red plane; shadowRam[addr][nChip-1] &= ~bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; case ORANGE: if (prev_color != ORANGE) { // compare with memory to only set if pixel is other color // set the bit in both the green and red planes; shadowRam[addr][nChip-1] |= bitval; HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]); shadowRam[addr+32][nChip-1] |= bitval; HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]); } break; } } If helps: The datasheet of board I'm using. On page 7 has the memory mapping I'm using. Also, I have a video of display working.
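
    One thing worth checking before suspecting the comparison logic: the declared dimensions of shadowRam look one element too small for the indices the code generates. Since x % 16 gives 0-15, addr = (x << 1) + (y >> 2) can reach 31, so addr + 32 can reach 63; and nChip can reach 8, so nChip - 1 can reach 7. But byte shadowRam[63][CHIP_MAX-1] only allows indices 0-62 and 0-6, so the highest rows and the last chip read and write past the end of the array. Out-of-bounds writes landing in neighbouring cells would explain a handful of ghost pixels that appear only when the shadow-RAM path is enabled. A sketch of the corrected declaration, plus an optional checked accessor (assuming the index analysis above matches your panel layout):

        #include <cassert>
        #include <cstdint>

        typedef uint8_t byte;
        #define CHIP_MAX 8

        // addr+32 reaches 63 and nChip-1 reaches 7, so both dimensions need
        // one more element than the original declaration [63][CHIP_MAX-1].
        byte shadowRam[64][CHIP_MAX] = {0};

        // Optional: funnel every access through a checked accessor so an
        // out-of-range index fails loudly in testing instead of silently
        // corrupting a neighbouring cell.
        byte& shadowCell(int addr, int nChip) {
            assert(addr >= 0 && addr < 64);
            assert(nChip >= 1 && nChip <= CHIP_MAX);
            return shadowRam[addr][nChip - 1];
        }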


  • Lua metatable objects cannot be purged from memory?

    - by Prometheus3k
    Hi there, I'm using a proprietary platform that reported memory usage in realtime on screen. I decided to use a Class.lua I found on http://lua-users.org/wiki/SimpleLuaClasses However, I noticed memory issues when purging object created by this using a simple Account class. Specifically, I would start with say 146k of memory used, create 1000 objects of a class that just holds an integer instance variable and store each object into a table. The memory used is now 300k I would then exit, iterating through the table and setting each element in the table to nil. But would never get back the 146k, usually after this I am left using 210k or something similar. If I run the load sequence again during the same session, it does not exceed 300k so it is not a memory leak. I have tried creating 1000 integers in a table and setting these to nil, which does give me back 146k. In addition I've tried a simpler class file (Account2.lua) that doesn't rely on a class.lua. This still incurs memory fragmentation but not as much as the one that uses Class.lua Can anybody explain what is going on here? How can I purge these objects and get back the memory? here is the code --------Class.lua------ -- class.lua -- Compatible with Lua 5.1 (not 5.0). --http://lua-users.org/wiki/SimpleLuaClasses function class(base,ctor) local c = {} -- a new class instance if not ctor and type(base) == 'function' then ctor = base base = nil elseif type(base) == 'table' then -- our new class is a shallow copy of the base class! for i,v in pairs(base) do c[i] = v end c._base = base end -- the class will be the metatable for all its objects, -- and they will look up their methods in it. c.__index = c -- expose a ctor which can be called by () local mt = {} mt.__call = function(class_tbl,...) local obj = {} setmetatable(obj,c) if ctor then ctor(obj,...) else -- make sure that any stuff from the base class is initialized! if base and base.init then base.init(obj,...) end end return obj end c.init = ctor c.instanceOf = function(self,klass) local m = getmetatable(self) while m do if m == klass then return true end m = m._base end return false end setmetatable(c,mt) return c end --------Account.lua------ --Import Class template require 'class' local classname = "Account" --Declare class Constructor Account = class(function(acc,balance) --Instance variables declared here. if(balance ~= nil)then acc.balance = balance else --default value acc.balance = 2097 end acc.classname = classname end) --------Account2.lua------ local account2 = {} account2.classname = "unnamed" account2.balance = 2097 -----------Constructor 1 do local metatable = { __index = account2; } function Account2() return setmetatable({}, metatable); end end --------Main.lua------ require 'Account' require 'Account2' MAX_OBJ = 5000; test_value = 1000; Obj_Table = {}; MODE_ACC0 = 0 --integers MODE_ACC1 = 1 --Account MODE_ACC2 = 2 --Account2 TEST_MODE = MODE_ACC0; Lua_mem = ""; print("##1) collectgarbage('count'): " .. collectgarbage('count')); function Load() for i=1, MAX_OBJ do if(TEST_MODE == MODE_ACC0 )then table.insert(Obj_Table, test_value); elseif(TEST_MODE == MODE_ACC1 )then table.insert(Obj_Table, Account(test_value)); --Account.lua elseif(TEST_MODE == MODE_ACC2 )then table.insert(Obj_Table, Account2()); --Account2.lua Obj_Table[i].balance = test_value; end end print("##2) collectgarbage('count'): " .. 
collectgarbage('count')); end function Purge() --metatable purge if(TEST_MODE ~= MODE_ACC0)then --purge stage 0: print("set each elements metatable to nil") for i=1, MAX_OBJ do setmetatable(Obj_Table[i], nil); end end --purge stage 1: print("set table element to nil") for i=1, MAX_OBJ do Obj_Table[i] = nil; end --purge stage 2: print("start table.remove..."); for i=1, MAX_OBJ do table.remove(Obj_Table, i); end print("...end table.remove"); --purge stage 3: print("create new object_table {}"); Obj_Table= {}; --purge stage 4: print("collectgarbage('collect')"); collectgarbage('collect'); print("##3) collectgarbage('count'): " .. collectgarbage('count')); end --Loop callback function OnUpdate() collectgarbage('collect'); Lua_mem = collectgarbage('count'); end ------------------- --NOTE: --On start of game runs Load(), another runs Purge() --Update I've updated the code with suggestions from comments below, and will post my findings later today.


  • Quantifying the Performance of Garbage Collection vs. Explicit Memory Management

    - by EmbeddedProg
    I found this article here: Quantifying the Performance of Garbage Collection vs. Explicit Memory Management http://www.cs.umass.edu/~emery/pubs/gcvsmalloc.pdf In the conclusion section, it reads: Comparing runtime, space consumption, and virtual memory footprints over a range of benchmarks, we show that the runtime performance of the best-performing garbage collector is competitive with explicit memory management when given enough memory. In particular, when garbage collection has five times as much memory as required, its runtime performance matches or slightly exceeds that of explicit memory management. However, garbage collection’s performance degrades substantially when it must use smaller heaps. With three times as much memory, it runs 17% slower on average, and with twice as much memory, it runs 70% slower. Garbage collection also is more susceptible to paging when physical memory is scarce. In such conditions, all of the garbage collectors we examine here suffer order-of-magnitude performance penalties relative to explicit memory management. So, if my understanding is correct: if I have an app written in native C++ requiring 100 MB of memory, to achieve the same performance with a "managed" (i.e. garbage collector based) language (e.g. Java, C#), the app should require 5*100 MB = 500 MB? (And with 2*100 MB = 200 MB, the managed app would run 70% slower than the native app?) Do you know if current (i.e. latest Java VM's and .NET 4.0's) garbage collectors suffer the same problems described in the aforementioned article? Has the performance of modern garbage collectors improved? Thanks.


  • WmiPrvSE memory leak on Windows 2008 *R2*

    - by MichaelGG
    I've seen references on Windows 2008 to WmiPrvSE leaks, but nothing about Windows 2008 R2. We're running R2 on top of Hyper-V (2008). We are also running NSClient++ for monitoring from opsview. Over time, WmiPrvSE.exe starts to use a lot of memory, causing memory alert issues (less than 10% free). VM has 2GB, WmiPrvSE consumes up to 500-600MB before I kill it. Killing the process doesn't seem to have any negative effect; it starts up again and I haven't noticed any problems. But after a day or two, it's back in the same situation. Any ideas on what to do? Resource Monitor doesn't show any Disk or Network IO by WmiPrvSE.exe. Just slowly climbing private memory... Edited to add: We aren't running clustering, or Windows System Resource Manager. The only regular WMI user I can guess is NSClient++, but we don't seem to have this problem on other servers.

