Search Results

Search found 16794 results on 672 pages for 'memory usage'.

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need graphics-card-compatible bitmap formats. My questions:

    - What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)?
    - How should bitmaps be stored for high-performance algorithms, so that read/write times are fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)?
    - How should bitmaps be stored in applications that handle a lot of bitmap data, to minimize memory usage (JPEG, or a faster [de]compression algorithm)?

    Some possible methods:

    - Use a fixed, packed 24-bpp or 32-bpp int[] array and access pixels through pointers, with all pixels allocated in one contiguous memory chunk (could be 1-10 MB).
    - Use a form of "sparse" storage where each line of the bitmap is allocated separately, making memory easier to reuse and requiring smaller contiguous segments.
    - Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack them only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
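
    As a hedged illustration of the first "possible method" above (not from the original question), here is a minimal C++ sketch of a packed 32-bpp bitmap held in one contiguous buffer with direct pointer access; the names Bitmap32 and pixelAt are made up for the example.

        #include <cstdint>
        #include <cstddef>
        #include <vector>

        // Minimal packed 32-bpp bitmap: one contiguous allocation, pixels
        // addressed by pointer arithmetic. Illustrative sketch only.
        struct Bitmap32 {
            std::size_t width  = 0;
            std::size_t height = 0;
            std::vector<std::uint32_t> pixels;   // width * height packed pixels

            Bitmap32(std::size_t w, std::size_t h)
                : width(w), height(h), pixels(w * h, 0u) {}

            // Row-major access; no per-row padding is needed at 4 bytes/pixel.
            std::uint32_t* pixelAt(std::size_t x, std::size_t y) {
                return pixels.data() + y * width + x;
            }
        };

        int main() {
            Bitmap32 bmp(1920, 1080);                // ~8 MB in one chunk
            *bmp.pixelAt(10, 20) = 0xFF00FF00u;      // write one pixel
            return *bmp.pixelAt(10, 20) == 0xFF00FF00u ? 0 : 1;
        }

    At 4 bytes per pixel, rows stay naturally word-aligned, which is one reason packed 32-bpp buffers are a common default even when the alpha channel is unused.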

    Read the article

  • High memory utilization by sqlservr.exe process

    - by abdul samad
    When I look at Task Manager (Processes tab) or at the perfmon memory counters (SQLServer:Memory Manager: Target Server Memory and Total Server Memory), I see high memory utilization by the sqlservr.exe process: nearly 8 GB (Target Server Memory counter) and 7.95 GB (Total Server Memory). When I restart the MSSQLSERVER service it shoots back up to the same size. I am getting this issue quite frequently. Please help me identify why SQL Server is using so much memory, and how to find out which query, stored procedure, etc. is making SQL Server use that much memory. I am not using any triggers or cursors in my code. Thanks.

    Read the article

  • Why is my server using so much memory?

    - by Qasim
    I haven't even set up my website on my dedicated server, so I'm the only one using it at the moment, and yet my system info shows very high memory usage. I just had a bunch of security software installed today, so I'm wondering if that could be the reason: programs like DoS Deflate, CSF firewall, mod_security, SIM, Logwatch, etc. My server's details: CentOS, Intel Xeon X3220 processor, 2.39 GHz CPU speed, 4.00 MB cache, 2GB DDR2 RAM.

    Read the article

  • NPOT texture and video memory usage

    - by Eonil
    I read in this Q&A that an NPOT texture takes as much memory as the next POT-sized texture. That would mean it gives no benefit over a properly managed POT texture (maybe it's even worse, because NPOT should be slower!). Is this true? Does an NPOT texture take, and waste, the same memory as a POT texture? I am considering NPOT textures for post-processing, so if they give no memory-space benefit, using them is meaningless to me. The answer may be different for each platform; I am targeting mobile devices such as iPhone and Android. Does an NPOT texture take the same amount of memory on mobile GPUs?
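
    To make the memory question concrete, here is a small hedged C++ sketch (not from the question; nextPow2 is an illustrative helper) that computes the padding cost if a driver were to round an NPOT texture up to the next power-of-two size internally. Whether a given mobile GPU/driver actually does this is hardware-dependent, so treat the result as a worst case rather than a guarantee.

        #include <cstdint>
        #include <cstdio>

        // Round a dimension up to the next power of two (illustrative helper).
        static std::uint32_t nextPow2(std::uint32_t v) {
            std::uint32_t p = 1;
            while (p < v) p <<= 1;
            return p;
        }

        int main() {
            const std::uint32_t w = 800, h = 480, bpp = 4;   // e.g. an RGBA8 post-processing target
            const std::uint64_t npotBytes = std::uint64_t(w) * h * bpp;
            const std::uint64_t potBytes  = std::uint64_t(nextPow2(w)) * nextPow2(h) * bpp;

            // 800x480 would be padded to 1024x512 if the driver rounds up.
            std::printf("NPOT: %llu bytes, padded POT: %llu bytes, waste: %llu bytes\n",
                        (unsigned long long)npotBytes,
                        (unsigned long long)potBytes,
                        (unsigned long long)(potBytes - npotBytes));
            return 0;
        }

    For the 800x480 example this works out to roughly 1.5 MB versus 2 MB, i.e. about a third more memory if the padding really happens.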

    Read the article

  • Windows 7 100% Memory Usage (without any process listed as using that much memory)

    - by Paul Tarjan
    When I plug my external 2TB USB hard drive into my Windows 7 box, my RAM usage climbs up to all 4 gigs (but Task Manager shows that all processes are small) and the hard drive is churning like crazy. My CPU is only about 20% utilized. All I can think of is that there is a virus scanner or an indexer running like crazy. I've tried killing all virus scanners (AVG and Windows Security Essentials) and it still keeps going. My computer is completely unusable as everything is constantly swapping. I've left it on for 2 days now and it still hasn't finished whatever it was doing. Any ideas?

    Read the article

  • Clarification of the difference between PCI memory addressing and I/O addressing?

    - by KevinM
    Could someone please clarify the difference between memory and I/O addresses on the PCI/PCIe bus? I understand that I/O addresses are 32-bit, limited to the range 0 to 4GB, and do not map onto system memory (RAM), and that memory addresses are either 32-bit or 64-bit. I get the impression that memory addressing must map onto available RAM. Is this true? That is, if a PCI device wishes to transfer data to a memory address, must that address exist in actual system RAM (allocated during PCI configuration) rather than in virtual memory? So if a PCI device only needs to transfer a small amount of data at a time, where there is no advantage to putting it into RAM or using DMA, then I/O addressing is fine (e.g. a parallel port implemented on a PCI card). And why do I keep reading that PCI/PCIe I/O addressing is being deprecated in favour of memory addressing? Thanks!
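
    As a hedged sketch of what the two addressing styles look like from software (Linux/x86 only, needs root; the port number and the sysfs path are placeholders, not a real device): port I/O uses the CPU's in/out instructions on a small separate address space, while a memory BAR is mapped into the ordinary address space and accessed with normal loads and stores; in both cases the accesses go to the device, not to RAM.

        #include <sys/io.h>     // ioperm, inb, outb (x86 only)
        #include <sys/mman.h>   // mmap
        #include <fcntl.h>
        #include <unistd.h>
        #include <cstdint>
        #include <cstdio>

        int main() {
            // 1) I/O addressing: special in/out instructions on a small port space.
            const unsigned short ioPort = 0x378;            // placeholder port
            if (ioperm(ioPort, 4, 1) == 0) {
                outb(0xFF, ioPort);                         // write one byte to the device
                std::printf("status: 0x%02x\n", inb(ioPort + 1));
            }

            // 2) Memory addressing: map a memory BAR and use ordinary loads/stores
            //    (this is MMIO on the device, not system RAM).
            int fd = open("/sys/bus/pci/devices/0000:00:00.0/resource0", O_RDWR | O_SYNC);
            if (fd >= 0) {
                void* base = mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (base != MAP_FAILED) {
                    volatile std::uint32_t* regs = static_cast<volatile std::uint32_t*>(base);
                    std::printf("register 0: 0x%08x\n", regs[0]);   // 32-bit MMIO read
                    munmap(base, 4096);
                }
                close(fd);
            }
            return 0;
        }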

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in the SIZE column of top output. This appears to be causing real performance problems on the system; other processes are having trouble allocating memory. When I asked the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that doesn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM? The system is Solaris 10 x86 with 72GB RAM. The JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".

    Read the article

  • Does Firefox still have memory leaks?

    - by tom
    I've had my Firefox 3.6.3 browser consume up to 1.5GB of RAM after a few days of usage without closing it. This is even true when, after a few days of opening/closing tabs, there's only 1 tab open. Does Firefox still have memory leaks?

    Read the article

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging me out, with all my programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome, etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and gnome-system-monitor open as user and as root. I notice that my console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice will be helpful. In the meantime I have ordered 2 x 4 GB RAM sticks to try and just throw hardware at the issue.

    Read the article

  • Manual memory allocation and purity

    - by Eonil
    Languages like Haskell have a concept of purity: in a pure function, I can't mutate any global state. Haskell fully abstracts memory management, so memory allocation is not a problem there. But in languages that can handle memory directly, like C++, it's very ambiguous to me. In these languages, memory allocation is a visible mutation. But if I treat creating a new object as an impure action, then almost nothing can be pure, and the purity concept becomes almost useless. How should I handle purity in languages that expose memory as a visible global object?
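
    One common way to frame this (a hedged illustration, not an answer from the thread): treat allocation as compatible with purity as long as it is not observable from outside the function, i.e. the result depends only on the arguments, and any memory allocated either escapes only through the return value or is released before returning. A small C++ sketch, with the name runningSums made up for the example:

        #include <vector>

        // Observably pure despite allocating: the same input always gives the
        // same output, and the allocation never touches global state.
        std::vector<int> runningSums(const std::vector<int>& xs) {
            std::vector<int> out;        // allocation is an internal, local effect
            out.reserve(xs.size());
            int acc = 0;
            for (int x : xs) {
                acc += x;
                out.push_back(acc);
            }
            return out;                  // the only visible effect is the return value
        }

        int main() {
            return runningSums({1, 2, 3}) == std::vector<int>{1, 3, 6} ? 0 : 1;
        }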

    Read the article

  • Increasing JRE Memory Usage in Eclipse

    - by gMcizzLe
    I read in another question that you can increase the JRE memory allowance for an app through Window > Preferences in Eclipse, but I can't seem to find anything related to heap memory allocation. Editing the -Xms/-Xmx values in eclipse.ini doesn't help, since those are for Eclipse itself.

    Read the article

  • Data Structures usage and motivational aspects

    - by Aubergine
    Throughout my long student life I always wondered why there are so many data structures, yet so little apparent usage of most of them. That opinion didn't really change when I got a job. We have brilliant books on what they are and their complexities, but I never encounter resources that actually give a good hint of practical usage. I perfectly understand that I should look at the problem, analyse the required operations, and look for a data structure that does them efficiently. In practice, however, I never do that, not because of laziness, but because when it comes to work I prioritise time over self-development. Over time I thought that as I became a better developer I would automatically use more of them; that didn't happen at all, or maybe I just didn't notice.

    Then I found that my colleagues are usually in the same boat as me: knowing more or less three data structures, being totally happy about it, and refusing to discuss the matter further with me, going back to conversations about cool new languages, libraries that do the job for you, and the joy of working under scrumban, etc. I am stuck with ArrayLists, arrays and SortedMap, which, no matter what I do, always suffice, or which I can tweak to fulfil my task. Yes, it might be inefficient, but do we really have to care, if Intel keeps increasing performance year after year whether or not we improve our skills? Does a new Xeon or IBM machine really care what we use? What if I like to build things, but am not particularly excited about whether it is n log(n) or just n? Over twenty years processing power has increased enormously; does that give us the freedom not to be critical about which structure to use? On top of that, new, more optimised languages appear which support multiple cores more efficiently.

    To be more specific: I would like to find motivational material on complex, real areas/cases where data structures are used effectively. I would be really grateful if you would provide relevant resources. There is a similar question, but in the end the links again mostly describe the structures or give dumb examples (vehicles, students, or holy-grail quests; yes, very relevant), and people keep referring to "the scenario decides the data structure to use". I want to know these complex scenarios so that I can identify similarities to my own scenario and then use them: the complex scenarios where it really matters, and not necessarily ones of a quantitative nature. It seems that data structures' only concern is efficiency and nothing else; there seems to be no particular convenience for the developer in using one over another. (Only when I found scientific resources on why exactly simple carbohydrates are evil did I stop eating sugar and candy completely, replacing them with less harmful fruit. I hope you can see the analogy.)

    Read the article

  • Thread Local Memory for Scratch Memory

    - by Hassan Syed
    I am using Protocol Buffers and OpenSSL to generate HMACs and then CBC-encrypt the two fields to obfuscate session cookies, similar to Kerberos tokens. Protocol Buffers' API communicates with std::strings and has a buffer-caching mechanism; I exploit that caching, for successive calls in the same thread, by placing it in thread-local memory. Additionally, the OpenSSL HMAC and EVP CTXs are placed in the same thread-local structure (see this question for some detail on why I use thread-local memory and the massive speedup it enables even with a single thread). The generation and deserialization of these cookie strings, "my algorithms", use intermediary void *s and std::strings, and since Protocol Buffers has an internal memory-retention mechanism, I want the same characteristics for "my algorithms". So how do I implement a common scratch memory? I don't know much about the internal buffer of a std::string object; presumably I would need to grow it to the largest size ever encountered during the execution of "my algorithms". Thoughts?
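
    A minimal hedged sketch of one way to do this (assuming C++11 thread_local; ScratchBuffer, scratch() and buildCookie are illustrative names, not from the question): keep a per-thread std::string that only ever grows, hand out a reference to it, and clear it at the start of each use, so successive calls on the same thread reuse the same allocation.

        #include <string>
        #include <cstddef>

        // Per-thread, grow-only scratch buffer: capacity is retained between
        // calls, so after warm-up the hot path stops allocating.
        class ScratchBuffer {
        public:
            // Returns a cleared buffer with at least `hint` bytes of capacity.
            std::string& acquire(std::size_t hint = 0) {
                buf_.clear();                 // keeps capacity, drops contents
                if (buf_.capacity() < hint)
                    buf_.reserve(hint);       // grows towards the largest size seen
                return buf_;
            }
        private:
            std::string buf_;
        };

        // One instance per thread, constructed lazily on first use.
        inline ScratchBuffer& scratch() {
            thread_local ScratchBuffer s;
            return s;
        }

        // Example use inside a cookie (de)serialization routine:
        std::string& buildCookie(const std::string& payload) {
            std::string& out = scratch().acquire(payload.size() + 64);
            out.append("v1:");
            out.append(payload);              // serialize / HMAC / encrypt into `out` here
            return out;                       // valid until the next acquire() on this thread
        }

        int main() {
            return buildCookie("hello").size() == 8 ? 0 : 1;
        }

    The buffer can simply live alongside the HMAC/EVP contexts you already keep in the thread-local structure.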

    Read the article

  • Can memory be cleaned up?

    - by Tom
    I am working in Delphi 5 (with FastMM installed) on a Win32 project, and have recently been trying to drastically reduce the memory usage in this application. So far, I have cut the usage nearly in half, but noticed something when working on a separate task. When I minimized the application, the memory usage shrank from 45 megs down to 1 meg, which I attributed to it paging out to disk. When I restored it and resumed working, the memory went up only to 15 megs. As I continued working, the memory usage slowly went up again, and a minimize and restore flushed it back down to 15 megs. So to my thinking, when my code tells the system to release the memory, it is still being held on to according to Windows, and the actual garbage collection doesn't kick in until a lot later. Can anyone confirm/deny this sort of behavior? Is it possible to get the memory cleaned up programmatically? If I keep using the program without doing this manual flush, I get an out-of-memory error after a while, and would like to eliminate that. Thanks.
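
    For what it's worth, the drop you see on minimize is usually Windows trimming the process working set rather than memory actually being freed back by the allocator. A hedged sketch (plain Win32 C++ rather than Delphi; trimWorkingSet is a made-up name) of requesting the same trim programmatically:

        #include <windows.h>

        // Ask Windows to trim this process's working set, mimicking what happens
        // when the main window is minimized. This releases physical pages only;
        // committed virtual memory (what an out-of-memory error reflects) is
        // unchanged, so it will not fix a genuine leak.
        bool trimWorkingSet() {
            return SetProcessWorkingSetSize(GetCurrentProcess(),
                                            static_cast<SIZE_T>(-1),
                                            static_cast<SIZE_T>(-1)) != FALSE;
        }

        int main() {
            // ... allocate and free a lot of memory here ...
            return trimWorkingSet() ? 0 : 1;
        }

    The same API is callable from Delphi's Windows unit, but if the program eventually hits out-of-memory errors, the committed memory itself is what needs chasing (FastMM's leak reporting can help there).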

    Read the article

  • Reducing Oracle LOB Memory Use in PHP, or Paul's Lesson Applied to Oracle

    - by christopher.jones
    Paul Reinheimer's PHP memory pro tip shows how re-assigning a value to a variable doesn't release the original value until the new data is ready. With large data lengths, this unnecessarily increases the peak memory usage of the application. In Oracle you might come across this situation when dealing with LOBs. Here's an example that selects an entire LOB into PHP's memory. I see this being done all the time, not that that is an excuse to code in this style. The alternative is to remove OCI_RETURN_LOBS to return a LOB locator, which can be accessed chunk by chunk with LOB->read(). In this memory usage example, I threw some CLOB rows into a table. Each CLOB was about 1.5M. The fetching code looked like:

        $s = oci_parse($c, 'SELECT CLOBDATA FROM CTAB');
        oci_execute($s);
        echo "Start Current :" . memory_get_usage() . "\n";
        echo "Start Peak :" . memory_get_peak_usage() . "\n";
        while (($r = oci_fetch_array($s, OCI_RETURN_LOBS)) !== false) {
            echo "Current :" . memory_get_usage() . "\n";
            echo "Peak :" . memory_get_peak_usage() . "\n";
            // var_dump(substr($r['CLOBDATA'], 0, 10));  // do something with the LOB
            // unset($r);
        }
        echo "End Current :" . memory_get_usage() . "\n";
        echo "End Peak :" . memory_get_peak_usage() . "\n";

    Without the unset in the loop, $r retains the current data value while new data is fetched:

        Start Current : 345300
        Start Peak : 353676
        Current : 1908092
        Peak : 2958720
        Current : 1908092
        Peak : 4520972
        End Current : 345668
        End Peak : 4520972

    When I uncommented the unset line in the loop, PHP's peak memory usage was much lower:

        Start Current : 345376
        Start Peak : 353676
        Current : 1908168
        Peak : 2958796
        Current : 1908168
        Peak : 2959108
        End Current : 345744
        End Peak : 2959108

    Even if you are using LOB->read(), unsetting variables in this manner will reduce the PHP program's peak memory usage. With LOBs in Oracle DB there is also DB memory use to consider, and using LOB->free() is worthwhile for locators. Importantly, the OCI8 1.4.1 extension (from PECL, or included in PHP 5.3.2) has a LOB fix to free up Oracle's locators earlier. For long-running scripts using lots of LOBs, upgrading to OCI8 1.4.1 is recommended.

    Read the article

  • linux log memory hogging issue

    - by helpmhost
    Hi, we have a VPS server (using Virtuozzo). On a few occasions now, our VPS's memory has been fully used up and no new connections could be made to the server over SSH, SMTP, or POP. The only thing that still works is connecting to the web service. Luckily, Plesk is running on the VPS and we have been able to reboot it through Plesk (as well as see that the RAM is 100% used). I would like to find out which process is causing this. I have a feeling it's MySQL, but I don't really know. Is there some sort of logging I could implement that would help me find the cause the next time this happens? Thanks.

    Read the article

  • Reduce memory usage

    - by Flintoff
    I have just installed the standard default desktop configuration of Ubuntu 12.10 (Quantal Quetzal). My PC only has 1GB of RAM and is struggling a little. What steps can I take to reduce the memory overhead of the standard install? If it makes a difference, I use Firefox and a terminal most of the time. Simply running those two applications, I see:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:           938        873         64          0          5        167
        -/+ buffers/cache:        701        237
        Swap:          959        158        801

    Read the article

  • Memory usage in Flash / Flex / AS3

    - by ggambett
    I'm having some trouble with memory management in a Flash app. Memory usage grows quite a bit, and I've tracked it down to the way I load assets. I embed several raster images in a class Embedded, like this:

        [Embed(source="/home/gabriel/text_hard.jpg")]
        public static var ASSET_text_hard_DOT_jpg : Class;

    I then instance the assets this way:

        var pClass : Class = Embedded[sResource] as Class;
        return new pClass() as Bitmap;

    At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory. Based on this behavior, it looks like the Flash Player is creating an instance of the class the first time I request it, but never ever releases it: not when the references are gone, not after calling System.gc(), doing the double LocalConnection trick, or calling dispose() on the BitmapData objects. Of course, this is very undesirable; memory usage would grow until everything in the SWFs is instanced, regardless of whether I stopped using some asset long ago. Is my analysis correct? Can anything be done to fix this?

    Read the article

  • Memory issues - Living vs. overall -> app is killed

    - by D33
    I'm trying to check my application's memory issues in Instruments. When I load the application I play some sounds and show some animations in UIImageViews. To save some memory I load the sounds only when I need them, and when I stop playing them I free them from memory.

    Problem 1: My application is using about 5.5MB of living memory, BUT the overall section grows to 20MB after start and then keeps growing slowly (about 100kB/sec). The responsible libraries are OpenAL (OAL::Buffer) and dyld (_dyld_start), which I am not sure what it really is, and some other stuff like ft_mem_qrealloc, CGFontStrikeSetValue, ...

    Problem 2: When the overall section passes about 30MB, the application crashes (is killed). From what I have read about overall memory, that means all my allocations and deallocations together are about 30MB. But I don't really see the problem. When I need a sound, for example, I load it into memory, and when I don't need it anymore I release it. But that means that when I load a 1MB sound, the operation increases overall memory usage by 2MB. Am I right? And when I load 10 sounds, my app crashes just because my overall is too high, even though living is still low??? I am very confused about it. Could someone please help me clear it up? (I am on iOS 5 and using ARC.)

    Some code: creating the sound with OpenAL:

        MyOpenALSound *sound = [[MyOpenALSound alloc] initWithSoundFile:filename willRepeat:NO];
        if (!sound) return;
        [soundDictionary addObject:sound];

    playing:

        [sound play];
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW,
                       ((sound.duration * sound.pitch) + 0.1) * NSEC_PER_SEC),
                       dispatch_get_current_queue(), ^{
            [soundDictionary removeObjectForKey:[NSNumber numberWithInt:soundID]];
        });

    creating the sound with AVAudioPlayer:

        [musics replaceObjectAtIndex:ID_MUSIC_MAP
                          withObject:[[Music alloc] initWithFilename:@"mapMusic.mp3" andWillRepeat:YES]];
        pom = [musics objectAtIndex:musicID];
        [pom playMusic];

    and stopping and freeing it:

        [musics replaceObjectAtIndex:ID_MUSIC_MAP withObject:[NSNull null]];

    And the image animations: I load images from a big PNG file (this is related also to my other topic: Memory warning - UIImageView and its animations). I have a few UIImageViews and over time I set animation arrays on them to play animations:

        UIImage *source = [[UIImage alloc] initWithCGImage:[[UIImage imageNamed:@"imageSource.png"] CGImage]];
        cutRect = CGRectMake(0*dimForImg.width, 1*dimForImg.height, dimForImg.width, dimForImg.height);
        image1 = [[UIImage alloc] initWithCGImage:CGImageCreateWithImageInRect([source CGImage], cutRect)];
        cutRect = CGRectMake(1*dimForImg.width, 1*dimForImg.height, dimForImg.width, dimForImg.height);
        ...
        image12 = [[UIImage alloc] initWithCGImage:CGImageCreateWithImageInRect([source CGImage], cutRect)];
        NSArray *images = [[NSArray alloc] initWithObjects:image1, image2, image3, image4, image5, image6,
                           image7, image8, image9, image10, image11, image12, image12, image12, nil];

    and I simply use this array like: myUIImageView.animationImages = images, then set the duration and call startAnimating.

    Read the article

  • easy visualization of usage statistics (web app)

    - by sova
    I have some usage queries for my web app's database, the results of which I want to display graphically. Is there an easy-to-use API for this purpose? I want to show things like average query time per user (for a small user base), average query time per day, and things like that. I think it would be cool to show these on a two-axis graph. I am displaying this data on my site, so a jQuery/JavaScript/HTML solution for rendering information into graphs would be ideal. Thank you :) P.S. I wasn't sure if I should ask this on SO, but I am looking more for which product to use than how to program with it.

    Read the article

  • Question about server usage, big community platform

    - by Json
    I'm working on a community platform written in PHP and MySQL. I have some questions about server usage; maybe someone can help me out. The community is based on jQuery, with many AJAX requests to update content. It makes 5-10 AJAX (JSON, GET, POST) requests every 5 seconds; the requests fetch user data like notifications and messages by running MySQL queries. I wonder how a server will handle this when there are more than 5,000 users online. Then it will be 50,000 requests every 5 seconds; what kind of server do you need to handle that? Or maybe even more: with 15,000 users online, 150,000 requests every 5 seconds. My web server has the following specs: Xeon Quad, 2048MB RAM, 5000GB traffic. Will it be good enough, and for how many users? Can anyone help me out, or tell me where to find such information or how to make such a calculation?
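
    As a rough, hedged back-of-the-envelope check (assuming the polling behaviour described above and ignoring caching, keep-alive effects, and load balancing), the sustained request rates work out to:

        \[
        5000 \times \frac{10\ \text{requests}}{5\ \text{s}} = 10{,}000\ \text{requests/s},
        \qquad
        15000 \times \frac{10\ \text{requests}}{5\ \text{s}} = 30{,}000\ \text{requests/s}
        \]

    Per-user notification polling at that scale is usually what gets moved out of fresh MySQL queries and into an in-memory cache or a push/long-polling channel, long before the hardware question matters.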

    Read the article

  • What is the maximum memory a process (MySQL) can consume on a 32-bit OS?

    - by mmattax
    I have MySQL running on a 32-bit RHEL box. The server itself has 4GB total memory with 2GB allocated to MySQL. I would like to know the max amount of memory I can put in the box and how much of that I can allocate to MySQL. I have heard both 2GB and 4GB as the per-process-limit on a 32-bit OS... Ultimately I'd like to know if I can increase the memory for MySQL without upgrading to a 64-bit OS.
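
    For context, a hedged sketch of where the 2GB/4GB figures come from (the exact split is OS- and kernel-build-dependent, so treat it as an approximation rather than a hard rule):

        \[
        2^{32}\ \text{bytes} = 4\ \text{GiB of virtual address space per 32-bit process}
        \]

    The kernel reserves part of that address space for itself (commonly a 1GiB or 2GiB slice, leaving roughly 2-3GiB of user space), which is why a single 32-bit mysqld generally cannot use much more than 2-3GB no matter how much RAM is installed; RAM beyond 4GB mainly benefits the OS page cache unless you move to a 64-bit OS and MySQL build.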

    Read the article

  • Cached memory refers to both cached memory (that is currently usable) and used memory (that was previous cached)?

    - by Pacerier
    Hi all, I was trying to confirm my understanding of the "standby list" and "modified list" as described in this article. Is it true that "Cached memory" (as shown in the image below) refers both to memory that is currently cached (available for use) and to memory that was previously cached (previously available for use) but is currently used (now not available for use)? So if x = "Cached memory" (1184), y = "modified cache pages", and z = "cached and were modified", does x = y + z hold true?
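
    For reference, a hedged statement of the relationship these Windows memory lists are usually described with (labels vary slightly between Windows versions, so check it against the article in question):

        \[
        \text{Cached} \approx \text{Standby} + \text{Modified}
        \]

    That is, the cached figure is usually described as counting both standby pages (clean and immediately reusable) and modified pages (dirty, reusable only after being written back), while pages in a process's working set are counted as in use rather than cached.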

    Read the article
