Search Results

Search found 18119 results on 725 pages for 'shared memory'.

Page 4/725 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • iPhone - Memory Management - Using Leaks tool and getting some bizarre readings.

    - by Robert
    Hey all, putting the finishing touches on a project of mine so I figured I would run through it and see if and where I had any memory leaks. Found and fixed most of them but there are a couple of things regarding the memory leaks and object alloc that I am confused about. 1) There are 2 memory leaks that do not show me as responsible. There are 8 leaks attributed to AudioToolbox with the function being RegisterEmbeddedAudioCodecs(). This accounts for about 1.5 kb of leaks. The other one is detected immediately when the app begins. Core Graphics is responsible with the extra info being open_handle_to_dylib_path. For the audio leak I have looked over my audio code and to me it seems ok. self.musicPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:songFilePath] error:NULL]; [musicPlayer prepareToPlay]; [musicPlayer play] is called later on in a function. 2) Is it normal for there to be a spike in Object Allocation whenever a new view or controller is presented? My total memory usage is very, very low except for whenever I present a view controller. It spikes then immediately goes back down. I am guessing that this is just the phone handling all the information for switching or something. Blegh. Wall of text. Thanks in advance to anyone who helps! =)

    Read the article

  • Out of memory Problem

    - by Sunil
    I'm running a C++ program on Ubuntu 10.04 (32-bit system architecture). If I calculate the amount of memory that my program uses, it comes to 800MB. I have 4GB of RAM in place. But still, before the program even finishes, it throws an out-of-memory exception. Why is that happening? Is it because of the structure of the memory, implementation problems, or something else that could trigger this issue? I've seen this problem quite a number of times before but never understood the reason behind it. Have any of you handled this case before?
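
    As an aside, one common cause in exactly this situation (this is an assumption about the program, not a diagnosis) is 32-bit address-space exhaustion: a 32-bit process only gets roughly 2-3GB of virtual address space regardless of the 4GB of installed RAM, and a single large contiguous allocation, such as a growing std::vector, can fail well below the nominal usage because growth temporarily needs the old and the new block at once. A minimal sketch that demonstrates the effect:

      #include <cstdio>
      #include <new>
      #include <vector>

      int main() {
          std::vector<char> buf;
          try {
              // Each doubling needs a new contiguous block while the old one
              // still exists, so a 32-bit process can hit bad_alloc well below
              // the total amount of installed RAM.
              for (;;) buf.resize(buf.empty() ? 1 << 20 : buf.size() * 2);
          } catch (const std::bad_alloc&) {
              std::printf("allocation failed at %zu MB\n", buf.size() >> 20);
          }
          return 0;
      }

    If the real program's peak live data plus allocator overhead and fragmentation crosses the per-process limit, it will throw std::bad_alloc even though plenty of physical RAM remains free.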

    Read the article

  • Munin fills server memory

    - by danilo
    In the last few weeks it has happened several times that my vserver (Debian Lenny) ran out of RAM (500M) and therefore wasn't able to run Apache anymore. When looking at the processes with top, I saw many open munin-limits and munin-cron processes consuming most of the memory. My guess would be that sometimes Apache temporarily needs more memory, which prevents munin-cron from running, and if munin-cron isn't able to stop itself, it fills the memory until nothing is left. I don't know whether this guess is true, but maybe someone knows what the problem is and how to prevent it? If necessary I'll remove munin, but I'd prefer to keep it running.

    Read the article

  • How is dynamic memory allocation handled when extreme reliability is required?

    - by sharptooth
    Looks like dynamic memory allocation without garbage collection is a recipe for disaster: dangling pointers here, memory leaks there. It is very easy to plant an error that is hard to find and has severe consequences. How are these problems addressed when mission-critical programs are written? I mean, if I write a program that controls a spaceship like Voyager 1, one that has to run for years, and I leave even the smallest leak, that leak can accumulate and halt the program sooner or later, and when that happens it translates into epic fail. How is dynamic memory allocation handled when a program needs to be extremely reliable?
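
    One pattern that frequently comes up in answers to this kind of question (shown here only as an illustrative sketch, not as how Voyager's software actually works) is to reserve every object the program will ever need up front in a fixed pool, so nothing is allocated or freed at runtime and there is nothing left to leak:

      #include <array>
      #include <cstddef>

      // All storage for Telemetry records is reserved at startup; acquire()
      // hands out slots, release() returns them, and pool exhaustion is an
      // explicit, checkable condition rather than a slow leak.
      struct Telemetry { double timestamp; double value; };

      template <typename T, std::size_t N>
      class FixedPool {
          std::array<T, N> slots{};
          std::array<bool, N> used{};
      public:
          T* acquire() {
              for (std::size_t i = 0; i < N; ++i)
                  if (!used[i]) { used[i] = true; return &slots[i]; }
              return nullptr;                        // pool exhausted
          }
          void release(T* p) { used[static_cast<std::size_t>(p - slots.data())] = false; }
      };

      static FixedPool<Telemetry, 1024> g_pool;      // capacity fixed at design time

      int main() {
          Telemetry* t = g_pool.acquire();
          if (!t) return 1;                          // handled path, not an OOM surprise
          t->timestamp = 0.0; t->value = 42.0;
          g_pool.release(t);
          return 0;
      }

    The pool's capacity becomes a reviewable design number, which is what high-reliability coding standards tend to demand instead of unbounded heap use.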

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #053 – Final Post in Series

    - by Pinal Dave
    It has been a fantastic journey to write the Memory Lane series for an entire year. This series gave me the opportunity to go back and see what I have contributed to this blog throughout the last 7 years, and to witness how technology has grown over that time and how I have progressed in my career while writing this blog. The series was also a great experience for readers, as many joined during the last few years and were not sure what they had missed earlier. Let us continue with the final episode of the Memory Lane series. Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my most favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane.

    2007 – Get Current User – Get Logged In User: Here is the straight script which lists logged-in SQL Server users. Disable All Triggers on a Database – Disable All Triggers on All Servers: Question: How to disable all the triggers for a database? Additionally, how to disable all the triggers for all servers? For the answer, execute the script in the blog post. Importance of Master Database for SQL Server Startup: I have received the following questions many times; I will list them here and answer them together. What is the purpose of the master database? Should we back up the master database? Which database is a must-have database for SQL Server startup? Which default system databases are created when SQL Server 2005 is installed for the first time? What happens if the master database is corrupted? The answers to all of these questions are very much related.

    2008 – DECLARE Multiple Variables in One Statement: SQL Server is a great product and it has many features which are very unique to SQL Server. Declaring multiple variables in one statement is absolutely possible.

    2009 – How to Enable Index – How to Disable Index – Incorrect syntax near 'ENABLE': Many times I have seen the index disabled when there is a large update operation on the table. A bulk insert of a very large file into a table using SSIS is usually preceded by disabling the index and followed by enabling the index. I have seen many developers running the following query to disable the index.

    2010 – List of all the Views from Database: I received many emails saying that readers have hundreds of views and no longer have a clue how many of them have an index and how many do not. Some even asked me if there is any way to get a list of the views along with their index property. Here is the quick script which does exactly that; you can also include many other columns from the same view. Minimum Maximum Memory – Server Memory Options: I was recently reading about SQL Server memory options over here. One line really caught my attention: the minimum value allowed for the max server memory option. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647. The minimum amount of memory you can specify for max server memory is 16 megabytes (MB).

    2011 – Fundamentals of Columnstore Index: There are two kinds of storage in a database: row store and column store. Row store does exactly as the name suggests – it stores rows of data on a page – and column store stores all the data in a column on the same page. These columns are much easier to search: instead of a query scanning all the data in an entire row whether it is relevant or not, columnstore queries need only read a much smaller number of columns. How to Ignore Columnstore Index Usage in Query: In summary, the question in simple words is "How can we ignore using the columnstore index in selective queries?" Very interesting question. I can understand there may be cases when the columnstore index is not ideal and needs to be ignored. You can use the query hint IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX to ignore the columnstore index; the SQL Server engine will then use whichever other index is best.

    2012 – Storing Variable Values in Temporary Array or Temporary List: SQL Server does not support arrays or a dynamic-length storage mechanism like a list. There are some clever workarounds and a few extraordinary solutions, but not everybody can come up with such a solution; additionally, sometimes the requirements are simple enough that extraordinary coding is not required. Here is the simple case. Move Database Files MDF and LDF to Another Location: It is not common to keep the database in the same location where the OS is installed. Usually database files are on a SAN, a separate disk array, or SSDs, for performance and manageability reasons. The challenge comes up when a database was installed at a non-preferred default location and needs to be moved elsewhere. Here is a quick tutorial on how you can do it. UNION ALL and ORDER BY – How to Order Table Separately While Using UNION ALL: If your requirement is that the top and bottom queries of the UNION result set be sorted independently but returned in the same result set, you can add an additional static column and order by that column. Let us re-create the same scenario. Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video: http://www.youtube.com/watch?v=FVWIA-ACMNo

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Memory Usage for Databases on Linux

    - by Kyle Brandt
    With free output, what we generally care about for application memory usage is the amount of free memory on the -/+ buffers/cache line. What about database applications such as Oracle: is it important to have a good amount of cache and buffers available for the database to run well with all its I/O? If that makes any sense, how do you figure out just how much?

    Read the article

  • Boost Shared Pointers and Memory Management

    - by Izza
    I began using boost rather recently and am impressed by the functionality and APIs provided. When using boost::shared_ptr and checking the program with Valgrind, I found a considerable number of "still reachable" memory leaks. As per the Valgrind documentation, these are not a problem. However, since I used to use only the standard C++ library, I always made sure that any program I wrote was completely free of memory leaks. My question is: are these memory leaks something to worry about? I tried using reset(), however it only decrements the reference count and doesn't deallocate the memory. Can I safely ignore these, or is there any way to forcibly deallocate the memory allocated by boost::shared_ptr? Thank you.
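
    For what it's worth, a small sketch (not taken from the article, and assuming the Boost headers are available) showing that reset() does destroy the managed object as soon as the last reference is dropped, which is worth confirming before worrying about Valgrind's "still reachable" report:

      #include <boost/shared_ptr.hpp>
      #include <cstdio>

      struct Widget {
          ~Widget() { std::puts("Widget destroyed"); }
      };

      int main() {
          boost::shared_ptr<Widget> a(new Widget);
          boost::shared_ptr<Widget> b = a;               // use_count is now 2
          a.reset();                                     // drops only a's reference
          std::printf("use_count after first reset: %ld\n", b.use_count());
          b.reset();                                     // last reference gone: destructor runs here
          return 0;
      }

    "Still reachable" blocks are typically one-time allocations made by the runtime or by library internals that are still pointed to at exit, not objects whose last shared_ptr has already gone away.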

    Read the article

  • Embedded Linux: Memory Fragmentation

    - by waffleman
    In many embedded systems, memory fragmentation is a concern, particularly for software that runs for long periods of time (months, years, etc.). For many projects the solution is simply not to use dynamic memory allocation such as malloc/free and new/delete. Global memory is used whenever possible, and memory pools for types that are frequently allocated and deallocated are a good strategy for avoiding dynamic memory management. How is this addressed in Embedded Linux? I see many libraries use dynamic memory. Is there a mechanism the OS uses to prevent memory fragmentation? Does it clean up the heap periodically? Or should one avoid using these libraries in an embedded environment?

    Read the article

  • Shared Memory and Process Semaphores (IPC)

    - by fsdfa
    This is an extract from Advanced Linux Programming: Semaphores continue to exist even after all processes using them have terminated. The last process to use a semaphore set must explicitly remove it to ensure that the operating system does not run out of semaphores. To do so, invoke semctl with the semaphore identifier, the number of semaphores in the set, IPC_RMID as the third argument, and any union semun value as the fourth argument (which is ignored). The effective user ID of the calling process must match that of the semaphore's allocator (or the caller must be root). Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately. If a process allocates a shared memory segment, many processes use it, and none of them ever marks it for deletion (with shmctl), then even after they all terminate the shared page continues to be available (we can see this with ipcs). If some process did call shmctl, then when the last process detaches, the system deallocates the shared memory. So far so good (I guess; if not, correct me). What I don't understand from the quote is that first it says "Semaphores continue to exist even after all processes using them have terminated" and then "Unlike shared memory segments, removing a semaphore set causes Linux to deallocate immediately."
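
    For concreteness, here is a minimal sketch (mine, not the book's) of the two removal calls the passage describes, with the behavioural difference noted in the comments:

      #include <cstdio>
      #include <sys/ipc.h>
      #include <sys/sem.h>
      #include <sys/shm.h>

      // On Linux the caller must define union semun itself.
      union semun { int val; struct semid_ds* buf; unsigned short* array; };

      int main() {
          int semid = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
          int shmid = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
          if (semid == -1 || shmid == -1) { std::perror("semget/shmget"); return 1; }

          void* mem = shmat(shmid, nullptr, 0);        // attach the segment

          // Shared memory: IPC_RMID only *marks* the segment for destruction;
          // it is actually freed after the last process detaches.
          shmctl(shmid, IPC_RMID, nullptr);

          // Semaphores: IPC_RMID removes the set immediately; the fourth
          // argument is ignored for this command.
          union semun unused = {};
          semctl(semid, 0, IPC_RMID, unused);

          shmdt(mem);                                  // last detach: segment is freed now
          return 0;
      }

    That is the resolution of the apparent contradiction: semaphores outlive the processes that used them until someone calls semctl with IPC_RMID, and that call takes effect at once, whereas shmctl with IPC_RMID is deferred until the segment's attach count reaches zero.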

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)? How to store bitmaps for high-performance algorithms, such that read/write times are the fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)? How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)? Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.
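
    As an illustration of the first method in the list above (the names here are made up for the example), a packed 32-bpp bitmap kept in one contiguous allocation with simple row-major indexing:

      #include <cstddef>
      #include <cstdint>
      #include <vector>

      // One uint32_t per pixel (e.g. 8 bits each of R, G, B, A), no row padding,
      // all pixels in a single contiguous block addressed as y * width + x.
      struct Bitmap32 {
          int width = 0;
          int height = 0;
          std::vector<std::uint32_t> pixels;

          Bitmap32(int w, int h)
              : width(w), height(h), pixels(static_cast<std::size_t>(w) * h) {}

          std::uint32_t& at(int x, int y) {
              return pixels[static_cast<std::size_t>(y) * width + x];
          }
      };

      int main() {
          Bitmap32 bmp(1920, 1080);            // about 8 MB in one chunk
          bmp.at(10, 20) = 0xFF00FF00u;        // write one pixel
          return bmp.at(10, 20) == 0xFF00FF00u ? 0 : 1;
      }

    32-bpp keeps every pixel word-aligned, which is usually what the "fastest read/write" variant of the question ends up wanting, at the cost of one unused byte per pixel if alpha is not needed.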

    Read the article

  • Recommended formats to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX/OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions: What is the "usual" or "normal" way to store bitmaps in memory (in C++ engines/projects)? How to store bitmaps for high-performance algorithms, such that read/write times are the fastest (fixed array? with/without padding? 24-bpp or 32-bpp?)? How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage (JPEG? or a faster [de]compression algorithm?)? Some possible methods: Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB). Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments. Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used; delete the unpacked data if it's not used for 10 seconds.

    Read the article

  • Linux shared library that uses a shared library undefined symbol

    - by johnnycrash
    There are two shared libraries, liba.so and libb.so, and liba.so uses libb.so. All C files are compiled with -fPIC and linking uses -shared. When we call dlopen on liba.so it cannot find symbols in libb.so: we get the "undefined symbol" error. We can dlopen libb.so with no errors. We know that liba is finding libb, because we don't get a file-not-found error (we do get one when we delete libb.so). We tried -lutil with no luck. Any ideas? Oh yeah: gcc 4.1.2.
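
    If the cause is that liba.so was built without being linked against libb.so (an assumption; the post doesn't show the link line), the clean fix is to add -lb (with -L pointing at its directory) when creating liba.so, so the dependency is recorded in its DT_NEEDED entries. A workaround at dlopen time is to load libb.so first with RTLD_GLOBAL so its symbols are visible to objects loaded afterwards; a sketch:

      #include <dlfcn.h>
      #include <cstdio>

      int main() {
          // Load the dependency first and make its symbols globally visible.
          void* b = dlopen("./libb.so", RTLD_NOW | RTLD_GLOBAL);
          if (!b) { std::fprintf(stderr, "libb: %s\n", dlerror()); return 1; }

          // liba.so can now resolve the symbols it needs from libb.so.
          void* a = dlopen("./liba.so", RTLD_NOW);
          if (!a) { std::fprintf(stderr, "liba: %s\n", dlerror()); return 1; }

          std::puts("both libraries loaded");
          dlclose(a);
          dlclose(b);
          return 0;
      }

    (Link the loader with -ldl on older glibc; the library paths above are placeholders.)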

    Read the article

  • Finding source of leaking active memory on Mac OS Lion

    - by Tim Kemp
    My Activity Monitor shows 6GB of active RAM usage, yet the Real Memory column shows nothing like that amount (there's another screenful below that, all smaller). Backing that up, the output from this command (which sums up the memory usage of every running process): ps -axm -o "rss,comm" | awk 'BEGIN { s=0;}; {s=s+$1;}; END { printf("%.2f GB\n", (s/1024.0/1024));}' gives 4.09GB, so it looks to me like 2GB has leaked. I sometimes see much wider ranges, perhaps 2 or 3GB from the ps command and as much as 7 or 8GB of Active usage reported by Activity Monitor. I've tried quitting everything and logging my user out and back in again, but the Active usage is still far higher than the RAM reported by ps and by each process to Activity Monitor. This 2GB of active RAM is basically unrecoverable unless I reboot. Is there any way to a) detect what's leaking and b) get it back? Thanks

    Read the article

  • Memory limiting solutions for greedy applications that can crash OS?

    - by Hooked
    I use my computer for scientific programming. It has a healthy 8GB of RAM and 12GB of swap space. Often, as my problems have gotten larger, I exceed all of the available RAM. Rather than crashing (which would be preferable), Ubuntu starts pushing everything into swap, including Unity and any open terminals. If I don't catch a runaway program in time, there is nothing I can do but wait; it takes 4-5 minutes to switch to a command prompt (e.g. Ctrl-Alt-F2) where I can kill the offending process. Since my own stupidity is out of scope for this forum, how can I prevent Ubuntu from crashing via thrashing when I use up all of the available memory with a single offending program? At-home experiment*! Open a terminal, launch python and, if you have numpy installed, try this: >>> import numpy >>> [numpy.zeros((10**4, 10**4)) for _ in xrange(50)] * Warning: may have adverse effects; monitor the process via iotop or top to kill it in time. If not, I'll see you after your reboot.
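
    One common guard (offered only as a sketch; it isn't mentioned in the question) is to cap the virtual address space of the offending process with setrlimit, so a runaway allocation fails with an error instead of dragging the whole desktop into swap:

      #include <sys/resource.h>
      #include <cstdio>
      #include <new>
      #include <vector>

      int main() {
          // Cap this process's address space at ~2 GiB before doing any real work.
          rlimit lim{};
          lim.rlim_cur = lim.rlim_max = 2ULL * 1024 * 1024 * 1024;
          if (setrlimit(RLIMIT_AS, &lim) != 0) { std::perror("setrlimit"); return 1; }

          try {
              std::vector<std::vector<double>> hog;
              for (;;) hog.emplace_back(100000000);   // ~800 MB per iteration
          } catch (const std::bad_alloc&) {
              std::puts("allocation refused by the limit -- no thrashing");
          }
          return 0;
      }

    The shell equivalent is ulimit -v before launching the program, and the same idea applies to the numpy experiment above: the oversized allocation fails immediately (MemoryError) instead of being swapped out.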

    Read the article

  • error while loading shared libraries, file too short

    - by tommyk
    From one of my customers I got an application. When I try to run it I get the following error:
    error while loading shared libraries: ./libvtkWidgets.so.5.4: file too short
    In my project structure I see the following:
    -rwxrwxrwx 1 tomasz tomasz      20 2011-02-01 10:44 libvtkWidgets.so
    -rwxrwxrwx 1 tomasz tomasz      22 2011-02-01 10:44 libvtkWidgets.so.5.4
    -rwxrwxrwx 1 tomasz tomasz 2147103 2011-02-01 10:44 libvtkWidgets.so.5.4.2
    Is my shared library libvtkWidgets corrupted? How do I solve this error?

    Read the article

  • shotwell 0.12 shared library error

    - by blade19899
    I installed Shotwell 0.12 via its official PPA like so:
    sudo add-apt-repository ppa:yorba/ppa
    sudo apt-get update
    sudo apt-get install shotwell
    When I tried to run it via the Dash it didn't start. I then typed "shotwell" into the gnome-terminal and got this error:
    error while loading shared libraries: libgexiv2.so.0: cannot open shared object file: No such file or directory
    My question is how to get Shotwell 0.12 up and running on Ubuntu 11.10 amd64.

    Read the article

  • Monitor and Control Memory Usage in Google Chrome

    - by Asian Angel
    Do you want to know just how much memory Google Chrome and any installed extensions are using at a given moment? With just a few clicks you can see just what is going on under the hood of your browser.

    How Much Memory are the Extensions Using? Here is our test browser with a new tab and the Extensions Page open, five enabled extensions, and one disabled at the moment. You can access Chrome's Task Manager using the Page Menu, going to Developer, and selecting Task manager…, or by right-clicking on the Tab Bar and selecting Task manager. There is also a keyboard shortcut (Shift + Esc) available for the "keyboard ninjas". Sitting idle as shown above, here are the stats for our test browser. All of the extensions are sitting there eating memory even though some of them are not available/active for use on our new tab and Extensions Page. Not so good… If the default layout is not to your liking, you can easily modify the information that is shown by right-clicking and adding/removing extra columns as desired. For our example we added Shared Memory & Private Memory.

    Using the about:memory Page to View Memory Usage. Want even more detail? Type about:memory into the Address Bar and press Enter. Note: You can also access this page by clicking on the Stats for nerds link in the lower left corner of the Task Manager window. Focusing on the four distinct areas, you can see the exact version of Chrome that is currently installed on your system, view the Memory & Virtual Memory statistics for Chrome (if you have other browsers running at the same time you can view statistics for them here too), see a list of the processes currently running, and view the Memory & Virtual Memory statistics for those processes.

    The Difference with the Extensions Disabled. Just for fun we decided to disable all of the extensions in our test browser. The Task Manager window is looking rather empty now, but the memory consumption has definitely seen an improvement.

    Comparing Memory Usage for Two Extensions with Similar Functions. For our next step we decided to compare the memory usage of two extensions with similar functionality. This can be helpful if you want to keep memory consumption trimmed down as much as possible when deciding between similar extensions. First up was Speed Dial (see our review here); the stats for Speed Dial were quite a change from what was shown above (~3,000 – 6,000 K). Next up was Incredible StartPage (see our review here). Surprisingly, both were nearly identical in the amount of memory being used.

    Purging Memory. Perhaps you like the idea of being able to "purge" some of that excess memory consumption. With a simple command switch modification to Chrome's shortcut(s) you can add a Purge Memory button to the Task Manager window as shown below. Notice the amount of memory being consumed at the moment… Note: The tutorial for adding the command switch can be found here. One quick click and there is a noticeable drop in memory consumption.

    Conclusion. We hope that our examples here will prove useful to you in managing the memory consumption in your own Google Chrome installation. If you have a computer with limited resources, every little bit definitely helps out.

    Read the article

  • How to compare Shared versus VPS hosting? [closed]

    - by Itai
    Possible Duplicate: How to find web hosting that meets my requirements? While shopping around for a new hosting service, I have found that I have no idea how to decide between shared hosting (which I presently use for all my sites) and virtual (VPS) hosting, which is always much more expensive. The real question is: how do I determine when shared hosting is no longer an option for a site? PS: This question covers some similar ground but is too specific for my needs.

    Read the article

  • Clarification of the difference between PCI memory addressing and I/O addressing?

    - by KevinM
    Could someone please clarify the difference between memory and I/O addresses on the PCI/PCIe bus? I understand that I/O addresses are 32-bit, limited to the range 0 to 4GB, and do not map onto system memory (RAM), and that memory addresses are either 32-bit or 64-bit. I get the impression that memory addressing must map onto available RAM, is this true? That if a PCI device wishes to transfer data to a memory address, that address must exist in actual system RAM (and is allocated during PCI configuration) and not virtual memory. So if a PCI device only needs to transfer a small amount of data at a time, where there is no advantage to putting it into RAM or using DMA, then I/O addressing is fine (e.g. a parallel port implemented on a PCI card). And why do I keep reading that PCI/PCIe I/O addressing is being deprecated in favour of memory addressing? Thanks!

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data into the database, I found that the JVM's process shows 50GB in the SIZE column of top output. This appears to actually be causing performance problems on the system; other processes are having trouble allocating memory. When I asked the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that doesn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM? The system is Solaris 10 x86 with 72GB RAM. The JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".

    Read the article

  • error while loading shared libraries; cannot open shared object file: No such file or directory

    - by glitchyme
    The program evince complains that it can't find libfreetype.so.6; however I clearly have the file and it's included in my LD_LIBRARY_PATH; furthermore I have another program which uses libfreetype6 and is able to run just fine. What's going on here?
    jbud@jb-pc ~> evince
    evince: error while loading shared libraries: libfreetype.so.6: cannot open shared object file: No such file or directory
    jbud@jb-pc ~> ldd /usr/bin/evince | grep freetype
    libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x00007f912179d000)
    jbud@jb-pc ~> file /usr/local/lib/libfreetype.so.6
    /usr/local/lib/libfreetype.so.6: symbolic link to `libfreetype.so.6.11.1'
    jbud@jb-pc ~> file /usr/local/lib/libfreetype.so.6.11.1
    /usr/local/lib/libfreetype.so.6.11.1: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x21a4b8005e0c9a42af001b35fb984f4e25efc71c, not stripped
    jbud@jb-pc ~> echo $LD_LIBRARY_PATH
    /usr/lib/:/usr/lib64/:/usr/lib/x86_64-linux-gnu/:/usr/local/lib/
    jbud@jb-pc ~> ldd jdrive/jstuff/work/personal/noengine/client | grep freetype
    libfreetype.so.6 => /usr/local/lib/libfreetype.so.6 (0x00007feb5ac89000)

    Read the article

  • How to access variables in shared memory

    - by user1723361
    I am trying to create a shared memory segment containing three integers and an array. The segment is created and a pointer is attached, but when I try to access the values of the variables (whether changing, printing, etc.) I get a segmentation fault. Here is the code I tried:

    #include <stdio.h>
    #include <stdbool.h>
    #include <stdlib.h>
    #include <errno.h>
    #include <sys/types.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    #define SIZE 10

    int* shm_front;
    int* shm_end;
    int* shm_count;
    int* shm_array;
    int shm_size = 3*sizeof(int) + sizeof(shm_array[SIZE]);

    int main(int argc, char* argsv[])
    {
        int shmid;

        //create shared memory segment
        if((shmid = shmget(IPC_PRIVATE, shm_size, 0644)) == -1)
        {
            printf("error in shmget");
            exit(1);
        }

        //obtain the pointer to the segment
        if((shm_front = (int*)shmat(shmid, (void *)0, 0)) == (void *)-1)
        {
            printf("error in shmat");
            exit(1);
        }

        //move down the segment to set the other pointers
        shm_end = shm_front + 1;
        shm_count = shm_front + 2;
        shm_array = shm_front + 3;

        //tests on shm
        //*shm_end = 10;                //gives segmentation fault
        //printf("\n%d", *shm_front);   //gives segmentation fault

        //clean-up
        //get rid of shared memory
        shmdt(shm_front);
        shmctl(shmid, IPC_RMID, NULL);

        //printf("\n\n");
        return 0;
    }

    I tried accessing the shared memory by dereferencing the pointer to the struct, but got a segmentation fault each time.
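
    For comparison, here is a minimal rework of the same layout that runs cleanly on Linux (only a sketch of the idea, not the accepted fix from the article); it includes sys/shm.h for the shm* calls, sizes the segment explicitly for all 3 + SIZE integers, and checks the shmat result before using it:

      #include <cstddef>
      #include <cstdio>
      #include <sys/ipc.h>
      #include <sys/shm.h>

      constexpr int SIZE = 10;
      // layout: [front][end][count][array of SIZE ints]
      constexpr std::size_t SHM_BYTES = sizeof(int) * (3 + SIZE);

      int main() {
          int shmid = shmget(IPC_PRIVATE, SHM_BYTES, IPC_CREAT | 0600);
          if (shmid == -1) { std::perror("shmget"); return 1; }

          void* p = shmat(shmid, nullptr, 0);
          if (p == reinterpret_cast<void*>(-1)) { std::perror("shmat"); return 1; }

          // carve the segment up with pointer arithmetic, as in the question
          int* front = static_cast<int*>(p);
          int* end   = front + 1;
          int* count = front + 2;
          int* array = front + 3;

          *front = 0; *end = 0; *count = SIZE;
          for (int i = 0; i < SIZE; ++i) array[i] = i;
          std::printf("count=%d array[9]=%d\n", *count, array[9]);

          shmdt(p);                            // detach
          shmctl(shmid, IPC_RMID, nullptr);    // remove the segment
          return 0;
      }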

    Read the article

  • How does landscape calculate memory usage?

    - by David Planella
    I'm trying to debug an OOM situation on an Ubuntu 12.04 server, and looking at the memory graphs in Landscape, I noticed that there wasn't any serious memory usage spike. Then I looked at the output of the free command and I wasn't quite sure how the two memory usage results relate to each other. Here's landscape's output on the server:
    $ landscape-sysinfo
      System load:    0.0
      Processes:      93
      Usage of /:     5.6% of 19.48GB
      Users logged in: 1
      Memory usage:   26%
      IP address for eth0: -
      Swap usage:     2%
    Then I ran the free command and got:
    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:           486        381        105          0          4        165
    -/+ buffers/cache:        212        274
    Swap:          255          7        248
    I can understand the 2% swap usage, but where does the 26% memory usage come from?

    Read the article

  • How does landscape calculate free memory?

    - by David Planella
    I'm trying to debug an OOM situation on an Ubuntu 12.04 server, and looking at the memory graphs in Landscape, I noticed that there wasn't any serious memory usage spike. Then I looked at the output of the free command and I wasn't quite sure how the two memory usage results relate to each other. Here's landscape's output on the server:
    $ landscape-sysinfo
      System load:    0.0
      Processes:      93
      Usage of /:     5.6% of 19.48GB
      Users logged in: 1
      Memory usage:   26%
      IP address for eth0: -
      Swap usage:     2%
    Then I run the free command and I get:
    $ free -m
                 total       used       free     shared    buffers     cached
    Mem:           486        381        105          0          4        165
    -/+ buffers/cache:        212        274
    Swap:          255          7        248
    I can understand the 2% swap usage, but where does the 26% memory usage come from?

    Read the article

  • Manual memory allocation and purity

    - by Eonil
    Languages like Haskell have a concept of purity: in a pure function, I can't mutate any global state. Haskell fully abstracts memory management, so memory allocation is not a problem there. But in languages that handle memory directly, like C++, this is very ambiguous to me. In these languages, memory allocation is a visible mutation. But if I treat creating a new object as an impure action, then almost nothing can be pure, and the concept of purity becomes almost useless. How should I handle purity in languages that expose memory as a visible global object?

    Read the article
