Search Results

Search found 12872 results on 515 pages for 'memory alignment'.

  • Condition Variable in Shared Memory - is this code POSIX-conformant?

    - by GrahamS
    We've been trying to use a mutex and condition variable to synchronise access to named shared memory on a LynuxWorks LynxOS-SE system (POSIX-conformant). One shared memory block is called "/sync" and contains the mutex and condition variable; the other is "/data" and contains the actual data we are synchronising access to. We're seeing failures from pthread_cond_signal() if both processes don't perform the mmap() calls in exactly the same order, or if one process mmaps some other piece of shared memory before it mmaps the sync memory. This example code is about as short as I can make it:

        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/mman.h>
        #include <sys/file.h>
        #include <fcntl.h>     // added: O_CREAT/O_RDWR flags for shm_open
        #include <unistd.h>    // added: ftruncate, sleep
        #include <cassert>     // added: assert
        #include <stdlib.h>
        #include <pthread.h>
        #include <errno.h>
        #include <iostream>
        #include <string>

        using namespace std;

        static const string shm_name_sync("/sync");
        static const string shm_name_data("/data");

        struct shared_memory_sync
        {
            pthread_mutex_t mutex;
            pthread_cond_t condition;
        };

        struct shared_memory_data
        {
            int a;
            int b;
        };

        // Create 2 shared memory objects
        //  - sync contains 2 shared synchronisation objects (mutex and condition)
        //  - data is not important
        void create()
        {
            // Create and map 'sync' shared memory
            int fd_sync = shm_open(shm_name_sync.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR);
            ftruncate(fd_sync, sizeof(shared_memory_sync));
            void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0);
            shared_memory_sync* p_sync = static_cast<shared_memory_sync*>(addr_sync);

            // Init the condition and mutex as process-shared
            pthread_condattr_t cond_attr;
            pthread_condattr_init(&cond_attr);
            pthread_condattr_setpshared(&cond_attr, PTHREAD_PROCESS_SHARED);
            pthread_cond_init(&(p_sync->condition), &cond_attr);
            pthread_condattr_destroy(&cond_attr);

            pthread_mutexattr_t m_attr;
            pthread_mutexattr_init(&m_attr);
            pthread_mutexattr_setpshared(&m_attr, PTHREAD_PROCESS_SHARED);
            pthread_mutex_init(&(p_sync->mutex), &m_attr);
            pthread_mutexattr_destroy(&m_attr);

            // Create the 'data' shared memory
            int fd_data = shm_open(shm_name_data.c_str(), O_CREAT|O_RDWR, S_IRUSR|S_IWUSR);
            ftruncate(fd_data, sizeof(shared_memory_data));
            void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0);
            shared_memory_data* p_data = static_cast<shared_memory_data*>(addr_data);

            // Run the second process while this one sleeps here
            sleep(10);

            int res = pthread_cond_signal(&(p_sync->condition));
            assert(res == 0);   // <--- !!!THIS ASSERT WILL FAIL ON LYNXOS!!!

            munmap(addr_sync, sizeof(shared_memory_sync));
            shm_unlink(shm_name_sync.c_str());
            munmap(addr_data, sizeof(shared_memory_data));
            shm_unlink(shm_name_data.c_str());
        }

        // Open the same 2 shared memory objects but in reverse order
        //  - data
        //  - sync
        void open()
        {
            sleep(2);

            int fd_data = shm_open(shm_name_data.c_str(), O_RDWR, S_IRUSR|S_IWUSR);
            void* addr_data = mmap(0, sizeof(shared_memory_data), PROT_READ|PROT_WRITE, MAP_SHARED, fd_data, 0);
            shared_memory_data* p_data = static_cast<shared_memory_data*>(addr_data);

            int fd_sync = shm_open(shm_name_sync.c_str(), O_RDWR, S_IRUSR|S_IWUSR);
            void* addr_sync = mmap(0, sizeof(shared_memory_sync), PROT_READ|PROT_WRITE, MAP_SHARED, fd_sync, 0);
            shared_memory_sync* p_sync = static_cast<shared_memory_sync*>(addr_sync);

            // Wait on the condition variable
            pthread_mutex_lock(&(p_sync->mutex));
            pthread_cond_wait(&(p_sync->condition), &(p_sync->mutex));
            pthread_mutex_unlock(&(p_sync->mutex));

            munmap(addr_sync, sizeof(shared_memory_sync));
            munmap(addr_data, sizeof(shared_memory_data));
        }

        int main(int argc, char** argv)
        {
            if (argc > 1) { open(); }
            else { create(); }
            return 0;
        }

    Run this program with no arguments, then another copy with an argument, and the first one will fail at the assert checking pthread_cond_signal(). But change the open() function to mmap() the "/sync" memory first and it all works fine. This seems like a major bug in LynxOS, but LynuxWorks claim that using a mutex and condition variable in this way is not covered by the POSIX standard, so they are not interested. Can anyone determine whether this code violates POSIX? Or does anyone have any convincing documentation that it is POSIX-compliant?

  • How can I figure out an IIS7 memory leak from this dump result?

    - by Cenk Erdem
    My application sometimes starts to eat too much memory within a few seconds and then crashes. I used DebugDiag to take a dump when this happened. In the analysis I see lots of memory allocations; all of them have the same information and each of them allocates 128 MB. They look like this:

        Address          0x00000000`aff41798
        Allocation Time  06:56:06 since tracking started
        Allocation Size  128.00 MBytes

        Function (Source -> Destination)
        LeakTrack+186cf
        clr!CExecutionEngine::ClrVirtualAlloc+3c
        clr!ClrVirtualAlloc+3c
        clr!WKS::virtual_alloc+42
        clr!WKS::gc_heap::get_segment+a2
        clr!WKS::gc_heap::get_large_segment+204
        clr!WKS::gc_heap::loh_get_new_seg+78
        clr! ?? ::FNODOBFM::`string'+a008a
        clr!WKS::gc_heap::try_allocate_more_space+31b
        clr!WKS::gc_heap::allocate_more_space+26
        clr!WKS::gc_heap::allocate_large_object+6a
        clr!WKS::GCHeap::Alloc+b5
        clr!FramedAllocateString+b06
        mscorlib_ni+39f5fd
        mscorlib_ni+389f83
        System_Xml_ni+451adc
        System_Data_SqlXml_ni+2275d4
        System_Data_SqlXml_ni+233f32
        System_Data_SqlXml_ni+8ec28
        System_Data_SqlXml_ni+8eb65
        System_Web_ni+2882b2
        System_Web_ni+2794b6
        System_Web_ni+2794b6
        0x7FF002474BC

    What could be wrong with my code? Any suggestions?

  • Page allocation failure - am I running out of memory?

    - by mfriedman
    Lately I've been noticing entries like this one in the kern.log of one of my servers:

        Feb 16 00:24:05 aramis kernel: swapper: page allocation failure. order:0, mode:0x20

    This is what I'd like to know: What exactly does that message mean? Is my server running out of memory? Swap usage is quite low (less than 10%), and so far I haven't noticed any processes being killed because of lack of memory. Additional information: the server is a Xen instance (DomU) running Debian 6.0; it has 512 MB of RAM and a 512 MB swap partition; CPU load inside the virtual machine shows an average of 0.25.

  • System says memory controller not available and doesn't boot?

    - by Martin
    Hello everybody, I recently had a mainboard problem. I sent my mainboard in for service and today I got it back from the company. It is a Foxconn 520a mainboard. Now I have installed the exchanged mainboard, but I have a problem: my system boots until the device list with IRQ entries appears, the system says "Verifying DMI-pool-data..." and nothing happens. The IRQ device list shows that the memory controller is not available; all other devices have got an IRQ.

        Bus No.  Device No.  Func No.  Vendor/Device  Class  Device Class       IRQ
        0        0           0         10DE 0547      0500   Memory Controller  NA

    Do you have any ideas where the problem could be? I have already disconnected all unnecessary devices such as the hard disks. Perhaps it is a BIOS problem, but I don't know where I should look. Any advice would be nice. Greetings, Martin

  • Installed 4GB memory but Windows XP 32 bit only reporting 2GB?

    - by AnthonyWJones
    I've just taken an existing XP Pro 32-bit system that had only 0.5 GB of memory installed and maxed it out to 4 GB. The BIOS reports the 4 GB of RAM; however, when XP is booted and I look at the computer properties, only 2 GB of RAM is reported. Can anyone explain this? Before we go up any blind alleys, the /3GB switch is not the answer here; I have no need for a single process to use more than 2 GB of memory. I'm wondering if 32-bit XP Pro is deliberately limited to 2 GB. I seem to remember seeing an excellent table on a Microsoft site listing all the various SKUs of Windows and what each one was limited to, but I can't seem to find that table now.

  • Per-byte RAM memory access

    - by b-gen-jack-o-neill
    Hi, I have just a simple question. Today's DDR memory chips are 64 bits wide, and the CPU data bus is also 64 bits wide. But memory is still organised in single bytes. So, what I want to ask is: when the CPU selects some memory address, it should refer to one byte, right? Because the lowest memory portion you can access is 1 byte. But if you get 1 byte per address, why is the memory bus 8 bytes wide?
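
    (Not part of the original question; a minimal C sketch of the usual explanation, with an arbitrarily chosen example address. A 64-bit bus transfers one aligned 8-byte group per access; the upper address bits pick the group and the low 3 bits pick the byte within it, so per-byte addressing and an 8-byte-wide bus coexist.)

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            uintptr_t addr = 0x1003;                        /* hypothetical byte address */

            /* An aligned 8-byte transfer covers addresses group_base .. group_base+7;
               the low 3 bits of the address select the byte inside that group. */
            uintptr_t group_base  = addr & ~(uintptr_t)0x7; /* 0x1000 */
            unsigned  byte_offset = (unsigned)(addr & 0x7); /* 3 */

            printf("byte address 0x%lx -> aligned 8-byte group at 0x%lx, byte %u within it\n",
                   (unsigned long)addr, (unsigned long)group_base, byte_offset);
            return 0;
        }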

  • Memory management (segmentation and paging) in 80286 and 80386: How does it work?

    - by Andrew J. Brehm
    I found lots of Web sites and books explaining how memory management worked on the 8086 and later x86 CPUs in Real Mode. I understand, I think, how two 16-bit values, segment address and offset, are combined to get a 20-bit physical address (shift the segment four bits to the left, add the offset; segments are 64 KB and start every 16 bytes). But I couldn't find any good Web sites or books explaining how memory management works in Protected Mode, specifically the differences between the 80286 and 80386. Can anyone point me to a good Web site or book (or explain it right here)? (For extra credit, i.e. an upvote: how does it work in Long Mode?)
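
    (A quick illustration, not from the question itself, of the Real Mode calculation described above; the example segment:offset values are arbitrary, apart from 0xB800:0x0000, the well-known text-mode video buffer.)

        #include <stdint.h>
        #include <stdio.h>

        /* Real Mode: physical = (segment << 4) + offset, giving a 20-bit address. */
        static uint32_t real_mode_physical(uint16_t segment, uint16_t offset) {
            return ((uint32_t)segment << 4) + offset;
        }

        int main(void) {
            printf("0xB800:0x0000 -> 0x%05X\n", real_mode_physical(0xB800, 0x0000)); /* 0xB8000 */
            printf("0x1234:0x5678 -> 0x%05X\n", real_mode_physical(0x1234, 0x5678)); /* 0x179B8 */
            return 0;
        }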

  • How do I make the Windows low memory warning less sensitive?

    - by Stephen
    I keep getting this annoying low-memory warning/prompt to close the games I play. It happens very often, even when I still have ~6 GB of RAM free. I disabled virtual memory because Windows was putting things in the pagefile while I had 10 GB of free RAM, which spiked my disk usage. Is there any way to adjust this warning? I have 16 GB of RAM, so it shouldn't be an issue. I would prefer to keep pagefiles off because my hard drive is very loud, so it's nice to keep it spun down as much as possible. I don't want to disable the warning entirely; ideally it would go off when I have ~2 GB left rather than 6, but if that isn't viable, I may just disable it completely.

  • Ideal memory configuration for a 4-bank, DDR3, AM3+ FX board - 1 vs 2 vs 4 DIMMs?

    - by TardisGuy
    OK, so I've been looking around, trying to learn and understand how RAM works. One answer I got said, "The addressing is best for 2 sticks, and when you use 4 it slows down." Another answer said something like: there's a bank/channel interleave that makes the memory read like one stick. I also read something about memory density being a factor. I dug further and found out that my board has a higher rated speed limit for 2 sticks than for 4, so now I'm trying to put an image in my head of how and why, and... pfft. Can anyone explain, or recommend a resource that would answer these questions?

  • Do memory cards have any max file size limitation?

    - by Dmitriy R
    I am not sure where to ask this question, so perhaps it is a physical limitation. I have an 8 GB micro SD flash memory card. When I copy any file up to a few gigabytes in size, copying works normally. But if I try to copy a file over 4 GB, the system tells me there is insufficient memory on the card, although 8 GB is available. So perhaps only a 32-bit value is used to store a file's size on the micro SD card, or is my micro SD defective?
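
    (Not part of the original question; a minimal C sketch of the arithmetic behind the questioner's guess. Assuming the card uses FAT32, the usual factory format for cards of this size, file sizes are stored in a 32-bit field, which caps any single file at just under 4 GiB regardless of free space.)

        #include <stdint.h>
        #include <stdio.h>

        int main(void) {
            /* Largest size representable in a 32-bit file-size field. */
            uint64_t max_size = UINT32_MAX;   /* 4294967295 bytes */
            printf("%llu bytes = %.2f GiB\n",
                   (unsigned long long)max_size,
                   max_size / (1024.0 * 1024.0 * 1024.0));
            return 0;
        }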

  • 2 GB of memory in 1 GB system is a problem?

    - by daveslab
    Hi folks, I just installed two 1 GB sticks into my friend's machine, thinking that it would use the full 2 GB. Unfortunately, according to Dell's website, the maximum amount of memory accessible to the machine is arbitrarily set to 1 GB! The system indeed reports having 1 GB of memory accessible to it, but I'm worried that having 2 GB in there might break something. Are my fears reasonable? Should I buy two 512 MB sticks instead? Thanks for any help!

  • Is it safe to use up all memory on linux server, not leaving anything for the cache?

    - by Temnovit
    I have a CentOS server fully dedicated to MySQL 5.5 (with InnoDB tables mostly). The server has 32 GB of RAM and SSD disks, and average memory usage (graph omitted) shows about 25 GB in use and about 6.5 GB cached. I am experiencing performance problems with WRITE queries, so I was wondering: is this the optimal cache size? I might increase the InnoDB buffer pool size so that the Linux cache becomes smaller, or decrease it so the cache becomes bigger. What is the optimal used/cached memory balance for a busy MySQL server on Linux?

  • We have a Solaris 9 server running Oracle 10G and have been getting memory consumption errors for a few weeks now

    - by another_netadmin
    We recently upgraded our enterprise application and everything worked OK until one weekend when we did a server reboot; ever since then we have run into memory errors. The server has 4 GB of physical memory installed and the kernel parameters are set to the following (/etc/system). I'm not an Oracle guy so I'm not sure where to start looking, but any information is greatly appreciated. Thanks in advance. There are two databases running on this server: one is a production database and the other is a pre-production database.

        [root@bandb /]# cat /etc/system | grep seminfo
        set semsys:seminfo_semmni=100
        set semsys:seminfo_semmns=2048
        set semsys:seminfo_semmsl=400
        set semsys:seminfo_semopm=100
        set semsys:seminfo_semvmx=32767
        [root@bandb /]# cat /etc/system | grep shminfo
        set shmsys:shminfo_shmmax=4294967295
        set shmsys:shminfo_shmmin=1
        set shmsys:shminfo_shmmni=100
        set shmsys:shminfo_shmseg=10
        [root@bandb /]#

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries”, and first up is SQL Server 2014's In-Memory OLTP technology (former codename “Hekaton”). It is a feature that provokes a lot of interest, and for good reason: without any need for application rewrites or hardware updates, it can ensure that an application finds in memory most or all of the data it needs, which can lead to huge improvements in processing times. A good recent Hekaton use-cases article talks about applications that need a “shock absorber” when either spikes or just a high rate of incoming workload (including data in ETL scenarios) become a primary bottleneck. To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, including one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.

  • Memory concerns while plotting escape from DLL Hell in Delphi

    - by Peter Turner
    I work on a program with about 50 DLLs that are loaded from one executable. It's an old, organically grown program where the only rationale for creating a new DLL was that one previously didn't exist to fill a given need (and namespaces didn't exist in Delphi, so it never crossed our minds to make dll1.main.pas, dll2.main.pas or something even more unique). What we want to do is consolidate all these DLLs into one executable; since none of them are used outside the program, there shouldn't be much of a problem. The concern my boss has is that if we did this, the memory overhead for terminal server clients would go through the roof. I've stepped through enough initialization code to know that lots of stuff is done every time a DLL is loaded into memory. But say I've got a project with about 4000 files and 50 DLLs, 10 of which are probably utilized by any one user in any one session of the program. The 50 DLLs are about two-thirds form files, if not more, but beyond that there aren't a lot of other resources being loaded (only a few embedded pictures, icons, cursors, etc.). If I loaded all these files into memory, how much memory is used per unit? How much is used per class? How do I keep the overhead down? And what is the biggest project one can reasonably expect to build with Delphi? This tidbit won't help answering, but I think it might clarify what my boss is worried about: we currently start our program at about 18 MB, normal working conditions are usually less than 40 MB, and he thinks it could climb as high as 120 MB.

  • Understanding how memory contents map into a struct

    - by user95592
    I am not able to understand how bytes in memory are being mapped into a struct. My machine is a little-endian x86_64. The code was compiled with gcc 4.7.0 from the Win64 mingw32-64 distribution. These are the contents of the relevant memory fragment:

        ...450002cf9fe5000040115a9fc0a8fe...

    And this is the struct definition:

        typedef struct ip4 {
            unsigned int ihl     :4;
            unsigned int version :4;
            uint8_t  tos;
            uint16_t tot_len;
            uint16_t id;
            uint16_t frag_off;   // flags=3 bits, offset=13 bits
            uint8_t  ttl;
            uint8_t  protocol;
            uint16_t check;
            uint32_t saddr;
            uint32_t daddr;
            /* The options start here. */
        } ip4_t;

    When a pointer to such a structure (let it be *ip4) is initialized to the starting address of the memory region pasted above, this is what the debugger shows for the struct's fields:

        ip4:          address=0x8da36ce
        ip4->ihl:     address=0x8da36ce, value=0x5
        ip4->version: address=0x8da36ce, value=0x4
        ip4->tos:     address=0x8da36d2, value=0x9f
        ip4->tot_len: address=0x8da36d4, value=0x0
        ...

    I see how ihl and version are mapped: 4 bytes for a long integer, little-endian. But I don't understand how tos and tot_len are mapped, i.e. which bytes in memory correspond to each one of them. Thank you in advance.
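
    (Not from the original post; a minimal sketch, reusing the struct above, of how one might ask the compiler where the ordinary members land. offsetof cannot be applied to bitfields, but the two bitfields share the leading unsigned int storage unit at offset 0; the exact offsets of tos and tot_len depend on the compiler's bitfield layout rules. mingw-w64 follows the Microsoft layout, which reserves the full 4-byte unit for the unsigned int bitfields, and that is the likely reason for the gap the debugger shows before tos.)

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef struct ip4 {            /* same layout as in the question */
            unsigned int ihl     :4;
            unsigned int version :4;
            uint8_t  tos;
            uint16_t tot_len;
            uint16_t id;
            uint16_t frag_off;
            uint8_t  ttl;
            uint8_t  protocol;
            uint16_t check;
            uint32_t saddr;
            uint32_t daddr;
        } ip4_t;

        int main(void) {
            printf("sizeof(ip4_t) = %zu\n", sizeof(ip4_t));
            printf("tos     at offset %zu\n", offsetof(ip4_t, tos));
            printf("tot_len at offset %zu\n", offsetof(ip4_t, tot_len));
            printf("id      at offset %zu\n", offsetof(ip4_t, id));
            printf("saddr   at offset %zu\n", offsetof(ip4_t, saddr));
            return 0;
        }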

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of by traditional memory values. But TXPAUSE gives us a polite way to spin.
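
    (Not part of Dave's post; a minimal C sketch, assuming the x86 RTM intrinsics from <immintrin.h> (compile with -mrtm, runs only on TSX-capable hardware), of the "tight spin loop in a transaction" idiom described in the last paragraph. TXPAUSE, as proposed, would let this spin yield the core instead of burning it.)

        #include <immintrin.h>   /* _xbegin, _xend, _XBEGIN_STARTED */

        /* Wait until *flag becomes nonzero. Inside the transaction the flag is in
           our read-set, so another thread's store to it aborts the transaction and
           control returns from _xbegin(); we then re-check non-transactionally. */
        void wait_for_flag(volatile int *flag)
        {
            while (!*flag) {
                if (_xbegin() == _XBEGIN_STARTED) {
                    while (!*flag) {
                        /* spin; a remote write to *flag aborts the transaction,
                           resuming execution after _xbegin() with an abort code */
                    }
                    _xend();   /* reached only if the flag was already set */
                }
                /* on abort (or after commit) the outer loop re-checks the flag */
            }
        }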

  • 50 sequences in one line

    - by user343934
    I have a multiple sequence alignment (Clustal) file and I want to read this file and arrange the sequences in such a way that the output looks clearer and better ordered. I am doing this with Biopython, using the AlignIO object. My code looks like this:

        from Bio import AlignIO   # added: import needed for AlignIO.read

        alignment = AlignIO.read("opuntia.aln", "clustal")
        print "Number of rows: %i" % len(alignment)   # was len(align); fixed to match the variable above
        for record in alignment:
            print "%s - %s" % (record.id, record.seq)

    My output (http://i48.tinypic.com/ae48ew.jpg) looks messy and needs a lot of scrolling. What I want to do is print the alignment in blocks of 50 columns per line and continue like that to the end of the alignment file. I would like output like this: http://i45.tinypic.com/4vh5rc.jpg, as produced by http://www.ebi.ac.uk/Tools/clustalw2/ (sorry, the two links are just text due to my reputation). Any suggestions, algorithms or sample code are appreciated. Thanks in advance. Br,

  • [PHP] - Lowering script memory usage during "big" file creation

    - by Riccardo
    Hi there people, it looks like I'm facing a typical out-of-memory problem with a PHP script. The script, originally developed by another person, serves as an XML sitemap creator, and on large websites it uses quite a lot of memory. I thought that the problem was related to an algorithm holding data in memory until the job was done, but digging into the code I discovered that the script works this way:

        open the output file (it will contain the XML sitemap entries)
        in the loop:
            for each entry to be added to the sitemap, do fwrite
        close the file
        end

    Although there are no huge arrays or variables being kept in memory, this technique uses a lot of memory. I thought that maybe PHP was buffering the fwrites under the hood and "flushing" the data at the end of the script, so I modified the code to close and reopen the file every Nth record, but the memory usage is still the same... I'm debugging the script on my computer and watching memory usage: while the script runs, memory allocation keeps growing. Is there a particular technique to instruct PHP to free unused memory, or to force flushing of buffers if there are any? Thanks

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer seems like an excellent subject for me to kick off my new “401 – Internals” tag, which identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the Extended Events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it were that simple I wouldn’t be writing this post.

    I’ll take “boundary” for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. (Note: this test was run on a 2-core machine using per_cpu partitioning, which results in 5 buffers. See my previous post referenced above for the math behind buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results; it shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next will change based on the number of buffers being created. The number of buffers depends on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above, applicable to your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1            -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu'   -- Set to the partition mode for the table you want to generate
        WHILE @buf_size <= 4096      -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER
                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session
                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session
                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH
            SET @buf_size = @buf_size + 1
        END
        DROP EVENT SESSION buffer_size_test ON SERVER
        SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals. - Mike

  • Hekaton – SQL Server’s in-memory database engine

    - by Christian
    Microsoft have just gone public at the PASS Summit in Seattle about a new SQL Server engine that they’re working on which is optimized for high-memory servers – an in-memory OLTP database engine which is built into SQL Server rather than being a separate entity. This means that you can move just the performance-critical parts of your database to Hekaton. The new engine really pushes the performance boundaries by eliminating as many instructions as possible: main-memory-optimized tables which are decoupled from on-disk structures; everything is lock- and latch-free; more work is pushed to compile time, so your T-SQL code is compiled natively into low-level code. We’re already working with a customer on an early adoption program, so expect to hear from us about what we learn from implementing it!

    Christian Bolton - MCA, MCM, MVP, Technical Director, http://coeo.com - SQL Server Consulting & Managed Services

  • Best in-memory cache of DB objects for Silverlight [closed]

    - by Jon
    Hi, I'd like to set up an in-memory cache of database objects (i.e. rows in a table) in Silverlight, which I'll populate using WCF and LINQ to SQL. Once I have the objects in memory, I'm planning on using MSMQ to receive new objects whenever they have been modified. It's a somewhat complex approach, but the goal is to reduce trips to the database and allow instant data communication between Silverlight applications that are connected to MSMQ. My Silverlight applications are meant to be long-running, and the amount of data to be cached will not be large. I'm planning on persisting the in-memory cache using local storage. Anyway, in order to process the updated objects that come in, I'd like to know whether the user has changed the existing object. Could I use some data-binding event to set a flag indicating that the object has changes? Maybe there's a better way to do the cache entirely? Thanks!

  • Ubuntu One using 500 MB of memory even when idle

    - by cdysthe
    I'm a Dropbox convert (I hope!), but after having used Ubuntu One for a couple of weeks I notice a few differences from Dropbox. The most glaring difference is that the sync daemon constantly takes 500 MB of RAM on my system (Ubuntu 12.04 x64). It grabs this amount of memory as soon as I log in, does its initial sync/check, but keeps the memory afterwards. All in all it seems to me that Ubuntu One uses more system resources than Dropbox. I am syncing the same folders and files with Ubuntu One as I was with Dropbox. Also, after I log in, Ubuntu One grinds at 100% CPU for at least five minutes, which can be annoying on a laptop but is not a showstopper. I'm wondering if this is a problem with my system, or if Ubuntu One is expected to use that amount of memory even when idle?
