Search Results

Search found 18119 results on 725 pages for 'shared memory'.


  • C# performance varying due to memory

    - by user1107474
    Hope this is a valid post here; it's a combination of C# issues and hardware. I am benchmarking our server because we have found problems with the performance of our quant library (written in C#). I have simulated the same performance issues with some simple C# code performing very heavy memory usage. The code below is in a function which is spawned from a threadpool, up to a maximum of 32 threads (because our server has 4 CPUs x 8 cores each). This is all on .NET 3.5. The problem is that we are getting wildly differing performance. I run the function below 1000 times. The average time taken for the code to run could be, say, 3.5s, but the fastest will be only 1.2s and the slowest will be 7s - for the exact same function! I have graphed the memory usage against the timings and there doesn't appear to be any correlation with the GC kicking in. One thing I did notice is that when running in a single thread the timings are identical and there is no wild deviation. I have also tested CPU-bound algorithms and the timings are identical too. This has made us wonder if the memory bus just cannot cope. I was wondering, could this be another .NET or C# problem, or is it something related to our hardware? Would this be the same experience if I had used C++ or Java? We are using 4x Intel X7550 with 32GB RAM. Is there any way around this problem in general?

        Stopwatch watch = new Stopwatch();
        watch.Start();
        List<byte> list1 = new List<byte>();
        List<byte> list2 = new List<byte>();
        List<byte> list3 = new List<byte>();
        int Size1 = 10000000;
        int Size2 = 2 * Size1;
        int Size3 = Size1;
        for (int i = 0; i < Size1; i++) { list1.Add(57); }
        for (int i = 0; i < Size2; i = i + 2) { list2.Add(56); }
        for (int i = 0; i < Size3; i++)
        {
            byte temp = list1.ElementAt(i);
            byte temp2 = list2.ElementAt(i);
            list3.Add(temp);
            list2[i] = temp;
            list1[i] = temp2;
        }
        watch.Stop();

    (The code is just meant to stress the memory.) I would include the threadpool code, but we used a non-standard threadpool library. EDIT: I have reduced "Size1" to 100000, which basically doesn't use much memory, and I still get a lot of jitter. This suggests it's not the amount of memory being transferred, but the frequency of memory grabs, that matters?
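    For comparison, an array-based variant of the same stress loop (just a sketch I could try; it keeps the access pattern but pre-sizes everything, so List<T> growth and LINQ's ElementAt() are taken out of the picture) would look like:

        using System;
        using System.Diagnostics;

        class MemoryStress
        {
            // Same read/swap pattern as above, but on pre-sized arrays so the
            // only remaining cost is the raw memory traffic itself.
            static TimeSpan Stress(int size)
            {
                var watch = Stopwatch.StartNew();
                byte[] a = new byte[size];
                byte[] b = new byte[size];
                byte[] c = new byte[size];
                for (int i = 0; i < size; i++) a[i] = 57;
                for (int i = 0; i < size; i++) b[i] = 56;
                for (int i = 0; i < size; i++)
                {
                    byte t = a[i];
                    byte t2 = b[i];
                    c[i] = t;
                    b[i] = t;
                    a[i] = t2;
                }
                watch.Stop();
                return watch.Elapsed;
            }

            static void Main()
            {
                for (int run = 0; run < 10; run++)
                    Console.WriteLine(Stress(10000000));
            }
        }

    If the jitter disappears with this version, the allocator and GC are implicated; if it stays, the memory-bus contention theory gains weight.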

    Read the article

  • Ways to avoid Memory Leaks in C/C++

    - by Ankur
    What are some tips I can use to avoid memory leaks in my applications? In my current project I use a tool, Insure++, which finds memory leaks and generates a report. Apart from the tool, is there any method to identify memory leaks and overcome them?

    Read the article

  • C#: simulate memory leaks

    - by dotnet-practitioner
    Hi, I would like to write the following code in C#: (a) a small console application that simulates a memory leak; (b) a small console application that invokes the application above and kills it right away, simulating the management of a memory leak problem. In other words, application (b) would continuously start and kill application (a), to simulate how the "rebellious" leaking application is contained without addressing the root cause, which is application (a). Some sample code for applications (a) and (b) would be very helpful. Thanks
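    A rough sketch of what I'm imagining (the executable name "LeakyApp.exe", the 1 MB/s leak rate, and the 200 MB cap are all made-up placeholders; each class would be built as its own console project):

        // (a) LeakyApp: leaks on purpose by holding every allocation forever.
        using System;
        using System.Collections.Generic;
        using System.Threading;

        class LeakyApp
        {
            static readonly List<byte[]> retained = new List<byte[]>();

            static void Main()
            {
                while (true)
                {
                    retained.Add(new byte[1024 * 1024]); // ~1 MB per second, never released
                    Thread.Sleep(1000);
                }
            }
        }

        // (b) Supervisor: contains the leak by restarting (a), never fixing it.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class Supervisor
        {
            static void Main()
            {
                while (true)
                {
                    Process proc = Process.Start("LeakyApp.exe");
                    while (!proc.HasExited)
                    {
                        proc.Refresh();
                        if (proc.WorkingSet64 > 200L * 1024 * 1024) // 200 MB cap
                        {
                            proc.Kill();    // contain the leak without addressing the cause
                            break;
                        }
                        Thread.Sleep(1000);
                    }
                    proc.WaitForExit();
                }
            }
        }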

    Read the article

  • Error mounting VirtualBox shared folders in an Ubuntu guest

    - by skaz
    I have Ubuntu 10 as the guest OS on a Windows 7 machine. I have been trying to set up shares through VirtualBox, but nothing is working. First, I create the share in VirtualBox and point it to a Windows folder. Then I try to mount the drive in Linux, but I keep getting:

        /sbin/mount.vboxsf: mounting failed with the error: Protocol error

    I have read so many solutions to this, but none seem to work. I have tried:

      - using the mount.vboxsf syntax
      - reinstalling the VBox Guest Additions
      - rebooting
      - enabling and trying as the root account

    I made a share called "Test" in VBox Shared Folders. Then I made a directory in Ubuntu named "test2" and tried to execute this command:

        sudo mount -t vboxsf Test /mnt/test2

    Any other ideas?

    Read the article

  • Mac host and Ubuntu guest on VirtualBox shared folder issue

    - by Thomas Ryan
    I have set up Ubuntu Server on VirtualBox on my Mac. I created a shared folder which appears to be saved and visible in the configuration section of VirtualBox for the Ubuntu Server machine. My only issue is I don't know where this folder is or how to access it from inside Ubuntu Server. Does it get its own directory, or do I have to create some sort of symlink? If I do need to manually tell it to look on the Mac for the files, how do I reference the Mac machine?

    Read the article

  • How to get shared bash history among different tabs

    - by Luca Cerone
    I used the answer at http://unix.stackexchange.com/a/1292/41729 to enable real-time shared history among separate bash terminals. As explained in that answer, this is achieved by adding:

        # avoid duplicates
        export HISTCONTROL=ignoredups:erasedups
        # append history entries
        shopt -s histappend
        # after each command, save and reload history
        export PROMPT_COMMAND="history -a; history -c; history -r; $PROMPT_COMMAND"

    This works fine if the bash shells are in separate windows (e.g. opening different terminals using CTRL+ALT+T). However, it doesn't work if I use tabs (opened from an existing terminal with CTRL+SHIFT+T) rather than new windows. Why this difference in behaviour? How can I share the bash history among tabs as well?

    Read the article

  • How to access a shared external drive connected by USB to an Airport Extreme router

    - by Nathaniel
    I have an external hard drive connected to my Airport Extreme. I keep my photo and music files on it and can access them from both a Mac PowerPC and a Windows XP machine. For some reason I can't find, much less connect to, it on my Ubuntu 10.10 machine. I can see my router in the "Network" folder but can't seem to find and connect to the shared drive. Any help would be greatly appreciated. I am somewhat of a novice with Ubuntu and networking. Thanks, Nathaniel

    Read the article

  • Transferring local site to shared hosting

    - by Pete
    I'm looking to set up a simple online text-processing tool similar to the Clang demo. The processing program itself is a C++ program which I can modify to produce the desired output. Since I use Linux+Perl daily and have used Apache in the past, I'd like to get this working locally first. My two questions are:

      1. Is it possible to do this with only Apache and Perl? I've looked into frameworks for doing this and quickly ran into The Paradox Of Choice.
      2. Will I be able to easily transfer a working local site to a shared hosting service? I want to administer as little as possible. My understanding is that since this needs to run a C++ program, CGI is a requirement and thus I need to administer the httpd server. Hopefully this doesn't mean a VPS.

    Thanks

    Read the article

  • Can't find shared folder anymore

    - by SoftTimur
    I use VMware Fusion on a Mac, and I have installed Ubuntu 12.04 and Windows 7 as virtual machines. Under Ubuntu, using a "shared folder", I used to manage a folder shared with the Mac via /mnt/hgfs/myfolder. Under Windows, I have set things up so that this folder is also accessible. But today I can't find myfolder under /mnt/hgfs in Ubuntu anymore, while it is still accessible via Windows (and the Mac, of course). I tried to restart the machine and reset the sharing settings, but it still does not work. Could anyone help?

    Read the article

  • Transferring site from shared to VPS account

    - by N e w B e e
    I got a hosting account at HostGator. It was a shared account and I had three websites on it. Now I have upgraded my account to a VPS at HostGator and I want to transfer only ONE website over to the new VPS, say mydom.net. This website includes a WordPress installation as well as other custom pages and setup. Can somebody please guide me on how I can transfer the site to my new account - quickly, accurately, and such that my website remains in working condition? And what do I do about WordPress? Will simply copying it work? (I don't think so.) If not, how can I move it? I need a guideline, and I am asking the question in the hope that many others will also learn the things just as I am learning. One quick thing: answering it like a step-by-step manual can help people a lot - that's my thinking. Thanks to all

    Read the article

  • Emgu CV - memory leaks (memory consumption)

    - by martin pilch
    I am using Emgu CV, the OpenCV wrapper for .NET. I am disposing all created objects, but my app is still using more and more memory (in the release configuration too). I have profiled my app using .NET Memory Profiler and got this result: http://img532.imageshack.us/img532/2503/screenqv.png - the instance counts of all other objects oscillate, but the GCHandle instance count keeps increasing until my machine is unusable. The garbage collector does not seem to release the memory. I am using VS 2008 Professional and Win7 Professional 32-bit, both up to date, and the latest stable version of Emgu CV. I can post some app code, if it will help. Thanks, and sorry for my English. Martin
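    For reference, the disposal pattern in question looks roughly like this (a sketch only; the Image<Bgr, byte> type and calls are from Emgu CV 2.x, and the file name is made up):

        using Emgu.CV;
        using Emgu.CV.Structure;

        class FrameWork
        {
            static void ProcessOnce()
            {
                // Deterministic disposal releases the unmanaged OpenCV buffers
                // and the GCHandles pinning them; waiting for the GC alone lets
                // the handle count climb between collections.
                using (var img = new Image<Bgr, byte>("frame.png"))
                using (var gray = img.Convert<Gray, byte>())
                using (var smooth = gray.SmoothGaussian(3))
                {
                    // ... per-image work ...
                }   // all three images released here
            }
        }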

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built into it. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program; it has no memory leaks at all. Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use is growing after every process-and-release cycle, e.g.:

        1,456 KB used after starting my program with no dataset.
        218,455 KB used after loading a large dataset.
        71,994 KB after clearing the dataset completely. If I exit at this point (or at any point in this example), no memory leaks are reported.
        271,905 KB used after loading the same dataset again.
        125,443 KB after clearing the dataset completely.
        325,519 KB used after loading the same dataset again.
        179,059 KB after clearing the dataset completely.
        378,752 KB used after loading the same dataset again.

    It seems that my program's memory use is growing by about 53,400 KB upon each load/clear cycle. Task Manager confirms that this is actually happening. I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it can keep some memory around for when it needs more. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?

    Read the article

  • Windows Physical Direct Memory Mapping

    - by chrisjleaf
    I'm a bit disappointed there is almost no discussion of this no matter where I look, so I guess I'll have to ask. I'm writing a cross-platform memory benchmarking application which requires direct physical address mapping rather than virtual addressing. EDIT: The solution would look something like the Linux/Unix system calls:

        int fd = open("/dev/mem", O_RDONLY);
        mmap(NULL, len, PROT_READ, MAP_SHARED, fd, PHYSICAL_ADDRESS_OFFSET);

    which require the kernel to either give you a virtual page mapping to the desired physical address or return that it failed. This does require supervisor privileges, but that is OK. I have seen a lot of information about shared memory and memory-mapped files, but all of these reside on disk and are thus not really useful when I'm trying to make a system-dependent read. It is very similar to writing an I/O driver, although I do not need write permissions to the physical address. This site gives an example of how to do it at the driver level using the Windows Driver Kit: NT Insider: Sharing Memory between drivers and applications. That solution would probably require Visual Studio, which I currently do not have access to. (I have downloaded the WDK API but it complained about my use of GCC for Windows.) I'm traditionally a Linux programmer, so I'm hoping there might be something really simple I'm missing. Thanks in advance if you know something I don't!

    Read the article

  • Why is Apache seg faulting?

    - by Jamie Howard
    We have a production server that seems to seg fault a few times every day. The fault is picked up by Apache and logged in the error log - but there seems to be no traffic around the time. If it's a request generating the fault then it looks like it happens before any other logging is made, so I can't see how it's happening, which makes it very hard to debug. Our setup is Linux 64-bit CentOS 5.3. Apache is loaded with the following modules:

        apachectl -t -D DUMP_MODULES | more
        Loaded Modules: core_module (static) mpm_prefork_module (static) http_module (static) so_module (static) auth_basic_module (shared) auth_digest_module (shared) authn_file_module (shared) authn_alias_module (shared) authn_anon_module (shared) authn_dbm_module (shared) authn_default_module (shared) authz_host_module (shared) authz_user_module (shared) authz_owner_module (shared) authz_groupfile_module (shared) authz_dbm_module (shared) authz_default_module (shared) ldap_module (shared) authnz_ldap_module (shared) include_module (shared) log_config_module (shared) logio_module (shared) env_module (shared) ext_filter_module (shared) mime_magic_module (shared) expires_module (shared) deflate_module (shared) headers_module (shared) usertrack_module (shared) setenvif_module (shared) mime_module (shared) dav_module (shared) status_module (shared) autoindex_module (shared) info_module (shared) dav_fs_module (shared) vhost_alias_module (shared) negotiation_module (shared) dir_module (shared) actions_module (shared) speling_module (shared) userdir_module (shared) alias_module (shared) rewrite_module (shared) proxy_module (shared) proxy_balancer_module (shared) proxy_ftp_module (shared) proxy_http_module (shared) proxy_connect_module (shared) cache_module (shared) suexec_module (shared) disk_cache_module (shared) file_cache_module (shared) mem_cache_module (shared) cgi_module (shared) version_module (shared) security2_module (shared) unique_id_module (shared) fcgid_module (shared) php5_module (shared) proxy_ajp_module (shared) ssl_module (shared)

    Here's an excerpt from the Apache error log:

        [Mon Mar 15 06:39:25 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 07:41:31 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 08:24:16 2010] [error] [client 67.19.250.146] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 08:43:46 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 08:54:02 2010] [error] [client 74.208.123.71] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 09:09:51 2010] [notice] child pid 2138 exit signal Segmentation fault (11), possible coredump in /tmp
        [Mon Mar 15 09:45:27 2010] [error] [client 213.246.222.74] client sent HTTP/1.1 request without hostname (see RFC2616 section 14.23): /w00tw00t.at.ISC.SANS.DFind:)
        [Mon Mar 15 09:49:05 2010] [error] [client 190.12.113.196] File does not exist: /var/www/vhosts/default/htdocs/phpMyAdmin
        [Mon Mar 15 09:49:06 2010] [error] [client 190.12.113.196] File does not exist: /var/www/vhosts/default/htdocs/PMA

    And the access log around the same time (09:09:51):

        213.246.222.74 - - [15/Mar/2010:08:43:46 +0000] "GET /" 400 561 "-" "-"
        208.80.193.28 - - [15/Mar/2010:08:52:20 +0000] "GET / HTTP/1.0" 301 313 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; SU 2.009)"
        74.208.123.71 - - [15/Mar/2010:08:54:02 +0000] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 298 "-" "-"
        81.149.146.231 - - [15/Mar/2010:09:15:18 +0000] "GET /zabbix/ HTTP/1.1" 200 3565 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_2; en-us) AppleWebKit/531.21.8 (KHTML, like Gecko) Version/4.0.4 Safari/531.21.10"
        81.158.71.196 - - [15/Mar/2010:09:16:06 +0000] "GET / HTTP/1.1" 301 313 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9.0.18) Gecko/2010020219 Firefox/3.0.18"
        213.246.222.74 - - [15/Mar/2010:09:45:27 +0000] "GET /" 400 561 "-" "-"
        213.246.222.74 - - [15/Mar/2010:09:45:27 +0000] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 298 "-" "-"
        190.12.113.196 - - [15/Mar/2010:09:49:05 +0000] "GET /phpMyAdmin/main.php HTTP/1.0" 404 295 "-" "-"

    So as you can see, there's no access logged around the time of the fault! How annoying. I enabled core dumps and here is the backtrace:

        #0  0x00007f9c8c8a858b in memcpy () from /lib64/libc.so.6
        No symbol table info available.
        #1  0x00007f9c8cfb066d in apr_pstrcat (a=<value optimized out>) at strings/apr_strings.c:165
                cp = 0x1fa6b "\205¦H\211¦t`¦\003"
                argp = 0x7f9c9ad790e8 "Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Referer, Re"...
                res = 0x0
                saved_lengths = {129643, 2, 43, 140310399395576, 0, 140310394592712}
                nargs = <value optimized out>
                len = <value optimized out>
                adummy = {{gp_offset = 16, fp_offset = 32668, overflow_arg_area = 0x7fff968a0ec0, reg_save_area = 0x7fff968a0de0}}
        #2  0x00007f9c8cfb1bf9 in apr_table_merge (t=0x7f9c8f83b148, key=0x7f9c85a465fe "Vary", val=0x7f9c9ad99070 "Referer, Referer, Referer, Referer, Referer") at tables/apr_tables.c:688
                next_elt = (apr_table_entry_t *) 0x7f9c8f83b270
                end_elt = (apr_table_entry_t *) 0x7f9c8f83b270
                checksum = <value optimized out>
                hash = 22
        #3  0x00007f9c85a42cfa in ?? () from /etc/httpd/modules/mod_rewrite.so
        No symbol table info available.
        #4  0x00007f9c85a44022 in ?? () from /etc/httpd/modules/mod_rewrite.so
        No symbol table info available.
        #5  0x00007f9c8e87bd1a in ap_run_fixups () from /usr/sbin/httpd
        No symbol table info available.
        #6  0x00007f9c8e88e8f8 in ap_process_request () from /usr/sbin/httpd
        No symbol table info available.
        #7  0x00007f9c8e88bb40 in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #8  0x00007f9c8e887ca2 in ap_run_process_connection () from /usr/sbin/httpd
        No symbol table info available.
        #9  0x00007f9c8e892849 in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #10 0x00007f9c8e892ada in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #11 0x00007f9c8e892b90 in ?? () from /usr/sbin/httpd
        No symbol table info available.
        #12 0x00007f9c8e89387b in ap_mpm_run () from /usr/sbin/httpd
        No symbol table info available.
        #13 0x00007f9c8e86de48 in main () from /usr/sbin/httpd
        No symbol table info available.

    Can anyone shed any light on how to move forward with this? I can confirm that the server is operational and doesn't appear to be misbehaving - the failures are so infrequent that I haven't seen it do one while making a request myself. Really appreciate any help! Cheers!

    Read the article

  • G++ Multi-platform memory leak detection tool

    - by indyK1ng
    Does anyone know where I can find a memory-leak detection tool for C++ which can be run either from the command line or as an Eclipse plug-in on Windows and Linux? I would like it to be easy to use - preferably one that doesn't overwrite new(), delete(), malloc() or free(). Something like GDB if it's going to be on the command line, but I don't remember GDB being used for detecting memory leaks. If there is a unit-testing framework which does this automatically, that would be great. This question is similar to other questions (such as http://stackoverflow.com/questions/283726/memory-leak-detection-under-windows-for-gnu-c-c), however I feel it is different because those ask for Windows-specific solutions or have solutions which I would rather avoid. I feel I am looking for something a bit more specific here. Suggestions don't have to fulfill all requirements, but as many as possible would be nice. Thanks. EDIT: Since this has come up: by "overwrite" I mean anything which requires me to #include a library or which otherwise changes how C++ compiles my code; if it does this at run time, so that running the code in a different environment won't affect anything, that would be great. Also, unfortunately, I don't have a Mac, so any suggestions for that are unhelpful, but thank you for trying. My desktop runs Windows (I have Linux installed but my dual monitors don't work with it) and I'd rather not run Linux in a VM, although that is certainly an option. My laptop runs Linux, so I can use the tool on there, although I would definitely prefer sticking to my desktop, as the screen space is excellent for keeping all of the design documentation and requirements in view without having to move too much around on the desktop. NOTE: While I may try answers, I won't mark one as accepted until I have tried the suggestion and it is satisfactory. EDIT2: I'm not worried about the cross-platform compatibility of my code; it's a command-line application using just the C++ libraries.

    Read the article

  • How is an array stored in memory?

    - by George
    In an interest to delve deeper into how memory is allocated and stored, I have written an application that can scan a memory address space, find a value, and write out a new value. I developed a sample application with the end goal of being able to programmatically locate my array and overwrite it with a new sequence of numbers. In this situation, I created a single-dimensional array with 5 elements, e.g.

        int[] array = new int[] { 8, 7, 6, 5, 4 };

    I ran my application and searched for the sequence of the five numbers above: I was looking for any value that fell between 4 and 8, for a total of 5 numbers in a row. Unfortunately, the sequential numbers in my array matched hundreds of results, as the numbers 4 through 8, in no particular sequence, happened to be next to each other in memory in many situations. Is there any way to distinguish that a set of numbers within memory represents an array, and not simply integers that happen to be next to each other? Is there any way of knowing that if I find a certain value, the matching values following it are those of an array? I would assume that when I declare int[] array, it points at the first address of my array, which would provide some kind of metadata about what exists in the array, e.g.

        0x123456789        metadata, 5 x 32-bit integers
        0x123456789 + 32   "8"
        0x123456789 + 64   "7"
        0x123456789 + 96   "6"
        0x123456789 + 128  "5"
        0x123456789 + 160  "4"

    Am I way off base?
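    One way I thought of checking this (a sketch; the negative-offset read assumes the 32-bit desktop CLR's array layout - a method table pointer and a 4-byte length in front of the elements - which is an undocumented implementation detail, not a contract):

        using System;
        using System.Runtime.InteropServices;

        class ArrayLayout
        {
            static void Main()
            {
                int[] array = { 8, 7, 6, 5, 4 };

                // Pin the array so the GC cannot move it, then take its address.
                GCHandle handle = GCHandle.Alloc(array, GCHandleType.Pinned);
                try
                {
                    IntPtr first = handle.AddrOfPinnedObject(); // address of element 0
                    // On the 32-bit CLR the 4-byte length field sits immediately
                    // before the first element.
                    int length = Marshal.ReadInt32(first, -4);
                    Console.WriteLine("element 0 at 0x{0:X}, length field = {1}",
                                      first.ToInt64(), length);
                }
                finally
                {
                    handle.Free();
                }
            }
        }

    If the scanner knows the element values it is looking for, checking for a plausible length field just before the first matching value is one heuristic for telling an array apart from loose integers.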

    Read the article

  • Firefox consumes too much memory

    - by Vivek
    I have Firefox 11.0 and am running Ubuntu 11.10. Firefox takes up to 850 MB of RAM with only six or seven tabs open, all loaded with lightweight websites. I wonder why a browser would consume so much memory, and it keeps increasing its memory consumption over time. I have 3 GB of RAM, and most of the time Firefox consumes up to 30% of my memory. How do I fix this? EDIT: The output of the command sudo iotop -oPa, as asked for by @Jippie

    Read the article

  • EBS: OPP Out of memory issue...

    - by ashish.shrivastava
    The FO Processor is a little hungrier for memory than other Java processes. If the XSLT scalable option is not set and at the same time your RTF template is not well optimized, you are definitely going to hit an Out of Memory exception while working with large volumes of data. If the memory requirement is not too bad, you can set the OPP heap size using the following SQL queries. Check the current OPP JVM heap size:

        SQL> select DEVELOPER_PARAMETERS from FND_CP_SERVICES
             where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
                                 where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

        DEVELOPER_PARAMETERS
        -----------------------------------------------------
        J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx512m

    Set the JVM heap size:

        SQL> update FND_CP_SERVICES
             set DEVELOPER_PARAMETERS = 'J:oracle.apps.fnd.cp.gsf.GSMServiceController:-mx2048m'
             where SERVICE_ID = (select MANAGER_TYPE from FND_CONCURRENT_QUEUES
                                 where CONCURRENT_QUEUE_NAME = 'FNDCPOPP');

        SQL> commit;

    You need to restart the Concurrent Manager to make the change effective. If this does not resolve the issue, you need to optimize the RTF template and set the XSLT scalable option to true.

    Read the article

  • Getting started with Oracle Database In-Memory Part III - Querying The IM Column Store

    - by Maria Colgan
    In my previous blog posts, I described how to install, enable, and populate the In-Memory column store (IM column store). This week's post focuses on how data is accessed within the IM column store. Let's take a simple query: "What is the most expensive air-mail order we have received to date?"

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

    The LINEORDER table has been populated into the IM column store, and since we have no alternative access paths (indexes or views), the execution plan for this query is a full table scan of the LINEORDER table. You will notice that the execution plan has a new set of keywords, "IN MEMORY", in the access method description in the Operation column. These keywords indicate that the LINEORDER table has been marked for INMEMORY and we may use the IM column store in this query. What do I mean by "may use"? There are a small number of cases where we won't use the IM column store even though the object has been marked INMEMORY. This is similar to how the keyword STORAGE is used in Exadata environments. You can confirm that the IM column store was actually used by examining the session-level statistics, but more on that later. For now let's focus on how the data is accessed in the IM column store and why it's faster to access the data in the new column format, for analytical queries, than in the buffer cache. There are four main reasons why accessing the data in the IM column store is more efficient.

    1. Access only the column data needed. The IM column store only has to scan two columns - lo_shipmode and lo_ordtotalprice - to execute this query, while the traditional row store or buffer cache has to scan all of the columns in each row of the LINEORDER table until it reaches both the lo_shipmode and the lo_ordtotalprice columns.

    2. Scan and filter data in its compressed format. When data is populated into the IM column store it is automatically compressed using a new set of compression algorithms that allow WHERE clause predicates to be applied against the compressed formats. This means the volume of data scanned in the IM column store for our query will be far less than for the same query in the buffer cache, where the data is scanned in its uncompressed form, which could be 20X larger.

    3. Prune out any unnecessary data within each column. The fastest read you can execute is the read you don't do. In the IM column store a further reduction in the amount of data accessed is possible thanks to the In-Memory storage indexes (IM storage indexes) that are automatically created and maintained on each of the columns in the IM column store. IM storage indexes allow data pruning to occur based on the filter predicates supplied in a SQL statement. An IM storage index keeps track of minimum and maximum values for each column in each In-Memory Compression Unit (IMCU). In our query the WHERE clause predicate is on the lo_shipmode column. The IM storage index on the lo_shipmode column is examined to determine if our specified column value 5 exists in any IMCU, by comparing the value 5 to the minimum and maximum values maintained in the storage index. If the value 5 is outside the minimum and maximum range for an IMCU, the scan of that IMCU is avoided. For the IMCUs where the value 5 does fall within the min/max range, an additional level of data pruning is possible via the metadata dictionary created when dictionary-based compression is used on an IMCU. The dictionary contains a list of the unique column values within the IMCU. Since we have an equality predicate, we can easily determine if 5 is one of the distinct column values or not. The combination of the IM storage index and dictionary-based pruning enables us to scan only the necessary IMCUs.

    4. Use SIMD to apply filter predicates. For the IMCUs that need to be scanned, Oracle takes advantage of SIMD vector processing (Single Instruction processing Multiple Data values). Instead of evaluating each entry in the column one at a time, SIMD vector processing allows a set of column values to be evaluated together in a single CPU instruction. The column format used in the IM column store has been specifically designed to maximize the number of column entries that can be loaded into the vector registers on the CPU and evaluated in a single CPU instruction. SIMD vector processing enables Oracle Database In-Memory to scan billions of rows per second per core, versus the millions of rows per second per core that can be achieved in the buffer cache.

    I mentioned earlier in this post that in order to confirm the IM column store was used, we need to examine the session-level statistics. You can monitor the session-level statistics by querying the performance views v$mystat and v$statname. All of the statistics related to the In-Memory column store begin with IM. You can see the full list of these statistics by typing:

        column display_name format a30
        SELECT display_name
        FROM   v$statname
        WHERE  display_name LIKE 'IM%';

    If we check the session statistics after we execute our query, the results are as follows:

        SELECT Max(lo_ordtotalprice) most_expensive_order
        FROM   lineorder
        WHERE  lo_shipmode = 5;

        SELECT display_name
        FROM   v$statname
        WHERE  display_name IN ('IM scan CUs columns accessed',
                                'IM scan segments minmax eligible',
                                'IM scan CUs pruned');

    As you can see, only 2 IMCUs were accessed during the scan, as the majority of the IMCUs (44) in the LINEORDER table were pruned out thanks to the storage index on the lo_shipmode column. In next week's post I will describe how you can control which queries use the IM column store and which don't. +Maria Colgan

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below. I am trying to address some self-inflicted design pain that is now leading to my application crashing due to out-of-memory errors. An abridged description of the problem domain is as follows:

      - The application takes in a "dataset" that consists of numerous text files containing related data.
      - An individual text file within the dataset usually contains approx 20 "headers" that contain metadata about the data it contains. It also contains a large tab-delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable, from 2 to 256+ columns.

    The original application was written to allow users to load a dataset and map certain columns of each of the files, which basically indicates key information on the files showing how they are related, as well as identifying a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model, used to cater for the variable columns per file. I know EAV has its detractors, but in this case I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset.

    The memory problem: Given that the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then:

      - perform all the validation whilst the data was in memory;
      - relate it using an object model;
      - start a DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then apply the Id of the newly written row to all related data;
      - once all related data has been updated with the key information to which it relates, write these records using SqlBulkCopy. Due to our EAV model, we essentially have x columns by y rows to write, where x can be 256+ and rows are often into the tens of thousands;
      - once all the data is written without error (which can take several minutes for large datasets), commit the transaction.

    The problem now comes from the fact that we are receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We've started seeing datasets of around 100 megs coming in, and I expect it is only going to get bigger from here on in. With files of this size, data can't even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line (see the sketch after the questions), and am not exactly decided on how to handle the import and transactions.

    Potential improvements: I've wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. It would certainly increase the storage required, though, especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can't be trusted to be unique across all submitters)? Another thought is the use of staging tables to get data into the database, and only performing the transaction to copy data from the staging area to the actual destination tables.

    Questions: For systems like this that import large quantities of data, how do you go about keeping transactions small? I've kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab-delimited data section is read into a DataTable to be viewed in a grid. I don't need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
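    For the line-by-line validation, I picture something roughly like this (a sketch; the column-count rule and the names are placeholders for our real validation rules):

        using System;
        using System.IO;

        class StreamingValidator
        {
            // Validate a large tab-delimited file one line at a time instead of
            // materialising the whole file; memory use stays flat regardless of size.
            static int CountBadLines(string path, int expectedColumns)
            {
                int badLines = 0;
                using (var reader = new StreamReader(path))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] fields = line.Split('\t');
                        if (fields.Length != expectedColumns)
                            badLines++; // a real validator would record line number and reason
                    }
                }
                return badLines;
            }
        }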

    Read the article

  • Applications affected by memory performance

    - by robotron
    I'm writing a paper on the topic of applications affected more by memory performance than by processor performance. I've got a lot written about the gap between the two; however, I can't seem to find anything about the applications that might be affected more by memory performance than by processor speed. I suppose these are applications that make a large number of memory references, but I have no idea what kind of applications would make such a large number of references that it stands out. Perhaps databases? Can you please give me some pointers on how to proceed, or some links to papers? I'm really stuck.
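    The textbook illustration of a memory-latency-bound workload is pointer chasing, where each load depends on the previous one, versus a sequential scan that the hardware prefetcher can stream. A rough microbenchmark sketch (sizes are arbitrary; only the relative timings matter):

        using System;
        using System.Diagnostics;

        class LatencyBound
        {
            static void Main()
            {
                const int n = 1 << 24;          // ~16M entries (64 MB), far larger than cache
                int[] next = new int[n];

                // Build one big random cycle so every load depends on the last.
                int[] perm = new int[n];
                for (int i = 0; i < n; i++) perm[i] = i;
                var rnd = new Random(1);
                for (int i = n - 1; i > 0; i--)
                {
                    int j = rnd.Next(i + 1);
                    int t = perm[i]; perm[i] = perm[j]; perm[j] = t;
                }
                for (int i = 0; i < n; i++) next[perm[i]] = perm[(i + 1) % n];

                var sw = Stopwatch.StartNew();
                int p = 0;
                for (int i = 0; i < n; i++) p = next[p];    // serial dependent loads
                Console.WriteLine("pointer chase: {0} ms (p={1})", sw.ElapsedMilliseconds, p);

                sw.Restart();
                long sum = 0;
                for (int i = 0; i < n; i++) sum += next[i]; // prefetch-friendly scan
                Console.WriteLine("linear scan:   {0} ms (sum={1})", sw.ElapsedMilliseconds, sum);
            }
        }

    The chase typically runs far slower than the scan over the same data, which is exactly the "affected more by memory than by processor" behaviour the paper is after; databases, graph traversals and garbage collectors tend to look like the first loop.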

    Read the article

  • Where's my memory going?

    - by Stu2000
    My machine keeps 'freezing' before eventually logging out with all the programs exiting. This is rather annoying, and I think it's because I keep running out of memory. I am not running any custom software, just NetBeans, Chrome, etc. (stuff I usually run on other Ubuntu computers without issue). For some reason my memory usage is through the roof, as seen here, but I can't quite figure out why. Here is a screenshot which may be useful, with htop and GNOME System Monitor open as user and as root. I notice that my console-kit-daemon is taking up about a gig of 'virtual memory'. Is that normal? Any tips/advice would be helpful. In the meantime I have ordered 2 x 4 GB RAM sticks to try and just throw hardware at the issue.

    Read the article

  • Loading saved byte array to memory stream causes out of memory exception

    - by user2320861
    At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object:

        // myDataRow has a byte[] column named BackgroundImageByteArray
        MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow();
        using (MemoryStream stream = new MemoryStream())
        {
            this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
            myDataRow.BackgroundImageByteArray = stream.ToArray();
        }

    Everything works fine; there is no out-of-memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an Out of Memory exception:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
        {
            this.Panel.BackgroundImage = Image.FromStream(stream);
        }

    The streams are the same length. I don't understand how one throws an out-of-memory exception and the other doesn't. How can I load this bitmap? P.S. I've also tried:

        using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
        {
            stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length);
            // throws the OoM exception here
        }
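    One workaround I've seen suggested (a sketch, assuming the real culprit is GDI+ keeping the stream in use after Image.FromStream returns - disposing the stream can then surface later as a generic "out of memory" GDI+ error) is to copy the decoded image and let the temporary stream go:

        using System.Drawing;
        using System.IO;

        class BitmapLoader
        {
            static Image FromBytes(byte[] data)
            {
                using (var stream = new MemoryStream(data))
                using (var temp = Image.FromStream(stream))
                {
                    // Full in-memory copy, independent of the underlying stream,
                    // so both stream and temp can be disposed safely.
                    return new Bitmap(temp);
                }
            }
        }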

    Read the article

  • Memory allocation in case of static variables

    - by eSKay
    I am always confused about static variables and the way memory allocation happens for them. For example:

        int a = 1;
        const int b = 2;
        static const int c = 3;

        int foo(int &arg)
        {
            arg++;
            return arg;
        }

    How is the memory allocated for a, b and c? What is the difference (in terms of memory) if I call foo(a), foo(b) and foo(c)?

    Read the article
