Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 62/670 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68 69  | Next Page >

  • Understanding how memory contents map into a struct

    - by user95592
    I am not able to understand how bytes in memory are mapped into a struct. My machine is a little-endian x86_64. The code was compiled with gcc 4.7.0 from the mingw-w64 distribution for Win64. These are the contents of the relevant memory fragment: ...450002cf9fe5000040115a9fc0a8fe... And this is the struct definition: typedef struct ip4 { unsigned int ihl :4; unsigned int version :4; uint8_t tos; uint16_t tot_len; uint16_t id; uint16_t frag_off; // flags=3 bits, offset=13 bits uint8_t ttl; uint8_t protocol; uint16_t check; uint32_t saddr; uint32_t daddr; /* The options start here. */ } ip4_t; When a pointer to such a structure (call it *ip4) is initialized to the starting address of the memory region above, this is what the debugger shows for the struct's fields: ip4: address=0x8da36ce; ip4->ihl: address=0x8da36ce, value=0x5; ip4->version: address=0x8da36ce, value=0x4; ip4->tos: address=0x8da36d2, value=0x9f; ip4->tot_len: address=0x8da36d4, value=0x0 ... I see how ihl and version are mapped: a 4-byte integer, little-endian. But I don't understand how tos and tot_len are mapped, i.e. which bytes in memory correspond to each of them. Thank you in advance.
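
    A quick way to see where a given compiler actually places each field is to print the member offsets. Below is a minimal illustrative check (not from the original post); the offsets it reports under MinGW-w64's MS-compatible bit-field layout match the debugger addresses quoted above (tos at +4, tot_len at +6), which is why the struct no longer lines up with the packed on-wire IPv4 header.

        // Hypothetical check, not the poster's code: print where the compiler puts each member.
        #include <cstdio>
        #include <cstddef>   // offsetof
        #include <cstdint>

        typedef struct ip4 {
            unsigned int ihl     :4;
            unsigned int version :4;  // both bit-fields live in one unsigned int allocation unit
            uint8_t  tos;
            uint16_t tot_len;
            uint16_t id;
            uint16_t frag_off;
            uint8_t  ttl;
            uint8_t  protocol;
            uint16_t check;
            uint32_t saddr;
            uint32_t daddr;
        } ip4_t;

        int main() {
            // With MS-compatible bit-field layout (the MinGW-w64 default) the bit-field
            // unit occupies bytes 0-3, so the following members start at offset 4 rather
            // than at offset 1 as in the on-wire IPv4 header.
            std::printf("sizeof(ip4_t) = %zu\n", sizeof(ip4_t));
            std::printf("tos     at offset %zu\n", offsetof(ip4_t, tos));      // 4 on the poster's build
            std::printf("tot_len at offset %zu\n", offsetof(ip4_t, tot_len));  // 6 on the poster's build
            std::printf("saddr   at offset %zu\n", offsetof(ip4_t, saddr));
            return 0;
        }

    Read against the dump ...4500 02cf 9fe5..., offsets 4 and 6 land on the 0x9f byte and the two 0x00 bytes the debugger reports for tos and tot_len, which suggests the struct needs a packed attribute (or per-byte parsing with ntohs for the 16-bit fields) before it can overlay the raw packet.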

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of via traditional memory values. But TXPAUSE gives us a polite way to spin.
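
    For reference, here is a minimal sketch of the "tight spin loop in a transaction" fallback described above, written against Intel's RTM intrinsics (_xbegin/_xend from immintrin.h, compiled with -mrtm on a CPU that supports RTM). The function name and the flag variable are illustrative assumptions, and TXPAUSE itself remains a proposed instruction with no intrinsic to call; this just shows the impolite version it would replace.

        #include <immintrin.h>   // _xbegin, _xend, _XBEGIN_STARTED, _mm_pause

        // Sketch: wait until *flag becomes non-zero. Reading the flag inside the
        // transaction puts its cache line in the read-set, so another thread's store
        // aborts us via a coherence probe -- the "communicate via probes" behaviour
        // described in the post. TXPAUSE would replace the busy loop with a polite stall.
        static bool wait_until_set(volatile int* flag)
        {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                if (*flag) {          // already set: commit and return
                    _xend();
                    return true;
                }
                for (;;)              // spin; the only way out is an abort
                    _mm_pause();
            }
            // Aborted (a writer touched the flag) or the transaction failed to start:
            // re-check the flag directly and let the caller decide whether to retry.
            return *flag != 0;
        }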

    Read the article

  • Bandwidth limit in PHP not working

    - by Saxtor
    Hey, how are you doing, guys? I am trying to limit bandwidth per user, not by IP address, and for some reason my code doesn't work; I need some help. What I am trying to do is limit each user's downloads so that they only have 10 GB per day to download. However, it seems my buffer is not working: when I use multiple connections it doesn't work, but when I use one connection it works about 80% of the time. Here is my code; can you help me debug the error? Thanks.

        /**
         * @author saxtor if you can improve this code email me [email protected]
         * @copyright 2010
         */
        /**
         * CREATE TABLE IF NOT EXISTS `max_traffic` (
         *   `id` int(255) NOT NULL AUTO_INCREMENT,
         *   `limit` int(255) NOT NULL,
         *   PRIMARY KEY (`id`)
         * ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=0 ;
         */

        //SQL Connection [this is hackable for testing]
        date_default_timezone_set("America/Guyana");
        mysql_connect("localhost", "root", "") or die(mysql_error());
        mysql_select_db("Quota") or die(mysql_error());

        function quota($id)
        {
            $result = mysql_query("SELECT `limit` FROM max_traffic WHERE id='$id' ") or die(error_log(mysql_error()));
            $row = mysql_fetch_array($result);
            return $row[0];
        }

        function update_quota($id, $value)
        {
            $result = mysql_query("UPDATE `max_traffic` SET `limit`='$value' WHERE id='$id'") or die(mysql_error());
            return $value;
        }

        if (quota(1) != 0)
            $limit = quota(1);
        else
            $limit = 0;

        $multipart = false;
        //was a part of the file requested? (partial download)
        $range = $_SERVER["HTTP_RANGE"];
        if ($range) {
            $cookie .= "\r\nRange: $range";
            $multipart = true;
            header("X-UR-RANGE-Range: $range");
        }

        $url = 'http://127.0.0.1/puppy.iso';
        $filename = basename($url);

        //octet-stream + attachment => client always stores file
        header('Content-type: application/octet-stream');
        header('Content-Disposition: attachment; filename="'.$filename.'"');
        //always included so clients know this script supports resuming
        header("Accept-Ranges: bytes");

        $user_agent = ini_get("user_agent");
        ini_set("user_agent", $user_agent . "\r\nCookie: enc=$cookie");
        $httphandle = fopen($url, "r");
        $headers = stream_get_meta_data($httphandle);
        $size = $headers["wrapper_data"][6];
        $sizer = explode(' ', $size);
        $size = $sizer[1];

        //let's check the return header of rapidshare for range / length indicators
        //we'll just pass these to the client
        foreach ($headers["wrapper_data"] as $header) {
            $header = trim($header);
            if (substr(strtolower($header), 0, strlen("content-range")) == "content-range") {
                // _insert($range);
                header($header);
                header("X-RS-RANGE-" . $header);
                $multipart = true; //content-range indicates partial download
            } elseif (substr(strtolower($header), 0, strlen("Content-Length")) == "content-length") {
                // _insert($range);
                header($header);
                header("X-RS-CL-" . $header);
            }
        }

        if ($multipart)
            header('HTTP/1.1 206 Partial Content');
        flush();

        $speed = 4128;
        $packet = 1;    //this is private dont touch.
        $bufsize = 128; //this is private dont touch.
        $bandwidth = 0; //this is private dont touch.

        while (!(connection_aborted() || connection_status() == 1) && $size > 0) {
            while (!feof($httphandle) && $size > 0) {
                if ($limit <= 0)
                    $size = 0;
                if ($size < $bufsize && $size != 0 && $limit != 0) {
                    echo fread($httphandle, $size);
                    $bandwidth += $size;
                } else {
                    if ($limit != 0)
                        echo fread($httphandle, $bufsize);
                    $bandwidth += $bufsize;
                }
                $size -= $bufsize;
                $limit -= $bufsize;
                flush();
                if ($speed > 0 && ($bandwidth > $speed * $packet * 103)) {
                    usleep(100000);
                    $packet++;
                    //update_quota(1,$limit);
                }
                error_log(update_quota(1, $limit));
                $limit = quota(1);
                //if( $size <= 0 )
                //    exit;
            }
            fclose($httphandle);
        }
        exit;

    Read the article

  • [PHP] - Lowering script memory usage when creating a "big" file

    - by Riccardo
    Hi there people, it looks like I'm facing a typical memory exhaustion problem when using a PHP script. The script, originally developed by another person, serves as an XML sitemap creator, and on large websites it uses quite a lot of memory. I thought that the problem was related to an algorithm holding data in memory until the job was done, but digging into the code I have discovered that the script works this way: open the output file (it will contain the XML sitemap entries); in a loop, for each entry to be added to the sitemap, do an fwrite; close the file; end. Although there are no huge arrays or variables being kept in memory, this technique uses a lot of memory. I thought that maybe PHP was buffering the fwrites under the hood and "flushing" the data at the end of the script, so I modified the code to close and reopen the file every Nth record, but the memory usage is still the same... I'm debugging the script on my computer and watching memory usage: while the script runs, memory allocation grows. Is there a particular technique to instruct PHP to free unused memory, or to force flushing of buffers if any? Thanks

    Read the article

  • C++: Is there a way to limit access to certain methods to certain classes without exposing other private members?

    - by dadads
    I have a class with a protected method Zig::punt() and I only want it to be accessible to the class "Avocado". In C++ you'd normally do this using the "friend Avocado" specifier, but this causes all of the other members to become accessible to the "Avocado" class as well; I don't want this because it breaks encapsulation. Is what I want impossible, or does there already exist an obscure trick I can use to achieve it? Or possibly an alternative class design pattern that would achieve the same thing? Thanks in advance for any ideas!
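
    One commonly suggested workaround is the "passkey" idiom: gate punt() behind a small key type whose constructor only Avocado can reach, so Avocado gets that one call while the rest of Zig stays hidden. A minimal sketch, with the key class name invented for illustration:

        class Avocado;               // forward declaration

        class ZigPuntKey {           // the "passkey": only Avocado can construct one
            friend class Avocado;
            ZigPuntKey() {}          // private constructor
        };

        class Zig {
        public:
            void punt(ZigPuntKey) {  // callable only by code that can produce a key
                // ...
            }
        private:
            int secret_ = 0;         // still inaccessible to Avocado
        };

        class Avocado {
        public:
            void kick(Zig& z) {
                z.punt(ZigPuntKey{}); // OK: Avocado is a friend of the key class
            }
        };

    The trade-off is that punt() is public in the access-specifier sense, but in practice only Avocado (or code Avocado hands a key to) can actually call it, and none of Zig's other private members are exposed.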

    Read the article

  • (jQuery): After Ajax success, how to limit parsing to pieces and insert piece by piece into the DOM

    - by Johua
    Let's say that on Ajax success the function xmlParser(xml) is called, and the XML response contains about 1500 XML elements (for example: ...). Inside it looks like: function xmlParser(xml){ var xmlDOM = $(xml); // dom object containing xml response xmlDOM.find("book").each(function(){ var $node = $(this); // parsing all the <book> nodes }); } How can you say: parse only the first 20 nodes from the XML response and do something with them. And you have a button which, when clicked, parses the next 20 nodes, and so on. So basically, is there something like children(20), and a function which will parse the next 20 nodes on click? Thanks

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new "401 – Internals" tag that identifies posts where I pull back the curtains a bit and let you peek into what's going on inside the Extended Events engine. In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY option, along with MEMORY_PARTITION_MODE, defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula: max memory / # of buffers = buffer size. If it were that simple I wouldn't be writing this post.
    I'll take "boundary" for 64K, Alex
    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs. Note: This test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)
        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637  5  130867  654335
        638  5  130867  654335
        639  5  130867  654335
        640  5  196403  982015
        641  5  196403  982015
        642  5  196403  982015
    This is just a segment of the results that shows one of the "jumps" between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes: 196403 – 130867 = 65536 bytes; 65536 / 1024 = 64 KB. The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a "range table" for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1     191   NULL  NULL     NULL
        192   383   3     130867   392601
        384   575   3     196403   589209
        576   767   3     261939   785817
        768   959   3     327475   982425
        960   1151  3     393011   1179033
        1152  1343  3     458547   1375641
        1344  1535  3     524083   1572249
        1536  1727  3     589619   1768857
        1728  1919  3     655155   1965465
        1920  2111  3     720691   2162073
        2112  2303  3     786227   2358681
        2304  2495  3     851763   2555289
        2496  2687  3     917299   2751897
        2688  2879  3     982835   2948505
        2880  3071  3     1048371  3145113
        3072  3263  3     1113907  3341721
        3264  3455  3     1179443  3538329
        3456  3647  3     1244979  3734937
        3648  3839  3     1310515  3931545
        3840  4031  3     1376051  4128153
        4032  4096  3     1441587  4324761
    As you can see, there are 21 "steps" within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.
    Max approximates True as memory approaches 64K
    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really only refers to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (the 576–767 KB row in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get "stepped up" for every 191 KB block of initial max_memory, which isn't likely to cause a problem for most machines. Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets "stepped up" when you break a boundary, the delta can get much larger because it's multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you've just broken through a 64K boundary and get "stepped up" to the next buffer size, you'll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you're getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you'll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The "buffer steps" for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate
        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER
                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session
                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session
                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH
            SET @buf_size = @buf_size + 1
        END
        DROP EVENT SESSION buffer_size_test ON SERVER
        SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb, total_regular_buffers, regular_buffer_size, total_buffer_size
        from @buf_size_output
        group by total_regular_buffers, regular_buffer_size, total_buffer_size
    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Events internals. - Mike

    Read the article

  • Hekaton – SQL Server’s in-memory database engine

    - by Christian
    Microsoft have just gone public at the PASS Summit in Seattle about a new SQL Server engine that they're working on which is optimized for high-memory servers – an in-memory OLTP database engine which is built into SQL Server rather than being a separate entity. This means that you can move just the performance-critical parts of your database to Hekaton. The new engine really pushes the performance boundaries by eliminating as many instructions as possible: main-memory-optimized tables which are decoupled from on-disk structures; everything is lock- and latch-free; more work is pushed to compile time so your T-SQL code is compiled natively into low-level code. We're already working with a customer on an early adoption program, so expect to hear from us on what we learn about implementing it!   Christian Bolton - MCA, MCM, MVP Technical Director http://coeo.com - SQL Server Consulting & Managed Services

    Read the article

  • Best in-memory cache of DB objects for Silverlight [closed]

    - by Jon
    Hi, I'd like to set up a cache of database objects (i.e. rows in a table) in memory in silverlight, which I'll do using WCF and linq-to-sql. Once I have the objects in memory, I'm planning on using MSMQ to receive new objects whenever they have been modified. It's a somewhat complex approach but the goal is to reduce trips to the database and allow instant data communication between Silverlight applications that are connected to the MSMQ. My Silverlight applications are meant to be long-running and the amount of data to be cached will not be large. I'm planning on saving the in-memory cache using local storage. Anyway, in order to process the updated objects that come in, I'd like to know if the user has changed the existing object. Could I use some event relating to data-binding to set a flag indicating that the object has changes? Maybe there's a better way to do the cache entirely? Thanks!

    Read the article

  • Ubuntu One using 500 MB of memory even when idle

    - by cdysthe
    I'm a Dropbox convert (I hope!), but after having used Ubuntu One for a couple of weeks I notice a few differences from Dropbox. The most glaring difference is that the sync daemon constantly takes 500 MB of RAM on my system (Ubuntu 12.04 x64). It hogs this amount of memory as soon as I log in, does its initial sync/check, but keeps the memory. All in all it seems to me that Ubuntu One uses more system resources than Dropbox. I am syncing the same folders and files with Ubuntu One as I was with Dropbox. Also, after I log in, Ubuntu One grinds at 100% CPU for at least five minutes, which can be annoying on a laptop but is not a showstopper. I'm wondering if this is a problem on my system, or if Ubuntu One is expected to use that amount of memory even when idle?

    Read the article

  • Flash: Memory usage is low but framerate keeps dropping

    - by Cyborg771
    So I'm working on a puzzle game in flash. For all intents and purposes it's like Tetris. I spawn blocks, they move around the screen, then they get destroyed and disappear. I was having some trouble with the memory usage being too high over time so I read up on memory management and I think I have that figured out now. It's definitely climbing slower than it was before, but the framerate is still taking a huge dive after playing for a while. If it's not a memory leak what else could be causing this? Thanks in advance.

    Read the article

  • Larry Ellison Unveils Oracle Database In-Memory

    - by jgelhaus
    A Breakthrough Technology, Which Turns the Promise of Real-Time into a Reality Oracle Database In-Memory delivers leading-edge in-memory performance without the need to restrict functionality or accept compromises, complexity and risk. Deploying Oracle Database In-Memory with virtually any existing Oracle Database compatible application is as easy as flipping a switch--no application changes are required. It is fully integrated with Oracle Database's scale-up, scale-out, storage tiering, availability and security technologies making it the most industrial-strength offering in the industry. Learn More Read the Press Release Get Product Details View the Webcast On-Demand Replay Follow the conversation #DB12c #OracleDBIM

    Read the article

  • Podcast: Dell Perot Systems Relies on Oracle In-Memory Database Cache

    - by john.brust
    Recently we spoke with Bill Binko, Technology Consultant at Dell Perot Systems, about a high-volume, web-based content delivery system they implemented for a client with Oracle In-Memory Database Cache. Their client needed to respond to ~1 billion hits (web requests) per day, but hadn't been able to support this load. Oracle In-Memory Database Cache allowed multiple and complicated queries to take place without ever hitting the disk, providing sub-millisecond response times and the ability to manage much higher volumes of data. Old system: an old SQL Server database, over 300 servers, difficult to maintain. New system: one Oracle Database 11g instance, multiple Oracle RAC nodes, backed up by Oracle Data Guard, and Oracle In-Memory Database Cache to cut query response times by 10x. Listen to the podcast.

    Read the article

  • Taking advantage of an "Intel Turbo Memory" card for swap or fast bootup

    - by Brian Ballsun-Stanton
    I have an X61 ThinkPad (currently running 10.10) that I purchased 3 years ago. I splurged a little and got a Turbo Memory expansion to improve my Windows boots. When I installed 10.04 (and subsequently upgraded to 10.10) there was no Turbo Memory support, and there's an awful lot of noise in searches. 1) Is there any support for Intel Turbo Memory in 11.04, or is it trivially compilable into the kernel for use as swap, a suspend/hibernate point, or a boot partition? 2) If there is, should I bother trying to use it?

    Read the article

  • Out of memory on MATLAB

    - by Eric Sánchez
    I'm trying to run a script on MATLAB 2011a which calculates some means for a 50-year climatology. When I started to run the script for all the years it worked fine until the 20th iteration, and then this message appeared: Out of memory. Type HELP MEMORY for your options. Then I used clear v1 v2 v3 ... to clear all the variables inside the function; I also used clear train because I saw it in another forum. With or without these modifications, I ran the script again (from the 21st iteration), and the result is the same message, but curiously it sometimes runs a year and then stops. Any ideas about solving this problem? What do I have to clear for it to run correctly? (This MATLAB version doesn't have the memory command, which maybe could have helped me.)

    Read the article

  • WHMCS Fatal error: Out of memory when viewing an invoice PDF

    - by prakash
    I can log into WHMCS and can access everything I should be able to access, but if I try to click View PDF Invoice, the following error occurs: Fatal error: Out of memory (allocated 67633152) (tried to allocate 76 bytes) in /home/xxxx/public_html/whmcs/includes/classes/class.tcpdf.php on line 8419 I have already set the allocated memory limit to 256 MB, but the error still occurs. At the time of the error, the process memory is exceeding the allocation I set. I checked the log file and found the following errors: #2 /home/xxxxx/public_html/client/includes/classes/class.tcpdf.php(8453): TCPDF->Image('/home/xxxxx/...', 20, 25, 75, 17.5816023739, 'PNG', '', '', false, 300, '', false, 8) #3 /home/xxxxx/public_html/client/includes/classes/class.tcpdf.php(7881): TCPDF->ImagePngAlpha('/home/xxxxx/...', 20, 25, 337, 79, 75, 17.5816023739, 'PNG', '', '', false, 300, '', NULL) While I was investigating the issue above I also noticed the error condition pictured below:

    Read the article

  • Article about the benefits of Oracle Database In-Memory

    - by user645740
    An article about the revolutionary new Oracle Database In-Memory database feature has appeared on bitport.hu, under the title "Ugorjunk szintet a döntéshozatal gyorsaságában!" ("Let's step up a level in the speed of decision making!"). The most important benefits of Database In-Memory:
    Applications stay unchanged; nothing has to be modified in them.
    Everything is still found on disk just as before, there is no change to backups either, life goes on as usual; it "just" runs much faster!
    It can be switched on and configured in moments and requires almost no consulting effort: you only have to choose which objects it should apply to, what compression to use for them, and with what priority the data is loaded into memory.
    Other vendors' partial solutions come with large implementation costs!
    It needs no new infrastructure components, not even a new server.
    It can be used with every Oracle Database based system: transactional systems, mixed workloads, and also data warehouse, business analytics and business intelligence systems. Oracle Database In-Memory

    Read the article

  • Performance impact of the new mtmalloc memory allocator

    - by nospam(at)example.com (Joerg Moellenkamp)
    I wrote on a number of occasions (here or here) that it can be really beneficial to use a different memory allocator for highly threaded workloads, as the standard allocator is, well ... the standard, but not very effective as soon as many threads come into play. I didn't write about this at the time, as it was during my phase of silence, but there was some change in the allocator area: Solaris 10 got a revamped mtmalloc allocator in version Solaris 10 08/11 (as described in "libmtmalloc Improvements"). The new memory allocator was introduced to Solaris development by the PSARC case 2010/212. But what's the effect of this new allocator and how does it work? Rickey C. Weisner wrote a nice article, "How Memory Allocation Affects Performance in Multithreaded Programs", explaining the inner mechanisms of various allocators, and he also publishes test results comparing Hoard, mtmalloc, umem, the new mtmalloc and the libc malloc. A really interesting read and a must for people running applications on servers with a high number of threads.

    Read the article

  • New eBook: In-Memory Data Grids for Dummies

    - by jeckels
    We've just released a new eBook, In-Memory Data Grids for Dummies. This is a fantastic resource if you're looking to explain in-memory data grids to colleagues, convince your boss of their value, or even discover some new use cases for your existing investment. In true "Dummies" style, this eBook will walk you through the basic tenets of in-memory data grids, their common use cases, where IMDGs sit in your architecture, and some key considerations when looking to implement them. While the title may say "Dummies," we know you'll find some useful overview and technical information in the resource. It's published by us on the Coherence team in partnership with Wiley (the "Dummies" company), but it's not only about Coherence or Oracle. In fact, we took pains to make this book fairly neutral to give you the best information, not a product pitch. Happy reading! Download the eBook now

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (happens to be a backup from an android phone), I get:
        me@corellia:~/Configs/$ git push origin master
        Counting objects: 18, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (14/14), done.
        fatal: Out of memory, malloc failed MiB | 685 KiB/s
        error: pack-objects died of signal 13
        error: failed to push some refs to 'git@dagobah:Configs'
    I've been searching the web, and notably found: http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get:
        24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje
    Also, during the push if I run /head/meminfo, I get:
        MemTotal:   524288 kB
        MemFree:    289408 kB
        Buffers:         0 kB
        Cached:          0 kB
        SwapCached:      0 kB
        Active:          0 kB
        Inactive:        0 kB
        HighTotal:       0 kB
        HighFree:        0 kB
        LowTotal:   524288 kB
    So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command:
        scottj@dagobah:~$ ulimit -a
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 204800
        max locked memory       (kbytes, -l) 32
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 204800
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited

    Read the article

  • Per connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test this, transferring data from server to workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s. The same per-connection limit applied for a single connection as for 60 parallel connections. The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following: Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not configured; Group Policy Management Console: we have two GPOs, but neither has any policy-based QoS defined; there isn't any bandwidth limiter program installed, as far as I can tell. We're using the standard Windows Firewall. I can update this question with any additional information if needed.

    Read the article

  • Using AWStats, cannot get MaxNbOfExtraX to limit rows in Extra Report

    - by user137519
    Folks, I've got something really odd here I'd like to resolve. I've been using AWStats and have a couple of extra reports. I cannot get MaxNbOfExtraX to limit the rows in any of them. Here are two examples:
        ExtraSectionName1="Top 100 Searches"
        ExtraSectionCodeFilter1="200 304"
        ExtraSectionCondition1="URL,/search/search_post.php"
        ExtraSectionFirstColumnTitle1="Search Parameters"
        ExtraSectionFirstColumnValues1="QUERY_STRING,(.*)"
        ExtraSectionFirstColumnFormat1="QueryParameters: %s"
        ExtraSectionStatTypes1=HL
        ExtraSectionAddAverageRow1=0
        ExtraSectionAddSumRow1=1
        MaxNbOfExtra1=100
        MinHitExtra1=4

        ExtraSectionName2="Top 100 Downloads"
        ExtraSectionCodeFilter2="200 304"
        ExtraSectionCondition2="URL,/filedownload.php"
        ExtraSectionFirstColumnTitle2="File Downloads"
        ExtraSectionFirstColumnValues2="QUERY_STRING,(.[0-9]{5})(h|p)?."
        ExtraSectionFirstColumnFormat2="File ID: %s"
        ExtraSectionStatTypes2=HL
        ExtraSectionAddAverageRow2=0
        ExtraSectionAddSumRow2=1
        MaxNbOfExtra2=100
        MinHitExtra2=3
    According to all the documentation I've read, MaxNbOfExtra1 should keep the limit to 100. However, when I run this with the debug messages enabled, I get a message indicating that the query would be in excess of 500, and it would not run. I increased ExtraTrackedRowsLimit to 2000 and it would work, but the option I provided should have lowered that. I even tried without the ExtraTrackedRowsLimit, with MaxNbOfExtra1=100, but got the same error: no limit to 100 and the "excess of 500" error. I have URLWithQuery=1 and my reports do run properly along with my regex filters. I am using MinHitExtra1 to limit the rows and that works, but why can I not get the MaxNbOfExtraX option to work? Any ideas? Thanks in advance.

    Read the article

  • Why do Asp.net timers/updatepanels leak memory and can it be fixed/worked around?

    - by KallDrexx
    I have built a suite of internal websites for our company to manage some of our processes. I have been noticing that these pages have massive memory leaks that cause them to use well over 150 MB of memory, which is ridiculous for a webpage that consists of a single form and a GridView displaying 7-10 rows of data at a time, sometimes with the data not changing for a whole day. This data does need to be refreshed on a semi-regular basis so that we always see the latest results and can act on them. After some testing it appears that the memory leak is extremely easy to reproduce, and very noticeable. I created a page with the following ASP.NET markup:
        <body>
            <form id="form1" runat="server">
            <div>
                <asp:scriptmanager ID="Scriptmanager1" runat="server"></asp:scriptmanager>
                <asp:Timer ID="timer1" runat="server" Interval="1000" />
                <asp:UpdatePanel ID="UpdatePanel1" runat="server">
                    <ContentTemplate>
                    </ContentTemplate>
                </asp:UpdatePanel>
            </div>
            </form>
        </body>
    There is absolutely no code behind for this. This is the entirety of the page. Running this site in Chrome shows the memory usage shoot up to 25 megs in the span of 20-30 seconds. Leaving it running for a few minutes makes the memory go up to 70 megs or so. Am I using timers and update panels wrong, or is this a pure ASP.NET issue with no workaround?

    Read the article

  • Is "delete p", where p is a pointer to an array, a memory leak?

    - by Eli
    Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated primitive array with plain delete will cause a memory leak. I have written this tiny program and compiled it with Visual Studio 2008, running on Windows XP:
        #include "stdafx.h"
        #include "Windows.h"

        const unsigned long BLOCK_SIZE = 1024*100000;

        int _tmain()
        {
            for (unsigned int i = 0; i < 1024*1000; i++)
            {
                int* p = new int[1024*100000];
                for (int j = 0; j < BLOCK_SIZE; j++)
                    p[j] = j % 2;
                Sleep(1000);
                delete p;
            }
        }
    I then monitored the memory consumption of my application using Task Manager; surprisingly, the memory was allocated and freed correctly, and the allocated memory did not steadily increase as expected. I modified my test program to allocate a non-primitive type array:
        #include "stdafx.h"
        #include "Windows.h"

        struct aStruct
        {
            aStruct() : i(1), j(0) {}
            int i;
            char j;
        } NonePrimitive;

        const unsigned long BLOCK_SIZE = 1024*100000;

        int _tmain()
        {
            for (unsigned int i = 0; i < 1024*100000; i++)
            {
                aStruct* p = new aStruct[1024*100000];
                Sleep(1000);
                delete p;
            }
        }
    After running for 10 minutes there was no meaningful increase in memory. I compiled the project with warning level 4 and got no warnings. Is it possible that the Visual Studio runtime keeps track of the allocated objects' types, so there is no difference between delete and delete[] in that environment?
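
    Whatever a particular runtime happens to do, the language rule is that the array form of new must be paired with the array form of delete; using plain delete on a pointer obtained from new[] is undefined behaviour, so "no growth in Task Manager" is not evidence that it is safe. A minimal sketch of the corrected pairing (same shapes as the test programs above):

        struct aStruct
        {
            aStruct() : i(1), j(0) {}
            int  i;
            char j;
        };

        const unsigned long BLOCK_SIZE = 1024 * 100000;

        int main()
        {
            int* p = new int[BLOCK_SIZE];        // array new...
            for (unsigned long j = 0; j < BLOCK_SIZE; ++j)
                p[j] = j % 2;
            delete[] p;                          // ...must be released with delete[], not delete

            aStruct* q = new aStruct[BLOCK_SIZE];
            delete[] q;                          // for class types delete[] also runs each
                                                 // element's destructor; plain delete would not
            return 0;
        }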

    Read the article

  • Displaying the keyboard raises memory usage... but it never comes down (iPhone)

    - by Joshep Freeman
    Hello again. I encountered a weird memory behavior just by displaying the default keyboard. I've just created a project with an .xib file for testing purposes. This .xib file has a UITextField element in it and it's connected in the .h via: @property(nonatomic, retain) IBOutlet UITextField *someText; The .m has no changes but: - (void)viewDidAppear:(BOOL)animated { [someText becomeFirstResponder]; } As you see, it's very, very simple. The problem is that once the keyboard is shown, the memory allocated for it NEVER goes down. I've tested this scenario in another project, with the only difference of having two .xib files. Standard pushViewController and popViewController calls are made. Instruments shows an increase of 600 KB in memory allocations [which is a lot more on an actual iPhone device]. All in all, hehehe. My question is: how do I release the memory allocated for the keyboard?

    Read the article
