Search Results

Search found 12398 results on 496 pages for 'in memory oltp'.

Page 31 of 496

  • .NET Framework - Possible memory-leaky classes?

    - by Robert Fraser
    Just the other day I was investigating a memory leak that was ballooning the app from ~50MB to ~130MB in under two minutes. It turns out the problem was with the ConcurrentQueue class. Internally, the class stores a linked list of arrays. When an item is dequeued from the ConcurrentQueue, the index into the array is bumped, but the item remains in the array (i.e. it's not set to null). The entire array node is dropped after enough enqueues/dequeues, so it's not technically a leak, but if you're storing only a few large objects in the ConcurrentQueue, this can get out of hand fast. The documentation makes no note of this danger. I was wondering: what other potential memory pitfalls are there in the Base Class Library? I know about the Substring one (that is, if you call Substring and just hold onto the result, the whole string will still be in memory). Any others you've encountered?

    Read the article
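
    The pitfall described above is easy to reproduce in miniature. Below is a C++ sketch of the same mechanism, not the BCL's actual implementation: a queue segment whose dequeue only bumps an index keeps the dequeued payload alive until the whole segment is recycled, whereas clearing the slot releases it immediately. The 50 MB payload size is only for illustration.

        #include <array>
        #include <cstdio>
        #include <memory>
        #include <vector>

        // Minimal analogue of the pitfall: dequeue_leaky() only advances an index,
        // so the slot keeps a reference alive; dequeue_clean() clears the slot.
        struct Segment {
            std::array<std::shared_ptr<std::vector<char>>, 32> slots;
            std::size_t head = 0, tail = 0;

            void enqueue(std::shared_ptr<std::vector<char>> item) { slots[tail++] = std::move(item); }
            std::shared_ptr<std::vector<char>> dequeue_leaky() { return slots[head++]; }
            std::shared_ptr<std::vector<char>> dequeue_clean() { return std::move(slots[head++]); }
        };

        int main() {
            Segment q;
            q.enqueue(std::make_shared<std::vector<char>>(50 * 1024 * 1024)); // ~50 MB payload
            auto item = q.dequeue_leaky();
            item.reset();                                  // caller is done with it...
            std::printf("slot still holds the payload: %s\n", q.slots[0] ? "yes" : "no"); // yes
            q.slots[0].reset();                            // what dequeue_clean would have done
            std::printf("after clearing the slot: %s\n", q.slots[0] ? "yes" : "no");      // no
            return 0;
        }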

  • PHP - Plesk - Cron - Allowed memory size exhausted?

    - by John
    These are the first two lines at the very top of my script:

        ini_set('max_execution_time', 0);
        ini_set('memory_limit', '1000M');

    I was under the impression that memory limits didn't apply when running something via cron, but I was wrong. Safe mode is off, and when I test whether these values are being set, they are, yet I keep getting the good ol' "PHP Fatal: Memory exhausted" error. Any ideas what I may be doing wrong? And what's the more elegant way of writing "infinite" for the memory_limit value: is it -1 or something?

    Read the article

  • AS3 Memory management when instantiating extended classes

    - by araid
    I'm developing an AS3 application which has some memory leaks I can't find, so I'd like to ask some newbie questions about memory management. Imagine I have a class named BaseClass, and some classes that extend it, such as ClassA, ClassB, etc. I declare a variable:

        var myBaseClass:BaseClass = new ClassA();

    After a while, I use it to hold a new instance:

        myBaseClass = new ClassB();

    and some time later:

        myBaseClass = new ClassC();

    The same thing keeps happening every x milliseconds, triggered by a timer. Is there any memory problem here? Are the unused instances correctly reclaimed by the garbage collector? Thanks!

    Read the article

  • PHP / Java in-memory database

    - by msaif
    I need to load data into memory as an array in PHP. But in PHP, if I write $array = array("1", "2"); in test.php, that $array variable is initialized on every request: if test.php is requested 100 times (by hitting the browser's refresh button 100 times), the $array assignment executes 100 times. I need the $array to be built only on the first request; subsequent requests to test.php must not rebuild it but simply reuse that memory. How can I do that in PHP? In a Java servlet it is easy: put the one-time initialization in the servlet's init() lifecycle method; subsequent requests only go through service(), which keeps using the $array memory initialized there. All I want is to initialize $array once and reuse that memory from subsequent requests in PHP. Is there any possibility in PHP?

    Read the article

  • javascript memory leak

    - by hhj
    I have some JavaScript (used with the Google Maps API) that I am testing in IE and Chrome, and I noticed memory-leak symptoms in IE only: when I refresh the page continuously, the amount of memory used by IE keeps growing (fast), but in Chrome it stays constant. Without posting all of the code (as it is rather long), can I get some suggestions as to what to look out for? What could cause the memory to keep growing like this in IE on page refreshes? Like I said, I know it's hard without code, but I'd like to see if any generic advice works first. Thanks.

    Read the article

  • Preallocate memory for a program in Linux before it gets started

    - by Fyg
    Hi folks, I have a program that repeatedly solves large systems of linear equations using Cholesky decomposition. The characteristic feature is that I sometimes need to store the complete factorisation, which can exceed 20 GB of memory. The factorisation happens inside a library that I call. Furthermore, the matrix and the resulting factorisation change quite frequently, and so do the memory requirements. I am not the only person using this compute node. Therefore, is there a way to start the program under Linux and preallocate free memory for the process? Something like:

        $: prealloc -m 25G ./program

    Read the article
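
    As far as I know there is no standard prealloc tool like the hypothetical command above; the usual approach is for the program itself (or a small wrapper) to reserve and touch the memory up front so the run fails early if the node cannot provide it. A minimal sketch, assuming Linux; the 1 GiB size is illustrative, and mlock needs a suitable RLIMIT_MEMLOCK or root:

        #include <cstdio>
        #include <cstring>
        #include <sys/mman.h>

        int main() {
            const std::size_t bytes = 1ull << 30;             // 1 GiB for the sketch
            void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) { std::perror("mmap"); return 1; }

            std::memset(p, 0, bytes);                         // touch every page so it is really backed
            if (mlock(p, bytes) != 0) std::perror("mlock");   // best effort: keep it from being reclaimed

            std::puts("memory reserved and resident; do the heavy work here");
            munmap(p, bytes);
            return 0;
        }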

  • Can in-memory SQLite databases be used concurrently?

    - by Kent Boogaart
    In order to prevent a SQLite in-memory database from being cleaned up, one must use the same connection to access the database. However, using the same connection causes SQLite to synchronize access to the database. Thus, if I have many threads performing reads against an in-memory database, it is slower on a multi-core machine than the exact same code running against a file-backed database. Is there any way to get the best of both worlds? That is, an in-memory database that permits multiple, concurrent calls to the database?

    Read the article
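
    One commonly suggested compromise, not taken from the question above and only available in reasonably recent SQLite builds, is a named in-memory database opened in shared-cache mode, so that each thread can use its own connection while still addressing the same data. Shared cache brings its own table-level locking, so it is not a free lunch. A minimal sketch against the SQLite C API (link with -lsqlite3); the database name "memdb1" is arbitrary:

        #include <cstdio>
        #include <sqlite3.h>

        int main() {
            sqlite3 *a = nullptr, *b = nullptr;
            const char* uri = "file:memdb1?mode=memory&cache=shared";
            const int flags = SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_URI;

            // Two independent connections that share the same in-memory database.
            if (sqlite3_open_v2(uri, &a, flags, nullptr) != SQLITE_OK) return 1;
            if (sqlite3_open_v2(uri, &b, flags, nullptr) != SQLITE_OK) return 1;

            sqlite3_exec(a, "CREATE TABLE t(x); INSERT INTO t VALUES (42);",
                         nullptr, nullptr, nullptr);

            // The second connection sees data written through the first one.
            sqlite3_exec(b, "SELECT x FROM t;",
                         [](void*, int, char** vals, char**) {
                             std::printf("second connection read: %s\n", vals[0]);
                             return 0;
                         },
                         nullptr, nullptr);

            sqlite3_close(b);
            sqlite3_close(a);
            return 0;
        }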

  • PHP Possible Memory Leak

    - by dropson
    I have a script that loops through a database for images to convert with gd & imagick. I unset or replace all variables and objects between each loop iteration. For each iteration, memory_get_usage(true) reports a constant amount of memory used by the script, which is expected. But when I run "top", the %MEM column shows that this script (same PID) grows by several percentage points with each iteration. I destroy all images when I'm done with them, and when I run get_defined_vars() only the standard globals and a few variables of mine are set. Why is the memory usage reported by "top" different from what PHP reports? After 10 iterations, PHP has taken 20% of the system memory. I run PHP 5.2.6 on Debian 5.

    Read the article

  • Low memory with 640Kb of live bytes?

    - by Chiodo
    Hello, I have a problem with my application, which needs to display a lot of images and video. After running the ObjectAlloc tool, I see that live bytes are at 640 KB and overall memory is at 31.54 MB when the application crashes. In the Organizer I get a "low memory" report, so I guess the app crashed because of low memory, but the ObjectAlloc data doesn't make any sense to me... Any ideas? This is the Organizer crash log:

        Incident Identifier: CDCAF38C-CFFD-4316-9C4A-5C8E37794B49
        CrashReporter Key:   65390aeb97b2b81076576c3e33b025feb5db9202
        OS Version:          iPhone OS 3.1.3 (7E18)
        Date:                2010-05-19 10:07:19 +0200

        Free pages:      372
        Wired pages:     12260
        Purgeable pages: 0
        Largest process: DTMobileIS

        Processes
        Name              UUID                               Count resident pages
        ATreeTest         <1d51c3a5fef8b747c3a1be9405bdd52a   1150 (jettisoned) (active)
        DTMobileIS        <69c3fa96db2f29474d62964aa1a69bfa   3316
        notification_pro  <8a7725017106a28b545fd13ed58bf98c     68
        mediaserverd      <3d3800d6acfff050e4d0ed91cbe2467e    464 (jettisoned)
        syslogd           <8eddddc00294d5615afded36ee3f1b62     56 (jettisoned)
        apsd              <32070d91b216d806973c8f1b1d8077a4    173
        SpringBoard       <324939a437d1cca1fa4af72d9f5d0eba   2475 (jettisoned) (active)
        accessoryd        <8f21c8b376d16e2ccb95ed6d21d8317a     99 (jettisoned)
        notification_pro  <8a7725017106a28b545fd13ed58bf98c     64
        ptpd                                                  129
        notifyd           <591dd4dd804b4b8741f52335ea1fa4ab     64
        CommCenter                                            167
        configd           <85efd41aceac34ccc0019df76623c7a9    294
        fairplayd                                              91
        mDNSResponder                                         101
        lockdownd         <80d2bd44c0bcca273d48ce52010f7e65    285
        launchd                                                71
        End

    Read the article

  • C++ memory management of reference types

    - by Russel
    Hello, I'm still a fairly novice programmer and I have a question about C++ memory management with reference types. First of all, my understanding of reference types: a pointer is put on the stack, and the actual data that the pointer points to is created and placed on the heap. Standard arrays and user-defined classes are reference types. Is this correct? Second, my main question: do C and C++'s memory management mechanisms (malloc/free and new/delete) always handle this properly and free the memory that a class or array is pointing to? Does everything still work if those pointers get reassigned somehow to other objects of the same size/type on the heap? What if a class has a pointer member that points to another object? I am assuming that deleting/freeing the class object doesn't free what its member pointer points to; is that correct? Thanks all! -R

    Read the article
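
    Since the question above maps C#/Java ideas onto C++, a small sketch may help: in C++, classes and arrays are not reference types; an object is a value unless you explicitly allocate it with new, and delete destroys that one object and runs its destructor, but it does not follow raw pointer members on its own. A minimal illustration, with unique_ptr shown as the member that cleans itself up:

        #include <iostream>
        #include <memory>

        struct Inner { ~Inner() { std::cout << "Inner destroyed\n"; } };

        struct Owner {
            Inner* raw = new Inner();                                  // must be released by hand
            std::unique_ptr<Inner> smart = std::make_unique<Inner>();  // released automatically
            ~Owner() { delete raw; }                                   // without this line, *raw would leak
        };

        int main() {
            Owner stackOwner;              // a plain value on the stack; no new/delete involved

            Owner* heapOwner = new Owner;  // this object lives on the heap
            delete heapOwner;              // runs ~Owner, which releases both members

            return 0;                      // stackOwner is destroyed here, as it goes out of scope
        }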

  • Where does dynamically allocated memory reside?

    - by Summer_More_More_Tea
    Hello everyone: We know that malloc() and the new operator allocate memory dynamically from the heap, but where does the heap reside? Does each process have its own private heap within its address space for dynamic allocation, or does the OS have a global one shared by all processes? What's more, I read in a textbook that once a memory leak occurs, the leaked memory cannot be reused until the next time we restart the computer. Is this claim right? If the answer is yes, how can we explain it? Thanks for your reply. Regards.

    Read the article
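
    A small experiment, not taken from the question above, illustrates both points: every process gets its own private heap inside its own virtual address space, and when a process exits the kernel reclaims everything, including leaked memory, so nothing stays lost until a reboot. Sketch for Linux/POSIX:

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <sys/wait.h>
        #include <unistd.h>

        int main() {
            pid_t pid = fork();
            if (pid == 0) {
                // Child: "leak" 100 MB on purpose and never free it.
                const std::size_t n = 100 * 1024 * 1024;
                void* leak = std::malloc(n);
                std::memset(leak, 1, n);                       // make the pages real
                std::printf("child %d leaked 100 MB at %p\n", (int)getpid(), leak);
                _exit(0);                                      // kernel reclaims the whole address space here
            }
            waitpid(pid, nullptr, 0);

            // Parent: its own private heap is completely unaffected by the child's leak.
            void* mine = std::malloc(16);
            std::printf("parent %d allocates from its own heap, e.g. %p\n", (int)getpid(), mine);
            std::free(mine);
            return 0;
        }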

  • In a process using lots of memory, how can I spawn a shell without a memory-hungry fork()?

    - by kdt
    On an embedded platform (with no swap partition), I have an application whose main process occupies most of the available physical memory. The problem is that I want to launch an external shell script from my application, but using fork() requires that there be enough memory for 2x my original process before the child process (which will ultimately execl itself to something much smaller) can be created. So is there any way to invoke a shell script from a C program without incurring the memory overhead of a fork()? I've considered workarounds such as having a secondary smaller process which is responsible for creating shells, or having a "watcher" script which I signal by touching a file or somesuch, but I'd much rather have something simpler.

    Read the article
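
    One option worth mentioning, stated here as a general technique rather than the accepted answer to the question above, is posix_spawn(), which launches a new program without first duplicating the parent's address space the way a plain fork()+exec() conceptually does (glibc implements it with vfork/CLONE_VM-style machinery). The script path below is a placeholder:

        #include <cstdio>
        #include <spawn.h>
        #include <sys/wait.h>

        extern char** environ;

        int main() {
            pid_t pid;
            // Hypothetical helper script; replace with the real path.
            char* const argv[] = { (char*)"/bin/sh", (char*)"/tmp/helper.sh", nullptr };

            int rc = posix_spawn(&pid, "/bin/sh", nullptr, nullptr, argv, environ);
            if (rc != 0) { std::fprintf(stderr, "posix_spawn failed: %d\n", rc); return 1; }

            int status = 0;
            waitpid(pid, &status, 0);
            std::printf("helper exited with status %d\n", WEXITSTATUS(status));
            return 0;
        }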

  • C# memory / allocation cleanup

    - by Number8
    Some near-code to try to illustrate the question: when are objects marked as available to be garbage-collected?

        class ToyBox { public List<Toy> Toys = new List<Toy>(); }

        class Factory {
            public static ToyBox GetToys() {
                ToyBox tb = new ToyBox();
                tb.Toys.Add(new Toy());
                tb.Toys.Add(new Toy());
                return tb;
            }
        }

        static void Main() {
            ToyBox tb = Factory.GetToys();
            // After tb is used, does all the memory get cleaned up
            // when tb goes out of scope?
        }

    Factory.GetToys() allocates memory. When is that memory cleaned up? I assume that when Factory.GetToys() returns the ToyBox object, the only reference to it is the one in Main(), so when that reference goes out of scope, the Toy objects and the ToyBox object are marked for garbage collection. Is that right? Thanks for any insights...

    Read the article

  • Freeing Java memory at a specific point in time

    - by Marcus
    Given this code, where we load a lot of data, write it to a file, and then run an exe:

        void myMethod() {
            Map stuff = createMap();            // consumes 250 MB of memory
            File file = createFileInput(stuff); // create input for the exe
            runExecutable(file);                // run the Windows exe
        }

    What is the best way to release the memory consumed by stuff before running the exe? We don't need it in memory any more, since we have dumped the data to a file as input for the exe. Is the best method just to set stuff = null prior to runExecutable(file)?

    Read the article

  • C/C++ memory usage API in Linux/Windows

    - by minjang
    I'd like to obtain memory-usage information both per process and system-wide. In Windows, it's pretty easy: GetProcessMemoryInfo and GlobalMemoryStatusEx do these jobs well and very easily. For example, GetProcessMemoryInfo gives the "PeakWorkingSetSize" of the given process, and GlobalMemoryStatusEx returns the system-wide available memory. However, I need to do this on Linux, so I'm trying to find Linux system APIs equivalent to GetProcessMemoryInfo and GlobalMemoryStatusEx. I found getrusage, but 'ru_maxrss' (maximum resident set size) in struct rusage is just zero, as it is not implemented. Also, I have no idea how to get system-wide free memory. My current workaround is system("ps -p %my_pid -o vsz,rsz") with manual logging to a file, but it's dirty and the data is not convenient to process. I'd like to know some fancy Linux APIs for this purpose.

    Read the article
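
    For what it's worth, the closest common substitutes, an assumption on my part rather than something stated in the question above, are /proc/self/status for the per-process numbers (VmRSS is the current resident set, VmHWM roughly the peak working set) and sysinfo() or /proc/meminfo for the system-wide picture. A small sketch:

        #include <cstdio>
        #include <fstream>
        #include <string>
        #include <sys/sysinfo.h>

        int main() {
            // Per-process: scan /proc/self/status for the interesting counters.
            std::ifstream status("/proc/self/status");
            std::string line;
            while (std::getline(status, line)) {
                if (line.rfind("VmPeak:", 0) == 0 || line.rfind("VmRSS:", 0) == 0 ||
                    line.rfind("VmHWM:", 0) == 0)
                    std::printf("%s\n", line.c_str());
            }

            // System-wide: sysinfo() reports totals in units of mem_unit bytes.
            struct sysinfo si;
            if (sysinfo(&si) == 0) {
                std::printf("total RAM: %lu MB, free RAM: %lu MB\n",
                            si.totalram * si.mem_unit / (1024 * 1024),
                            si.freeram * si.mem_unit / (1024 * 1024));
            }
            return 0;
        }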

  • Unsafe, super-fast cross-process memory buffer?

    - by John
    Cross-process memory buffers always have some overhead, and my understanding is that it can be quite high. But if you're implementing a cross-process render buffer, safety isn't critically important in the same way it is for other data. So are there techniques we can use to get "raw" access to a chunk of memory from multiple processes, with no safety nets apart from it not crashing? Or do modern operating systems simply not expose unabstracted memory in a way that makes this possible? I'm working in C++, but the question applies to Windows XP/Vista/7, Mac OS X 10.5+ and, less importantly, Linux.

    Read the article
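
    On the POSIX side (Linux and Mac OS X 10.5+), the usual "no safety net" primitive is a named shared-memory object that both processes map directly; on Windows the analogous calls are CreateFileMapping and MapViewOfFile. A minimal sketch, with the object name and frame size invented for illustration (link with -lrt on older glibc):

        #include <cstdio>
        #include <cstring>
        #include <fcntl.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main() {
            const char* name = "/renderbuf";                  // hypothetical buffer name
            const std::size_t size = 1920 * 1080 * 4;         // one RGBA frame

            int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  // same call in every participating process
            if (fd < 0) { std::perror("shm_open"); return 1; }
            if (ftruncate(fd, size) != 0) { std::perror("ftruncate"); return 1; }

            void* buf = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            if (buf == MAP_FAILED) { std::perror("mmap"); return 1; }

            std::memset(buf, 0x80, size);   // "render" into the buffer; other processes that
                                            // mapped the same name see the bytes immediately

            munmap(buf, size);
            close(fd);
            // shm_unlink(name);            // call once, when the buffer is retired for good
            return 0;
        }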

  • Memory leak in Mozilla when unloading stylesheets

    - by KaptajnKold
    I'm working with Mozilla v1.7.12 on a constrained device (a Motorola set-top box), trying to resolve some memory leaks. When I dynamically load a stylesheet which refers to some large images, I can see that the amount of consumed memory increases in correspondence with the size of the images. This is what I would expect. Then, when I remove the stylesheet from the DOM, I would expect the memory to be freed. However, this does not happen. This is a problem, because the web application I'm working on needs to be able to dynamically load and unload stylesheets potentially many times in the lifetime of the page. My question therefore is this: is what I'm seeing expected behavior, or is it a known bug? Is there a way to work around this? I should point out that I've set the expires header to -1 on all the images in the stylesheet.

    Read the article

  • Memory usage in Linux drops frequently

    - by FunkyChicken
    I run a CentOS 5.6 (64-bit) machine with Nginx (latest version) and php-fpm (latest version). Things run very well, but for about two weeks I've noticed in my Munin graphs that roughly every two hours the 'cache' usage drops. Before, it used to be a steady, full graph that didn't reset every so often. PHP-FPM settings:

        pm.max_children = 300
        daemonize = yes
        pm = static
        listen = /tmp/fpm.sock
        pm.max_requests = 1000

    I have checked php-fpm.log, and about once every 5 seconds a child process is killed and restarted. But this happens all the time, so it does not explain the sudden drops. I only run Nginx, PHP (via fpm), Munin and vsftpd on this machine. No cron jobs run at exactly the time of the drops. My question: what could be causing these drops in cache usage?

    Read the article

  • Memory cache Ubuntu 9.10 server x86 doesn't work as expected

    - by Matthijs
    We're using an Ubuntu 9.10 server to transfer Ghost image files. It is configured with Samba only, and the DOS clients connect via Samba. The latest updates are installed, and so far the server is running fine. When we image 10 PCs from the same image, consisting of 2 files of 2 GB each, there is no disk activity: everything is served from RAM (the server has 4 GB). But when we image 2 PCs from 2 different images split into 500 MB files (8x), there is a lot of continuous disk activity and the speed is lower. So it seems that Ubuntu doesn't cache more than one big file. Are there settings to change this behaviour?

    Read the article

  • What's using up my memory?

    - by Mehrdad
    I already know what's causing this: it's the driver for FancyCache, which I'd installed myself. But as you can see, nothing in the screenshot tells me anything about this; I just happen to know. So the question is, if I didn't know this, how would I figure out what's using up so much of my RAM? (For reference: it's currently 1.7 GiB used, and the "missing" amount, for FancyCache, is 512 MiB. Clearly, that extra half-gigabyte isn't showing up anywhere I can see below.)

    Read the article

  • Laptop won't boot with both memory slots used

    - by Johnny W
    I'm currently trying to upgrade my old Sony Vaio VGN-SZ1HP/B to 2GB of RAM. It already had 1GB of Crucial RAM in one of its slots, with the other slot empty. I checked on Crucial.com and it confirms that each bank can hold 1GB of PC2-5300. The 1GB stick already installed was this, but Crucial's page recommended this... The two are identical from what I can make out, so I just ordered another one of the former. Unfortunately the machine refuses to even POST with both sticks installed. If I remove the old RAM from Slot 1 and replace it with the new RAM, it runs fine. If I leave Slot 1 empty and put RAM (either stick) in Slot 2, it won't POST. Basically it seems that Slot 2 just isn't working properly. Does anyone have any ideas on how to solve this problem? Or maybe have any experience with this sort of thing on Sony Vaios? Thanks for any help!

    Read the article
