Search Results

Search found 18119 results on 725 pages for 'shared memory'.

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate-traffic website (1000 UAUs, 5000 pageviews on a busy day) from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on 4 vCPUs of a quad-core processor (likely oversold) and 3 GB of memory (Xen). We've had problems from the beginning: the server keeps crashing. PHP seems to keep expanding until it consumes all the memory and brings the server down.

    Some folks have suggested that I abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite. That's just not possible for me: I am not very tech-savvy and need a simple GUI like cPanel to manage the mundane administration tasks (I can't afford to hire a system administrator or get fully managed hosting).

    I have come across several posts discussing optimization of Apache for WordPress, but they all lead to articles that are pretty dated, such as this roughly four-year-old one from Jan 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/ The article is pretty detailed and seems helpful, but I stumble on the very first step. My httpd.conf only has two LoadModule directives:

        LoadModule fastinclude_module modules/mod_fastinclude.so
        LoadModule bwlimited_module modules/mod_bwlimited.so

    So I go totally bust right there. Further, my httpd.conf says:

        Direct modifications to the Apache configuration file may be lost upon
        subsequent regeneration of the configuration file. To have modifications
        retained, all modifications must be checked into the configuration system
        by running: /usr/local/cpanel/bin/apache_conf_distiller

    I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks!

    P.S. The WordPress installation also has WP Super Cache installed.

    P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.
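
    When PHP runs under mod_php, memory blowups like this are usually bounded by capping how many Apache worker processes can exist at once. A minimal sketch, assuming the prefork MPM and roughly 60 MB per PHP-heavy process on a 3 GB box (the numbers are illustrative; measure your own process sizes with top and tune to match):

        <IfModule prefork.c>
            StartServers           5
            MinSpareServers        5
            MaxSpareServers       10
            ServerLimit           30
            MaxClients            30     # ~30 workers x ~60 MB caps Apache near 1.8 GB
            MaxRequestsPerChild  500     # recycle children so bloated PHP workers get reaped
        </IfModule>

    On a cPanel server, changes like this should go in through WHM's Apache configuration editor, or be registered with /usr/local/cpanel/bin/apache_conf_distiller as the file itself warns, so the regenerated httpd.conf keeps them.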

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, resource pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares, and limits, there are a lot of variables.

    I'm in a situation where clients are using dedicated vSphere cluster resources, but they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16 GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use, and adjusting up as necessary.

    Some examples from a "problem" cluster:

        - Resource pool summary: looks almost 4:1 overcommitted. Note the high amount of ballooned RAM.
        - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
        - The real-time memory utilization graph of the top VM in the listing: 4 vCPUs and 64 GB of RAM allocated, yet it averages under 9 GB of use.
        - Summary of the same VM.

    What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem?"? Do customers need to be educated? And what specific metric should be used to meter RAM usage: tracking the peaks of "Active" over time?

  • linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question I can't find an answer to on Google. I have many Linux boxes, mostly with SLES or openSUSE, on various versions and kernels. On some of them I am faced with a slow-Oracle-transactions problem. It occurs from time to time, and when I log into the box while it is happening I see that Oracle is blocked in the kernel function sync_page:

        # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done
        D     3483 hald-addon-storage: polling   ide_do_drive_cmd
        Ds    4635 ora_dbw0_orcl                 sync_page
        Ds    4637 ora_lgwr_orcl                 sync_page
        Ds    4639 ora_ckpt_orcl                 sync_page
        D    11210 oracleorcl (LOCAL=NO)         sync_page
        D    12457 [smtpd]                       sync_page
        R+   12458 ps axo stat,pid,cmd,wchan     -
        --

    The subsequent five-second samples show the same picture every time: ora_dbw0_orcl, ora_lgwr_orcl, ora_ckpt_orcl, and oracleorcl (LOCAL=NO) all stuck in sync_page.

    So I think the box has run out of memory for disk buffers, but memory looks fine:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       4149084    3994552     154532          0          0    2424328
        -/+ buffers/cache:    1570224    2578860
        Swap:      3148700     750696    2398004

    I think this is the problem: buffers is zero, so we must write directly to disk. But why is buffers zero? I tried to google it and found nothing. Can anyone help?

  • Deploying Memcached as 32bit or 64bit?

    - by rlotun
    I'm curious about how people deploy memcached on 64-bit machines. Do you compile a 64-bit (standard) memcached binary and run that, or do you compile it in 32-bit mode and run N instances (where N = machine_RAM / 4GB)?

    Consider the recommended deployment of Redis (from the Redis FAQ):

        Redis uses a lot more memory when compiled for 64 bit target, especially if the dataset
        is composed of many small keys and values. Such a database will, for instance, consume
        50 MB of RAM when compiled for the 32 bit target, and 80 MB for 64 bit! That's a big
        difference. You can run 32 bit Redis binaries in a 64 bit Linux and Mac OS X system
        without problems. For OS X just use make 32bit. For Linux instead, make sure you have
        libc6-dev-i386 installed, then use make 32bit if you are using the latest Git version.
        Instead for Redis <= 1.2.2 you have to edit the Makefile and replace "-arch i386" with
        "-m32". If your application is already able to perform application-level sharding, it
        is very advisable to run N instances of Redis 32bit against a big 64 bit Redis box
        (with more than 4GB of RAM) instead than a single 64 bit instance, as this is much more
        memory efficient.

    Would not the same recommendation also apply to a memcached cluster?
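
    For reference, a sketch of what the N-instance approach looks like for memcached on a big 64-bit box (ports and sizes are illustrative; -d, -u, -m, and -p are standard memcached flags, and the 32-bit build assumes 32-bit libc and libevent development packages are installed):

        # build a 32-bit binary on a 64-bit host
        ./configure CFLAGS=-m32 LDFLAGS=-m32 && make

        # run four ~3 GB instances instead of one 12 GB instance
        for port in 11211 11212 11213 11214; do
            ./memcached -d -u nobody -m 3072 -p $port
        done

    The client library then hashes keys across the four ports exactly as it would across four separate hosts, so no sharding logic beyond the usual server list is needed.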

  • swapping or thrashing with vast amounts of unmapped pagecache

    - by Marco
    EDIT: I noticed that this is more appropriate for superuser.com. I apologize; I don't know how to delete this question.

    I'm using Kubuntu Jaunty (i386, 32-bit), kernel 2.6.28-13-generic. I have 4 GB of RAM, of which only 3317 MB are seen by the system (I guess because of the 32-bit kernel). I'm seeing that pagecache utilization grows continually, up to the point that the system becomes unusable (after a few days). This happens even when I don't do anything (all user applications closed and the bare minimum of services enabled). If swap is enabled, the system starts to use swap space (eventually using all of it). Even with swap disabled, disk activity becomes continuous and the system unresponsive.

    For example, right now the system is working (albeit a tad slow) with only Firefox and Wing IDE running, and I have 2 GB cached with only 45 MB mapped:

        $ free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3247328      99060          0       8416    2117980
        -/+ buffers/cache:    1120932    2225456
        Swap:      2144668     519448    1625220

        $ cat /proc/meminfo
        MemTotal:        3346388 kB
        MemFree:           97128 kB
        Buffers:            7872 kB
        Cached:          2120224 kB
        SwapCached:       413860 kB
        Active:          2304596 kB
        Inactive:         865984 kB
        Active(anon):    2279168 kB
        Inactive(anon):   830236 kB
        Active(file):      25428 kB
        Inactive(file):    35748 kB
        Unevictable:          32 kB
        Mlocked:              32 kB
        HighTotal:       2492940 kB
        HighFree:           5456 kB
        LowTotal:         853448 kB
        LowFree:           91672 kB
        SwapTotal:       2144668 kB
        SwapFree:        1625244 kB
        Dirty:                84 kB
        Writeback:             0 kB
        AnonPages:        629304 kB
        Mapped:            45768 kB
        Slab:              45600 kB
        SReclaimable:      21756 kB
        SUnreclaim:        23844 kB
        PageTables:         4468 kB
        NFS_Unstable:          0 kB
        Bounce:                0 kB
        WritebackTmp:          0 kB
        CommitLimit:     3817860 kB
        Committed_AS:    3735020 kB
        VmallocTotal:     122880 kB
        VmallocUsed:        9352 kB
        VmallocChunk:      66600 kB
        HugePages_Total:       0
        HugePages_Free:        0
        HugePages_Rsvd:        0
        HugePages_Surp:        0
        Hugepagesize:       4096 kB
        DirectMap4k:       16376 kB
        DirectMap4M:      888832 kB

    If I try to drop the caches, little happens:

        # sync ; echo 3 > /proc/sys/vm/drop_caches ; free
                     total       used       free     shared    buffers     cached
        Mem:       3346388    3220580     125808          0       3020    2100600
        -/+ buffers/cache:    1116960    2229428
        Swap:      2144668     519356    1625312

    Right now I have vm.swappiness = 5, but I've also tried 0 and 1 (without noticeable differences). I've also tried vm.vfs_cache_pressure = 50 and 150 (again, no differences). As I said, the pagecache eats all memory even with swapping turned off. What is happening? How can I avoid this? TIA, Marco

  • Flash Player 10.1 crash on shared object access

    - by Simon
    Hi all, since updating my Flash Player plugin from 10 to 10.1, I'm seeing a weird crash when accessing shared objects. Flex Builder's debugger pops up and prints a stack trace like this:

        undefined
            at flash.net::SharedObject$/getLocal()
            at my.code::MyClass$/load()[/my/path/to/my/MyClass.as:27]
            (...)

    This happens when calling SharedObject.getLocal("someString") a second time for the same string, though it doesn't always crash. When using another browser on the same machine (not configured as the preferred debugging browser in Flex Builder), Flash Player remains silent. The code is wrapped in a try/catch(Error) block which does not catch this error.

    I'm using Flex SDK 3.5 and Flex Builder 3 on Mac OS X 10.6.3. Has anyone else seen this? Thanks, Simon

  • SQL Server CE - Internal error: Cannot open the shared memory region

    - by blu
    I have a SQL Server CE database that works fine in dev, but has an issue when installed on the client. The SQL Server CE 3.5 dependencies are copied as part of the deployment. The target machine is a clean Windows 7 32-bit Ultimate image. The message for the exception in the event log is:

        Internal error: Cannot open the shared memory region.

    It looks like this is SSCE_M_CANTOPENSHAREDMEMORY, and the site says there isn't a connection string value to change this and that these issues are typically not resolvable by end developers. Has anyone run into this, and if so, were you able to resolve it?

  • What AMF Servers Support Remote Shared Objects?

    - by GrayB
    Greetings. I'm planning on building a Flex based multiplayer game, and I'm researching what will be required for the server end. I have PHP experience, so I started looking at ZendAMF. Now in this game, I'll need the concept of rooms, and real time updates to clients in those rooms, so it looks like I'll be using remote shared objects (correct, yes?). I'm not seeing where ZendAMF can support this. So I found this page: http://arunbluebrain.wordpress.com/2009/03/04/flex-frameworks-httpcorlanorg/ It seems to indicate that ZendAMF isn't going to do what I want. WebORB for PHP seems to be the only PHP based solution that does messaging, but on that page it doesn't mention "real-time" next to it like the Java based ones below it do. What should I be looking at for the server piece with my requirements? Do I need to make the jump to something like BlazeDS and try to pick up a bit of Java knowledge? Thanks.

  • Java OutOfMemoryError due to Linux RAM disk cache not freed

    - by Markus Jevring
    The process will run fine all day, then, bam, without warning, it will throw this error, sometimes seemingly in the middle of doing nothing. It happens at apparently random times during the day. I checked whether anything else was running on the machine, like scheduled backups, but found nothing. The machine has enough physical memory (2 GB, with about 1 GB free for a 300-500 MB load) and has a sufficient -Xmx specified.

    According to our sysadmin, the problem is that the RAM the kernel uses as a disk cache (apparently all but 8 MB) is not freed when the JVM needs to allocate memory, so the JVM process throws an OutOfMemoryError. This could be because Java asks the kernel whether enough memory is available before allocating, finds that it is insufficient, and fails. I would like to think, however, that Java simply tries to allocate the memory via the kernel, and when the kernel gets such a request, it makes room for the application by throwing out some of the disk cache.

    Has anyone else run into this issue, and if so, what was the error and how did you solve it? We are currently using jdk1.6.0_20 on SLES 10 SP2, Linux 2.6.16.60-0.42.9-smp, in VMware ESX.
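
    For what it's worth, the Linux page cache is reclaimed on demand, so "cache not freed" is rarely the real cause; allocation failures under load more often trace back to the kernel's overcommit accounting or address-space pressure. A few standard knobs and counters worth checking while the JVM runs (a sketch; all are stock procfs/sysctl interfaces):

        # how the kernel decides whether an allocation may succeed
        cat /proc/sys/vm/overcommit_memory    # 0 = heuristic, 1 = always, 2 = strict
        cat /proc/sys/vm/overcommit_ratio

        # current commit charge vs. the commit limit
        grep -i commit /proc/meminfo

        # watch reclaim and swap activity in 5-second samples
        vmstat 5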

  • Why isn’t my autoreleased object getting released?

    - by zoul
    Hello. I am debugging a weird memory management error and I can't figure it out. I noticed that some of my objects stay in memory longer than expected. I checked all my memory management and finally arrived at the very improbable conclusion that some of my autorelease operations don't result in a release. Under what circumstances is that possible?

    I created a small testing Canary class that logs a message in dealloc, and have the following testing code in place:

        NSLog(@"On the main thread: %i.", [NSThread isMainThread]);
        [[[Canary alloc] init] autorelease];

    According to the log we really are on the main thread, but the dealloc in Canary does not get called until much later. The delay is not deterministic and can easily take seconds or more. How is that possible? The application runs on a Mac, and garbage collection is turned off (Objective-C Garbage Collection is set to Unsupported on the target). I am mostly used to iOS; is memory management on OS X different in some important way?
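
    One clarifying sketch: autorelease does not release at the end of the statement; it hands the object to the innermost autorelease pool, which on the main thread drains at the end of the current run-loop iteration, and a busy or idle run loop can stretch that out considerably. Wrapping the test in an explicit pool (pre-ARC Cocoa API) pins down the release point:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
        [[[Canary alloc] init] autorelease];   // ownership transferred to the local pool
        [pool drain];                          // Canary's -dealloc fires here, deterministically

    If dealloc still lags with an explicit pool in place, something else is retaining the object.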

  • Instruments (Leaks) and NSDateFormatter

    - by Cal
    When I run my iPhone app under Instruments' Leaks and parse a bunch of NSDates using NSDateFormatter, my memory goes up about 1 MB and stays there, even though these NSDates should be dealloc'd after the parsing (I just discard them if they aren't new). I thought the malloc (in my heaviest stack trace below) could become part of the NSDate, but it could also be memory that is only used during some intermediate step of parsing. Does anyone know which one it is, or how to find out? Also, is there a way to put a breakpoint on NSDate dealloc to see if that memory is really being reclaimed?

    Here's what my date formatter looks like for parsing these dates:

        df = [[NSDateFormatter alloc] init];
        [df setDateFormat:@"EEE, d MMM yyyy H:m:s z"];

    Here's the heaviest stack trace when the memory bumps up and stays there:

        0  libSystem.B.dylib    208.80 Kb  malloc
        1  libicucore.A.dylib   868.19 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        2  libicucore.A.dylib   868.66 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        3  libicucore.A.dylib   868.67 Kb  icu::ZoneMeta::getSingleCountry(icu::UnicodeString const&, icu::UnicodeString&)
        4  libicucore.A.dylib   868.67 Kb  icu::DateFormatSymbols::initZoneStringFormat()
        5  libicucore.A.dylib   868.67 Kb  icu::DateFormatSymbols::getZoneStringFormat() const
        6  libicucore.A.dylib   868.67 Kb  icu::SimpleDateFormat::subParse(icu::UnicodeString const&, int&, unsigned short, int, signed char, signed char, signed char*, icu::Calendar&) const
        7  libicucore.A.dylib   868.67 Kb  icu::SimpleDateFormat::parse(icu::UnicodeString const&, icu::Calendar&, icu::ParsePosition&) const
        8  libicucore.A.dylib   868.67 Kb  icu::DateFormat::parse(icu::UnicodeString const&, icu::ParsePosition&) const
        9  libicucore.A.dylib   868.67 Kb  udat_parse
        10 CoreFoundation       868.67 Kb  CFDateFormatterGetAbsoluteTimeFromString
        11 CoreFoundation       868.67 Kb  CFDateFormatterCreateDateFromString
        12 Foundation           868.67 Kb  -[NSDateFormatter getObjectValue:forString:range:error:]
        13 Foundation           868.75 Kb  -[NSDateFormatter getObjectValue:forString:errorDescription:]
        14 Foundation           868.75 Kb  -[NSDateFormatter dateFromString:]

    Thanks!

  • tomcat axis2 shared libs and spring destroy-method problem

    - by EugeneP
    In tomcat/lib there are axis* and axis2* jars. I cannot delete them (the sysadmin won't allow it). My web app invokes web services. I put the JAX-WS jars directly into myapp/WEB-INF/lib, so internal calls from the web app's servlets use the JAX-WS libraries. But since the bean's "destroy-method" invokes a web service when the session is destroyed, Spring makes that service call through the tomcat/lib axis2 libraries rather than the WEB-INF/lib JAX-WS ones.

    How can I, (a) without deleting the axis2 libs from the shared Tomcat folder, (b) make Spring use the JAX-WS libraries for the web-service call?

  • Shared Source CLI 4.0?

    - by Paul Alexander
    Microsoft released the Shared Source Common Language Infrastructure (the code previously known as ROTOR) code some years ago basically as a reference implementation of the .NET runtime. While the actual .NET runtime (mscorlib, mscoree, mscorjit, etc.) aren't compiled from the code, debugging them shows that they are remarkably similar and at a minimum share much of the same memory structures. This has been an invaluable resource when debugging tricky system behavior with .NET 2.0 compiled assemblies. Now that 4.0 has been released with major changes to the runtime I'd love to find the reference source for that as well. Microsoft has changed names for the source in the past so I'm either searching for the wrong thing, or it hasn't been released. Is there reference source for a .NET 4.0 compatible runtime?

  • Read vector into CUDA shared memory

    - by Ben
    I am new to CUDA and programming GPUs. I need each thread in my block to use a vector of length ndim. So I thought I might do something like this:

        extern __shared__ float* smem[];
        ...
        if (threadIdx.x == 0) {
            for (int d = 0; d < ndim; ++d) {
                smem[d] = vector[d];
            }
        }
        __syncthreads();
        ...

    This works fine. However, it seems wasteful that a single thread should do all the loading, so I changed the code to:

        if (threadIdx.x < ndim) {
            smem[threadIdx.x] = vector[threadIdx.x];
        }
        __syncthreads();

    which does not work. Why? It gives different results than the above code even when ndim << blockDim.x.
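
    As an aside, extern __shared__ float* smem[] declares an array of float pointers, not floats; the usual declaration for a dynamically sized shared float array, together with a cooperative strided load that also covers ndim > blockDim.x, looks like the following sketch (kernel and parameter names are illustrative):

        __global__ void useVector(const float* vector, int ndim) {
            extern __shared__ float smem[];   // one float array, sized at kernel launch

            // thread t loads elements t, t + blockDim.x, t + 2*blockDim.x, ...
            for (int d = threadIdx.x; d < ndim; d += blockDim.x) {
                smem[d] = vector[d];
            }
            __syncthreads();
            // ... all threads in the block may now read smem[0..ndim-1]
        }

        // launch: the third <<< >>> parameter is the shared-memory size in bytes
        // useVector<<<gridBlocks, threadsPerBlock, ndim * sizeof(float)>>>(d_vec, ndim);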

  • shared library in C

    - by sasayins
    Hi, I am creating a shared library in C but am not sure how to structure the source files correctly. I want to expose an API function, printHello:

        void printHello(char *text);

    This printHello function calls another function. In the source file, libprinthello.c:

        void printHello(char *text)
        {
            printHi();
            printf("%s", text);
        }

    Since this printHello function is the interface for the user or application, the header file libprinthello.h contains:

        extern void printHello(char *text);

    Then in the source file of the printHi function, printhi.c:

        void printHi()
        {
            printf("Hi\n");
        }

    My problem is: since printHello is the only function I want to expose to the user, how should I declare and implement the printHi function?
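
    One common approach with GCC (a sketch; it assumes the library is built with something like gcc -shared -fPIC libprinthello.c printhi.c -o libprinthello.so): declare printHi in a private header with hidden visibility, so other files inside the .so can call it but it never appears among the library's exported dynamic symbols:

        /* printhi.h -- internal header, not shipped with the library */
        #ifndef PRINTHI_H
        #define PRINTHI_H

        /* hidden: callable across files within the .so, invisible to library users */
        __attribute__((visibility("hidden"))) void printHi(void);

        #endif

    The inverse convention also works: compile everything with -fvisibility=hidden and mark only printHello with __attribute__((visibility("default"))). Either way, nm -D libprinthello.so should then list printHello but not printHi.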

  • How to find the leaky faucet that loads into Malloc 32kb

    - by Rob
    I have been messing around with Leaks trying to find which allocation is not being deallocated (I am still new to this) and could really use some experienced insight. The bit of code below seems to be the culprit. Every time I press the button that calls this code, an additional 32 KB is allocated, and when the button is released that memory does not get deallocated. What I found was that every time AVAudioPlayer is called to play an m4a file, the final function to parse the m4a file is MP4BoxParser::Initialize(), and this in turn allocates 32 KB through Cached_DataSource::ReadBytes.

    My question is: how do I go about deallocating that after it is finished, so that it doesn't keep allocating 32 KB every time the button is pressed? Any help you could provide is greatly appreciated!

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            // stop playing
            [theAudio stop];

            // cancel any pending handleSingleTap messages
            [NSObject cancelPreviousPerformRequestsWithTarget:self
                                                     selector:@selector(handleSingleTap)
                                                       object:nil];

            UITouch *touch = [[event allTouches] anyObject];
            NSString *filename = [g_AppsList objectAtIndex:[touch view].tag];
            NSString *path = [[NSBundle mainBundle] pathForResource:filename ofType:@"m4a"];

            // note: under manual memory management, re-assigning theAudio here without
            // releasing the previous instance leaks one player (and its parser cache) per press
            theAudio = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                                              error:NULL];
            theAudio.delegate = self;
            [theAudio prepareToPlay];
            [theAudio setNumberOfLoops:-1];
            [theAudio setVolume:g_Volume];
            [theAudio play];
        }

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that will involve a potentially very large volume of image data for display. We are talking hundreds of roughly 640x480 images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work, and I've found that a typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed, but the very large images can be a whopping 5+ MB in some cases.

    This project is targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high-visibility.

    I'm probably on the higher end of intermediate at Objective-C, so I am looking for some solid articles and advice on the following:

        1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
        2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
        3) Anything dealing with memory paging, particularly as it pertains to images

    I will epilogue by saying that the numbers above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself, as opposed to being loaded in from an external server.
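
    One concrete lever worth knowing while researching point 1 (stock UIKit behaviour): [UIImage imageNamed:] keeps decoded images in a system cache for the lifetime of the process, which a session touching hundreds of images cannot afford, whereas initWithContentsOfFile: creates an uncached image that is freed with its last owner. A sketch (file names are illustrative):

        // cached for the app's lifetime -- fine for small UI chrome, bad for bulk photos
        UIImage *chrome = [UIImage imageNamed:@"toolbar-icon.png"];

        // uncached -- deallocated with its last owner, suitable for streaming many images
        NSString *path = [[NSBundle mainBundle] pathForResource:@"photo-0001" ofType:@"jpg"];
        UIImage *photo = [[UIImage alloc] initWithContentsOfFile:path];
        // ... display, then release when the view scrolls away (pre-ARC: [photo release])

    Pairing that with dropping off-screen images in didReceiveMemoryWarning is the standard pattern for this kind of volume.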

  • Core Data - How to check if a managed object's properties have been deallocated?

    - by georryan
    I've created a program that uses Core Data, and it works beautifully. I've since attempted to move all my Core Data method calls and fetch routines into a self-contained class. My main program instantiates that class and makes some basic method calls into it, and the class does all the Core Data work behind the scenes.

    What I'm running into is that sometimes, when I grab a managed object from the context, I'll have a valid object but its properties have been deallocated, and I'll cause a crash. I've played with zombies and looked for memory leaks, and what I've gathered is that the run loop seems to be responsible for deallocating the memory, but I'm not sure. Is there a way to determine whether that memory has been deallocated, and to force Core Data to get it back when I need to access it? My managedObjectContext never gets deallocated, and neither does the fetchedResultsController.

    I thought maybe I needed to use the [managedObjectContext refreshObject:mergeChanges:] method, or [managedObjectContext setRetainsRegisteredObjects:]. Although I'm under the impression the latter may not be the best bet, since it will be more memory-intensive (from what I understand).

    These errors only showed up when I moved the Core Data calls into another class file, and they appear at random. Any insight would be appreciated. -Ryan
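
    What's described here sounds like Core Data faulting rather than stray deallocation: the context is free to turn a registered object back into a fault and drop its property data; the object pointer stays valid and the data is transparently re-fetched on next access, provided the context is still around. A sketch of the standard knobs (all stock NSManagedObjectContext/NSManagedObject API):

        // is the object currently a fault, i.e. its attribute data not resident?
        if ([object isFault]) {
            // accessing any attribute fires the fault and reloads the row
        }

        // force the context to re-read an object's data from the persistent store
        [managedObjectContext refreshObject:object mergeChanges:YES];

        // keep strong references to registered objects so their data stays resident
        [managedObjectContext setRetainsRegisteredObjects:YES];

    If a crash still occurs when a fault fires, the usual suspect is the object's context (or the object itself) being released out from under the wrapper class.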

  • Managing of shared resources between classes?

    - by Axarydax
    Imagine that I have several Viewer components used for displaying text, with a few modes the user can switch between (different font presets for viewing text/binary/hex). What would be the best approach for managing shared objects, for example fonts, the find dialog, etc.? I figured that a static class with lazily initialized objects would be OK, but this might be the wrong idea.

        static class ViewerStatic
        {
            private static Font monospaceFont;

            public static Font MonospaceFont
            {
                get
                {
                    if (monospaceFont == null)
                        //TODO read font settings from configuration
                        monospaceFont = new Font(FontFamily.GenericMonospace, 9, FontStyle.Bold);
                    return monospaceFont;
                }
            }

            private static Font sansFont;

            public static Font SansFont
            {
                get
                {
                    if (sansFont == null)
                        //TODO read font settings from configuration
                        sansFont = new Font(FontFamily.GenericSansSerif, 9, FontStyle.Bold);
                    return sansFont;
                }
            }
        }
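
    One caveat worth noting: the null-check above is not thread-safe, so if any Viewer ever touches these properties from a background thread, two Font instances could be created and one leaked. On .NET 4, Lazy<T> gives the same lazy semantics with thread safety by default; a sketch of the equivalent property:

        static class ViewerStatic
        {
            private static readonly Lazy<Font> monospaceFont = new Lazy<Font>(
                // TODO read font settings from configuration before constructing
                () => new Font(FontFamily.GenericMonospace, 9, FontStyle.Bold));

            public static Font MonospaceFont
            {
                // the factory runs once, on first access, under a lock
                get { return monospaceFont.Value; }
            }
        }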

  • shared libraries creating a soft link

    - by robUK
    Hello. Red Hat 5.5, gcc version 4.1.2. I have a directory called lib, and in that directory I have all the shared libraries (about 30) that we get from our customer, whose API we link against.

    Directory structure:

        /usr/CSAPI/lib

    However, our customer periodically updates their API, so we get new libraries, normally about 3 or 4 at a time. What I have been doing when I get new libraries is to move the old ones to another directory and replace them with the new libraries in the lib directory:

        /usr/CSAPI/Old_libs

    The new and old files have the same names, e.g.:

        libcs.so   <- old
        libcs.so   <- new

    Is there a better way to manage this? I was thinking of creating a soft link, but as the names are the same, I am not sure this will work. Many thanks.
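
    The usual convention is to give each delivery a versioned file name and point an unversioned symlink at whichever one is current; the link name stays stable for the linker while the real files never collide. A sketch (version suffixes are illustrative):

        # store each vendor drop under its own versioned name
        cp new_drop/libcs.so /usr/CSAPI/lib/libcs.so.2013.1
        cd /usr/CSAPI/lib

        # -sf replaces the existing link in place
        ln -sf libcs.so.2013.1 libcs.so

        # rolling back to the previous drop is just re-pointing the link
        ln -sf libcs.so.2012.4 libcs.so

    Linking with -L/usr/CSAPI/lib -lcs works as before; the loader follows the symlink at run time, so upgrades and rollbacks never require touching the application.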

  • Kohana PHP - Multiple apps with shared model

    - by Josamoto
    I'm using Kohana 3 to create a website that has two applications: an admin application and the actual site frontend. I have separated my folders so the hierarchy looks as follows:

        /applications
            /admin
                /classes
                    /controller
                    /...
            /site
                /classes
                    /controller
                    /...

    My question is how to go about creating a shared /model folder. Essentially, both the admin and the site operate on the same data, so the database layer and business logic remain more or less the same. So to me it makes sense to have a single model folder sitting outside the two application folders. Is it possible to achieve the following hierarchy?

        /applications
            /model   --> where model sits in a neatly generic location, accessible to all applications
            /admin
                /classes
                    /controller
                    /...
            /site
                /classes
                    /controller
                    /...

    Thanks in advance!
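
    Kohana 3's cascading filesystem is designed for exactly this: shared code goes in a module, and any class under the module's classes/model/ directory resolves in every application that loads the module. A sketch (the module name and path are illustrative):

        // in each application's bootstrap.php
        Kohana::modules(array(
            'common' => MODPATH.'common',   // holds classes/model/... shared by admin and site
            // ... other modules
        ));

    With that in place, a file such as modules/common/classes/model/user.php defining Model_User is visible to both applications, and either one can still override it by shipping its own classes/model/user.php.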

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application that will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:

        class A {
            int a;
            int b;
            int* v;
        };

    where:

        - The memory for v is allocated in the constructor.
        - v is allocated once when an object of type A is created and never needs to be resized.
        - The size of v varies across instances of A.

    The application will potentially have a huge number of such objects and will mostly need to stream them through the CPU, performing only very simple computations on the member variables.

        - Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory?
        - What tools and techniques can be used to test whether this fragmentation is a performance bottleneck?
        - If such fragmentation is a performance issue, are there techniques that allow A and v to be allocated in a contiguous region of memory?
        - Or are there techniques to aid memory access, such as a prefetching scheme? For example, get an object of type A and operate on the other member variables while prefetching v.
        - If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance?

    The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, compiled with either GCC or MSVC.
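
    On the co-location question: with separate allocations, A and its v will in general land in different heap blocks, so streaming the objects costs an extra dependent pointer chase (and a likely cache miss) per object; tools such as cachegrind (valgrind --tool=cachegrind) or perf stat's cache-miss counters on Linux will show whether that dominates. One way to guarantee adjacency when each object's size is fixed at construction is to co-allocate the fields and the array in a single block, the classic trailing-array idiom (a sketch; this relies on over-allocation, which is well supported in practice though outside strict standard C++):

        #include <cstddef>
        #include <new>

        struct A {
            int a;
            int b;
            std::size_t len;
            int v[1];   // trailing array; the real length is len

            static A* create(std::size_t n) {
                // one allocation holds the fields and the n ints right behind them
                void* mem = ::operator new(sizeof(A) + (n - 1) * sizeof(int));
                A* p = static_cast<A*>(mem);
                p->a = 0;
                p->b = 0;
                p->len = n;
                return p;
            }

            static void destroy(A* p) {
                ::operator delete(p);
            }
        };

    The fixed-size int v[max_length] variant achieves the same locality with less machinery, at the price of wasted memory when sizes vary widely, which itself hurts cache utilization when streaming.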

  • How to simulate OutOfMemory exception

    - by Gacek
    I need to refactor my project in order to make it immune to OutOfMemory exceptions. There are huge collections used in my project, and by changing one parameter I can make my program more accurate or make it use less memory. OK, that's the background. What I would like to do is run the routines in a loop:

        1. Run the subroutines with the default parameter.
        2. Catch the OutOfMemory exception, change the parameter, and try to run it again.
        3. Repeat step 2 until the parameters allow the subroutines to run without throwing the exception (usually only one change will be needed).

    Now, I would like to test it. I know that I can throw the OutOfMemory exception on my own, but I would like to simulate a real situation. So the main question is: is there a way of setting some kind of memory limit for my program, after reaching which the OutOfMemory exception will be thrown automatically? For example, I would like to set a limit of, say, 400 MB of memory for my whole program, to simulate the situation where that is the amount of memory available in the system. Can it be done?
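
    A true per-process cap needs OS support (a Windows Job Object with a process-memory limit, which requires P/Invoke from .NET). For a quick test harness, though, a low-tech simulation is to pre-allocate "ballast" that soaks up memory until roughly the desired budget remains, then run the routines; they will hit OutOfMemoryException near that budget. A sketch (block size and accounting are illustrative, and the resulting limit is approximate):

        using System;
        using System.Collections.Generic;

        static class MemoryBudget
        {
            // Fills memory, then frees blocks until about budgetBytes of headroom remain.
            // Keep the returned list referenced for the duration of the test.
            public static List<byte[]> ReserveAllBut(long budgetBytes)
            {
                const int block = 16 * 1024 * 1024;
                var ballast = new List<byte[]>();
                try
                {
                    while (true)
                        ballast.Add(new byte[block]);
                }
                catch (OutOfMemoryException)
                {
                    long freed = 0;
                    while (ballast.Count > 0 && freed < budgetBytes)
                    {
                        freed += block;
                        ballast.RemoveAt(ballast.Count - 1);
                    }
                    GC.Collect();
                }
                return ballast;
            }
        }

        // usage: var ballast = MemoryBudget.ReserveAllBut(400L * 1024 * 1024);
        //        RunRoutinesWithRetryLoop();   // hypothetical entry point of the code under test
        //        GC.KeepAlive(ballast);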

  • Is extending a base class with non-virtual destructor dangerous in C++

    - by Akusete
    Take the following code:

        class A { };
        class B : public A { };
        class C : public A { int x; };

        int main(int argc, char** argv) {
            A* b = new B();
            A* c = new C();
            // in both cases, only ~A() is called, not ~B() or ~C()
            delete b;   // is this ok?
            delete c;   // does this line leak memory?
            return 0;
        }

    When calling delete on a class with a non-virtual destructor and added data members (like class C), can the memory allocator tell what the proper size of the object is? If not, is memory leaked? Secondly, if the class has no added members and no explicit destructor behaviour (like class B), is everything OK?

    I ask this because I wanted to create a class extending std::string (which I know is not recommended, but for the sake of the discussion bear with it) and overload the += and + operators. -Weffc++ gives me a warning because std::string has a non-virtual destructor, but does it matter if the subclass has no members and does not need to do anything in its destructor? FYI, the += overload was to do proper file path formatting, so the path class could be used like:

        class path : public std::string {
            // ... overload +=, +
            // ... add last_path_component, remove_path_component, ext, etc...
        };

        path foo = "/some/file/path";
        foo = foo + "filename.txt";
        // and so on...

    I just want to make sure that someone doing this:

        path* foo = new path();
        std::string* bar = foo;
        delete bar;

    would not cause any problems with memory allocation.
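
    For the record, deleting a derived object through a base pointer whose destructor is not virtual is undefined behaviour in both cases; it often appears to work for an empty subclass, but nothing guarantees it. A composition-based sketch avoids the question entirely (the member-function names are illustrative):

        #include <string>

        class path {
            std::string s;   // owned by value, destroyed normally; no inheritance involved
        public:
            path(const char* p) : s(p) {}
            path(const std::string& p) : s(p) {}

            path& operator+=(const std::string& component) {
                if (!s.empty() && s[s.size() - 1] != '/')
                    s += '/';                 // insert a separator only when needed
                s += component;
                return *this;
            }

            friend path operator+(path lhs, const std::string& rhs) {
                lhs += rhs;                   // reuse += so the two operators agree
                return lhs;
            }

            const std::string& str() const { return s; }   // explicit access when a string is needed
        };

        // usage:
        //   path foo("/some/file/path");
        //   foo = foo + "filename.txt";     // "/some/file/path/filename.txt"

    Since nothing inherits from std::string, there is no base pointer through which a wrong delete can happen, and -Weffc++ stays quiet.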

  • Error while storing to a shared network drive when running website on IIS

    - by vini
    I get the following exception:

        System.UnauthorizedAccessException: Access to the path
        '\\192.168.0.99\c$\DTA\DTA564348.64U20121217161754.dta' is denied.

    When I store the file while running my ASP.NET website locally, it works fine and the file is stored on the shared network drive. However, when I run the site through IIS I get the error above.

    C# code:

        StrPath = FilePhypath.ToString();

    Web.config:

        <location allowOverride="true">
          <appSettings>
            <add key="FilephyPath" value="\\192.168.0.99\c$\DTA\"/>
          </appSettings>
        </location>
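
    The usual cause: the local development server runs the site as the logged-in user, while under IIS it runs as the application pool identity (ApplicationPoolIdentity or NETWORK SERVICE), which has no rights on the share, and an administrative c$ share is restricted to administrators in any case. A sketch of the classic fixes (the account name is illustrative): grant a real domain account access to a proper share (not c$), then either set it as the application pool identity in IIS Manager, or impersonate it in web.config:

        <system.web>
          <!-- run requests as a domain account that has permissions on \\192.168.0.99 -->
          <identity impersonate="true"
                    userName="DOMAIN\svc_dta"
                    password="..." />
        </system.web>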
