Search Results

Search found 21089 results on 844 pages for 'virtual memory'.


  • Oracle Virtual Desktop Infrastructure

    - by Fat Bloke
    A lot of the recent blog entries here have been about Oracle VM VirtualBox, possibly the coolest personal desktop virtualization product known to man. Deploying VirtualBox on your PC or Mac lets you run many virtual desktops at the same time for one user: you. But did you know that VirtualBox can also power an Enterprise-scale virtual desktop deployment, delivering many desktops to many users? As part of another Oracle product, Oracle Virtual Desktop Infrastructure (VDI), VirtualBox can run your Windows, Linux or Solaris desktops on servers located in the datacenter. Oracle VDI orchestrates the whole deal by looking after: creating or cloning the virtual desktops from a master template; managing the lifecycle of the desktops (create, start, suspend, resume, stop, delete); assigning which users get which desktops; delivering easy and fast access to these virtual desktops from almost any device, such as existing PCs or Macs, iPads, or specially designed Sun Ray client devices; and load balancing and session management of all of this. This is an increasingly hot area of the IT landscape, so the Fat Bloke has decided to create a new blog category (VDI) and dedicate a few blog entries to looking into this in a bit more detail over the next few weeks. Watch this space... - FB

    Read the article

  • How to Increase Memory Allocated to IIS .NET Application?

    - by Mark Hansen
    We are using Windows 2008 R2 and IIS 7 running on Amazon EC2. IIS is running a single .NET application written in C#. We are having performance issues and I want to give the application more memory, but I cannot figure out how to do it. How do I control the amount of memory that the CLR gets? I'm a total newbie with IIS, .NET and the CLR. If I were working with Java, I would just use the -Xmx flag to increase the memory available to the JVM (e.g., -Xmx3000m for 3 GB). But I cannot seem to figure out how to do this in the Windows world.

    Read the article

  • Reading the memory of an N64

    - by toazron1
    I'm looking for a way to read the memory of an N64, while the game is running, in real time. I have a C# program which hooks into the memory of a running emulator and tracks SSB64 stats. I want to do the same thing with the physical N64. Currently it is possible to read the memory with a GameShark Pro; however, it's extremely slow, buggy, and not practical for what I am trying to accomplish. Would it be possible to tap into the GameShark, or the N64 directly, to access the memory in real time? Thanks!

    Read the article

  • How to keep memory consumption below 500 MB, or fewer than 25 background processes, on a netbook?

    - by overmann
    I bought a netbook yesterday (I'm loving it), but I will never understand why there need to be so many processes running in the background. I worry about other users who have no idea about this and keep using their computers with occasional choppiness due to 70 background processes occupying most of the memory. I'd like to keep my memory consumption below 500 MB (I have 1 GB); is this possible? What are your ideas to make this work? I always run Microsoft Security Essentials at startup with real-time protection; how many features can I disable to reach my goal memory usage?

    Read the article

  • Tweaking Hudson memory usage

    - by rovarghe
    Hudson 3.1 has some performance optimizations that greatly reduce its memory footprint. Prior to this, Hudson always held the entire data model (all jobs and all builds) in memory, which hurt scalability. Some installations configured heap sizes in excess of 1 GB to counteract this. Hudson 3.1.x maintains an MRU cache and only loads jobs and builds as they are required. Because existing APIs could not be changed while staying backward compatible with plugins, there were limits to how far we could go with this approach. Memory optimizations almost always come with a related cost; in this case it is the additional I/O that has to be performed to load data on request. On a small site with frequent traffic this is usually not noticeable, since the MRU cache will usually hold on to all the data. A large site with infrequent traffic might experience some delays when the first request hits the server after a long gap. If you have a large heap and are able to allocate more memory, the cache settings can be adjusted to take advantage of this and even go back to pre-3.1 behavior. All the cache settings can be passed as options to the JVM container (Tomcat or the default Jetty container) using the -D option. There are two caches, independent of each other: one for Jobs and the other for Builds.

    For the jobs cache:

    hudson.jobs.cache.evict_in_seconds (default=60): Seconds since last access (whether from a servlet request or a background cron thread) after which a job is purged from the cache. Set this to 0 to never purge based on time.

    hudson.jobs.cache.initial_capacity (default=1024): Initial number of jobs the cache can accommodate. Setting this to the number of jobs you typically display on your Hudson landing page or home page will speed up consecutive access to that page. If the default is too large you may consider downsizing and using that memory for the Builds cache instead.

    hudson.jobs.cache.max_entries (default=1024): Maximum number of jobs in the cache. The default is large enough for most installations, but if you see I/O activity whenever you access the Hudson home page you might consider increasing this; first verify whether the I/O is caused by frequent eviction (see above) rather than by the cache not being large enough.

    For the builds cache: The builds cache is used to store Build objects as they are read from storage. Typically this happens when a user drills down into the details of a particular Job from the Hudson home page. The cache is shared among builds for different jobs, since in most installations not all jobs are accessed with the same frequency, so a per-job builds cache would be a waste of memory.

    hudson.job.builds.cache.evict_in_seconds (default=60): Same as the equivalent Job cache setting, applied to Builds.

    hudson.job.builds.cache.initial_capacity (default=512): Same as the equivalent Job cache setting. Note the smaller initial size. If your site stores a large number of builds and frequently accesses many of them, you might consider bumping this up.

    hudson.job.builds.cache.max_entries (default=10240): The default max is large enough for most installations. The builds cache holds larger objects, so be careful about increasing the upper limit. See the section on monitoring below.

    Sample usage:

        java -jar hudson-war-3.1.2-SNAPSHOT.war -Dhudson.jobs.cache.evict_in_seconds=300 \
            -Dhudson.job.builds.cache.evict_in_seconds=300

    Monitoring cache usage: The 'jmap' tool that comes with the JDK can be used to monitor cache performance in an indirect way, by looking at the number of Job and Build objects in each cache. Find the PID of the Hudson instance and run:

        $ jmap -histo:live <pid> | grep 'hudson.model.*Lazy.*Key$'

    Here's a sample output:

         num   #instances   #bytes   class name
         523:          28      896   hudson.model.RunMap$LazyRunValue$Key
        1200:           3       96   hudson.model.LazyTopLevelItem$Key

    These are the keys to the Jobs (LazyTopLevelItem$Key) and Builds (RunMap$LazyRunValue$Key) in the caches, so counting the number of keys is a good indicator of the number of items in each cache at any given moment. The sizes in bytes can be ignored; they are just the sizes of the keys, not the actual sizes of the objects they point to. Those sizes can only be obtained with a profiler. From the output above we can conclude that there are 3 jobs and 28 builds in memory. The 28 builds could all belong to one job or be spread across all three. Over time, on an idle system, these should get evicted and the caches should be empty. In practice, because of background cron threads and triggers, jobs rarely fall to zero. Access of a job or a build by a cron thread resets the eviction timer.

    Read the article

  • Does a virtual machine running Windows Server 2003 on a Vista Home Premium host suffer from Vista's constraints?

    - by mokokamello
    Experts! I have a machine with Vista Home Premium and I wanted to share a folder with my colleagues. Unfortunately, Vista allows only 10 concurrent connections to a shared folder. One of my colleagues advised me to install a virtual machine with Windows Server 2003 so that I would be able to share the folder with more than 1000 users. Another colleague stated that the kind of virtual server does not matter; what matters is the host OS, in this case Vista, so the folder could still be shared by no more than 10 users. My question is: does a virtual machine running Windows Server 2003 on a Vista Home Premium host suffer from Vista's constraints?

    Read the article

  • Disposing of ContentManager increases memory usage

    - by Havsmonstret
    I'm trying to wrap my head around how memory management works in XNA 4.0. I've created a screen management test, and when I close a screen, the ContentManager created by that screen is unloaded. I have used ANTS Memory Profiler to look at how the memory usage changes when I do this, and it gives me some results which make me scratch my head. The game starts by loading two textures (435 kB and 48.3 kB), which puts the usage at about 61 MB. Then, when I delete the screen and run Unload on the ContentManager, the memory usage drops to 56.5 MB but immediately goes back up to 64.8 MB. Am I doing something wrong, or is this usual for XNA games? Do I have to dispose of everything the ContentManager loads separately, or do I need to do something more to the ContentManager? Thanks in advance!

    Read the article

  • VirtualBox is not starting

    - by Neo
    I had been using Ubuntu 13.04 32-bit. After upgrading to Ubuntu 13.10, my previously installed VirtualBox (from the Software Center) did not start, even after removing and reinstalling it many times. Could anyone tell me the way to fix this? In order to solve the problem, I upgraded to Ubuntu 14.04 LTS, but the problem still persists even after reinstalling VirtualBox from the Software Center. Can anyone help me, since I need VirtualBox?

    Read the article

  • Virtual Linux

    Virtual Linux has many desktop uses, yet it's really enterprises that are most actively driving virtual Linux's development.

    Read the article

  • How to hot-add a vCPU to a virtual SQL Server

    One of the benefits of running SQL Server in a virtual environment is the ability to present additional vCPUs to the virtual server online and in real time, without interrupting running processes. Our VM administrator normally presents only one vCPU on the virtual server and extends as required. This article describes how SQL Server is able to detect hot-added vCPUs in my virtual server.

    Read the article

  • Growing your VirtualBox Virtual Disk

    - by Fat Bloke
    Don't you just hate it when your guest's virtual disk fills up? Fortunately, if you're running inside VirtualBox, you can resize your virtual disk and magically make your guest have a bigger disk very easily. There are 2 steps to doing this... 1. Resize the virtual disk. Use the VBoxManage command line tool to extend the size of the virtual disk, specifying the path to the disk and the size in MB: VBoxManage modifyhd <uuid>|<filename> [--type normal|writethrough|immutable|shareable|readonly|multiattach] [--autoreset on|off] [--compact] [--resize <megabytes>|--resizebyte <bytes>]  If you booted up your guest at this point, the extra space would be seen as an unformatted area on the disk. So we now need to tell the guest about the extra space available. 2. Extend the guest's partition to use the extra space. How you do this step depends on your guest OS type and the tools you have available. Linux guests often include the excellent gparted partition editor, whereas Windows 7 and 8 provide the Computer Management tool which can resize partitions. Unfortunately, my Windows XP VM has no such tool. But I do have a couple of other options: Most Linux installable .isos include the aforementioned gparted tool, so I could simply attach, say, an Ubuntu.iso as a Virtual CD/DVD in my Windows XP VM and boot off that, then use gparted to extend the Windows XP partition before finally rebooting. But I took another route and attached my resized virtual disk to a Windows Server 2012 VM I had lying around. Then I used the Computer Management tool in Windows Server 2012 to extend the partition of the Windows XP disk, before shutting down, unplugging the disk and reattaching it to my Windows XP VM. (Note that if your VMs use different disk controllers, Windows will check the disks on booting.) When I finally boot up my Windows XP guest I see the available disk space and all is well. At least until the next time - FB
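
    For example, a concrete invocation of step 1 based on the synopsis above (the disk file name is made up; --resize takes the new total size in MB, so 20480 grows the disk to 20 GB):

        VBoxManage modifyhd "WinXP.vdi" --resize 20480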

    Read the article

  • Can an OOM be caused by not finding enough contiguous memory?

    - by raticulin
    I start some Java code with -Xmx1024m, and at some point I get an hprof dump due to an OOM. The hprof shows just 320 MB, and gives me a stack trace:

        at java.util.Arrays.copyOfRange([CII)[C (Arrays.java:3209)
        at java.lang.String.<init>([CII)V (String.java:215)
        at java.lang.StringBuilder.toString()Ljava/lang/String; (StringBuilder.java:430)
        ...

    This comes from a large string I am copying. I remember reading somewhere (I cannot find where) that what happens in these cases is: the process still has not consumed 1 GB of memory and is way below it; even though the heap is still below 1 GB, it needs some amount of memory, and for copyOfRange() it has to be contiguous memory, so even though it is not over the limit yet, if it cannot find a large enough contiguous piece of memory on the host, it fails with an OOM. I have tried to find documentation on this (copyOfRange() needing a block of contiguous memory) but could not find any. The other possible culprit would be not enough permgen memory. Can someone confirm or refute the contiguous-memory hypothesis? Any pointer to some doc would help too.
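
    A quick way to probe this (a diagnostic sketch, not the asker's code; the class name and the 16-character chunk are made up for illustration) is to force ever-larger string copies and print the Runtime heap figures at the moment the OOM hits. A Java array is a single object, so the copied char[] does need one contiguous chunk of heap; if max heap is far above used heap when the copy fails, the limit being hit is not -Xmx itself but something else (for example, on a 32-bit JVM, the heap being unable to grow because the native address space is fragmented).

        public class LargeCopyProbe {
            public static void main(String[] args) {
                StringBuilder sb = new StringBuilder();
                try {
                    while (true) {
                        sb.append("0123456789abcdef");  // keep growing the builder
                        String copy = sb.toString();    // each copy allocates a new contiguous char[]
                    }
                } catch (OutOfMemoryError oom) {
                    long mb = 1024L * 1024L;
                    Runtime rt = Runtime.getRuntime();
                    // Small allocations still succeed here, since only the big copy failed.
                    System.err.println("OOM at ~" + (sb.length() / mb) + "M chars");
                    System.err.println("heap used : " + (rt.totalMemory() - rt.freeMemory()) / mb + " MB");
                    System.err.println("heap total: " + rt.totalMemory() / mb + " MB");
                    System.err.println("heap max  : " + rt.maxMemory() / mb + " MB");
                }
            }
        }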

    Read the article

  • How to force the operating system to give Java more memory?

    - by Denny
    Hello, I've got a problem with Java jar files and memory. I use NetBeans 6.7 to develop an application, and this application needs more memory to run because it converts other files. Whenever this application converts a 6-10 MB file, it crashes. So I set the NetBeans VM options to -Xms32m -Xmx256m, and the application can convert 6-10 MB files with no problem. I Clean and Build the project so it makes a jar file of my application. I run the jar on my computer and use jconsole to monitor the memory. The maximum memory available to the application shows 256 MB. But whenever I move it to other computers, it shows 65-66 MB in jconsole and the application crashes when converting 6-10 MB files, so I have to use the command prompt, java -jar -Xmx256m myjar.jar, to execute the jar with maximum memory. Why does this happen? On my computer the maximum memory shows 256 MB, but on other computers 65-66 MB. Can I force another computer to give extra maximum memory to my application? Thank you for your answer. I'm sorry for my inadequate English; if you find my question hard to understand, please let me know. Best regards, Denny. PS: FYI, the computer I used to develop the application has 2 GB of RAM; the other computers I tested on have 1-2 GB of RAM.
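
    As a first diagnostic step (a minimal sketch, with a made-up class name), it can help to print the heap ceiling the JVM actually picked on each machine. Without an explicit -Xmx, the default maximum heap depends on the JVM version and the machine's RAM (older client VMs defaulted to roughly 64 MB), which would explain the 65-66 MB jconsole reading:

        public class MaxHeapCheck {
            public static void main(String[] args) {
                // Reports the maximum heap (-Xmx or the JVM's default) in effect for this run.
                long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
                System.out.println("Maximum heap available to this JVM: ~" + maxMb + " MB");
            }
        }

    Note that running the jar directly ignores the VM options set in the NetBeans project, since those only apply to runs launched from the IDE, so the usual workaround is a small launcher script or shortcut on each machine that runs java -Xmx256m -jar myjar.jar.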

    Read the article

  • Can you catch an exceeded-memory error before it kills the script?

    - by kristovaher
    The thing is that I want to catch memory problems before they happen. I have a system that gets rows from a database and casts the returned associative array to a variable, but I never know what the size of the database result is or how much memory it will take once the database request is made. This means that my software can fail simply because memory is exceeded, and I want to avoid that somehow. One of the ways is obviously to make smaller database requests, but what if this is not possible, or what if I do not know the size of the data that is returned from the database? Is it possible to 'catch' situations where memory use is exceeded in PHP? Something like this:

        $requestOk = memory_test(function(){
            return doSomething();
        });
        if($requestOk){
            // Memory seems fine
            // $requestOk now has the value from the memory_test() function
        } else {
            // Function would have exceeded memory
        }

    I just find it problematic that my script can die at any moment because of memory issues. From what I know, try-catch cannot be used here because it is a fatal error. Any help would be appreciated!

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be either more strict (require information about what is guarded by a monitor) or more loose, restricting potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen? EDIT: My use of "strict" and "loose" was a bad choice in retrospect; I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.
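
    For reference, a minimal sketch of the guarantee being discussed (class and field names are made up): the writer publishes values under a lock, and the reader is guaranteed to see them only because it later acquires the same monitor. That acquire/release pair is the happens-before edge the JMM provides, regardless of whether the hardware achieves it with coherent caches or with explicit flushes:

        public class MonitorVisibility {
            private static final Object LOCK = new Object();
            private static int payload;      // not volatile: visibility comes only from the monitor
            private static boolean ready;

            public static void main(String[] args) throws InterruptedException {
                Thread writer = new Thread(() -> {
                    synchronized (LOCK) {    // releasing the monitor publishes the writes below
                        payload = 42;
                        ready = true;
                    }
                });
                Thread reader = new Thread(() -> {
                    while (true) {
                        synchronized (LOCK) {    // acquiring the same monitor makes prior writes visible
                            if (ready) {
                                System.out.println("payload = " + payload); // guaranteed to print 42
                                return;
                            }
                        }
                    }
                });
                reader.start();
                writer.start();
                writer.join();
                reader.join();
            }
        }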

    Read the article

  • Retain count problem, iPhone SDK

    - by neha
    Hi all, I'm facing a memory leak problem which is like this:

    1. I'm allocating an object of class A in class B. // RETAIN COUNT OF CLASS A OBJECT BECOMES 1
    2. I'm placing the object in an NSMutableArray. // RETAIN COUNT OF CLASS A OBJECT BECOMES 2
    3. In another class C, I'm grabbing this NSMutableArray, fetching all the elements of that array into a local NSMutableArray, and releasing the first array of class B. // RETAIN COUNT OF CLASS A OBJECTS IN LOCAL ARRAY BECOMES 1
    4. Now in this class C, I'm creating an object of class A and fetching the elements of the local NSMutableArray. // RETAIN COUNT OF NEW CLASS A OBJECT IN LOCAL ARRAY BECOMES 2 [ALLOCATION + FETCHED OBJECT WITH RETAIN COUNT 1]

    My question is: I want to retain this array which I'm displaying in the table view, and want to release it after new elements are filled into the array. I'm doing this in class B, so before adding new elements, I'm removing all the elements and releasing this array in class B. And in class C, I'm releasing the object of class A in dealloc. But in Instruments - Leaks, it's showing me a leak for this class A object in class C. Can anybody please tell me where I'm going wrong? Thanks in advance.

    Read the article

  • What is the exact difference between MEM_RESERVE and MEM_COMMIT states?

    - by pj4533
    As I understand it, MEM_RESERVE is actually 'free' memory, i.e. available to be used by my process, but just hasn't been allocated yet? Or it was previously allocated, but has since been freed? Specifically, see in my !address output below how I am nearly out of virtual address space (99900 KB free, 2307872 KB as MEM_PRIVATE), but the state summary shows that 44.75% of that is actually MEM_RESERVE. Does that mean it is actually free, in my process... but maybe fragmented?

        0:000> !address -summary
        --------- PEB a8bd8000 not found ----
        -------------------- Usage SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
           259af000 (  616124) : 22.29%    23.12%    : RegionUsageIsVAD
            618f000 (   99900) : 03.61%    00.00%    : RegionUsageFree
           13e22000 (  325768) : 11.78%    12.22%    : RegionUsageImage
           42c04000 ( 1093648) : 39.56%    41.04%    : RegionUsageStack
             42d000 (    4276) : 00.15%    00.16%    : RegionUsageTeb
           2625d000 (  625012) : 22.61%    23.45%    : RegionUsageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePeb
               1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
               1000 (       4) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
            Tot: a8bf0000 (2764736 KB) Busy: a2a61000 (2664836 KB)
        -------------------- Type SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
            618f000 (   99900) : 03.61%   : <free>
           13e22000 (  325768) : 11.78%   : MEM_IMAGE
            1e77000 (   31196) : 01.13%   : MEM_MAPPED
           8cdc8000 ( 2307872) : 83.48%   : MEM_PRIVATE
        -------------------- State SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
           57235000 ( 1427668) : 51.64%   : MEM_COMMIT
            618f000 (   99900) : 03.61%   : MEM_FREE
           4b82c000 ( 1237168) : 44.75%   : MEM_RESERVE
        Largest free region: Base 7e4a1000 - Size 000ff000 (1020 KB)

    Read the article

  • C++ Dynamic Allocation Mismatch: Is this problematic?

    - by acanaday
    I have been assigned to work on some legacy C++ code in MFC. One of the things I am finding all over the place are allocations like the following:

        struct Point {
            float x,y,z;
        };
        ...
        void someFunc( void )
        {
            int numPoints = ...;
            Point* pArray = (Point*)new BYTE[ numPoints * sizeof(Point) ];
            ...
            //do some stuff with points
            ...
            delete [] pArray;
        }

    I realize that this code is atrociously wrong on so many levels (C-style cast, using new like malloc, confusing, etc.). I also realize that if Point had defined a constructor it would not be called, and weird things would happen at delete [] if a destructor had been defined. Question: I am in the process of fixing these occurrences wherever they appear as a matter of course. However, I have never seen anything like this before and it has got me wondering. Does this code have the potential to cause memory leaks/corruption as it stands currently (no constructor/destructor, but with the pointer type mismatch), or is it safe as long as the array just contains structs/primitive types?

    Read the article

  • pthread_create followed by pthread_detach still results in a "possibly lost" error in Valgrind

    - by alesplin
    I'm having a problem with Valgrind telling me I have some memory possibly lost:

        ==23205== 544 bytes in 2 blocks are possibly lost in loss record 156 of 265
        ==23205==    at 0x6022879: calloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
        ==23205==    by 0x540E209: allocate_dtv (in /lib/ld-2.12.1.so)
        ==23205==    by 0x540E91D: _dl_allocate_tls (in /lib/ld-2.12.1.so)
        ==23205==    by 0x623068D: pthread_create@@GLIBC_2.2.5 (in /lib/libpthread-2.12.1.so)
        ==23205==    by 0x758D66: MTPCreateThreadPool (MTP.c:290)
        ==23205==    by 0x405787: main (MServer.c:317)

    The code that creates these threads (MTPCreateThreadPool) basically gets an index into a block of waiting pthread_t slots, and creates a thread with that. TI becomes a pointer to a struct that has a thread index and a pthread_t. (Simplified/sanitized):

        for (tindex = 0; tindex < NumThreads; tindex++) {
            int rc;
            TI = &TP->ThreadInfo[tindex];
            TI->ThreadID = tindex;
            rc = pthread_create(&TI->ThreadHandle,NULL,MTPHandleRequestsLoop,TI);
            /* check for non-success that I've omitted */
            pthread_detach(&TI->ThreadHandle);
        }

    Then we have a function MTPDestroyThreadPool that loops through all the threads we created and cancels them (since MTPHandleRequestsLoop doesn't exit).

        for (tindex = 0; tindex < NumThreads; tindex++) {
            pthread_cancel(TP->ThreadInfo[tindex].ThreadHandle);
        }

    I've read elsewhere (including other questions here on SO) that detaching a thread explicitly would prevent this possibly-lost error, but it clearly isn't. Any thoughts?

    Read the article

  • Instance of method leaking, iPhone

    - by Wolfert
    The following method shows up as leaking while performing a memory-leaks test with Instruments:

        - (NSDictionary*) initSigTrkLstWithNiv:(int)pm_SigTrkNiv SigTrkSig:(int)pm_SigTrkSig SigResIdt:(int)pm_SigResIdt SigResVal:(int)pm_SigResVal
        {
            NSArray *objectArray;
            NSArray *keyArray;
            if (self = [super init]) {
                self.SigTrkNiv = [NSNumber numberWithInt:pm_SigTrkNiv];
                self.SigTrkSig = [NSNumber numberWithInt:pm_SigTrkSig];
                self.SigResIdt = [NSNumber numberWithInt:pm_SigResIdt];
                self.SigResVal = [NSNumber numberWithInt:pm_SigResVal];
                objectArray = [NSArray arrayWithObjects:SigTrkNiv, SigTrkSig, SigResIdt, SigResVal, nil];
                keyArray = [NSArray arrayWithObjects:@"SigTrkNiv", @"SigTrkSig", @"SigResIdt", @"SigResVal", nil];
                self = [NSDictionary dictionaryWithObjects:objectArray forKeys:keyArray];
            }
            return self;
        }

    Code that invokes the instance:

        NSDictionary *lv_SigTrkLst = [[SigTrkLst alloc] initSigTrkLstWithNiv:[[tempDict objectForKey:@"SigTrkNiv"] intValue]
                                                                   SigTrkSig:[[tempDict objectForKey:@"SigTrkSig"] intValue]
                                                                   SigResIdt:[[tempDict objectForKey:@"SigResIdt"] intValue]
                                                                   SigResVal:[[tempDict objectForKey:@"SigResVal"] intValue]];
        [[QBDataContainer sharedDataContainer].SigTrkLstArray addObject:lv_SigTrkLst];
        [lv_SigTrkLst release];

    Instruments informs me that 'SigTrkLst' is leaking, even though I have released the instance. (I know that adding it to the array increments the retain count by 1, but releasing it removes it from the array?)

    Read the article

  • WPF Memory Leak

    - by Oskar Kjellin
    I have a WPF form that I did not create myself, so I am not very good at WPF. It is leaking badly though, up to 400 MB, and closing the form does not help. The problem lies in my application loading all the pictures at once; I would like to load only the ones visible at the moment. There are about 300 pictures and they are a bit large, so my WPF form suffers from loading them all. I have a DataTemplate with my own type that has a property Thumbnail. The code in the template is like this:

        <Image Source="{Binding Path=Thumbnail}" Stretch="Fill"/>

    And then I have a grid with a control that has the above template as source. The code for this control is below. Please provide me with hints on how to optimize the code, and perhaps fetch only the items that are visible and only have that many controls loaded at the same time?

        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="Controls:ElementFlow">
                    <Grid Background="{TemplateBinding Background}">
                        <Canvas x:Name="PART_HiddenPanel" IsItemsHost="True" Visibility="Hidden" />
                        <Viewport3D x:Name="PART_Viewport">
                            <!-- Camera -->
                            <Viewport3D.Camera>
                                <PerspectiveCamera FieldOfView="60" Position="0,1,4" LookDirection="0,-1,-4" UpDirection="0,1,0" />
                            </Viewport3D.Camera>
                            <ContainerUIElement3D x:Name="PART_ModelContainer" />
                            <ModelVisual3D>
                                <ModelVisual3D.Content>
                                    <AmbientLight Color="White" />
                                </ModelVisual3D.Content>
                            </ModelVisual3D>
                            <Viewport2DVisual3D RenderOptions.CachingHint="Cache" RenderOptions.CacheInvalidationThresholdMaximum="2" RenderOptions.CacheInvalidationThresholdMinimum="0.5"/>
                        </Viewport3D>
                    </Grid>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
        </Style>

    Read the article

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have this ASP.NET/SQL Server 2005 website running on a production server (Win2003, quad core, 4 GB). The site normally runs smoothly, but after 2-3 weeks I notice slow performance (specifically on one particular page). I also notice that the SQL Server process is using about 2 GB of RAM. So I restart the service, the site runs fast again, and the process uses 300-400 MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes up so much space and degrades performance? What can I do to avoid this? I'm trying to avoid restarting SQL Server every time this happens. Thank you!

    Read the article

  • Javascript new keyword and memory management

    - by Whyamistilltyping
    Coming from C++, it is ingrained into my mind that every time I call new I call delete. In JavaScript I find myself calling new occasionally in my code, but (I hope) the garbage collection in the browser will take care of the mess for me. I don't like this - is there a 'delete' method in JavaScript, and is how I use it different from in C++? Thanks.

    Read the article
