Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 13/626

  • Windows 7 using exactly HALF the installed memory

    - by Nathan Ridley
    I've taken this directly from System Information: Installed Physical Memory (RAM) 4.00 GB, Total Physical Memory 2.00 GB, Available Physical Memory 434 MB, Total Virtual Memory 5.10 GB, Available Virtual Memory 1.19 GB, Page File Space 3.11 GB. The BIOS also reports a full 4 GB available. Note the 4 GB installed, yet 2 GB total. I understand that on a 32-bit operating system you'll never get the full 4 GB of RAM; however, you typically get somewhere in the range of 2.5-3.2 GB. I have only 2 GB available! My swap file goes nuts when I do anything. Note that I have dual SLI NVIDIA video cards, each with 512 MB of onboard RAM, though I have the SLI feature turned off. Does anybody know why Windows might claim that I have exactly 2 GB of RAM in total? Note: previously asked on SuperUser, but closed as "belongs on superuser" before this site opened: http://serverfault.com/questions/39603/windows-7-using-exactly-half-the-installed-memory (I still need an answer!)

    Read the article

  • High Memory Utilization on weblogic

    - by Anup
    My WebLogic app server shows high memory utilization. The application seems to perform well without any memory issues. Now that my traffic is going to increase, I am worried about memory and have a feeling things could go bad, so I need to take action now. I am confused as to whether I should increase the JVM memory on the WebLogic instances, which means adding more physical memory, or increase the number of managed instances in the cluster. I would like to understand what having high memory utilization means, and the advantages and disadvantages of adding JVM memory versus adding managed instances. Thanks
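
    A useful first check before choosing between a bigger heap and more managed instances is how full the heap actually is relative to -Xmx. The sketch below is a generic java.lang.management example, not anything WebLogic-specific, and the class name is only illustrative; in practice the same numbers are available from jstat or the admin console.

    ```java
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;
    import java.lang.management.MemoryUsage;

    // Minimal sketch: report heap usage relative to the configured maximum (-Xmx).
    public class HeapCheck {
        public static void main(String[] args) {
            MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
            MemoryUsage heap = memory.getHeapMemoryUsage();
            long mb = 1024 * 1024;
            System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB (%.0f%% of max)%n",
                    heap.getUsed() / mb, heap.getCommitted() / mb, heap.getMax() / mb,
                    100.0 * heap.getUsed() / heap.getMax());
        }
    }
    ```

    If used memory routinely sits near max and full GCs reclaim little, the instance needs a larger heap (backed by enough physical RAM) or the load should be spread over more managed instances; if used stays well below max, the high utilization reported by the OS is largely committed-but-idle heap and is not by itself a problem.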

    Read the article

  • Optimal Memory Configuration for Dell PowerEdge 1800 (Windows Server 2000 32bit)

    - by David Murdoch
    I am upgrading the memory on a Dell PowerEdge 1800 Server running Windows Server 2000 (32 bit). My Computer Properties currently reports "2,096,432 KB RAM" (4 modules @ 512MB each). Crucial.com scan reports: "Each memory slot can hold DDR2 PC2-5300 with a maximum of 2GB per slot. Maximum Memory Capacity:  12288MB Currently Installed Memory:  2GB Available Memory Slots:  2 Total Memory Slots:  6 Dual Channel Support:   No CPU Manufacturer:  GenuineIntel CPU Family:  Intel(R) Xeon(TM) CPU 2.80GHz Model 4, Stepping 1 CPU Speed:  2793 MHz Installed in pairs of modules." We will be completely replacing the old 512 MB modules. Will there be any performance difference between installing 4 modules @ 1GB vs. 2 modules @ 2GB?

    Read the article

  • Xinetd , vncserver memory requirement

    - by JP19
    Hi, I am installing the following on a low-memory system: vnc4server, xinetd, xterm, openbox, obconf. I will only occasionally be logging into the VNC sessions for some admin work. My questions are: 1) Does xinetd take memory/CPU even when vncserver is not running? If so, can I "run" xinetd on demand (and how)? And if not, any idea how much memory it will take when vncserver is not running? 2) Does vncserver take substantial memory when no clients are connected? 3) Do openbox/obconf take memory when vncserver is running but no clients are connected? 4) Do openbox/obconf take memory when no vncserver is running? Thanks, JP

    Read the article

  • SharePoint / SQL Server "out of memory" error

    - by aardvark
    After months of preparation, we launched a new SharePoint intranet portal today. Immediately, some users began getting a "server out of memory" error when they tried to log in. The SharePoint server appeared to be fine, but the SQL Server was reporting 100% memory use. (It has 4 GB.) We rebooted the server and have not had further memory problems, though memory usage is hovering around 60% or above. I'm not convinced that we have solved the problem; I suspect it may return Monday morning when the whole staff tries to log in again. I'm not a database guy, and I'm stumped about how to troubleshoot this. Do we need more memory, or is there somewhere I should look to reduce memory usage?

    Read the article

  • Strategies for memory profiling

    In this whitepaper, Red Gate discusses the importance of handling two common issues in memory management: memory leaks and excessive memory usage. Red Gate demonstrates how their ANTS Memory Profiler can identify issues with memory management and provide a detailed view of a program's memory usage. This whitepaper doubles as a brief tutorial for using the ANTS Memory Profiler by providing an example of a program that is experiencing memory management issues.

    Read the article

  • Vmstat indicates memory is disappearing

    - by jimbotron
    I wanted to profile the memory usage of a script. Here's the output before it was running: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 15624 186660 39460 439052 0 0 0 2 1 1 0 0 100 0 Here's the output while the script is running, at the point where free memory was at its lowest value: procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu---- r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 15624 11464 40312 473524 0 0 0 2 1 1 0 0 100 0 So free memory dropped by about 175 MB, and I expected that buff would increase by that amount. But it seems the other columns changed by relatively negligible amounts - how is this possible? Am I interpreting this wrong, or is some memory just not being accounted for in this output?
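
    One way to see where the 175 MB went is to compare the system counters with the script's own resident set: vmstat's free column only counts completely unused pages, and a process's anonymous allocations appear in none of the free/buff/cache columns. A minimal, Linux-only sketch (run it with the script's PID as the argument; the class name is just for illustration):

    ```java
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class WhereDidItGo {
        // Prints the target process's resident set alongside the system-wide
        // counters that vmstat summarizes, so the "missing" memory becomes visible.
        public static void main(String[] args) throws IOException {
            String pid = args.length > 0 ? args[0] : "self";
            List<String> status = Files.readAllLines(Paths.get("/proc/" + pid + "/status"));
            status.stream().filter(l -> l.startsWith("VmRSS:")).forEach(System.out::println);

            List<String> meminfo = Files.readAllLines(Paths.get("/proc/meminfo"));
            meminfo.stream()
                   .filter(l -> l.startsWith("MemFree:") || l.startsWith("Buffers:") || l.startsWith("Cached:"))
                   .forEach(System.out::println);
        }
    }
    ```

    The drop in free should roughly equal the script's VmRSS growth plus the small increases in buff and cache, so the memory is accounted for; it just isn't shown in a column vmstat prints.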

    Read the article

  • Motherboard memory question

    - by JERiv
    I am currently drawing up specs on a new workstation for my office. I am considering the Asus P6X58D for a motherboard. This board's specs list it as supporting 24 GB of memory. Suppose I were to use six 4 GB memory modules and then two video cards with 1 GB of memory apiece. Is the maximum supported memory similar to how 32-bit operating systems only have enough address space for 4 GB of memory? Simply: will the board POST? If so, will the system be able to address all the memory, both the 24 GB on the DDR3 bus and the 3 GB on the graphics cards?

    Read the article

  • Low CPU/Memory/Memory-bandwith Pathfinding (maybe like in Warcraft 1)

    - by Valmond
    Dijkstra and A* are all nice and popular but what kind of algorithm was used in Warcraft 1 for pathfinding? I remember that the enemy could get trapped in bowl-like caverns which means there were (most probably) no full-path calculations from "start to end". If I recall correctly, the algorithm could be something like this: A) Move towards enemy until success or hitting a wall B) If blocked by a wall, follow the wall until you can move towards the enemy without being blocked and then do A) But I'd like to know, if someone knows :-) [edit] As explained to Byte56, I'm searching for a low cpu/mem/mem-bandwidth algo and wanted to know if Warcraft had some special secrets to deliver (never seen that kind of pathfinding elsewhere), I hope that that is more concordant with the stackexchange rules.
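
    For reference, here is a minimal sketch of the heuristic described above (greedy movement with a wall slide) on an 8-connected grid; it is an assumption about how such early games may have worked, not Warcraft's actual algorithm. Because it keeps no memory of visited cells, it reproduces exactly the bowl-trap behaviour mentioned in the question, which is also what keeps its CPU and memory cost near zero.

    ```java
    // Sketch of "move toward the target; if blocked, slide along the wall".
    // blocked[y][x] == true means that cell is a wall.
    public class GreedyWallSlide {
        static final int[] DX = { 1, 1, 0, -1, -1, -1, 0, 1 };  // 8 directions, clockwise from east
        static final int[] DY = { 0, 1, 1, 1, 0, -1, -1, -1 };

        final boolean[][] blocked;

        GreedyWallSlide(boolean[][] blocked) { this.blocked = blocked; }

        // One movement step from (x, y) toward (tx, ty); returns {newX, newY}.
        int[] step(int x, int y, int tx, int ty) {
            int want = directionToward(x, y, tx, ty);
            // Phase A: try the direction pointing most directly at the target.
            // Phase B: if it is blocked, scan clockwise and take the first free cell,
            // which makes the unit hug the obstacle instead of planning around it.
            for (int turn = 0; turn < 8; turn++) {
                int d = (want + turn) % 8;
                int nx = x + DX[d], ny = y + DY[d];
                if (inside(nx, ny) && !blocked[ny][nx]) {
                    return new int[] { nx, ny };
                }
            }
            return new int[] { x, y };  // completely boxed in: stay put
        }

        int directionToward(int x, int y, int tx, int ty) {
            double angle = Math.atan2(ty - y, tx - x);        // 0 = east, pi/2 = south (y grows downward)
            int d = (int) Math.round(angle / (Math.PI / 4));  // nearest of the 8 sectors
            return (d + 8) % 8;
        }

        boolean inside(int x, int y) {
            return y >= 0 && y < blocked.length && x >= 0 && x < blocked[0].length;
        }
    }
    ```

    A real wall-follower would also remember which hand it keeps on the wall; this stateless version is the cheapest possible variant, and the cheapness is exactly what produces the trapped-in-a-bowl behaviour.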

    Read the article

  • If my application doesn't use a lot of memory, can I ignore viewDidUnload:?

    - by iPhoneToucher
    My iPhone app generally uses under 5MB of living memory and even in the most extreme conditions stays under 8MB. The iPhone 2G has 128MB of RAM and from what I've read an app should only expect to have 20-30MB to use. Given that I never expect to get anywhere near the memory limit, do I need to care about memory warnings and setting objects to nil in viewDidUnload:? The only way I see my app getting memory warnings is if something else on the phone is screwing with the memory, in which case the entire phone would be acting silly. I built my app without ever using viewDidUnload:, so there's more than a hundred classes that I'd need to inspect and add code to if I did need to implement it.

    Read the article

  • Free RAM disappears - Memory leak?

    - by Izzy
    On a fresh started system, free reports about 1.5G used RAM (8G RAM alltogether, Ubuntu 12.04 with lightdm and plasma desktop, one konsole window started). Having the apps running I use, it still consumes not more than 2G. However, having the system running for a couple of days, more and more of my free RAM disappears -- without showing up in the list of used apps: while smem --pie=name reports less than 20% used (and 80% being available), everything else says differently. free -m for example reports on about day 7: total used free shared buffers cached Mem: 7459 7013 446 0 178 997 -/+ buffers/cache: 5836 1623 Swap: 9536 296 9240 (so you can see, it's not the buffers or the cache). Today this finally ended with the system crashing completely: the windows manager being gone, apps "hanging in the air" (frameless) -- and a popup notifying me about "too many open files". Syslog reports: kernel: [856738.020829] VFS: file-max limit 752838 reached So I closed those applications I was able to close, and killed X using Ctrl-Alt-backspace. X tried to come up again after that with failsafeX, but was unable to do so as it could no longer detect its configuration. So I switched to a console using Ctrl-Alt-F2, captured all information I could think of (vmstat, free, smem, proc/meminfo, lsof, ps aux), and finally rebooted. X again came up with failsafeX; this time I told it to "recover from my backed-up configuration", then switched to a console and successfully used startx to bring up the graphical environment. I have no real clue to what is causing this issue -- though it must have to do either with X itself, or with some user processes running on X -- as after killing X, free -m output looked like this: total used free shared buffers cached Mem: 7459 2677 4781 0 62 419 -/+ buffers/cache: 2195 5263 Swap: 9536 59 9477 (~3.5GB being freed) -- to compare with the output after a fresh start: total used free shared buffers cached Mem: 7459 1483 5975 0 63 730 -/+ buffers/cache: 689 6769 Swap: 9536 0 9536 Two more helpful outputs are provided by memstat -u. Shortly before the crash: User Count Swap USS PSS RSS mail 1 0 200 207 616 whoopsie 1 764 740 817 2300 colord 1 3200 836 894 2156 root 62 70404 352996 382260 569920 izzy 80 177508 1465416 1519266 1851840 After having X killed: User Count Swap USS PSS RSS mail 1 0 184 188 356 izzy 1 1400 708 739 1080 whoopsie 1 848 668 826 1772 colord 1 3204 804 888 1728 root 62 54876 131708 149950 267860 And after a restart, back in X: User Count Swap USS PSS RSS mail 1 0 212 217 628 whoopsie 1 0 1536 1880 5096 colord 1 0 3740 4217 7936 root 54 0 148668 180911 345132 izzy 47 0 370928 437562 915056 Edit: Just added two graphs from my monitoring system. Interesting to see: everytime when there's a "jump" in memory consumption, CPU peaks as well. Just found this right now -- and it reminds me of another indicator pointing to X itself: Often when returning to my machine and unlocking the screen, I found something doing heavvy work on my CPU. Checking with top, it always turned out to be /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch -background none. So after this long explanation, finally my questions: What could be the possible causes? How can I better identify involved processes/applications? What steps could be taken to avoid this behaviour -- short from rebooting the machine all X days? 
I was running 8.04 (Hardy) for about 5 years on my old machine, never having experienced anything like this (always more than 100 days of uptime before rebooting for e.g. kernel updates). This is now a completely new machine with a fresh install of 12.04. In case it matters, some specs: AMD A4-3400 APU with Radeon(tm) HD Graphics, using the open-source ati/radeon driver (so no fglrx installed), 8GB RAM, WDC WD1002FAEX-0 hdd (1TB), Asus F1A75-V Evo mainboard. Ubuntu 12.04 64-bit with KDE4/Plasma. Apps usually open more or less permanently include Evolution, Firefox, konsole (with Midnight Commander running inside, about 4 tabs), and LibreOffice -- plus occasionally Calibre, Gimp and Moneyplex (banking software I've been using for almost 20 years now, in a version which did fine on Hardy).
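
    To narrow down whether the missing gigabytes belong to user processes at all, one comparison that helps is the sum of every process's resident set against what the kernel itself reports as used (minus buffers and cache). A rough Linux-only sketch (the class name is illustrative; note that summing RSS over-counts shared pages, which is why smem reports PSS):

    ```java
    import java.io.IOException;
    import java.nio.file.DirectoryStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class RssVsUsed {
        // Compare the sum of per-process resident memory with what the kernel
        // reports as used. A large gap that per-process tools cannot explain
        // usually points at kernel or driver allocations rather than at an app.
        public static void main(String[] args) throws IOException {
            long rssKb = 0;
            try (DirectoryStream<Path> procs = Files.newDirectoryStream(Paths.get("/proc"), "[0-9]*")) {
                for (Path p : procs) {
                    try {
                        for (String line : Files.readAllLines(p.resolve("status"))) {
                            if (line.startsWith("VmRSS:")) {
                                rssKb += Long.parseLong(line.replaceAll("\\D+", ""));
                                break;
                            }
                        }
                    } catch (IOException gone) {
                        // process exited while we were scanning; ignore it
                    }
                }
            }
            long memTotal = 0, memFree = 0, buffers = 0, cached = 0;
            for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
                if (line.startsWith("MemTotal:")) memTotal = kb(line);
                else if (line.startsWith("MemFree:")) memFree = kb(line);
                else if (line.startsWith("Buffers:")) buffers = kb(line);
                else if (line.startsWith("Cached:")) cached = kb(line);
            }
            long usedMinusCaches = memTotal - memFree - buffers - cached;
            System.out.printf("sum of process RSS: %d MB, used minus buffers/cache: %d MB%n",
                    rssKb / 1024, usedMinusCaches / 1024);
        }

        static long kb(String meminfoLine) {
            return Long.parseLong(meminfoLine.replaceAll("\\D+", ""));
        }
    }
    ```

    If even this over-counted per-process total is far below "used minus buffers/cache", the memory is being held by the kernel or a driver rather than by anything ps or smem can show, which would fit the suspicion about X and the graphics driver described above.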

    Read the article

  • Force full garbage collection when memory occupation goes beyond a certain threshold

    - by Silvio Donnini
    I have a server application that, in rare occasions, can allocate large chunks of memory. It's not a memory leak, as these chunks can be claimed back by the garbage collector by executing a full garbage collection. Normal garbage collection frees amounts of memory that are too small: it is not adequate in this context. The garbage collector executes these full GCs when it deems appropriate, namely when the memory footprint of the application nears the allotted maximum specified with -Xmx. That would be ok, if it wasn't for the fact that these problematic memory allocations come in bursts, and can cause OutOfMemoryErrors due to the fact that the jvm is not able to perform a GC quickly enough to free the required memory. If I manually call System.gc() beforehand, I can prevent this situation. Anyway, I'd prefer not having to monitor my jvm's memory allocation myself (or insert memory management into my application's logic); it would be nice if there was a way to run the virtual machine with a memory threshold, over which full GCs would be executed automatically, in order to release very early the memory I'm going to need. Long story short: I need a way (a command line option?) to configure the jvm in order to release early a good amount of memory (i.e. perform a full GC) when memory occupation reaches a certain threshold, I don't care if this slows my application down every once in a while. All I've found till now are ways to modify the size of the generations, but that's not what I need (at least not directly). I'd appreciate your suggestions, Silvio P.S. I'm working on a way to avoid large allocations, but it could require a long time and meanwhile my app needs a little stability
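
    Two hedged suggestions, assuming a HotSpot-style JVM: the usual command-line route is a concurrent collector with an initiating-occupancy threshold (e.g. -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=60 -XX:+UseCMSInitiatingOccupancyOnly), which starts collecting when old-generation occupancy crosses the given percentage; and inside the application, the standard java.lang.management API can arm a usage threshold and trigger a collection from a JMX notification, as in this minimal sketch (class name and the 70% figure are only illustrative):

    ```java
    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryNotificationInfo;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;
    import javax.management.Notification;
    import javax.management.NotificationEmitter;

    public class GcOnThreshold {
        // Call once at application startup.
        public static void arm() {
            // Arm a usage threshold at 70% of max on every heap pool that supports it.
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                    long max = pool.getUsage().getMax();
                    if (max > 0) {
                        pool.setUsageThreshold((long) (max * 0.7));
                    }
                }
            }
            // The MemoryMXBean emits a JMX notification when a threshold is crossed.
            NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
            emitter.addNotificationListener((Notification n, Object handback) -> {
                if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                    System.gc(); // request a full collection early, before the burst arrives
                }
            }, null, null);
        }
    }
    ```

    This keeps the threshold logic out of the allocation code paths, at the cost of an explicit System.gc(), which the collector may honour as a full stop-the-world collection.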

    Read the article

  • How much memory should my rails stack be consuming?

    - by Hamish
    I am running my own web server on a 384 MB VPS from Slicehost to serve two Ruby on Rails applications on separate virtual hosts. I am running Phusion Passenger with Apache2. The following is the contents of my Passenger.conf: <IfModule passenger_module> PassengerRoot /opt/ruby-enterprise-1.8.6-20090610/lib/ruby/gems/1.8/gems/passenger-2.2.11 PassengerLogLevel 0 PassengerRuby /usr/local/bin/ruby PassengerUserSwitching on PassengerDefaultUser nobody PassengerMaxPoolSize 3 PassengerMaxInstancesPerApp 2 PassengerPoolIdleTime 300 # Ruby on Rails Options RailsAutoDetect on RailsSpawnMethod smart NameVirtualHost *:80 If I do a 'top' on my server I see 314 MB used on average, which seems like too much. Am I mistaken, and if not, what steps can I take to reduce the memory usage? Thanks!

    Read the article

  • free -m output, should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database machine (MySQL). 83 MB looks pretty bad, but I assume the buffers/cache will be used instead of swap? [admin@db1 www]$ free -m total used free shared buffers cached Mem: 16053 15970 83 0 122 5343 -/+ buffers/cache: 10504 5549 Swap: 2047 0 2047 top output sorted by memory: top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23 Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 20757 mysql 15 0 10.2g 9.7g 5440 S 29.0 61.6 28588:24 mysqld 16610 root 15 0 184m 18m 4340 S 0.0 0.1 0:32.89 sysshepd 9394 root 15 0 154m 8336 4244 S 0.0 0.1 0:12.20 snmpd 17481 ntp 15 0 23416 5044 3916 S 0.0 0.0 0:02.32 ntpd 2000 root 5 -10 12652 4464 3184 S 0.0 0.0 0:00.00 iscsid 8768 root 15 0 90164 3376 2644 S 0.0 0.0 0:00.01 sshd
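
    The reassuring number is already in the output: the -/+ buffers/cache line. The arithmetic behind it, worked through with the figures from the question (a minimal sketch; the class and variable names are just for illustration):

    ```java
    public class FreeInterpretation {
        public static void main(String[] args) {
            // Figures from the question's `free -m` output, in MB.
            long used = 15970, free = 83, buffers = 122, cached = 5343;
            // The kernel counts page cache and buffers as "used", but it hands
            // them back to applications on demand, so the numbers that matter are:
            long usedByApps  = used - buffers - cached;  // 10505, matching the -/+ line's 10504 up to rounding
            long freeForApps = free + buffers + cached;  // 5548, matching the -/+ line's 5549 up to rounding
            System.out.printf("really used: %d MB, really available: %d MB%n", usedByApps, freeForApps);
        }
    }
    ```

    So roughly 5.5 GB is still effectively available to applications, and swap usage is essentially zero, which is why the 83 MB "free" figure on its own is not a cause for alarm.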

    Read the article

  • iPhone development: pointer being freed was not allocated

    - by w4nderlust
    Hello, i got this message from the debugger: Pixture(1257,0xa0610500) malloc: *** error for object 0x21a8000: pointer being freed was not allocated *** set a breakpoint in malloc_error_break to debug so i did a bit of tracing and got: (gdb) shell malloc_history 1257 0x21a8000 ALLOC 0x2196a00-0x21a89ff [size=73728]: thread_a0610500 |start | main | UIApplicationMain | GSEventRun | GSEventRunModal | CFRunLoopRunInMode | CFRunLoopRunSpecific | __CFRunLoopDoObservers | CA::Transaction::observer_callback(__CFRunLoopObserver*, unsigned long, void*) | CA::Transaction::commit() | CA::Context::commit_transaction(CA::Transaction*) | CALayerDisplayIfNeeded | -[CALayer _display] | CABackingStoreUpdate | backing_callback(CGContext*, void*) | -[CALayer drawInContext:] | -[UIView(CALayerDelegate) drawLayer:inContext:] | -[AvatarView drawRect:] | -[AvatarView overlayPNG:] | +[UIImageUtility createMaskOf:] | UIGraphicsGetImageFromCurrentImageContext | CGBitmapContextCreateImage | create_bitmap_data_provider | malloc | malloc_zone_malloc and i really can't understand what i am doing wrong. here's the code of the [UIImageUtility createMaskOf:] function: + (UIImage *)createMaskOf:(UIImage *)source { CGRect rect = CGRectMake(0, 0, source.size.width, source.size.height); UIGraphicsBeginImageContext(CGSizeMake(source.size.width, source.size.height)); CGContextRef context = UIGraphicsGetCurrentContext(); CGContextTranslateCTM(context, 0, source.size.height); CGContextScaleCTM(context, 1.0, -1.0); UIImage *original = [self createGrayCopy:source]; CGContextRef context2 = CGBitmapContextCreate(NULL, source.size.width, source.size.height, 8, 4 * source.size.width, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipLast); CGContextDrawImage(context2, CGRectMake(0, 0, source.size.width, source.size.height), original.CGImage); CGImageRef unmasked = CGBitmapContextCreateImage(context2); const float myMaskingColorsFrameColor[6] = { 1,256,1,256,1,256 }; CGImageRef mask = CGImageCreateWithMaskingColors(unmasked, myMaskingColorsFrameColor); CGContextSetRGBFillColor (context, 256,256,256, 1); CGContextFillRect(context, rect); CGContextDrawImage(context, rect, mask); UIImage *whiteMasked = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); return whiteMasked; } the other custom function called before that is the following: - (UIImage *)overlayPNG:(SinglePart *)sp { NSLog([sp description]); // Rect and context setup CGRect rect = CGRectMake(0, 0, sp.image.size.width, sp.image.size.height); NSLog(@"%f x %f", sp.image.size.width, sp.image.size.height); // Create an image of a color filled rectangle UIImage *baseColor = nil; if (sp.hasOwnColor) { baseColor = [UIImageUtility imageWithRect:rect ofColor:sp.color]; } else { SinglePart *facePart = [editingAvatar.face.partList objectAtIndex:0]; baseColor = [UIImageUtility imageWithRect:rect ofColor:facePart.color]; } // Crete the mask of the layer UIImage *mask = [UIImageUtility createMaskOf:sp.image]; mask = [UIImageUtility createGrayCopy:mask]; // Create a new context for merging the overlay and a mask of the layer UIGraphicsBeginImageContext(CGSizeMake(sp.image.size.width, sp.image.size.height)); CGContextRef context2 = UIGraphicsGetCurrentContext(); // Adjust the coordinate system so that the origin // is in the lower left corner of the view and the // y axis points up CGContextTranslateCTM(context2, 0, sp.image.size.height); CGContextScaleCTM(context2, 1.0, -1.0); // Create masked overlay color layer CGImageRef MaskedImage = CGImageCreateWithMask 
(baseColor.CGImage, mask.CGImage); // Draw the base color layer CGContextDrawImage(context2, rect, MaskedImage); // Get the result of the masking UIImage* overlayMasked = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); UIGraphicsBeginImageContext(CGSizeMake(sp.image.size.width, sp.image.size.height)); CGContextRef context = UIGraphicsGetCurrentContext(); // Adjust the coordinate system so that the origin // is in the lower left corner of the view and the // y axis points up CGContextTranslateCTM(context, 0, sp.image.size.height); CGContextScaleCTM(context, 1.0, -1.0); // Get the result of the blending of the masked overlay and the base image CGContextDrawImage(context, rect, overlayMasked.CGImage); // Set the blend mode for the next drawn image CGContextSetBlendMode(context, kCGBlendModeOverlay); // Component image drawn CGContextDrawImage(context, rect, sp.image.CGImage); UIImage* blendedImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); CGImageRelease(MaskedImage); return blendedImage; }

    Read the article

  • Server high memory usage at same time every day

    - by Sam Parmenter
    Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to dump data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has since been run at other times of the day to monitor memory usage and see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise; no noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98% and very quickly kills mysql to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.

    Read the article

  • Upgrading memory on an IBM Power 710 Express (8231-E2B)

    - by cairnz
    We have a Power 710 Express server that was loaded with 4x4 GB memory on a single riser card. I have replaced the 4 modules with 4x8 GB and put in another riser card loaded with 4x8 GB more, for a total of 64 GB of memory. The firmware is AL730_078. When I power it on, the service processor boots up and I can access the ASMI. From here I can look at "Memory Serial Presence Data" and see that the system in some way detects 8x8 GB. However, when I look at Hardware Deconfiguration, and specifically Memory Deconfiguration, it is still listed with the old values, 16384MB, and claims there are 4x4 GB modules in the C17 riser. How do I proceed to make the server properly recognize the amount of memory installed? I get FSPSP04 and B181B50F progress codes on booting because (I think) it hasn't been told the memory has changed. It then does not proceed to booting the operating system (VIOS) when turned on. Are there any steps I have overlooked here? Can I run some commands, either on the service processor or otherwise, to tell the system to configure itself with the proper amount of memory? PS: This is a stand-alone server, not configured with an HMC or SDMC.

    Read the article

  • differing methods of alloc / init / retaining an object in objective-c

    - by taber
    In several pieces of sample objective-c code I've seen people create new objects like this: RootViewController *viewController = [[RootViewController alloc] init]; self.rootViewController = viewController; // self.rootViewController is a (nonatomic,retain) synthesized property [viewController release]; [window addSubview: [self.rootViewController view]]; Is that any different "behind the scenes" than doing it like this instead? self.rootViewController = [[RootViewController alloc] init]; [window addSubview: [self.rootViewController view]]; Seems a bit more straightforward/streamlined that way so I'm wondering why anyone would opt for the first method. Thanks!

    Read the article

  • Website memory problem

    - by Toktik
    I have CentOS 5 installed on my server (a VPS). I have a site with a constant ~150 users online. At first glance the site looks OK, but when I go through links I sometimes receive an out-of-memory PHP error. It looks like this: Fatal error: Out of memory (allocated 36962304) (tried to allocate 7680 bytes) in /home/host/public_html/sites/all/modules/cck/modules/fieldgroup/fieldgroup.install on line 100 The amount it fails to allocate is always very small. On average I have 30% CPU load and 25% RAM load, so I don't think this is a physical memory problem. My PHP memory limit was set to 1500MB. My Apache error log looks like this: [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402 [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402 [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Out of memory, referer: http://www.host.com/17402 [Thu Sep 30 17:48:59 2010] [error] [client 91.204.190.5] Premature end of script headers: index.php, referer: http://www.host.com/17402 [Thu Sep 30 17:49:00 2010] [error] [client 91.204.190.5] File does not exist: /home/host/public_html/favicon.ico I have not run into this on this server before; the problem appeared on its own. Besides this, I'm receiving some server errors by mail: cpsrvd failed @ Fri Sep 24 16:45:20 2010. A restart was attempted automagically. Service Check Method: [tcp connect] Failure Reason: Unable to connect to port 2086 Same for tailwatchd. Support has tried, and can't help me...

    Read the article

  • Quad channel memory and compatibility

    - by balteo
    My motherboard has quad channel memory compatibility. There are 8 memory slots in all: 4 slots are black and the other 4 slots are white. I currently have 4 memory modules of 1 GB each in the 4 white slots. That leaves me with 4 free memory slots. My question is: can I put 4 memory modules of 2 GB each in the 4 remaining slots, or do I have to use 1 GB modules throughout? FYI here is the output of lshw: alpha description: Tower Computer product: Precision WorkStation 690 *-cpu:0 description: CPU product: Intel(R) Xeon(R) CPU X5355 @ 2.66GHz *-memory description: System Memory physical id: 1000 slot: Motherboard size: 4GiB *-bank:0 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) product: HYMP512F72CP8N3-Y5 vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 0 serial: 56737501 slot: DIMM 1 size: 1GiB width: 64 bits clock: 667MHz (1.5ns) *-bank:1 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) product: HYMP512F72CP8N3-Y5 vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 1 serial: 48115124 slot: DIMM 2 size: 1GiB width: 64 bits clock: 667MHz (1.5ns) *-bank:2 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) product: HYMP512F72CP8N3-Y5 vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 2 serial: 48115523 slot: DIMM 3 size: 1GiB width: 64 bits clock: 667MHz (1.5ns) *-bank:3 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) product: HYMP512F72CP8N3-Y5 vendor: Hynix Semiconductor (Hyundai Electronics) physical id: 3 serial: 48115424 slot: DIMM 4 size: 1GiB width: 64 bits clock: 667MHz (1.5ns) *-bank:4 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty] vendor: FFFFFFFFFFFF physical id: 4 serial: FFFFFFFF slot: DIMM 5 width: 64 bits clock: 667MHz (1.5ns) *-bank:5 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty] vendor: FFFFFFFFFFFF physical id: 5 serial: FFFFFFFF slot: DIMM 6 width: 64 bits clock: 667MHz (1.5ns) *-bank:6 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty] vendor: FFFFFFFFFFFF physical id: 6 serial: FFFFFFFF slot: DIMM 7 width: 64 bits clock: 667MHz (1.5ns) *-bank:7 description: FB-DIMM DDR2 FB-DIMM Synchronous 667 MHz (1.5 ns) [empty] vendor: FFFFFFFFFFFF physical id: 7 serial: FFFFFFFF slot: DIMM 8 width: 64 bits clock: 667MHz (1.5ns) *-pci:0 description: Host bridge product: 5000X Chipset Memory Controller Hub vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 12 width: 32 bits clock: 33MHz

    Read the article

  • Why isn't garbage collection being activated in my code? [migrated]

    - by Netmoon
    I have a foreach statement in my code where each iteration calculates a huge amount of data and then moves on to the next iteration. I run this code, but when I read the log I see there's a memory leak error. PHP.net says that when this happens, using gc_enabled() is a good way to handle it. I've added these lines as the last line of the foreach block: echo "Check GC enabled : " . gc_enabled(); echo "Number of affected cycles : " . gc_collect_cycles(); And this is the output: Check GC enabled : 1 Number of affected cycles : 0 Why is garbage collection enabled, yet the number of affected cycles is 0?

    Read the article

  • What should the memory configuration be?

    - by AngryHacker
    We have a server (an HP ProLiant DL585 G1) which hosts Windows 2003 x64 R2 with SQL Server 2005 x64 and a host of other apps. It currently has 6 GB of RAM. We are very memory constrained and it's clear that we need more memory. 8 GB will probably do the trick; however, we are not sure which memory configuration will give us the biggest performance bang for the buck. Currently all 8 memory slots are filled (4 slots have 1 GB modules, while the other 4 slots have 512 MB modules). Should we throw the 512 MB sticks away and just replace them all with 1 GB sticks? If we decided to go with a higher memory configuration (e.g. 10 GB, 12 GB or 16 GB), is it advisable to keep all the sticks the same size, or does it not matter? I was once told that, for better performance, interleaved memory requires modules to be installed in multiples (e.g. 2, 4, 8, 16, etc.). I am not even sure that the server has an interleaved configuration (and don't know how to find out), but is this true? Thanks.

    Read the article

  • MMGR Questions, code use and thread-safety

    - by chadb
    1) Is MMGR thread-safe? 2) I was hoping someone could help me understand some code. I am looking at something where a macro is used, but I don't understand the macro. I know it contains a function call and an if check; however, the function is a void function. How does wrapping "(m_setOwner(__FILE__,__LINE__,__FUNCTION__),false)" ever change return types? #define someMacro (m_setOwner(__FILE__,__LINE__,__FUNCTION__),false) ? NULL : new ... void m_setOwner(const char *file, const unsigned int line, const char *func); 3) What is the point of the reservoir? 4) On line 770 ("void *operator new(size_t reportedSize)") there is the line "// ANSI says: allocation requests of 0 bytes will still return a valid value". Who/what is ANSI in this context? Do they mean the standard? 5) This is more of a C++ standards question, but where does "reportedSize" come from in "void *operator new(size_t reportedSize)"? 6) Is this the code that is actually doing the allocation needed? "au->actualAddress = malloc(au->actualSize);"

    Read the article

  • EXC_BAD_ACCESS when simply casting a pointer in Obj-C

    - by AlexChilcott
    Hi all, Frequent visitor but first post here on StackOverflow, I'm hoping that you guys might be able to help me out with this. I'm fairly new to Obj-C and XCode, and I'm faced with this really... weird... problem. Googling hasn't turned up anything whatsoever. Basically, I get an EXC_BAD_ACCESS signal on a line that doesn't do any dereferencing or anything like that that I can see. Wondering if you guys have any idea where to look for this. I've found a work around, but no idea why this works... The line the broken version barfs out on is the line: LevelEntity *le = entity; where I get my bad access signal. Here goes: THIS VERSION WORKS NSArray *contacts = [self.body getContacts]; for (PhysicsContact *contact in contacts) { PhysicsBody *otherBody; if (contact.bodyA == self.body) { otherBody = contact.bodyB; } if (contact.bodyB == self.body) { otherBody = contact.bodyA; } id entity = [otherBody userData]; if (entity != nil) { LevelEntity *le = entity; CGPoint point = [contact contactPointOnBody:otherBody]; } } THIS VERSION DOESNT WORK NSArray *contacts = [self.body getContacts]; for (NSUInteger i = 0; i < [contacts count]; i++) { PhysicsContact *contact = [contacts objectAtIndex:i]; PhysicsBody *otherBody; if (contact.bodyA == self.body) { otherBody = contact.bodyB; } if (contact.bodyB == self.body) { otherBody = contact.bodyA; } id entity = [otherBody userData]; if (entity != nil) { LevelEntity *le = entity; CGPoint point = [contact contactPointOnBody:otherBody]; } } Here, the only difference between the two examples is the way I enumerate through my array. In the first version (which works) I use for (... in ...), where as in the second I use for (...; ...; ...). As far as I can see, these should be the same. This is seriously weirding me out. Anyone have any similar experience or idea whats going on here? Would be really great :) Cheers, Alex

    Read the article
