Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 15 of 626

  • Max RAM for computer: 16GB or 8GB?

    - by Laptop memory question
    Manufacturer's specifications for my notebook say the memory can be extended from 4GB to 8GB. However, running sudo dmidecode suggests the computer can use 16GB, as below:
    Handle 0x0037, DMI type 16, 15 bytes
    Physical Memory Array
    Location: System Board Or Motherboard
    Use: System Memory
    Error Correction Type: None
    Maximum Capacity: 16 GB
    Error Information Handle: Not Provided
    Number Of Devices: 4
    Which one is correct?

    Read the article

  • Why do we talk about addresses and memory of variables in C?

    - by user2720323
    Why do we talk about the addresses and memory of variables in C, when in other languages (like Java, .NET, etc.) we do not talk about variable addresses and memory in a program and simply use the variables directly? In C we constantly hear the words "address" and "memory". How can this be explained? My understanding is that C is a high-level language designed over assembly language, so C is a thin layer over assembly (in assembly we use memory locations to store and track a variable). In other languages these address- and memory-related details are wrapped by the language itself, so we never hear these words.
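
    For illustration, a minimal C sketch of what the question is pointing at: the & operator exposes where a variable lives, and a pointer lets you read and write that memory directly, something Java and .NET hide behind managed references.

        #include <stdio.h>

        int main(void)
        {
            int x = 42;     /* the compiler reserves memory for x */
            int *p = &x;    /* &x is the address of that memory   */

            printf("value via pointer: %d\n", *p);      /* read through the address */
            printf("address of x: %p\n", (void *)&x);   /* the address itself       */

            *p = 7;         /* write through the address; x is now 7 */
            printf("x is now %d\n", x);
            return 0;
        }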

    Read the article

  • CentOS 6.3 X86_64 RAM detection

    - by Peter
    I have a machine with 8GB of RAM (the BIOS sees it, so my motherboard and CPU support it), and I installed CentOS 6.3 on it. When it starts up, it only sees 3.1GB. uname says: 2.6.32-279.1.1.el6.x86_64 #1 SMP
    BIOS-provided physical RAM map:
    BIOS-e820: 0000000000000000 - 000000000009fc00 (usable)
    BIOS-e820: 000000000009fc00 - 00000000000a0000 (reserved)
    BIOS-e820: 00000000000e0000 - 0000000000100000 (reserved)
    BIOS-e820: 0000000000100000 - 00000000cf65f000 (usable)
    BIOS-e820: 00000000cf65f000 - 00000000cf6e8000 (ACPI NVS)
    BIOS-e820: 00000000cf6e8000 - 00000000cf6ec000 (usable)
    BIOS-e820: 00000000cf6ec000 - 00000000cf6ff000 (ACPI data)
    BIOS-e820: 00000000cf6ff000 - 00000000cf700000 (usable)
    dmesg | grep -i memory says:
    initial memory mapped : 0 - 20000000
    init_memory_mapping: 0000000000000000-00000000cf700000
    Reserving 129MB of memory at 48MB for crashkernel (System RAM: 3319MB)
    PM: Registered nosave memory: 000000000009f000 - 00000000000a0000
    PM: Registered nosave memory: 00000000000a0000 - 00000000000e0000
    PM: Registered nosave memory: 00000000000e0000 - 0000000000100000
    PM: Registered nosave memory: 00000000cf65f000 - 00000000cf6e8000
    PM: Registered nosave memory: 00000000cf6ec000 - 00000000cf6ff000
    Memory: 3184828k/3398656k available (5152k kernel code, 1016k absent, 212812k reserved, 7166k data, 1260k init)
    please try 'cgroup_disable=memory' option if you don't want memory cgroups
    Initializing cgroup subsys memory
    Freeing initrd memory: 16136k freed
    Non-volatile memory driver v1.3
    agpgart-intel 0000:00:00.0: detected 8192K stolen memory
    crash memory driver: version 1.1
    Freeing unused kernel memory: 1260k freed
    Freeing unused kernel memory: 972k freed
    Freeing unused kernel memory: 1732k freed
    Update: Memtest sees all 8GB, and so does dmidecode -t 17 | grep Size. But free -m still sees only 3.1 GB.
    Question: How can I repair/modify the system to see all 8GB of RAM? Thanks in advance!

    Read the article

  • Android memory management outside the heap

    - by Daniel Benedykt
    Hi, I am working on an application for Android, and since we have lots of graphics, we use a lot of memory. I monitor the memory heap size and it's about 3-4 MB, with peaks of 5 MB when I do something that requires more memory (and then it goes back to 3). This is not a big deal, but some other stuff is handled outside the heap memory, like the loading of drawables. For example, if I run the ddms tool outside Eclipse and go to sysinfo, I see that my app is taking 20 MB on the Droid and 12 on the G1, but the heap sizes are the same in both, because the data is the same but the images are different. So the questions are: How do I know what is taking the memory outside the heap memory? What other stuff takes memory outside the heap memory? Complex layouts (big tree)? Animations? Thanks, Daniel

    Read the article

  • Help with strange memory behavior. Looking for leaks both in my brain and in my code.

    - by BastiBechtold
    I spent the last few days trying to find memory leaks in a program we are developing. First of all, I tried using some leak detectors. After fixing a few issues, they do not find any leaks any more. However, I am also monitoring my application using perfmon.exe. Performance Monitor reports that 'Private Bytes' and 'Working Set - Private' are steadily rising when the app is used. To me, this suggests that the program is using more and more memory the longer it runs. Internal resources seem to be stable, however, so this sounds like leaking to me. The program is loading a DLL at runtime. I suspect that these leaks, or whatever they are, occur in that library and get purged when the library is unloaded, hence they won't get picked up by the leak detectors. I used both DevPartner BoundsChecker and Visual Leak Detector to look for memory leaks. Both supposedly catch leaks in DLLs. Also, the memory consumption is increasing in steps, and those steps roughly, but not exactly, coincide with certain GUI actions I perform in the application. If these were errors in our code, they should get triggered every single time the actions are performed and not just most of the time. Whenever I am confronted with so much strangeness, I begin to question my basic assumptions. So I turn to you, who know everything, for suggestions. Is there a flaw in my assumptions? Do you have an idea of how to go about troubleshooting a problem like this?
    Edit: I am currently using Microsoft Visual C++ (x86) on Windows 7 x64.
    Edit 2: I just used IBM Purify to hunt for leaks. First of all, it lists a full 30% of the program as leaked memory. This cannot be true. I guess it is identifying the whole DLL as leaked or something like that. However, if I search for new leaks every few actions, it reports leaks that correspond with the size increase reported by Performance Monitor. This could be a lead to a leak. Sadly, I am only using the trial version of Purify, so it won't show me the actual location of those leaks. (These leaks only show up at runtime. When the program exits, there are no leaks whatsoever reported by any tool.)
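
    One additional low-friction check, since the project builds with Microsoft Visual C++: the CRT debug heap can snapshot allocations before and after one of those suspicious GUI actions and dump only the difference. A minimal sketch (debug builds only, and it only sees allocations made through this CRT, so a DLL linked against its own CRT will not show up):

        #include <crtdbg.h>

        /* Wrap one suspicious GUI action in a debug build and report what grew. */
        void check_action_for_leaks(void (*action)(void))
        {
            _CrtMemState before, after, diff;

            _CrtMemCheckpoint(&before);    /* snapshot the CRT heap             */
            action();                      /* perform the suspicious GUI action */
            _CrtMemCheckpoint(&after);

            if (_CrtMemDifference(&diff, &before, &after))
                _CrtMemDumpStatistics(&diff);   /* dump only what changed between snapshots */
        }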

    Read the article

  • Determining if Memory Pointer is Valid - C++

    - by Jim Fell
    It has been my observation that if free( ptr ) is called where ptr is not a valid pointer to system-allocated memory, an access violation occurs. Let's say that I call free like this: LPVOID ptr = (LPVOID)0x12345678; free( ptr ); This will most definitely cause an access violation. Is there a way to test that the memory location pointed to by ptr is valid system-allocated memory? It seems to me that the memory management part of the Windows OS kernel must know what memory has been allocated and what memory remains for allocation. Otherwise, how could it know if enough memory remains to satisfy a given request? (rhetorical) That said, it seems reasonable to conclude that there must be a function (or set of functions) that would allow a user to determine if a pointer is valid system-allocated memory. Perhaps Microsoft has not made these functions public. If Microsoft has not provided such an API, I can only presume that it was for an intentional and specific reason. Would providing such a hook into the system pose a significant threat to system security?
    Situation Report: Although knowing whether a memory pointer is valid could be useful in many scenarios, this is my particular situation: I am writing a driver for a new piece of hardware that is to replace an existing piece of hardware that connects to the PC via USB. My mandate is to write the new driver such that calls to the existing API for the current driver will continue to work in the PC applications in which it is used. Thus the only required change to existing applications is to load the appropriate driver DLL(s) at startup. The problem here is that the existing driver uses a callback to send received serial messages to the application; a pointer to allocated memory containing the message is passed from the driver to the application via the callback. It is then the responsibility of the application to call another driver API to free the memory by passing back the same pointer from the application to the driver. In this scenario the second API has no way to determine if the application has actually passed back a pointer to valid memory.
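
    Since Windows offers no supported way to ask whether an arbitrary pointer came from the heap, the usual workaround is for the allocating side to remember what it handed out. A minimal sketch in C (all names here are hypothetical, not part of any real driver API): the driver records every message buffer it issues, and its free routine rejects anything it never issued.

        #include <stdlib.h>

        #define MAX_OUTSTANDING 256

        /* Pointers currently owned by the application. */
        static void *g_outstanding[MAX_OUTSTANDING];

        static int registry_add(void *p)
        {
            for (int i = 0; i < MAX_OUTSTANDING; ++i)
                if (g_outstanding[i] == NULL) { g_outstanding[i] = p; return 1; }
            return 0;   /* table full */
        }

        static int registry_remove(void *p)
        {
            for (int i = 0; i < MAX_OUTSTANDING; ++i)
                if (g_outstanding[i] == p) { g_outstanding[i] = NULL; return 1; }
            return 0;   /* pointer was never issued by this driver */
        }

        void *driver_alloc_message(size_t len)     /* hypothetical allocation helper */
        {
            void *p = malloc(len);
            if (p != NULL && !registry_add(p)) { free(p); return NULL; }
            return p;   /* handed to the application via the callback */
        }

        int driver_free_message(void *p)           /* hypothetical free API */
        {
            if (!registry_remove(p))
                return 0;   /* reject: not a pointer this driver allocated */
            free(p);
            return 1;
        }

    In a real driver the table would need locking and a growable structure, but the idea is the same: validity is established by bookkeeping on the allocating side, not by asking the heap after the fact.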

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 2 of 2 – Memory Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve. One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory Profiler – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS Memory Profiler by compiling some hideous example code to test against.
    Notice
    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.
    Introduction – Part 2
    In my last post, I reviewed the feature-packed Red Gate ANTS Performance Profiler.  Separate from the Red Gate Performance Profiler is the Red Gate ANTS Memory Profiler – a simple, easy to use utility for checking how your application is handling memory management…  A tool that I wish I had had many times in the past.  This post will be focusing on the ANTS Memory Profiler and its tool set. The memory profiler has a large assortment of features just like the Performance Profiler, with the new session screen looking nearly identical:
    ANTS Memory Profiler
    Memory profiling is not something that I have to do very often…  In the past, in the few cases where I’ve had to find a memory leak in an application, I have usually just had to trace the code of the operations being performed to look for oddities…  Sadly, I have come across more undisposed/non-using’ed IDisposable objects, usually from ADO.NET, than I would ever like to see.  Support is not fun; however, using ANTS Memory Profiler makes this task easier.  For this round of testing, I am going to use the same code from my previous example, using the WPF application.
    This time, I will choose the ‘Profile Memory’ option from the ANTS menu in Visual Studio, which launches the solution in its currently configured state/start-up project, and then launches the ANTS Memory Profiler to help.  It prepopulates all of the fields with the current project information, and all I have to do is select the ‘Start Profiling’ option.
    When the window comes up, it is actually quite barren, just giving ideas on how to work the profiler.  You start by getting to the point in your application that you want to profile, and then taking a ‘Memory Snapshot’.  This performs a full garbage collection and snapshots the managed heap.  Using the same WPF app as before, I will go ahead and take a snapshot now. As you can see, ANTS is already giving me lots of information regarding the snapshot; however, this is just a snapshot.
    The whole point of the profiler is to perform an action, usually one where a memory problem is being noticed, and then take another snapshot and perform a diff between them to see what has changed.  I am going to go ahead and generate 5000 primes, and then take another snapshot: As you can see, ANTS is already giving me a lot of new information about this snapshot compared to the last.  Information such as the difference in memory usage, fragmentation, class usage, etc…  If you take more snapshots, you can use the dropdown at the top to set your actual comparison snapshots.
    If you look beneath the timeline, you will see a breadcrumb trail showing how best to approach profiling memory using ANTS.  When you first do the comparison, you start on the Summary screen.  You can either use the charts at the bottom, or switch to the class list screen to get to the next step.  Here is the class list screen: As you can see, it lists information about all of the instances between the snapshots, as well as, at the bottom, giving you a way to filter by telling ANTS what your problem is.  I am going to go ahead and select the Int16[] to look at the Instance Categorizer.
    Using the Instance Categorizer, you can travel backwards to see where all of the instances are coming from.  It may be hard to see in this image, but hopefully the lightbox (click on it) will help: I can see that all of these instances are rooted to the application through the UI TextBlock control.  This image will probably be even harder to see; however, using the ‘Instance Retention Graph’, you can trace an object’s memory inheritance up the chain to see its roots as well.  This is a simple example, as this is simply a known element.  Usually you would be profiling an actual problem, and comparing those differences.  I know in the past, I have spotted a problem where a new context was created per page load, and it was rooted into the application through an event.  As the application began to grow, performance and reliability problems started to emerge.  A tool like this would have been a great way to identify the problem quickly.
    Overview
    Overall, I think that the Red Gate ANTS Memory Profiler is a great utility for debugging those pesky leaks.
    3 Biggest Pros:
    - Easy to use interface with lots of options for configuring the profiling session
    - Intuitive and helpful interface for drilling down from summary, to instance, to root graphs
    - ANTS provides an API for controlling the profiler. Not many options, but still helpful.
    2 Biggest Cons:
    - Inability to automatically snapshot the memory by interval
    - Lack of complete integration with Visual Studio via an extension panel
    Ratings
    Ease of Use (9/10) – I really do believe that they have brought simplicity to the once difficult task of memory profiling.  I especially liked how it stepped you further into the drilldown by directing you towards the best options.
    Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.
    Features (7/10) – A really great set of features all around in the application; however, I would like to see some ability for automatically triggering snapshots based on intervals or framework-level items such as events.
    Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy!
    UI / UX (9/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat!
    Overall (9/10) – Overall, I am happy with the Memory Profiler and its features, as well as with the service I received when working with the Red Gate personnel.
    Thank you for reading up to here, or skipping ahead – I told you it would be shorter!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product.
    Code
    Feel free to download the code I used above – download via DropBox

    Read the article

  • Can Anything be Done to Make Improv (a 1993 Win 3.1 App) handle larger Files?

    - by user75185
    My very favorite spreadsheet is Improv, a 1993 Windows 3.1 application. It still puts Excel to shame for building spreadsheets and writing formulas. The only problem is that, because Improv was written when 1 MB of RAM was state of the art, it becomes unstable when working with larger spreadsheets and often crashes and/or corrupts the data file. I am working on a project that greatly exceeds Improv's limits. Although it will ultimately require more robust database capability, I could save a lot of critical time if I could delay that headache and continue working in Improv for now. To that end, I moved to the only product I could find that comes close, Quantrix, which is nothing more than Improv updated to handle large spreadsheets and utilize today's technologies. The problems with Quantrix are its speed (significantly slower than Improv) and its $1000 price (which I cannot afford). I have already had three 15-day extensions after the initial 30-day trial, so my time to use Quantrix as a bridge is at its end. Searches for Improv over the years have gotten me nowhere and, not surprisingly after reading some posts on this site, I got nothing for the money and time invested in finding a programmer to write code to "fix" this problem. Improv is freely available as "abandonware" at http://vetusware.com/download/LotusImprov2.1/?id=5797 , and the best background info can be found on Wikipedia and at "Moose's Greatest Software Products of All Time - Lotus Improv" http://moosevalley.fhost.com.au/mooses_review_page_lotus_improv.html It is critically urgent for me to focus on analyzing the data ASAP. Working in a stable Improv would, without question, be the fastest route. To that end, I am looking for answers to the following questions and anything else that might be helpful:
    1) Is it lawful to hire someone to fix Improv for my own use? If so,
    2) About how much should it cost?
    3) About how long should it take?
    4) What skills should I be looking for and/or how should a post be worded?
    5) Is there a niche site where it should be posted?
    6) What questions can I ask to quickly screen candidates?
    Since I am not a programmer, I need questions whose answers leave no room to confuse me, whether intentional or not. For example, what tools or players should someone with an acceptable competency level have knowledge of?

    Read the article

  • tidy/efficient function writing in R

    - by romunov
    Excuse my ignorance, as I'm not a computer engineer but come from a biology background. I have become a great fan of pre-allocating objects (kudos to SO and The R Inferno by Patrick Burns) and would like to improve my coding habits. In light of this, I've been thinking about writing more efficient functions and have the following question. Is there any benefit in removing variables that will be overwritten at the start of the next loop iteration, or is this just a waste of time? For the sake of argument, let's assume that the sizes of the old and new variables are very similar or identical.

    Read the article

  • Memory Efficient file append

    - by lboregard
    I have several files whose contents need to be merged into a single file. I have the following code that does this, but it seems rather inefficient in terms of memory usage. Would you suggest a better way to do it? The Util.MoveFile function simply accounts for moving files across volumes.

    private void Compose(string[] files)
    {
        string outFile = @"c:\final.txt";

        using (FileStream fsOut = new FileStream(outFile + ".tmp", FileMode.Create))
        {
            foreach (string inFile in files)
            {
                if (!File.Exists(inFile))
                {
                    continue;
                }

                byte[] bytes;
                using (FileStream fsIn = new FileStream(inFile, FileMode.Open))
                {
                    bytes = new byte[fsIn.Length];
                    fsIn.Read(bytes, 0, bytes.Length);
                }

                //using (StreamReader sr = new StreamReader(inFile))
                //{
                //    text = sr.ReadToEnd();
                //}

                // write the segment to the final file
                fsOut.Write(bytes, 0, bytes.Length);
                File.Delete(inFile);
            }
        }

        Util.MoveFile(outFile + ".tmp", outFile);
    }
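
    For comparison, a hedged sketch of the usual memory-efficient pattern: copy each input through a small fixed-size buffer instead of loading the whole file into a byte array, so memory use stays constant regardless of file size. (Shown here in C purely to illustrate the pattern; in .NET the equivalent is looping over FileStream.Read with a reused buffer, or Stream.CopyTo on newer framework versions.)

        #include <stdio.h>

        /* Append the file at 'path' to 'out' one 8 KB chunk at a time. */
        static int append_file(FILE *out, const char *path)
        {
            char buf[8192];
            size_t n;
            FILE *in = fopen(path, "rb");

            if (in == NULL)
                return 0;                      /* skip missing inputs, as the original code does */

            while ((n = fread(buf, 1, sizeof buf, in)) > 0)
                fwrite(buf, 1, n, out);        /* only one chunk is ever held in memory */

            fclose(in);
            return 1;
        }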

    Read the article

  • iOS6 MKMapView using a ton of memory, to the point of crashing the app, anyone else notice this?

    - by Jeremy Fox
    Has anyone else who's using maps in their iOS 6 apps noticed extremely high memory use, to the point of receiving memory warnings over and over until the app crashes? I've run the app through Instruments and I'm not seeing any leaks, and until the map view is created the app consistently runs at around ~3 MB Live Bytes. Once the map is created and the tiles are downloaded, the Live Bytes jumps up to ~13 MB. Then as I move the map around and zoom in and out, the Live Bytes continues to climb until the app crashes at around ~40 MB. This is on an iPhone 4, by the way. On an iPod touch it crashes even earlier. I am reusing annotation views properly and nothing is leaking. Is anyone else seeing this same high memory usage with the new iOS 6 maps? Also, does anyone have a solution?

    Read the article

  • code throws std::bad_alloc, not enough memory or can it be a bug?

    - by Andreas
    I am parsing using a pretty large grammar (1.1 GB; it's data-oriented parsing). The parser I use (bitpar) is said to be optimized for highly ambiguous grammars. I'm getting this error:
    terminate called after throwing an instance of 'std::bad_alloc'
    what(): St9bad_alloc
    dotest.sh: line 11: 16686 Aborted bitpar -p -b 1 -s top -u unknownwordsm -w pos.dfsa /tmp/gsyntax.pcfg /tmp/gsyntax.lex arbobanko.test arbobanko.results
    Is there hope? Does it mean that it has run out of memory? It uses about 15 GB before it crashes. The machine I'm using has 32 GB of RAM, plus swap as well. It crashes before outputting a single parse tree. The parser is an efficient CYK chart parser using bit vector representations; I presume it is already near the limit of memory efficiency. If it really requires too much memory I could sample from the grammar rules, but this will decrease parse accuracy, of course.

    Read the article

  • Why am I seeing Zero errors in non-ECC RAM?

    - by Alexander Shcheblikin
    According to sources, memory errors are a very probable event: some say the probability of a DRAM error is 95% in just 3 days of operation of a computer with just 4 GB of RAM; others say 32% of servers experience at least one error in a month, with 8% of DIMMs being at fault. Contrary to those horrors, in my more than 10 years of personal computer use I have seen exactly none of the memory errors. I admit I never paid special attention to the subject. However, I have ventured multi-hour memtest86 runs a couple of times and never seen an error either. Some of the factors that IMO should aggravate the memory problems:
    - I build my computers out of the most "bulk commodity" parts: mainstream budget motherboards and the next-to-cheapest memory.
    - I usually max out the technology available, e.g. in the times of 32-bit OSes I used 4 GB of RAM, and with the current desktop CPUs and the newer 64-bit OSes I use 32 GB of RAM.
    - Memory usage is moderately heavy, with lots of virtual machines up running small and big tasks 24/7/365.
    But nevertheless, no memory-related problems ever found! How's that?

    Read the article

  • Apache memory leak with Subversion server

    - by bruce grissom
    Does anyone know of a way to fix the Apache memory leak in relation to Subversion server? We have a Windows Server 2003 machine running Apache to host Subversion. From day one, we have had memory leak issues and have not found a solution yet. All we do is monitor our server, and when the memory use gets near the max it can handle, we restart Apache.

    Read the article

  • Ubuntu 12.04 LTS vs Ubuntu 14.04 LTS memory usage

    - by geoffroy
    My droplet has 512 MB of memory and is running Ubuntu 12.04 LTS 64-bit and a Rails 4 application plus several workers. It's running well. I tried to deploy the same thing on an Ubuntu 14.04 LTS 64-bit droplet and I got plenty of memory-related problems (can't fork). Is Ubuntu 14.04 LTS using way more RAM than Ubuntu 12.04 LTS? Is there something I should know to lower memory usage? Should I stick with Ubuntu 12.04 LTS?

    Read the article

  • Memory usage on Debian webserver keeps going up

    - by Steven De Groote
    My webserver is running Apache 1.3.x for a PHP application, along with MySQL on the same machine. Most of the time it runs fine, with CPU usage still at a comfortable margin, but somehow memory usage keeps growing throughout uptime. While some of it looks like it gets reclaimed in chunks from time to time, I've had moments of my server going down because it's out of memory. Restarting Apache or MySQL only reduced memory usage by 100 MB. Attached is an overview of monthly memory usage. The 2 massive drops are server restarts after out-of-memory situations. http://imageshack.us/photo/my-images/51/memorymonth.png/ Any explanations for this behaviour, or how I could solve this? Thanks! Steven

    Read the article

  • Simple tool to graph memory usage?

    - by dbr
    Is there a script that will show memory usage as a graph, for example as a pie chart, with each process being a separate slice? I'm not looking for something like Munin to graph memory usage over time, but rather something to show the memory usage per process at a single point in time. To make my request even more obscure, it is for a headless server (so no X applications). The simplest way would be to write a PNG file, or possibly an HTML file (which could use JavaScript to allow filtering of processes, changing between graph types and so on).

    Read the article

  • Memory consumption of each accept() call on server running on Windows 2008 [migrated]

    - by Atul
    I've written a simple and small server application on Windows 2008 that just accepts connections and does nothing else. I am doing a memory footprint assessment of socket calls. What I found is that each connection (after accept()) consumes at least 2.5 KB of memory. Interestingly, the memory is not consumed by the process that makes the accept() call but by an OS process. I believe it might be because of data structures being created inside the OS for each connection. Now, I have two key questions:
    1. Is it possible by any means to reduce this memory footprint (by changing any parameters, configuration, etc.)? If yes, how? (Because 2 KB for each connection would be too much if we are planning for the server to accept millions of connections.)
    2. If my server is intended to accept a million connections, is it a good idea to use Windows 2008, or shall I switch to some other OS? Please advise.
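
    On the first question, one knob that is sometimes discussed for very high connection counts is the per-socket send and receive buffers, which account for part of the kernel-side cost of each accepted socket. A hedged Winsock sketch (setting the buffers to 0 disables AFD buffering, which only makes sense if the server always keeps overlapped receives posted; treat this as something to measure, not a guaranteed saving):

        #include <winsock2.h>
        #pragma comment(lib, "ws2_32.lib")

        /* Shrink the per-socket kernel buffers right after accept(). */
        static int shrink_socket_buffers(SOCKET s)
        {
            int rcv = 0;   /* 0 = no AFD receive buffering; data lands in posted buffers */
            int snd = 0;

            if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (const char *)&rcv, sizeof(rcv)) != 0)
                return -1;
            if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&snd, sizeof(snd)) != 0)
                return -1;
            return 0;
        }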

    Read the article

  • Tool for viewing used and free memory on a Windows system

    - by patrick
    I'm looking for a tool that will allow me to see all of the memory used by my Windows XP/2003 machine. I know that Process Explorer and others will show you what memory is being used by each process, but I need a full graphical view of all used and unused memory on my machine. I have seen an app that does this but cannot remember the name. Does anyone know of a tool like this?

    Read the article

  • Mac has become insanely slow: Processes SystemUIServer, UserEventAgent and loginwindow using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem. But recently, all of a sudden, the Mac became insanely slow. I opened Activity Monitor to see what was happening. For three processes, SystemUIServer, UserEventAgent and loginwindow, the memory gradually increases and reaches up to 2 GB for each process. This completely hangs up my Mac. I tried the following:
    1. Restart the Mac
    2. Restart the Mac in safe mode
    3. Manually kill the processes
    4. Remove Date and Time from the menu bar (this was supposed to be the cause of the SystemUIServer process's memory use, according to many users)
    5. Remove the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory)
    No luck with any of those. The moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.

    Read the article

  • Slow transfer with memory stick (819 kb/s)

    - by Nrew
    What do I do to optimize the file transfer rate of a Memory Stick Duo? The file transfer was not like this when it was still new. Can reformatting give new life to a memory stick? It takes about 20 minutes just to transfer 1 GB of files from the computer to the memory stick. The computer is decent enough: 2.50 GHz processor, 2 GB of RAM.

    Read the article

  • Windows XP blue screen dumping physical memory

    - by dotnet-practitioner
    I get the following blue screen after running my laptop for an hour...
    A problem has been detected and Windows has been shut down to prevent damage to your computer.
    If this is the first time you've seen this stop error screen, restart your computer. If this screen appears again, follow these steps:
    Check to be sure you have adequate disk space. If a driver is identified in the stop message, disable the driver or check with the manufacturer for driver updates. Try changing video adapters. Check with your hardware vendor for any BIOS updates. Disable BIOS memory options such as caching or shadowing. If you need to use safe mode to remove or disable components, restart your computer, press F8 to select advanced startup options, and then select safe mode.
    Technical Information:
    * STOP 0x0000008E (0xc0000005, 0x805B03F5, 0xF703DC7C, 0x00000000)
    Beginning dump of physical memory
    Physical memory dump complete.
    Contact your system administrator or technical support group for further assistance.
    So, if this is faulty memory, where could I buy RAM for the following laptop: TOSHIBA SATELLITE A45-S250? My local Fry's store does not carry memory for this laptop.

    Read the article

  • MacBook memory upgrade question

    - by James Evans
    I've read some conflicting articles on MacBooks and memory upgrades. Some say you have to buy the "special" Mac memory (bulls$%t); others say manufacturers like Patriot and OCZ will work fine. My MacBook (non-Pro) is about 6 months old with its 2 GB of memory (SO-DIMM 1066MHz DDR3). Does anyone have any definitive information on what will work? Thanks!

    Read the article

  • How to get the installed Memory Type

    - by balexandre
    Windows 7 could be better at this: it tells you everything about the computer's CPU, but only the amount of memory. Microsoft should add information about the DDR type, speed, and maybe CL as well. Until that happens, what's the best and easiest way to check the installed memory so we can buy an upgrade? I was thinking of a simple piece of software so I don't need to install the full SiSoft Sandra, for example; I'm just looking for something small, only for the memory part.

    Read the article
