Search Results

Search found 12872 results on 515 pages for 'memory alignment'.

Page 49/515 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • iPhone memory leak with malloc

    - by Icky
    Hello. I have a memory leak, found by Instruments, and it is supposedly in this line of code: indices = malloc( sizeof(indices[0]) * totalQuads * 6); This is actually a code snippet from a tutorial, something which I think is leak-free, so to speak. So I reckon the error is somewhere else, but I do not know where. These are the last frames of the backtrace:
        5 ColorRun -[EAGLView initWithCoder:] /Users/thomaskopinski/programming/colorrun_3.26/Classes/EAGLView.m:98
        4 ColorRun -[EAGLView initGame] /Users/thomaskopinski/programming/colorrun_3.26/Classes/EAGLView.m:201
        3 ColorRun -[SpriteSheet initWithImageNamed:spriteWidth:spriteHeight:spacing:imageScale:] /Users/thomaskopinski/programming/colorrun_3.26/SpriteSheet.m:68
        2 ColorRun -[Image initWithImage:scale:] /Users/thomaskopinski/programming/colorrun_3.26/Image.m:122
        1 ColorRun -[Image initImpl] /Users/thomaskopinski/programming/colorrun_3.26/Image.m:158
        0 libSystem.B.dylib malloc
    Does anyone know how to approach this?

    Read the article
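
    The usual fix for a leak reported on a malloc line is not at the malloc itself but on the owner's teardown path. A minimal C++-style sketch of the pattern; the QuadBatch type and its field are illustrative, not taken from the tutorial, and in the Objective-C original the matching free would belong in the owning class's dealloc:

        #include <cstdlib>

        // Hypothetical owner of the vertex-index buffer. The point is simply that
        // the malloc in the init path needs a matching free on the teardown path.
        struct QuadBatch {
            unsigned short* indices;

            explicit QuadBatch(int totalQuads)
                : indices(static_cast<unsigned short*>(
                      std::malloc(sizeof(indices[0]) * totalQuads * 6))) {}

            ~QuadBatch() {
                std::free(indices);   // without this, the tool reports the malloc line
                indices = 0;
            }
        };

        int main() {
            QuadBatch batch(128);     // allocate and release cleanly
            return 0;
        }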

  • P-invoke call fails if too much memory is assigned beforehand

    - by RandomEngy
    I've got a p-invoke call to an unmanaged DLL that was failing in my WPF app but not in a simple starter WPF app. I tried to figure out what the problem was, but eventually came to the conclusion that if I assign too much memory before making the call, the call fails. I had two separate blocks of code, both of which would succeed on their own, but which would cause a failure if both were run. (They had nothing to do with what the p-invoke call is trying to do.) What kind of problem in the unmanaged library would cause behaviour like this? I thought that the managed and unmanaged heaps were supposed to be automatically separated. The crash, as far as I can tell, is happening in a secondary DLL dynamically loaded from the one p-invoked into. Could that have something to do with it?

    Read the article

  • NSMutableDictionary is showing a memory leak

    - by Narasimhaiah Kolli
    Why does assigning nil to the NSMutableDictionary and then allocating a new one crash and report a release at this point?
        self.delegate.replenishAddedmaterials = nil;
        self.delegate.replenishAddedmaterials = [[NSMutableDictionary alloc] init];
        MATERIAL_ITEM *materialItem = [[MATERIAL_ITEM alloc] init];
        VENDOR_HEADER *vendor = [[VENDOR_HEADER alloc] init];
        PURCHASING_ORG_HEADER *purOrg = [[PURCHASING_ORG_HEADER alloc] init];
        [self.delegate.replenishAddedmaterials setObject:[NSMutableArray arrayWithObject:materialItem] forKey:materialItem];
        [[self.delegate.replenishAddedmaterials objectForKey:materialItem] addObject:vendor];
        [[self.delegate.replenishAddedmaterials objectForKey:materialItem] addObject:purOrg];
    After the allocation of the NSMutableDictionary executes, I get the following message: *** -[MATERIAL_ITEM release]: message sent to deallocated instance 0x11e62810. I have implemented my project with ARC.

    Read the article

  • Why is memory management so visible in Java?

    - by Emil
    I'm playing around with writing some simple Spring-based web apps and deploying them to Tomcat. Almost immediately, I run into the need to customize Tomcat's JVM settings with -XX:MaxPermSize (and -Xmx and -Xms); without this, the server easily runs out of PermGen space. Why is this such an issue for Java compared to other garbage-collected languages? Comparing counts of "tune X memory usage" for X in Java, Ruby, Perl and Python shows that Java easily has an order of magnitude more hits in Google than the other languages combined.

    Read the article

  • Memory management for "id<ProtocolName> variableName" type properties

    - by Malakim
    Hi, I'm having a problem with properties of the following type:
        id<ProtocolName> variableName;
        .....
        .....
        @property (nonatomic, retain) id<ProtocolName> variableName;
    I can access and use them just fine, but when I try to call [variableName release]; I get compiler warnings: '-release' not found in protocol(s). Do I need to define a release method in the interface, or how do I release the memory reserved for the variable? Thanks!

    Read the article

  • How does freeing of memory happen in this case?

    - by Riyaz
        #include <stdio.h>
        #include <stdlib.h>

        void func(int arr[], int xNumOfElem)
        {
            int j;
            for (j = 0; j < xNumOfElem; j++)
            {
                arr[j] = j + arr[j];
                printf("%d\t", arr[j]);
            }
            printf("\n");
        }

        int main()
        {
            int *a, k;
            a = (int*) malloc(sizeof(int) * 10);
            for (k = 0; k < 10; k++)
            {
                a[k] = k;
                printf("%d\t", a[k]);
            }
            printf("\n");
            func(a, 10); // Func call
            free(a);
        }
    Inside the function "func", who will allocate/deallocate memory for the dynamic array "arr"? arr is a function argument.

    Read the article
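
    To see that func in the question above neither allocates nor frees anything, here is a small self-contained sketch of the same shape: the array parameter is just a pointer that aliases the block malloc'd in main, and main stays responsible for freeing it.

        #include <cstdio>
        #include <cstdlib>

        // No allocation happens here: 'arr' decays to a plain pointer that
        // aliases the caller's buffer for the duration of the call.
        static void func(int arr[], int n) {
            std::printf("inside func: arr = %p\n", static_cast<void*>(arr));
            for (int j = 0; j < n; j++) arr[j] += j;
        }

        int main() {
            int* a = static_cast<int*>(std::malloc(sizeof(int) * 10));
            for (int k = 0; k < 10; k++) a[k] = k;

            std::printf("inside main: a  = %p\n", static_cast<void*>(a));
            func(a, 10);          // both lines print the same address

            std::free(a);         // the caller, as the owner, releases the block
            return 0;
        }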

  • C++ static classes & shared_ptr memory leaks

    - by HardCoder1986
    Hello! I can't understand why the following code produces memory leaks (I am using boost::shared_ptr with a static class instance). Could someone help me?
        #include <crtdbg.h>
        #include <boost/shared_ptr.hpp>
        using boost::shared_ptr;

        #define _CRTDBG_MAP_ALLOC
        #define NEW new(_NORMAL_BLOCK, __FILE__, __LINE__)

        static struct myclass {
            static shared_ptr<int> ptr;
            myclass() { ptr = shared_ptr<int>(NEW int); }
        } myclass_instance;

        shared_ptr<int> myclass::ptr;

        int main() {
            _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF |
                           _CRTDBG_CHECK_ALWAYS_DF | _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG));
            return 0;
        }

    Read the article
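
    One plausible cause for the report above, assuming everything sits in a single translation unit as shown: statics in one translation unit are dynamically initialized in definition order, and myclass_instance is defined before the myclass::ptr definition, so the constructor assigns into ptr first and the later default-construction of ptr wipes that assignment, orphaning the int. (Separately, the CRT leak dump runs at exit and may execute before static destructors, so even a correctly managed static shared_ptr can show up in the report.) A portable sketch of the ordering effect, using std::shared_ptr so it builds anywhere; touching ptr before its own initialization is technically undefined behaviour, but this is the behaviour mainstream implementations exhibit:

        #include <cstdio>
        #include <memory>

        // Minimal reproduction of the ordering issue. Statics in one translation
        // unit are dynamically initialized in definition order.
        struct Holder {
            static std::shared_ptr<int> ptr;
            Holder() {
                ptr = std::make_shared<int>(42);   // runs first: the instance is defined first
                std::puts("Holder(): assigned ptr");
            }
        };

        static Holder holder_instance;     // defined before the member below...

        std::shared_ptr<int> Holder::ptr;  // ...so this default-construction runs after the
                                           // constructor and wipes the assignment, orphaning
                                           // the int allocated above.

        int main() {
            std::printf("ptr is %s\n", Holder::ptr ? "set" : "empty");   // prints "empty"
            return 0;
        }

    Defining the static member above the static instance (or switching to a function-local static accessor) removes the orphaned allocation in this sketch.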

  • Controlling read and write access width to memory mapped registers in C

    - by srking
    I'm using an x86-based core to manipulate a 32-bit memory-mapped register. My hardware behaves correctly only if the CPU generates 32-bit wide reads and writes to this register. The register is aligned on a 32-bit address and is not addressable at byte granularity. What can I do to guarantee that my C (or C99) compiler will only generate full 32-bit wide reads and writes in all cases? For example, if I do a read-modify-write operation like this:
        volatile uint32_t* p_reg = (volatile uint32_t*) 0xCAFE0000;
        *p_reg |= 0x01;
    I don't want the compiler to get smart about the fact that only the bottom byte changes and generate 8-bit wide reads/writes. Since the machine code is often more dense for 8-bit operations on x86, I'm afraid of unwanted optimizations. Disabling optimizations in general is not an option.

    Read the article
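
    There is no standard-C guarantee of access width, but mainstream compilers perform accesses to a volatile object with the object's declared width. A small sketch of the usual defensive pattern: wrap the access in full-width read/write helpers so no byte-sized subexpression ever appears. A local variable stands in for the hardware so the sketch actually runs; on the real target the pointer would be the fixed register address from the question.

        #include <cstdint>
        #include <cstdio>

        // Full-width accessors: every read and write goes through a volatile
        // uint32_t lvalue, so there is no narrower subexpression for the
        // compiler to shrink.
        static inline uint32_t reg_read32(volatile uint32_t* reg)              { return *reg; }
        static inline void     reg_write32(volatile uint32_t* reg, uint32_t v) { *reg = v; }

        // Stand-in for the hardware; on the real target this would be
        //   volatile uint32_t* p_reg = (volatile uint32_t*)0xCAFE0000;
        static uint32_t fake_reg = 0xA0A0A0A0u;

        int main() {
            volatile uint32_t* p_reg = &fake_reg;

            uint32_t v = reg_read32(p_reg);   // one 32-bit load
            v |= 0x01u;                       // modify in a plain local
            reg_write32(p_reg, v);            // one 32-bit store

            std::printf("register now 0x%08X\n", (unsigned)reg_read32(p_reg));
            return 0;
        }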

  • Allocating memory for an array of char pointers

    - by nunos
    The following piece of code gives a segmentation fault when allocating memory for the last arg. What am I doing wrong? Thanks.
        int n_args = 0, i = 0;
        while (line[i] != '\0') {
            if (isspace(line[i++]))
                n_args++;
        }

        for (i = 0; i < n_args; i++)
            command = malloc (n_args * sizeof(char*));

        char* arg = NULL;
        arg = strtok(line, " \n");
        while (arg != NULL) {
            arg = strtok(NULL, " \n");
            command[i] = malloc ( (strlen(arg)+1) * sizeof(char) );
            strcpy(command[i], arg);
            i++;
        }
    Thanks.

    Read the article
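
    For reference, a corrected sketch of what the snippet above appears to intend (the input line is a made-up example): the pointer array should be allocated once rather than n_args times, the index must be reset before the copy loop, and each token has to be copied before strtok is asked for the next one; otherwise the first token is skipped and strlen(NULL) eventually faults.

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>
        #include <cctype>

        int main() {
            char line[] = "ls -l /tmp\n";          // example input, stand-in for the real line

            // Count whitespace-separated arguments (rough upper bound, as in the question).
            int n_args = 0;
            for (int i = 0; line[i] != '\0'; i++)
                if (isspace((unsigned char)line[i]))
                    n_args++;

            // Allocate the pointer array ONCE (the original allocated it n_args
            // times, leaking all but the last block).
            char** command = (char**)malloc(n_args * sizeof(char*));

            // Reset the index, and copy each token BEFORE advancing strtok.
            int i = 0;
            for (char* arg = strtok(line, " \n"); arg != NULL && i < n_args;
                 arg = strtok(NULL, " \n"), i++) {
                command[i] = (char*)malloc(strlen(arg) + 1);
                strcpy(command[i], arg);
            }

            for (int j = 0; j < i; j++) {
                puts(command[j]);
                free(command[j]);
            }
            free(command);
            return 0;
        }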

  • c: memory allocation (what's going on)

    - by facha
    Hi, everyone. Please take a look at this piece of code. I'm allocating one byte for the first variable and another byte for the second one. However, it seems like the compiler allocates more (or I'm missing something). The program outputs both strings, even though their length is more than one byte.
        void main() {
            char* some1 = malloc(1);
            sprintf(some1, "cool");
            char* some2 = malloc(1);
            sprintf(some2, "face");
            printf("%s ", some1);
            printf("%s\n", some2);
        }
    Please, could anyone shed some light on what's going on when this memory is being allocated?

    Read the article
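
    Nothing extra is reserved on the program's behalf in any guaranteed way: writing five bytes ("cool" plus the terminator) into a one-byte block is a heap overflow that only appears to work because allocators round small requests up to an aligned chunk. A minimal sketch of the well-defined version, sizing the block for the string it will hold:

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main() {
            const char* text = "cool";

            // Size the allocation for the string plus its '\0' terminator.
            char* some1 = (char*)std::malloc(std::strlen(text) + 1);
            std::strcpy(some1, text);

            std::printf("%s\n", some1);
            std::free(some1);
            return 0;
        }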

  • Keeping object in memory (iPhone SDK)

    - by Chris
    I am trying to create a UIImageView called theImageView in the touchesBegan method that I can then move to a new location in touchesMoved. Currently I am receiving an "undeclared" error in touchesMoved where I set the new location for theImageView. What can I do to keep theImageView in memory between these two methods?
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            ...
            UIImageView *theImageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"image.png"]];
            theImageView.frame = CGRectMake(263, 228, 193, 300);
            [theImageView retain];
            ...
        }
        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            ...
            theImageView.frame = CGRectMake(300, 300, 193, 300);
            ...
        }

    Read the article

  • What is the cost of memory access?

    - by Jurily
    We like to think that a memory access is fast and constant, but on modern architectures/OSes, that's not necessarily true. Consider the following C code:
        int i = 34;
        int *p = &i;
        // do something that may or may not involve i and p {...}
        // 3 days later:
        *p = 643;
    What is the estimated cost of this last assignment in CPU instructions, if
        i is in L1 cache,
        i is in L2 cache,
        i is in L3 cache,
        i is in RAM proper,
        i is paged out to an SSD disk,
        i is paged out to a traditional disk?
    Where else can i be? Of course the numbers are not absolute, but I'm only interested in orders of magnitude. I tried searching the webs, but Google did not bless me this time.

    Read the article

  • Referencing invalid memory locations with C++ Iterators

    - by themoondothshine
    I am a big fan of GCC, but recently I noticed a vague anomaly. Using __gnu_cxx::__normal_iterator (i.e., the most common iterator type used in libstdc++, the C++ STL) it is possible to refer to an arbitrary memory location and even change its value without causing an exception! Is this expected behavior? If so, isn't it a security loophole? Here's an example:
        #include <iostream>
        using namespace std;

        int main()
        {
            basic_string<char> str("Hello world!");
            basic_string<char>::iterator iter = str.end();
            iter += str.capacity() + 99999;
            *iter = 'x';
            cout << "Value: " << *iter << endl;
        }

    Read the article
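
    This is expected in the sense that iterators and operator[] are unchecked: walking an iterator past end() is undefined behaviour, and the library is not required to detect it, so it is a correctness bug in the caller rather than a sandbox escape beyond what raw pointers already allow. A small sketch of the checked alternative, at(), which throws std::out_of_range; compiling libstdc++ code with -D_GLIBCXX_DEBUG additionally turns its iterators into checked ones that abort on this kind of misuse.

        #include <iostream>
        #include <stdexcept>
        #include <string>

        int main() {
            std::string str("Hello world!");

            // iter += huge_offset; *iter = 'x';   // unchecked: undefined behaviour,
            //                                     // the library need not notice it.

            // at() is the checked access path and throws instead of scribbling:
            try {
                str.at(str.size() + 99999) = 'x';
            } catch (const std::out_of_range& e) {
                std::cout << "caught: " << e.what() << '\n';
            }
            return 0;
        }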

  • Read data from specific memory address

    - by rapid
    Hello. How can I read (and put into a new variable) data stored at a specific memory address? For instance, I know that:
        <nfqueue.queue; proxy of <Swig Object of type 'queue *' at 0xabd2b00> >
    And I want to have the data stored at 0xabd2b00 in a new variable so that I can work with and use all the functionality of the object. Let's assume that I don't have access to the original variable that created this object.

    Read the article

  • An explanation of memory usage on Windows server 2003

    - by Rich
    Hi, We've been working on a bit of a puzzle at work. We have an application service installed on two machines, both running Windows Server 2003. These services do exactly the same thing. However, once loaded, one of the services uses 200 MB less than the other service. We're at a bit of a loss as to what might be causing this discrepancy. I was wondering if there is some kind of server setting that would cause an application to use more memory (heap block size) or anything else that would explain this. If anyone has any ideas on what may be causing this, or how to find out what is causing this, I'd be very grateful. Cheers Rich

    Read the article

  • How to find out where my memory is going

    - by the_mandrill
    I've got a situation where the cycle of loading and then closing a document eats up a few MB of RAM. This memory isn't being leaked, as something owns it and cleans it up when the app exits (Visual Leak Detector and the Mac Leaks tool agree on this). However, I'd like to find out where it's going. I'm assuming it's some sort of cache in the application that gets populated when the document loads but not freed when the document is closed. Which methods or tools could I use to find out where these allocations are being made?

    Read the article

  • Advanced Memory Editing/Function Calling

    - by Saustin
    Hi, I've gotten extremely interested in coding trainers (programs that modify values in a different process) for video games. I've done the simple 'god-mode' and 'unlimited money' things, but I want to do a lot more than that. (Simple editing using WriteProcessMemory.) There are memory addresses of functions for the video game I'm working on posted on the internet, and one of the functions is something like "CreateCar", and I'm wanting to call that function from an external program. My question: How can I call a function in an external process in C/C++, given the function address, using a process handle or other method? PS: If anyone could link me to tools (I've got debuggers, no need for more..) that help with this sort of thing, that'd be nice.

    Read the article
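
    A common way to do this on Windows is CreateRemoteThread with the target function's address as the thread entry point. It only works cleanly when the function takes at most one pointer-sized argument and has a calling convention compatible with LPTHREAD_START_ROUTINE; for anything richer, the usual route is injecting a DLL that calls the function in-process. A minimal sketch under those assumptions (the PID and function address are placeholders, not real values):

        #include <windows.h>
        #include <cstdio>

        // Hypothetical values: a real trainer would get the PID from the game's
        // process and the function address from reverse-engineering notes.
        const DWORD     kTargetPid = 1234;
        const DWORD_PTR kFuncAddr  = 0x00DEADC0;   // e.g. the posted "CreateCar" address

        int main() {
            HANDLE process = OpenProcess(PROCESS_ALL_ACCESS, FALSE, kTargetPid);
            if (!process) { std::printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

            // CreateRemoteThread starts a thread in the target process whose entry
            // point is the target function; it must look like a thread routine,
            // i.e. one pointer-sized parameter, stdcall-compatible convention.
            LPVOID param = nullptr;   // would be the argument the function expects
            HANDLE thread = CreateRemoteThread(process, nullptr, 0,
                                               (LPTHREAD_START_ROUTINE)kFuncAddr,
                                               param, 0, nullptr);
            if (thread) {
                WaitForSingleObject(thread, INFINITE);
                CloseHandle(thread);
            }
            CloseHandle(process);
            return 0;
        }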

  • How to avoid reallocation using the STL (C++)

    - by Tue Christensen
    This question is derived from the topic: http://stackoverflow.com/questions/2280655/vector-reserve-c I am using a data structure of the type vector<vector<vector<double> > >. It is not possible to know the size of each of these vectors (except the outer one) before items (doubles) are added. I can get an approximate size (upper bound) on the number of items in each "dimension". A solution with the shared pointers might be the way to go, but I would like to try a solution where the vector<vector<vector<double> > > simply has .reserve()'ed enough space (or in some other way has allocated enough memory). Will A.reserve(500) (assuming 500 is the size or, alternatively, an upper bound on the size) be enough to hold "2D" vectors of large size, say [1000][10000]? The reason for my question is mainly because I cannot see any way of reasonably estimating the size of the interior of A at the time of .reserve(500). An example of my question:
        vector<vector<vector<double> > > A;
        A.reserve(500+1);
        vector<vector<double> > temp2;
        vector<double> temp1 (666,666);
        for(int i=0;i<500;i++)
        {
            A.push_back(temp2);
            for(int j=0; j< 10000;j++)
            {
                A.back().push_back(temp1);
            }
        }
    Will this ensure that no reallocation is done for A? If temp2.reserve(100000) and temp1.reserve(1000) were added at creation, will this ensure that no reallocation at all will occur? In the above, please disregard the fact that memory could be wasted due to conservative .reserve() calls. Thank you all in advance!

    Read the article
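
    For what it's worth, reserve() on the outer vector only sets aside room for the inner vector objects themselves (their small headers), not for the doubles they will eventually hold, and copying a pre-reserved temp2/temp1 into A does not carry the reserved capacity over, since a copy only guarantees the elements. So each level has to reserve its own capacity after it is placed. A minimal sketch of that pattern, with the dimensions scaled down from the question's example so it runs in a small footprint:

        #include <vector>
        using std::vector;

        int main() {
            const int outer = 50, middle = 100, inner = 66;   // scaled-down stand-ins

            // Reserve the outer level, then reserve each inner level in place.
            vector<vector<vector<double> > > A;
            A.reserve(outer);

            for (int i = 0; i < outer; i++) {
                A.push_back(vector<vector<double> >());
                A.back().reserve(middle);                     // level-2 capacity, fixed up front
                for (int j = 0; j < middle; j++) {
                    A.back().push_back(vector<double>());
                    A.back().back().reserve(inner);           // level-3 capacity
                }
            }
            return 0;
        }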

  • Is this UIButton autoreleased?

    - by dubbeat
    Hi. This is just a question to check my sanity, really. I'm hunting memory leaks that show up in Instruments but not the static analyzer. In one spot the analyzer is pointing to this block of code:
        UIButton *randomButton = [UIButton buttonWithType:UIButtonTypeRoundedRect];
        randomButton.frame = CGRectMake(205, 145, 90, 22); // size and position of button
        [randomButton setTitle:@"Random" forState:UIControlStateNormal];
        randomButton.backgroundColor = [UIColor clearColor];
        randomButton.adjustsImageWhenHighlighted = YES;
        [randomButton addTarget:self action:@selector(getrandom:) forControlEvents:UIControlEventTouchUpInside];
        [self.view addSubview:randomButton];
    For some reason I thought the above code would autorelease the button because I'm not calling init or alloc? If I add [randomButton release] at the bottom of the code, my button fails to show. Could somebody describe to me the correct way to release a button from memory that is created in the above way? Or would I be better off making the button a class variable and sticking the release in the dealloc method?

    Read the article

  • Traditional IO vs memory-mapped

    - by Senne
    I'm trying to illustrate the difference in performance between traditional IO and memory-mapped files in Java to students. I found an example somewhere on the internet, but not everything is clear to me; I don't even think all the steps are necessary. I read a lot about it here and there, but I'm not convinced that either of them is implemented correctly. The code I'm trying to understand is:
        public class FileCopy{
            public static void main(String args[]){
                if (args.length < 1){
                    System.out.println(" Wrong usage!");
                    System.out.println(" Correct usage is : java FileCopy <large file with full path>");
                    System.exit(0);
                }
                String inFileName = args[0];
                File inFile = new File(inFileName);
                if (inFile.exists() != true){
                    System.out.println(inFileName + " does not exist!");
                    System.exit(0);
                }
                try{
                    new FileCopy().memoryMappedCopy(inFileName, inFileName+".new" );
                    new FileCopy().customBufferedCopy(inFileName, inFileName+".new1");
                }catch(FileNotFoundException fne){
                    fne.printStackTrace();
                }catch(IOException ioe){
                    ioe.printStackTrace();
                }catch (Exception e){
                    e.printStackTrace();
                }
            }

            public void memoryMappedCopy(String fromFile, String toFile ) throws Exception{
                long timeIn = new Date().getTime();
                // read input file
                RandomAccessFile rafIn = new RandomAccessFile(fromFile, "rw");
                FileChannel fcIn = rafIn.getChannel();
                ByteBuffer byteBuffIn = fcIn.map(FileChannel.MapMode.READ_WRITE, 0,(int) fcIn.size());
                fcIn.read(byteBuffIn);
                byteBuffIn.flip();
                RandomAccessFile rafOut = new RandomAccessFile(toFile, "rw");
                FileChannel fcOut = rafOut.getChannel();
                ByteBuffer writeMap = fcOut.map(FileChannel.MapMode.READ_WRITE,0,(int) fcIn.size());
                writeMap.put(byteBuffIn);
                long timeOut = new Date().getTime();
                System.out.println("Memory mapped copy Time for a file of size :" + (int) fcIn.size() +" is "+(timeOut-timeIn));
                fcOut.close();
                fcIn.close();
            }

            static final int CHUNK_SIZE = 100000;
            static final char[] inChars = new char[CHUNK_SIZE];

            public static void customBufferedCopy(String fromFile, String toFile) throws IOException{
                long timeIn = new Date().getTime();
                Reader in = new FileReader(fromFile);
                Writer out = new FileWriter(toFile);
                while (true) {
                    synchronized (inChars) {
                        int amountRead = in.read(inChars);
                        if (amountRead == -1) {
                            break;
                        }
                        out.write(inChars, 0, amountRead);
                    }
                }
                long timeOut = new Date().getTime();
                System.out.println("Custom buffered copy Time for a file of size :" + (int) new File(fromFile).length() +" is "+(timeOut-timeIn));
                in.close();
                out.close();
            }
        }
    When exactly is it necessary to use RandomAccessFile? Here it is used to read and write in memoryMappedCopy; is it actually necessary just to copy a file at all? Or is it a part of memory mapping? In customBufferedCopy, why is synchronized used here?
    I also found a different example that -should- test the performance between the two:
        public class MappedIO {
            private static int numOfInts = 4000000;
            private static int numOfUbuffInts = 200000;

            private abstract static class Tester {
                private String name;
                public Tester(String name) { this.name = name; }
                public long runTest() {
                    System.out.print(name + ": ");
                    try {
                        long startTime = System.currentTimeMillis();
                        test();
                        long endTime = System.currentTimeMillis();
                        return (endTime - startTime);
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                }
                public abstract void test() throws IOException;
            }

            private static Tester[] tests = {
                new Tester("Stream Write") {
                    public void test() throws IOException {
                        DataOutputStream dos = new DataOutputStream(
                            new BufferedOutputStream(
                                new FileOutputStream(new File("temp.tmp"))));
                        for(int i = 0; i < numOfInts; i++)
                            dos.writeInt(i);
                        dos.close();
                    }
                },
                new Tester("Mapped Write") {
                    public void test() throws IOException {
                        FileChannel fc = new RandomAccessFile("temp.tmp", "rw")
                            .getChannel();
                        IntBuffer ib = fc.map(
                            FileChannel.MapMode.READ_WRITE, 0, fc.size())
                            .asIntBuffer();
                        for(int i = 0; i < numOfInts; i++)
                            ib.put(i);
                        fc.close();
                    }
                },
                new Tester("Stream Read") {
                    public void test() throws IOException {
                        DataInputStream dis = new DataInputStream(
                            new BufferedInputStream(
                                new FileInputStream("temp.tmp")));
                        for(int i = 0; i < numOfInts; i++)
                            dis.readInt();
                        dis.close();
                    }
                },
                new Tester("Mapped Read") {
                    public void test() throws IOException {
                        FileChannel fc = new FileInputStream(
                            new File("temp.tmp")).getChannel();
                        IntBuffer ib = fc.map(
                            FileChannel.MapMode.READ_ONLY, 0, fc.size())
                            .asIntBuffer();
                        while(ib.hasRemaining())
                            ib.get();
                        fc.close();
                    }
                },
                new Tester("Stream Read/Write") {
                    public void test() throws IOException {
                        RandomAccessFile raf = new RandomAccessFile(
                            new File("temp.tmp"), "rw");
                        raf.writeInt(1);
                        for(int i = 0; i < numOfUbuffInts; i++) {
                            raf.seek(raf.length() - 4);
                            raf.writeInt(raf.readInt());
                        }
                        raf.close();
                    }
                },
                new Tester("Mapped Read/Write") {
                    public void test() throws IOException {
                        FileChannel fc = new RandomAccessFile(
                            new File("temp.tmp"), "rw").getChannel();
                        IntBuffer ib = fc.map(
                            FileChannel.MapMode.READ_WRITE, 0, fc.size())
                            .asIntBuffer();
                        ib.put(0);
                        for(int i = 1; i < numOfUbuffInts; i++)
                            ib.put(ib.get(i - 1));
                        fc.close();
                    }
                }
            };

            public static void main(String[] args) {
                for(int i = 0; i < tests.length; i++)
                    System.out.println(tests[i].runTest());
            }
        }
    I more or less see what's going on; my output looks like this:
        Stream Write: 653
        Mapped Write: 51
        Stream Read: 651
        Mapped Read: 40
        Stream Read/Write: 14481
        Mapped Read/Write: 6
    What is making the Stream Read/Write so unbelievably long? And as a read/write test, it looks a bit pointless to me to read the same integer over and over (if I understand correctly what's going on in the Stream Read/Write). Wouldn't it be better to read ints from the previously written file and just read and write ints in the same place? Is there a better way to illustrate it? I've been breaking my head over a lot of these things for a while and I just can't get the whole picture.

    Read the article

  • Memory limit for running external executables within Asp.net

    - by itsbalur
    I am using WkhtmltoPdf in my C# web application running in .NET 4.0 to generate PDFs from HTML files. In general everything works fine as long as the size of the HTML file is below 250KB; once the HTML file size increases beyond that, the process which runs wkhtmltopdf.exe gives an exception as below. In the Task Manager, I have seen that the Memory value for the wkhtmltopdf.exe process does not increase beyond a value of 40,096 K, which I believe is the reason why the process is abandoned partway through. How can we configure things so that the memory limit for external exes can be increased? Is there any other way of solving this issue? More info: When I run the conversion from the command line directly, the PDF is generated fine, so it's unlikely to be a problem with WkhtmlToPdf. The error is from localhost. I have tried the same on the DEV server, with the same result. Exception:
        [Exception: Loading pages (1/6) ... 0% ... 10% ... 51% ... 100%
         Counting pages (2/6) ... Object 1 of 1
         Resolving links (4/6) ... Object 1 of 1
         Loading headers and footers (5/6)
         Printing pages (6/6) ... Preparing ... Page 1 of 49 ... Page 26 of 49 ... Page 27 of 49 [===
    Code that I use:
        var fileName = " - ";
        var wkhtmlDir = ConfigurationManager.AppSettings[Constants.AppSettings.ExportToPdfExecutablePath];
        var wkhtml = ConfigurationManager.AppSettings[Constants.AppSettings.ExportToPdfExecutablePath] + "\\wkhtmltopdf.exe";
        var p = new Process();
        string switches = "";
        switches += "--print-media-type ";
        switches += "--margin-top 10mm --margin-bottom 10mm --margin-right 5mm --margin-left 5mm ";
        switches += "--page-size A4 ";
        switches += "--disable-smart-shrinking ";
        var startInfo = new ProcessStartInfo
        {
            CreateNoWindow = true,
            FileName = wkhtml,
            Arguments = switches + " " + url + " " + fileName,
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            RedirectStandardInput = true,
            WorkingDirectory = wkhtmlDir
        };
        p.StartInfo = startInfo;
        p.Start();

    Read the article

  • Is it the address bus size or the data bus size that determines "8-bit , 16-bit ,32-bit ,64-bit " systems?

    - by learner
    My simple understanding is as follows. Memory (RAM) is composed of bits, groups of 8 of which form bytes, each of which can be addressed, and hence byte-addressable memory. The address bus stores the location of a byte of memory. If an address bus is of size 32 bits, that means it can hold up to 2^32 numbers and hence can refer to up to 2^32 bytes of memory = 4 GB of memory, and any memory greater than that is useless. The data bus is used to send the value to be written to/read off the memory. If I have a data bus of size 32 bits, it means a maximum of 4 bytes can be written to/read off the memory at a time. I find no relation between this size and the maximum memory size possible. But I read here that: Even though most systems are byte-addressable, it makes sense for the processor to move as much data around as possible. This is done by the data bus, and the size of the data bus is where the names 8-bit system, 16-bit system, 32-bit system, 64-bit system, etc. come from. When the data bus is 8 bits wide, it can transfer 8 bits in a single memory operation. When the data bus is 32 bits wide (as is most common at the time of writing), at most 32 bits can be moved in a single memory operation. This says that the size of the data bus is what gives an OS the name 8-bit, 16-bit and so on. What is wrong with my understanding?

    Read the article

  • GA 8KNXP Rev1.0: 4 GB installed, only 3.5 GB recognized by BIOS

    - by hurikhan77
    I've installed 2x 1 GB and 4x 512 MB memory modules into my GA-8KNXP system, which should sum to 4 GB. The specification from the manual says: Maximum memory support: 4 GB. If all six slots are utilized, slots 5+6 may only be equipped with single-sided RAM modules. And so I did. Anyway: the BIOS counts up to 3.5 GB and finishes there. My Linux system also reports only 3.5 GB of memory, although 4 GB memory support is activated in the kernel. So I suppose this is a memory mapping issue or a hardware issue. I've tried removing only one of the 512 MB memory modules, leaving 5 modules in place. But that just stopped the system from powering on correctly (the screen stays black although the fans and LEDs come to life). Dual Channel was detected and enabled, so the system technically found all 6 modules. "dmidecode" in Linux reports only memory in slots 1 to 4 and ignores slots 5+6, so it only detects 3 GB of memory. It also says the system would support up to 16 GB of memory with 4 GB modules per slot. I think technically the chipset should be able to offer and utilize the complete 4 GB memory range. Any clues what else I could check? Or do I just have to live with 0.5 GB of wasted memory?

    Read the article
