Search Results

Search found 18385 results on 736 pages for 'flash memory'.

Page 153/736 | < Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >

  • How to access memory location in Java?

    - by Abhishek Jain
    Is it possible to access a memory location in Java, directly or indirectly? If we try to print an object, it prints a hashcode. Does the hashcode indirectly signify the memory location? For two objects at different memory locations, their hashcodes can still vary. -Abhishek

    Read the article

  • Quick question regarding CSS sprites and memory usage

    - by Andy E
    Well, it's more to do with images and memory in general. If I use the same image multiple times on a page, will each image be consolidated in memory, or will each use a separate amount of memory? I'm concerned about this because I'm building a skinning system for a Windows Desktop Gadget, and I'm looking at spriting the images in the default skin so that I can keep the file system looking clean. At the same time I want to keep the memory footprint to a minimum. If I end up with a single file containing 100 images and re-use that image 100 times across the gadget, I don't want to have performance issues. Cheers.

    Read the article

  • No memory window in Visual Studio 2010

    - by Fredrik Jansson
    I have VS2010 Premium RTM on Windows 7 Ultimate x64. The documentation refers to the Memory 1-4 windows, supposedly under Debug-Windows-Memory. I have "Enable address-level debugging" enabled in VS (Options-Debugging). The problem is that I have no Memory menu item under Debug-Windows while debugging a C++ program. Under Debug-Windows I have only: Breakpoints, Parallel Tasks, Parallel Stacks, Watch, Locals, Call Stack, Threads. Has anyone else experienced this (and hopefully solved it)?

    Read the article

  • Problem in allocating kernel memory with malloc()

    - by basu sagar
    Is there any protection provided by the kernel? When we tried to allocate memory using malloc(), it allowed us to allocate around 124 MB of memory, but when we tried to write into it, the kernel crashed. If the kernel memory area were protected, this wouldn't have happened, I guess.

    Read the article
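
    A minimal sketch (not the poster's code), assuming a Linux-style overcommit policy, of the behaviour described above: malloc() hands back user-space virtual memory, and physical pages are typically only committed when they are first written, so a crash while writing usually points to the system running out of memory or to a write past the end of the block, not to unprotected kernel memory being handed out.

        // Allocate ~124 MB as in the question, check the result, then touch every page.
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        int main() {
            const std::size_t size = 124UL * 1024 * 1024;      // ~124 MB
            char* buf = static_cast<char*>(std::malloc(size));
            if (buf == nullptr) {                              // always check the return value
                std::fprintf(stderr, "malloc failed\n");
                return 1;
            }
            std::memset(buf, 0, size);                         // writing commits the pages
            std::printf("wrote %zu bytes successfully\n", size);
            std::free(buf);
            return 0;
        }

    If this program dies during the memset on a memory-constrained machine, the likely cause is overcommit (e.g. the Linux OOM killer terminating the process) rather than corruption of kernel memory.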

  • SQL server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory used greatly exceeds the amount of physical memory available. Currently, physical memory is 10GB (the query reports 10238 MB), whereas the virtual memory figure is significantly larger at 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

        USE [master];
        GO
        select cpu_count
             , hyperthread_ratio
             , physical_memory_in_bytes / 1048576 as 'mem_MB'
             , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB'
             , max_workers_count
             , os_error_mode
             , os_priority_class
        from sys.dm_os_sys_info

    Read the article

  • Can the STREAM and GUPS (single CPU) benchmarks use non-local memory in a NUMA machine?

    - by osgx
    Hello, I want to run some tests from HPCC: STREAM and GUPS. They test memory bandwidth, latency, and throughput (in terms of random accesses). Can I run the single-CPU STREAM test or single-CPU GUPS on a NUMA node with memory interleaving enabled? (Is that allowed by the rules of HPCC, the High Performance Computing Challenge?) Use of non-local memory can increase GUPS results, because it increases 2- or 4-fold the number of memory banks available for random accesses. (GUPS is typically limited by a non-ideal memory subsystem and by slow memory bank opening/closing. With more banks it can update one bank while the other banks are opening/closing.) Thanks.

    UPDATE: (You may not reorder the memory accesses that the program makes.) But can the compiler reorder loop nesting? E.g. hpcc/RandomAccess.c:

        /* Perform updates to main table. The scalar equivalent is:
         *
         *   u64Int ran;
         *   ran = 1;
         *   for (i=0; i<NUPDATE; i++) {
         *     ran = (ran << 1) ^ (((s64Int) ran < 0) ? POLY : 0);
         *     table[ran & (TableSize-1)] ^= stable[ran >> (64-LSTSIZE)];
         *   }
         */
        for (j=0; j<128; j++)
          ran[j] = starts ((NUPDATE/128) * j);

        for (i=0; i<NUPDATE/128; i++) {
          /* #pragma ivdep */
          for (j=0; j<128; j++) {
            ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0);
            Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)];
          }
        }

    The main loop here is for (i=0; i<NUPDATE/128; i++) and the nested loop is for (j=0; j<128; j++). Using the 'loop interchange' optimization, the compiler can convert this code to:

        for (j=0; j<128; j++) {
          for (i=0; i<NUPDATE/128; i++) {
            ran[j] = (ran[j] << 1) ^ ((s64Int) ran[j] < 0 ? POLY : 0);
            Table[ran[j] & (TableSize-1)] ^= stable[ran[j] >> (64-LSTSIZE)];
          }
        }

    It can be done because this loop nest is a perfect loop nest. Is such an optimization prohibited by the rules of HPCC?

    Read the article

  • USB drive dead after stopping copying process on Snow Leopard Server

    - by Anriëtte Combrink
    Hi there. I was copying to a flash drive from our Snow Leopard server when I stopped the copying process half way through. The device then disappeared from the Desktop, so I unplugged it and plugged it right back in, but the device just didn't show up. I unplugged it and plugged it into a Windows XP machine as well as a Windows 7 machine. On both machines, I right-clicked "My Computer" and selected "Manage…". On both PCs, the device was located under Removable Storage, but had no size and no drive letter. It shows up in "My Computer", but when I choose "Format…" from the right-click (context) menu, it says the drive could not be formatted. Can someone please advise me? The flash drive is about 5 minutes old and should have no reason to be dead. I really can't lose this drive (I don't need the data on it, I just need it to work again), so any help would be appreciated. Thanks in advance.

    Read the article

  • How to handle a memory leak in an external DLL

    - by Dugan
    I have an external (.Net) dll that I have to use for an application. Unfortunately that dll has several known memory leaks. We are working on getting the authors of the dll to fix the leaks, but in the meantime I was wondering: what is the best way to use the dll without having to deal with the memory leaks?

    Read the article

  • XDocument holding onto Memory?

    - by Jon
    I have an application that does an XDocument.Load from a 20MB file and then passes it to a form to view its contents:

        openFileDialog1.FileName = "";
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            AuditFile = XDocument.Load(openFileDialog1.FileName);
            fmAuditLogViewer AuditViewer = new fmAuditLogViewer();
            AuditViewer.ReportDocument = AuditFile;
            AuditViewer.Init();
            AuditViewer.ShowDialog();
            AuditViewer.Dispose();
            AuditFile.RemoveNodes();
            AuditFile = null;
        }

    In Task Manager I can see the memory used by my application shoot up when I open this file. When I have finished viewing the file in my application I call:

        myXDocument.RemoveNodes();
        myXDocument = null;

    However the memory usage shown in Task Manager for my app is still pretty high. Is the XDocument still being held in memory, and can I decrease the memory usage of my app?

    Read the article

  • Storing a value in Memory Independent of Process

    - by Ganesh
    Hi, I need a way to store a value somewhere temporarily, say by Process A. Process A can exit after storing the value in memory. After some time Process B comes along, accesses the same memory location and reads the value. I need to store it in memory because I don't want the data to persist across reboots, but as long as the system is up the data must be accessible independently of the process. I tried mailslots and temporary files on Windows; both seem to have the problem that once the process reference count drops to zero, the entities don't persist in memory. What is a suitable mechanism for this on Windows, preferably using the Win32 API? Ganesh

    Read the article
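
    A minimal sketch, assuming a pagefile-backed named file mapping is acceptable for this. The important caveat, which matches the behaviour described above, is that such a mapping only exists while at least one handle to it is open, so some long-lived process (for example a tiny broker or service, hypothetical here, as is the object name) has to hold a handle between the writer and the reader; the data naturally does not survive a reboot.

        // Writer and reader shown in one program for brevity; in practice Process A
        // creates the mapping, a broker keeps a handle open, and Process B opens it later.
        #include <windows.h>
        #include <cstdio>

        int main() {
            const wchar_t* kName = L"Local\\MyTransientValue";   // hypothetical object name

            // "Process A": create a 4 KB mapping backed by the page file (no real file).
            HANDLE hMap = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                             PAGE_READWRITE, 0, 4096, kName);
            if (hMap == nullptr) {
                std::fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
                return 1;
            }
            void* view = MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 4096);
            if (view == nullptr) {
                std::fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
                return 1;
            }
            *static_cast<int*>(view) = 42;        // store the value
            UnmapViewOfFile(view);
            // Do NOT close hMap if the value must outlive this process: the mapping is
            // destroyed when the last handle to it is closed.

            // "Process B": open the existing mapping by name and read the value back.
            HANDLE hOpen = OpenFileMappingW(FILE_MAP_READ, FALSE, kName);
            if (hOpen != nullptr) {
                const int* value = static_cast<const int*>(
                    MapViewOfFile(hOpen, FILE_MAP_READ, 0, 0, 4096));
                if (value != nullptr) {
                    std::printf("read %d\n", *value);
                    UnmapViewOfFile(value);
                }
                CloseHandle(hOpen);
            }
            CloseHandle(hMap);
            return 0;
        }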

  • ZipArchive memory problems on iPhone for large archive

    - by Mithin
    Hi, I am trying to compress multiple files into a single zip archive and I am running into low-memory warnings. Since the complete zip file is loaded into memory, I guess that's the problem. Is there a way to manage the compression/decompression better using ZipArchive so that not all the data is in memory at once? Thanks!

    Read the article

  • Migrate Windows Server 2008 to a new hard disk

    - by MainMa
    Hi, I have a machine with Windows Server 2008. I want to change the hard disk drive but keep everything else. I don't have a CD/DVD drive and don't want to buy one. My first idea was to make a byte-for-byte copy of the disk with Paragon Advanced Recovery. The problem is that when I try to boot from the new hard disk, it says that there were hardware changes and that Windows must be repaired, inviting me to insert the installation disk and follow the repair instructions. I searched and found that a 1:1 copy is not the correct way to do this; the correct way is to restore Windows to a new hard disk from a full system backup. But to restore, I need to have a DVD drive. I tried to copy the Windows Server 2008 .iso to a USB flash drive, but the drive is not bootable (while the same procedure applied to the Paragon Advanced Recovery ISO produces a bootable recovery USB flash drive). Now what else can I do (other than buying a DVD drive)? Is there a way either to make Windows work without doing a recovery, or to recover Windows Server 2008 without using a CD drive?

    Read the article

  • memory usage in C# (.NET) app is very high, until I call System.GC.Collect()

    - by Chris Gray
    I've written an app that spins up a few threads, each of which reads several MB of memory. Each thread then connects to the Internet and uploads the data. This occurs thousands of times, and each upload takes some time. I'm seeing a situation (verified with windbg/sos and !dumpheap) where the Byte[] arrays are not getting collected automatically, causing 100-150 MB of memory to be reported in Task Manager. If I call System.GC.Collect() I see a huge drop in memory, a drop of over 100 MB. I don't like calling System.GC.Collect(), and my PC has tons of free memory, but if anyone looks at Task Manager they're going to be concerned, thinking my app is leaking horribly. Tips?

    Read the article

  • Datamapper In Memory Database

    - by Daniel Ribeiro
    It is easy to set up DataMapper with a SQLite3 in-memory database with DataMapper.setup :default, 'sqlite3::memory:'. However, when testing, I'd like to destroy the whole in-memory database after each test, instead of invoking automigrate! as a shortcut for dropping everything. Is that possible? Or is it enough to set the default repository to nil and let the garbage collector dispose of it?

    Read the article

  • Cannot delete flash9.ocx

    - by Kara Marfia
    Some kinda voodoo, indeed. Bought a new boot drive, and it's time to use the old drive for data. I thought I'd save some time by just wiping the unneeded system folders, instead of backing up, formatting, and restoring. Wups! I have a single Adobe file that absolutely will not be deleted. G:\Windows\SysWOW64\Macromed\Flash\flash9.ocx - though it may have started with a different file name. I'm able to rename it, oddly enough. To be clear, the drive is currently plugged in externally. So I can boot the computer, plug this drive in afterward, and immediately attempt to delete. "File in use" box reads "the action can't be completed because the file is open in another program". I'd format at this point, but it's my white whale, and I have to know if Adobe has inserted some nasty little registry hack - or whatever it is - making this impossible. Since I'm sure it'll come up, I've taken ownership of the file - and this was the trick preventing me from deleting anything else on the drive - full rights on the file permissions, you name it, I've fiddled with the file itself. I'm about to try uninstalling flash from the system drive, in case that aligns the planets properly. Sometimes I wish I were less stubborn, and could just format already.

    Read the article

  • XDocument + IEnumerable is causing out of memory exception in System.Xml.Linq.dll

    - by Manatherin
    Basically I have a program which, when it starts, loads a list of files (as FileInfo) and for each file in the list loads an XML document (as XDocument). The program then reads data out of it into a container class (stored as IEnumerables), at which point the XDocument goes out of scope. The program then exports the data from the container class to a database. After the export the container class goes out of scope; however, the garbage collector isn't cleaning up the container class which, because it stores IEnumerables, seems to lead to the XDocument staying in memory (not sure if this is the reason, but Task Manager shows that the memory from the XDocument isn't being freed). As the program loops through multiple files it eventually throws an out of memory exception. To mitigate this I've ended up using System.GC.Collect(); to force the garbage collector to run after the container goes out of scope. This is working, but my questions are: Is this the right thing to do? (Forcing the garbage collector to run seems a bit odd.) Is there a better way to make sure the XDocument memory is being disposed? Could there be a different reason, other than the IEnumerable, that the document memory isn't being freed? Thanks.

    Edit: Code samples.

    Container class:

        public IEnumerable<CustomClassOne> CustomClassOne { get; set; }
        public IEnumerable<CustomClassTwo> CustomClassTwo { get; set; }
        public IEnumerable<CustomClassThree> CustomClassThree { get; set; }
        ...
        public IEnumerable<CustomClassNine> CustomClassNine { get; set; }

    Custom class:

        public long VariableOne { get; set; }
        public int VariableTwo { get; set; }
        public DateTime VariableThree { get; set; }
        ...

    Anyway, that's the basic structure really. The custom classes are populated through the container class from the XML document. The filled structures themselves use very little memory. A container class is filled from one XML document, goes out of scope, and the next document is then loaded, e.g.:

        public static void ExportAll(IEnumerable<FileInfo> files)
        {
            foreach (FileInfo file in files)
            {
                ExportFile(file);
                //Temporary to clear memory
                System.GC.Collect();
            }
        }

        private static void ExportFile(FileInfo file)
        {
            ContainerClass containerClass = Reader.ReadXMLDocument(file);
            ExportContainerClass(containerClass);
            //Export simply dumps the data from the container class into a database
            //Container class (and any passed container classes) goes out of scope at end of export
        }

        public static ContainerClass ReadXMLDocument(FileInfo fileToRead)
        {
            XDocument document = GetXDocument(fileToRead);
            var containerClass = new ContainerClass();
            //ForEach customClass in containerClass
            //Read all data for customClass from XDocument
            return containerClass;
        }

    Forgot to mention this bit (not sure if it's relevant): the files can be compressed as .gz, so I have the GetXDocument() method to load them:

        private static XDocument GetXDocument(FileInfo fileToRead)
        {
            XDocument document;
            using (FileStream fileStream = new FileStream(fileToRead.FullName, FileMode.Open, FileAccess.Read, FileShare.Read))
            {
                if (String.Compare(fileToRead.Extension, ".gz", true) == 0)
                {
                    using (GZipStream zipStream = new GZipStream(fileStream, CompressionMode.Decompress))
                    {
                        document = XDocument.Load(zipStream);
                    }
                }
                else
                {
                    document = XDocument.Load(fileStream);
                }
                return document;
            }
        }

    Hope this is enough information. Thanks.

    Edit: The System.GC.Collect() is not working 100% of the time; sometimes the program seems to retain the XDocument. Anyone have any idea why this might be?

    Read the article

  • C++ Memory Allocation & Linked List Implementation

    - by pws5068
    I'm writing software to simulate the "first-fit" memory allocation scheme. Basically, I allocate a large X-megabyte chunk of memory and subdivide it into blocks when chunks are requested according to the scheme. I'm using a linked list called "node" as a header for each block of memory (so that we can find the next block without tediously looping through every address value).

        head_ptr = (char*) malloc(total_size + sizeof(node));
        if(head_ptr == NULL)
            return -1; // Malloc Error .. :-(

        node* head_node = new node; // Build block header
        head_node->next = NULL;
        head_node->previous = NULL;

        // Header points to next block (which doesn't exist yet)
        memset(head_ptr, head_node, sizeof(node));

    But this last line returns: error: invalid conversion from 'node*' to 'int'. I understand why this is invalid, but how can I place my node into the pointer location of my newly allocated memory?

    Read the article
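
    A minimal sketch, assuming a node type like the one in the question, of one way to resolve the error: memset() fills bytes with a single int value, which is why passing a node* fails to compile. To place a header at the start of the allocated block, either construct it there with placement new or memcpy() an existing header into it.

        #include <cstdlib>
        #include <new>       // required for placement new

        struct node {
            node* next;
            node* previous;
        };

        int main() {
            const std::size_t total_size = 1024 * 1024;   // size of the managed pool
            char* head_ptr = static_cast<char*>(std::malloc(total_size + sizeof(node)));
            if (head_ptr == nullptr)
                return -1;                                // malloc error

            // Placement new constructs a node object at the address head_ptr points to,
            // so the block header lives at the very start of the allocated chunk.
            node* head_node = new (head_ptr) node;
            head_node->next = nullptr;                    // next block doesn't exist yet
            head_node->previous = nullptr;

            // ... hand out memory starting at head_ptr + sizeof(node) ...

            std::free(head_ptr);                          // node is trivially destructible
            return 0;
        }

    The alternative closer to the original code would be memcpy(head_ptr, head_node, sizeof(node)) after building a separate node, but placement new avoids the extra copy and the extra heap allocation.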

  • When does a static member get memory?

    - by vaibhav
    I have a class which has a static member. As I understand it, all static members are common to all instances of the class, so a static member gets memory only once. Where is this memory allocated (stack or heap), and when does it get allocated?

    Read the article
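
    A minimal sketch, assuming C++ (the question doesn't name a language): a static data member has static storage duration, so it is stored in the program's static data area rather than on the stack or the heap, there is exactly one copy shared by all instances, and its storage is reserved when the program image is loaded and initialized before main() runs.

        #include <iostream>

        class Counter {
        public:
            static int instances;        // declaration: one shared copy for the whole class
            Counter()  { ++instances; }
            ~Counter() { --instances; }
        };

        // Definition: the storage lives in the static data segment, not per-object;
        // it is zero-initialized before main() begins executing.
        int Counter::instances = 0;

        int main() {
            Counter a, b, c;             // three objects, still only one 'instances' variable
            std::cout << Counter::instances << "\n";   // prints 3
            return 0;
        }

    (In Java the answer differs in the details: a static field is allocated once when the class is initialized and is not part of any instance or stack frame.)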

  • Increasing the JVM maximum heap size for memory intensive applications

    - by Alceu Costa
    I need to run a memory-intensive Java application that uses more than 2GB, but I am having problems increasing the maximum heap size. So far, I have tried the following approaches:

    Setting the -Xmx parameter, e.g. -Xmx3000m. This approach fails at the creation of the JVM. From what I've googled, it looks like -Xmx must be less than 2GB.

    Using the -XX:+AggressiveHeap option. When I try this approach I get a 'Not enough memory' error telling me that the heap size is 1273.4 MB, even though my computer has 8GB of memory.

    Is there another approach that I can try to increase the maximum heap size of the JVM? Here's a summary of the computer specs: OS: Windows 7 (64 bit); Processor: Intel Core i7 (2.66 GHz); Memory: 8 GB.

    Read the article

  • Precise explanation of JavaScript <-> DOM circular reference issue

    - by Joey Adams
    One of the touted advantages of jQuery.data versus raw expando properties (arbitrary attributes you can assign to DOM nodes) is that jQuery.data is "safe from circular references and therefore free from memory leaks". An article from Google titled "Optimizing JavaScript code" goes into more detail: The most common memory leaks for web applications involve circular references between the JavaScript script engine and the browsers' C++ objects implementing the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM infrastructure, or between the JavaScript engine and Firefox XPCOM infrastructure). It lists two examples of circular reference patterns: DOM element → event handler → closure scope → DOM; DOM element → via expando → intermediary object → DOM element. However, if a reference cycle between a DOM node and a JavaScript object produces a memory leak, doesn't this mean that any non-trivial event handler (e.g. onclick) will produce such a leak? I don't see how it's even possible for an event handler to avoid a reference cycle, because the way I see it: the DOM element references the event handler, and the event handler references the DOM (either directly or indirectly). In any case, it's almost impossible to avoid referencing window in any interesting event handler, short of writing a setInterval loop that reads actions from a global queue. Can someone provide a precise explanation of the JavaScript ↔ DOM circular reference problem?

    Things I'd like clarified: What browsers are affected? A comment in the jQuery source specifically mentions IE6-7, but the Google article suggests Firefox is also affected. Are expando properties and event handlers somehow different concerning memory leaks, or are both of these code snippets susceptible to the same kind of memory leak?

        // Create an expando that references its own element.
        var elem = document.getElementById('foo');
        elem.myself = elem;

        // Create an event handler that references its own element.
        var elem = document.getElementById('foo');
        elem.onclick = function() { elem.style.display = 'none'; };

    If a page leaks memory due to a circular reference, does the leak persist until the entire browser application is closed, or is the memory freed when the window/tab is closed?

    Read the article

< Previous Page | 149 150 151 152 153 154 155 156 157 158 159 160  | Next Page >