Search Results

Search found 18385 results on 736 pages for 'flash memory'.

  • Flushing writes in buffer of Memory Controller to DDR device

    - by Rohit
    At some point in my code, I need to push writes all the way to the DIMM or DDR device. My requirement is to ensure the write reaches the row, bank, and column of the DDR device on the DIMM. I need to read back what I've written to main memory, and I do not want caching to give me the value: after writing, I want to fetch the value from main memory (the DIMMs). So far I've been using Intel's x86 instruction wbinvd (write back and invalidate cache). However, this means the caches and TLB are flushed, and write-back requests go to main memory. Still, there is a window of time during which the data might reside in the write buffer of the memory controller (Intel calls it the integrated memory controller, or IMC), and the memory controller might take some more time depending on the algorithm it runs to handle writes. Is there a way I can force all existing or pending writes in the write buffer of the memory controller out to the DRAM devices? What I am looking for is something more direct and lower-level than wbinvd. If you could point me to the right documents or specs that describe this I would be grateful. Generally, the IMC has several registers which can be read or written. From looking at the specs for the chipset, I could not find anything useful. Thanks for taking the time to read this.
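
    A cheaper alternative to wbinvd for a specific buffer is clflush plus a memory fence, available as compiler intrinsics. This only guarantees the lines have left the cache hierarchy; as noted above, draining of the IMC's write buffer into the DRAM array is not architecturally visible to software. A minimal C++ sketch (the function name and the 64-byte line size are assumptions):

        #include <emmintrin.h>  // _mm_clflush, _mm_mfence (SSE2)
        #include <cstddef>
        #include <cstdint>

        // Flush every cache line covering [p, p + len) back toward the memory
        // subsystem, then fence so later loads observe the flushes.
        void flush_range(const void* p, std::size_t len) {
            const std::uintptr_t line = 64;  // typical x86 cache-line size
            std::uintptr_t addr = reinterpret_cast<std::uintptr_t>(p) & ~(line - 1);
            const std::uintptr_t end = reinterpret_cast<std::uintptr_t>(p) + len;
            for (; addr < end; addr += line)
                _mm_clflush(reinterpret_cast<const void*>(addr));
            _mm_mfence();  // mfence orders clflush with respect to later loads/stores
        }

    A subsequent load of the flushed range must then miss the caches and go out to the memory subsystem; mapping the region uncacheable (UC) through the MTRRs/PAT is the other common approach when the value must always come from the DIMMs.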

    Read the article

  • .NET memory leak?

    - by SA
    I have an MDI form with a child form. The child form contains a DataGridView, into which I load a huge amount of data. When I close the child form, the disposing method is called, in which I dispose of the DataGridView: this.dataGrid.Dispose(); this.dataGrid = null; When I close the form, the memory usage doesn't go down. I use the .NET Memory Profiler to track the memory usage. I see that the memory usage goes up when I initially load the data grid (as expected) and then becomes constant when the loading is complete. When I close the form it still remains constant. However, when I take a snapshot of the memory using the profiler, usage drops back to what it was before loading the file; taking a memory snapshot forces the garbage collector to run. What is going on? Is there a memory leak, or do I need to run the garbage collector forcefully? More information: when I am closing the form I no longer need the information, which is why I am not holding a reference to the data.

    Read the article

  • Memory interleaving

    - by Tim Green
    Hello, I have this question that has me rather confused. Suppose that a 1G x 32-bit main memory is built using 256M x 4-bit RAM chips and that this memory is byte-addressable. I have deduced that one would require 4 * 1G = 2^2 * 2^30 = 2^32 bytes - so 32 bits to address the full memory. My problem now comes with, say, memory (byte) address "14": determine which memory module this goes into. (There would have to be 8 chips per module to make the 32-bit-wide memory, and 4 modules overall, giving 32 chips in total. Modules are numbered from 0.) In high-order interleave, it appears trivial that it's the first (0) memory module, given that the first few bits are 0. However, low-order interleave has me stumped. I can't figure out (for sure) how many bits are used to determine a memory module (possibly 2, given there are 4 in total?). The given solution is Module 3. This is not homework in the usual sense, so I will not be tagging it as such.
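
    For what it's worth, the given answer of Module 3 is consistent with interleaving 32-bit words (not bytes) across the 4 modules: byte address 14 lies in word 14 div 4 = 3, and with low-order interleave the 2 low bits of the word address select the module. A minimal C++ sketch of the arithmetic:

        #include <cstdio>

        int main() {
            const unsigned byte_addr = 14;
            const unsigned bytes_per_word = 4;  // each module is 32 bits wide
            const unsigned num_modules = 4;
            unsigned word = byte_addr / bytes_per_word;  // word address 3
            unsigned module = word % num_modules;        // low-order interleave -> module 3
            unsigned row = word / num_modules;           // word 0 within that module
            std::printf("byte %u -> module %u, row %u\n", byte_addr, module, row);
            return 0;
        }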

    Read the article

  • How to avoid the following purify-detected memory leak in C++?

    - by Abhijeet
    I am getting the following memory leak. It is probably caused by std::string. How can I avoid it?

        PLK: 23 bytes potentially leaked at 0xeb68278
        * Suppressed in /vobs/ubtssw_brrm/test/testcases/.purify [line 3]
        * This memory was allocated from:
        malloc [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o]
        operator new(unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
        operator new(unsigned) [/vobs/ubtssw_brrm/test/test_build/linux-x86/rtlib.o]
        std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_S_create(unsigned, unsigned, std::allocator<char> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
        std::string<char, std::char_traits<char>, std::allocator<char>>::_Rep::_M_clone(std::allocator<char> const&, unsigned) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
        std::string<char, std::char_traits<char>, std::allocator<char>>::string<char, std::char_traits<char>, std::allocator<char>>(std::string<char, std::char_traits<char>, std::allocator<char>> const&) [/vobs/MontaVista/Linux/montavista/pro/devkit/x86/586/target/usr/lib/libstdc++.so.6]
        uec_UEDir::getEntryToUpdateAfterInsertion(rcapi_ImsiGsmMap const&, rcapi_ImsiGsmMap&, std::_Rb_tree_iterator<std::pair<std::string<char, std::char_traits<char>, std::allocator<char>> const, UEDirData >>&) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:2278]
        uec_UEDir::addUpdate(rcapi_ImsiGsmMap const&, LocalUEDirInfo&, rcapi_ImsiGsmMap&, int, unsigned char) [/vobs/ubtssw_brrm/uectrl/linux-x86/../src/uec_UEDir.cc:282]
        ucx_UEDirHandler::addUpdateUEDir(rcapi_ImsiGsmMap, UEDirUpdateType, acap_PresenceEvent) [/vobs/ubtssw_brrm/ucx/linux-x86/../src/ucx_UEDirHandler.cc:374]
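
    The trace bottoms out in the std::string copy constructor (_M_clone) invoked from getEntryToUpdateAfterInsertion, and purify reports the bytes only as "potentially" leaked, which with GCC's reference-counted strings is often a retained copy rather than a true leak. One common fix is to avoid the copy altogether by handing out a const reference to the stored string. A hypothetical sketch, not the actual uec_UEDir code:

        #include <map>
        #include <string>

        // Returning a const reference to the key stored in the map avoids the
        // hidden std::string copy that the purify trace attributes to _M_clone.
        const std::string& entry_key(const std::map<std::string, int>& dir,
                                     const std::string& imsi) {
            static const std::string empty;
            std::map<std::string, int>::const_iterator it = dir.find(imsi);
            return it != dir.end() ? it->first : empty;  // no copy is made
        }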

    Read the article

  • Avoid an "out of memory error" in Java (Eclipse) when using a large data structure?

    - by gnomed
    OK, so I am writing a program that unfortunately needs to use a huge data structure to complete its work, but it is failing with an "out of memory error" during its initialization. While I understand entirely what that means and why it is a problem, I am having trouble overcoming it, since my program needs to use this large structure and I don't know any other way to store it. The program first indexes a large corpus of text files that I provide. This works fine. Then it uses this index to initialize a large 2D array. This array will have n x n entries, where "n" is the number of unique words in the corpus of text. For the relatively small chunk I am testing it on (about 60 files) it needs to make approximately 30,000 x 30,000 entries. This will probably be bigger once I run it on my full intended corpus, too. It consistently fails every time, after it indexes, while it is initializing the data structure (to be worked on later). Things I have done include: revamping my code to use a primitive int[] instead of a TreeMap, eliminating redundant structures, etc. Also, I have run Eclipse with "eclipse -vmargs -Xmx2g" to max out my allocated memory. I am fairly confident this is not going to be a simple line-of-code solution, but is most likely going to require a very new approach. I am looking for what that approach is, any ideas? Thanks, B.
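
    A quick back-of-the-envelope check shows why the initialization fails: 30,000 x 30,000 int entries is 9 x 10^8 cells, about 3.4 GiB at 4 bytes each, well beyond a 2 GiB heap. Since word co-occurrence matrices are usually mostly zeros, the standard way out is a sparse representation that stores only the non-zero cells. A minimal sketch of the idea (in C++ for brevity; the same shape works in Java with a HashMap<Long, Integer>):

        #include <cstdint>
        #include <unordered_map>

        // Sparse 2D counts: memory is proportional to the number of non-zero
        // cells rather than to n * n.
        class SparseCounts {
            std::unordered_map<std::uint64_t, int> cells_;
            static std::uint64_t key(std::uint32_t r, std::uint32_t c) {
                return (static_cast<std::uint64_t>(r) << 32) | c;
            }
        public:
            void add(std::uint32_t r, std::uint32_t c, int d) { cells_[key(r, c)] += d; }
            int get(std::uint32_t r, std::uint32_t c) const {
                auto it = cells_.find(key(r, c));
                return it == cells_.end() ? 0 : it->second;
            }
        };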

    Read the article

  • Create a Persistent Bootable Ubuntu USB Flash Drive

    - by Trevor Bekolay
    Don’t feel like reinstalling an antivirus program every time you boot up your Ubuntu flash drive? We’ll show you how to create a bootable Ubuntu flash drive that will remember your settings, installed programs, and more!

    Previously, we showed you how to create a bootable Ubuntu flash drive that would reset to its initial state every time you booted it up. This is great if you’re worried about messing something up, and want to start fresh every time you start tinkering with Ubuntu. However, if you’re using the Ubuntu flash drive to diagnose and solve problems with your PC, you might find that a lot of problems require guess-and-test cycles. It would be great if the settings you change in Ubuntu and the programs you install stayed installed the next time you booted it up. Fortunately, Universal USB Installer, a great little program from Pen Drive Linux, can do just that!

    Note: You will need a USB drive at least 2 GB large. Make sure you back up any files on the flash drive because this process will format the drive, removing any files currently on it. Once Ubuntu has been installed on the flash drive, you can move those files back if there is enough space.

    Put Ubuntu on your flash drive

    Universal-USB-Installer.exe does not need to be installed, so just double-click on it to run it wherever you downloaded it. Click Yes if you get a UAC prompt, and you will be greeted with this window. Click I Agree. In the drop-down box on the next screen, select Ubuntu 9.10 Desktop i386. Don’t worry if you normally use 64-bit operating systems – the 32-bit version of Ubuntu 9.10 will still work fine. Some useful tools do not have 64-bit versions, so unless you’re planning on switching to Ubuntu permanently, the 32-bit version will work best.

    If you don’t have a copy of the Ubuntu 9.10 CD downloaded, then click on the checkbox to Download the ISO. You’ll be prompted to launch a web browser; click Yes. The download should start immediately. When it’s finished, return to the Universal USB Installer and click on Browse to navigate to the ISO file you just downloaded. Click OK and the text field will be populated with the path to the ISO file.

    Select the drive letter that corresponds to the flash drive that you would like to use from the dropdown box. If you’ve backed up the files on this drive, we recommend checking the box to format the drive. Finally, you have to choose how much space you would like to set aside for the settings and programs that will be stored on the flash drive. Considering that Ubuntu itself only takes up around 700 MB, 1 GB should be plenty, but we’re choosing 2 GB in this example because we have lots of space on this USB drive. Click on the Create button and then make yourself a sandwich – it will take some time to install no matter how fast your PC is. Eventually it will finish. Click Close. Now you have a flash drive that will boot into a fully capable Ubuntu installation, and any changes you make will persist the next time you boot it up!

    Boot into Ubuntu

    If you’re not sure how to set your computer to boot using the USB drive, then check out the How to Boot Into Ubuntu section of our previous article on creating bootable USB drives, or refer to your motherboard’s manual. Once your computer is set to boot using the USB drive, you’ll be greeted with a splash screen with some options. Press Enter to boot into Ubuntu. The first time you do this, it may take some time to boot up. Fortunately, we’ve found that the process speeds up on subsequent boots.
    You’ll be greeted with the Ubuntu desktop. Now, if you change settings like the desktop resolution, or install a program, those changes will be permanently stored on the USB drive! We installed avast! Antivirus, and on the next boot, found that it was still in the Accessories menu where we left it.

    Conclusion

    We think that a bootable Ubuntu USB flash drive is a great tool to have around in case your PC has problems booting otherwise. By having the changes you make persist, you can customize your Ubuntu installation to be the ultimate computer repair toolkit!

    Download Universal USB Installer from Pen Drive Linux

    Read the article

  • How to boot Linux from a 16 GB USB flash drive

    - by Chris Harris
    I'm trying to install Linux on a single partition of a USB flash drive that's larger than 4 GB. The first place I went to is http://pendrivelinux.com. I can follow these instructions for installing Xubuntu 9.04 perfectly, but they unfortunately break down when I try to scale things up beyond 4 GB. There are several other tools to do this (unetbootin and usb-creator) which follow a very similar formula. I figured out that a big problem of mine was that all of these tools assume the USB drive is formatted in FAT32, which unfortunately cannot hold a single file larger than 4 GB. This is unfortunate because I want to use just one partition, so that my persistence file, casper-rw, looks like one big partition to the OS once I've booted off of the USB drive. I then tried following a myriad of instructions involving formatting the drive as one large ext2 filesystem and using extlinux to create a single bootable ext2 file system. This doesn't work for me, however; after about 20 attempts verifying and slightly tweaking the formula, I cannot seem to get a "good" bootable ext2 file system built. I'm not entirely sure what's going on, but it seems as though no matter how hard I try, I cannot get the ext2 file system to remain coherent after copying the Linux ISO contents over, copying the MBR, and executing extlinux to create the ext bootloader. Every time, after I follow these steps (in any order) and reboot, I get an unbootable USB drive. If I then mount the drive under Linux again, I see a mess of a file system (inodes have clearly been screwed up somewhere along the way). I suspected that the USB drive wasn't being fully flushed, so I tried using the "sync" and "umount" commands before rebooting, which didn't affect things at all. I guess I have several possible questions - but let's start with the obvious: is there something I'm missing to create a bootable ext2 USB flash drive that's large (e.g. 16 GB)?

    Read the article

  • Diagnosing Solaris 8 server memory and swap space usage

    - by datSilencer
    Hello everyone. Essentially, my question is related to memory allocation for Solaris virtual machines. I am running a couple of old Sun ONE 6 Java web servers on two Solaris 8 virtual machines. I see that there's a reasonable amount of swap space being used, but I'm not exactly sure if this could indicate a need to add more RAM to these machines. At service peak hours (mornings usually), the response time of the web application these servers host jumps up to at most 11 seconds (somewhat detrimental for a relatively simple web page loading action). Average response time at non-peak times is about 5 seconds. What would you be able to infer about the RAM usage for these machines from the output below? Is this information reasonably sufficient, or would I need to run some other commands to rule out server memory starvation? Finally, since there is a Java application at the core of the setup, I've also thought about: 1) tracing the heap's object allocation to detect potential memory leaks; 2) doing some performance profiling to see if this is instead related to networking delays. I mention this since the application talks with a single Oracle database, but I would doubt this to be the case since they're pretty close from a network segmentation perspective. I appreciate any kind of insight and feedback you could provide. Thanks for your time and help.

    Server 1:

        40 processes: 38 sleeping, 1 zombie, 1 on cpu
        CPU states: 99.1% idle, 0.4% user, 0.4% kernel, 0.0% iowait, 0.0% swap
        Memory: 2048M real, 295M free, 865M swap in use, 3788M swap free

        PID   USERNAME THR PRI NICE SIZE  RES   STATE TIME   CPU   COMMAND
        12676 webservd 112 29  10   616M  242M  sleep 103:37 0.48% webservd
        18317 root     1   59  0    23M   19M   sleep 67:24  0.08% perl
        9479  support  1   59  0    6696K 2448K cpu/1 0:11   0.05% top
        8012  root     10  59  0    34M   704K  sleep 80:54  0.04% java
        1881  root     33  29  10   110M  13M   sleep 33:03  0.02% webservd
        7808  root     1   59  0    83M   67M   sleep 7:59   0.00% perl
        1461  root     20  59  0    5328K 1392K sleep 6:49   0.00% syslogd
        1691  root     2   59  0    27M   680K  sleep 4:22   0.00% webservd
        24386 root     1   59  0    15M   11M   sleep 2:50   0.00% perl
        23259 root     1   59  0    11M   4240K sleep 2:42   0.00% perl
        24718 root     1   59  0    11M   5464K sleep 2:29   0.00% perl
        22810 root     1   59  0    19M   11M   sleep 2:21   0.00% perl
        24451 root     1   53  2    11M   3800K sleep 2:18   0.00% perl
        18501 root     1   56  1    11M   3960K sleep 2:18   0.00% perl
        14450 root     1   56  1    15M   6920K sleep 1:49   0.00% perl

    Server 2:

        42 processes: 40 sleeping, 1 zombie, 1 on cpu
        CPU states: 98.8% idle, 0.4% user, 0.8% kernel, 0.0% iowait, 0.0% swap
        Memory: 1024M real, 31M free, 554M swap in use, 3696M swap free

        PID   USERNAME THR PRI NICE SIZE  RES   STATE TIME   CPU   COMMAND
        5607  webservd 74  29  10   284M  173M  sleep 20:14  0.21% webservd
        15919 support  1   59  0    4056K 2520K cpu/1 0:08   0.09% top
        13138 root     10  59  0    34M   1952K sleep 210:51 0.08% java
        13753 root     1   59  0    22M   12M   sleep 170:15 0.07% perl
        22979 root     33  29  10   112M  7864K sleep 85:07  0.04% webservd
        22930 root     1   59  0    3424K 1552K sleep 17:47  0.01% xntpd
        22978 root     2   59  0    27M   2296K sleep 10:49  0.00% webservd
        13571 root     1   59  0    9400K 5112K sleep 5:52   0.00% perl
        5606  root     2   29  10   29M   9056K sleep 0:36   0.00% webservd
        15910 support  1   59  0    9128K 2616K sleep 0:00   0.00% sshd
        13106 root     1   59  0    82M   3520K sleep 7:47   0.00% perl
        13547 root     1   59  0    12M   5528K sleep 6:38   0.00% perl
        13518 root     1   59  0    9336K 3792K sleep 6:24   0.00% perl
        13399 root     1   56  1    8072K 3616K sleep 5:18   0.00% perl
        13557 root     1   53  2    8248K 3624K sleep 5:12   0.00% perl

    Read the article

  • AMD 24-core server memory bandwidth

    - by ntherning
    I need some help to determine whether the memory bandwidth I'm seeing under Linux on my server is normal or not. Here's the server spec:

        HP ProLiant DL165 G7
        2x AMD Opteron 6164 HE 12-Core
        40 GB RAM (10 x 4GB DDR1333)
        Debian 6.0

    Using mbw on this server I get the following numbers:

        foo1:~# mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.58047 MiB: 1024.00000 Copy: 1764.082 MiB/s
        1   Method: MEMCPY  Elapsed: 0.58012 MiB: 1024.00000 Copy: 1765.152 MiB/s
        2   Method: MEMCPY  Elapsed: 0.58010 MiB: 1024.00000 Copy: 1765.201 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.58023 MiB: 1024.00000 Copy: 1764.811 MiB/s
        0   Method: DUMB    Elapsed: 0.36174 MiB: 1024.00000 Copy: 2830.778 MiB/s
        1   Method: DUMB    Elapsed: 0.35869 MiB: 1024.00000 Copy: 2854.817 MiB/s
        2   Method: DUMB    Elapsed: 0.35848 MiB: 1024.00000 Copy: 2856.481 MiB/s
        AVG Method: DUMB    Elapsed: 0.35964 MiB: 1024.00000 Copy: 2847.310 MiB/s
        0   Method: MCBLOCK Elapsed: 0.23546 MiB: 1024.00000 Copy: 4348.860 MiB/s
        1   Method: MCBLOCK Elapsed: 0.23544 MiB: 1024.00000 Copy: 4349.230 MiB/s
        2   Method: MCBLOCK Elapsed: 0.23544 MiB: 1024.00000 Copy: 4349.359 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.23545 MiB: 1024.00000 Copy: 4349.149 MiB/s

    On one of my other servers (based on Intel Xeon E3-1270):

        foo2:~# mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.18960 MiB: 1024.00000 Copy: 5400.901 MiB/s
        1   Method: MEMCPY  Elapsed: 0.18922 MiB: 1024.00000 Copy: 5411.690 MiB/s
        2   Method: MEMCPY  Elapsed: 0.18944 MiB: 1024.00000 Copy: 5405.491 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.18942 MiB: 1024.00000 Copy: 5406.024 MiB/s
        0   Method: DUMB    Elapsed: 0.14838 MiB: 1024.00000 Copy: 6901.200 MiB/s
        1   Method: DUMB    Elapsed: 0.14818 MiB: 1024.00000 Copy: 6910.561 MiB/s
        2   Method: DUMB    Elapsed: 0.14820 MiB: 1024.00000 Copy: 6909.628 MiB/s
        AVG Method: DUMB    Elapsed: 0.14825 MiB: 1024.00000 Copy: 6907.127 MiB/s
        0   Method: MCBLOCK Elapsed: 0.04362 MiB: 1024.00000 Copy: 23477.623 MiB/s
        1   Method: MCBLOCK Elapsed: 0.04262 MiB: 1024.00000 Copy: 24025.151 MiB/s
        2   Method: MCBLOCK Elapsed: 0.04258 MiB: 1024.00000 Copy: 24048.849 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.04294 MiB: 1024.00000 Copy: 23847.599 MiB/s

    For reference, here's what I get on my Intel-based laptop:

        laptop:~$ mbw -n 3 1024
        Long uses 8 bytes. Allocating 2*134217728 elements = 2147483648 bytes of memory.
        Using 262144 bytes as blocks for memcpy block copy test.
        Getting down to business... Doing 3 runs per test.
        0   Method: MEMCPY  Elapsed: 0.40566 MiB: 1024.00000 Copy: 2524.269 MiB/s
        1   Method: MEMCPY  Elapsed: 0.38458 MiB: 1024.00000 Copy: 2662.638 MiB/s
        2   Method: MEMCPY  Elapsed: 0.38876 MiB: 1024.00000 Copy: 2634.043 MiB/s
        AVG Method: MEMCPY  Elapsed: 0.39300 MiB: 1024.00000 Copy: 2605.600 MiB/s
        0   Method: DUMB    Elapsed: 0.30707 MiB: 1024.00000 Copy: 3334.745 MiB/s
        1   Method: DUMB    Elapsed: 0.30425 MiB: 1024.00000 Copy: 3365.653 MiB/s
        2   Method: DUMB    Elapsed: 0.30342 MiB: 1024.00000 Copy: 3374.849 MiB/s
        AVG Method: DUMB    Elapsed: 0.30491 MiB: 1024.00000 Copy: 3358.328 MiB/s
        0   Method: MCBLOCK Elapsed: 0.07875 MiB: 1024.00000 Copy: 13003.670 MiB/s
        1   Method: MCBLOCK Elapsed: 0.08374 MiB: 1024.00000 Copy: 12228.034 MiB/s
        2   Method: MCBLOCK Elapsed: 0.07635 MiB: 1024.00000 Copy: 13411.216 MiB/s
        AVG Method: MCBLOCK Elapsed: 0.07961 MiB: 1024.00000 Copy: 12862.006 MiB/s

    So according to mbw my laptop is 3 times faster than the server!!! Please help me explain this. I've also tried to mount a ram disk and use dd to benchmark it, and I get similar differences, so I don't think mbw is to blame. I've checked the BIOS settings and the memory seems to be running at full speed. According to the hosting company the modules are all OK. Could this have something to do with NUMA? It seems like Node Interleaving is disabled on this server. Will enabling it (thus turning off NUMA) make a difference?

        foo1:~# numactl --hardware
        available: 4 nodes (0-3)
        node 0 cpus: 0 1 2 3 4 5
        node 0 size: 8190 MB
        node 0 free: 7898 MB
        node 1 cpus: 6 7 8 9 10 11
        node 1 size: 12288 MB
        node 1 free: 12073 MB
        node 2 cpus: 18 19 20 21 22 23
        node 2 size: 12288 MB
        node 2 free: 12034 MB
        node 3 cpus: 12 13 14 15 16 17
        node 3 size: 8192 MB
        node 3 free: 8032 MB
        node distances:
        node   0   1   2   3
          0:  10  20  20  20
          1:  20  10  20  20
          2:  20  20  10  20
          3:  20  20  20  10
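
    As a sanity check independent of mbw, a few lines of C++ reproduce its MEMCPY test closely enough to compare machines: allocate two 1 GiB buffers, copy one into the other, and divide bytes moved by elapsed time (this needs roughly 2 GiB of free RAM; it is a sketch, not mbw itself):

        #include <chrono>
        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main() {
            const std::size_t mib = 1024;                    // same size as: mbw -n 3 1024
            const std::size_t bytes = mib * 1024 * 1024;
            std::vector<char> src(bytes, 1), dst(bytes, 0);  // touches the pages up front
            for (int run = 0; run < 3; ++run) {
                auto t0 = std::chrono::steady_clock::now();
                std::memcpy(dst.data(), src.data(), bytes);
                std::chrono::duration<double> dt = std::chrono::steady_clock::now() - t0;
                std::printf("%d Method: MEMCPY Elapsed: %.5f Copy: %.3f MiB/s\n",
                            run, dt.count(), mib / dt.count());
            }
            return 0;
        }

    On a NUMA box it is worth running such a test pinned to one node at a time (for example under numactl --cpunodebind=0 --membind=0) so that local and remote-node bandwidth show up separately.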

    Read the article

  • Using UDF on a USB flash drive

    - by CesarB
    After failing to copy a file bigger than 4 GB to my 8 GB USB flash drive, I formatted it as ext3. While this is working fine for me so far, it will cause problems if I want to use it to copy files to someone who does not use Linux. I am thinking of formatting it as UDF instead, which I hope would allow it to be read (and possibly even written) on the three most popular operating systems (Windows, MacOS, and Linux), without having to install any extra drivers. However, from what I have already found on the web, there seem to be several small gotchas related to which parameters are used to create the filesystem, which can reduce the compatibility (but most of the pages I found are about optical media, not USB flash drives). I would like to know: Which utility should I use to create the filesystem? (So far I have found mkudffs and genisoimage, and mkudffs seems the best option.) Which parameters should I use with the chosen utility for maximum compatibility? How compatible is UDF actually with the most common versions of these three operating systems? Is using UDF actually the best idea, or is there another filesystem which would have better compatibility, with no problematic restrictions like the FAT32 4 GB file size limit, and without having to install special drivers in every single computer which touches it?

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from, but I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem. First off I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: in the list of drives it comes up as

        Disk /dev/sdc - 6144 B - USB Flash Drive

    That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with:

        Partition sector doesn't have the endmark 0xAA55

    TestDisk's Quick Search gave no results, so I moved on to Deeper Search:

        No partition found or selected for recovery

    This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive. But that returned the message:

        Windows was unable to complete the format.

    From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result:

        Format D: /FS:FAT32
        The type of the file system is RAW
        The new file system is FAT32
        Verifying 0M
        11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned
        The volume is too small for FAT32

    Anyone got any suggestions?

    UPDATE: As per a suggestion from @Karen, I tried running CLEAN from DISKPART; results as follows:

        DiskPart has encountered an error: The request could not be performed because of an I/O device error.

    Read the article

  • Actionscript 3: Force program to wait until event handler is called

    - by Jeremy Swinarton
    I have an AS 3.0 class that loads a JSON file in using a URLRequest.

        package {
            import flash.display.MovieClip;
            import flash.display.Loader;
            import flash.net.URLRequest;
            import flash.net.URLLoader;
            import flash.events.Event;

            public class Tiles extends MovieClip {
                private var mapWidth:int, mapHeight:int;
                private var mapFile:String;
                private var mapLoaded:Boolean = false;

                public function Tiles(m:String) {
                    init(m);
                }

                private function init(m:String):void {
                    // Initiates the map arrays for later use.
                    mapFile = m;
                    // Load the map file in.
                    var loader:URLLoader = new URLLoader();
                    loader.addEventListener(Event.COMPLETE, mapHandler);
                    loader.load(new URLRequest("maps/" + mapFile));
                }

                private function mapHandler(e:Event):void {
                    mapLoaded = true;
                    mapWidth = 3000;
                }

                public function getMapWidth():int {
                    if (mapLoaded) {
                        return (mapWidth);
                    } else {
                        getMapWidth();
                        return (-1);
                    }
                }
            }
        }

    When the file is finished loading, the mapHandler event makes changes to the class properties, which in turn are accessed using the getMapWidth function. However, if the getMapWidth function gets called before the file finishes loading, the program will fail. How can I make the class wait to accept function calls until after the file is loaded?

    Read the article

  • Can VS2010 help me find memory leaks?

    - by Andrew Garrison
    I'm going through the pain right now of finding memory leaks in my application using WinDbg. Luckily, I've found a few good articles that give a very good step-by-step process of how to do it. Still, it is a fairly painful process. Does VS2010 have any built-in features that can ease the burden of finding a memory leak in a Silverlight application? Of course, a memory leak in .NET sounds a bit like a misnomer, but what I intend to do is to find all objects that are still referencing an object that I believe should be garbage collected. For those that may be interested, here are some good articles on how to get started using WinDbg to find memory leaks in Silverlight:

    Finding Memory Leaks In Silverlight With WinDbg
    Hunting down memory leaks in Silverlight

    Read the article

  • How does a hard drive compare to Flash memory working as a hard drive in terms of speed?

    - by Jian Lin
    In some experiments I did with hard drive read/write speed, I measured 10 MB/s write and 40 MB/s read, while a USB flash drive can be 5 MB/s write and 10 MB/s read. Also, if I put a virtual hard drive .vhd file on a hard drive or on a USB flash drive and run a virtual machine from it, the one using the hard drive is quite fast, while the one using the USB flash drive is close to unusable. So I wonder: some early netbooks used 4 GB or 8 GB of flash memory as the hard drive, and even the Apple Mac Air has an option of using flash memory instead of a hard drive. In those situations, will the speed be slower than using a hard drive, as in the case of a USB flash drive?
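
    The gap described above is mostly about random access rather than sequential throughput: a VM image generates many small scattered reads and writes, which is exactly where cheap USB flash controllers collapse while hard drives merely slow down. Both patterns can be measured with a short C++ sketch (the file path is a placeholder; put it on the drive under test, and note that the OS page cache will flatter these numbers unless the file is much larger than RAM):

        #include <chrono>
        #include <cstdio>
        #include <cstdlib>
        #include <vector>

        int main() {
            const char* path = "testfile.bin";  // placeholder: place on the drive to measure
            const std::size_t mb = 256, block = 1 << 20;
            std::vector<char> buf(block, 42);

            // Sequential write of 256 MB in 1 MB blocks.
            std::FILE* f = std::fopen(path, "wb");
            auto t0 = std::chrono::steady_clock::now();
            for (std::size_t i = 0; i < mb; ++i)
                std::fwrite(buf.data(), 1, block, f);
            std::fflush(f);
            std::fclose(f);
            std::chrono::duration<double> wt = std::chrono::steady_clock::now() - t0;
            std::printf("sequential write: %.1f MB/s\n", mb / wt.count());

            // 2000 random 4 KiB reads across the same file.
            f = std::fopen(path, "rb");
            char page[4096];
            t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < 2000; ++i) {
                long off = (std::rand() % static_cast<long>(mb * block / 4096)) * 4096L;
                std::fseek(f, off, SEEK_SET);
                std::fread(page, 1, sizeof page, f);
            }
            std::chrono::duration<double> rt = std::chrono::steady_clock::now() - t0;
            std::fclose(f);
            std::printf("random 4 KiB reads: %.0f reads/s\n", 2000 / rt.count());
            return 0;
        }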

    Read the article

  • Setting up SQL Server 2005 to use all available memory in 32-bit Windows Server 2003 - and verifying

    - by Rizwan Kassim
    There are a number of questions along this line - but they either sometimes contradict each other, or don't show how to properly verify that everything is actually working - hopefully this can be comprehensive. I'm running SQL Server 2005 SP3 Standard on Windows Server 2003 R2 Standard. My server has 8 GB of memory installed; my system is almost entirely used as a database server - there are some services running on it, but the OS + services can run within 1 GB of RAM.

    What I've done (please tell me if I'm doing something wrong):

    /3GB in the boot.ini. (To increase the amount of user-space memory available - info)
    /PAE in the boot.ini. (Windows claimed to be doing PAE even without this switch, somehow.)
    Enabled AWE in SQL Server.
    Enabled the Lock Pages in Memory option for the users SYSTEM and Local Service. (info). SQL Server Standard doesn't seem to use this until Cumulative Update 4, which isn't installed on my server. (info)
    Set Min/Max Memory to: 1024 MB / 5112 MB.

    After doing all the above, we definitely saw a level of improvement - but I'd now like to verify my settings and make sure that I'm making full use of the memory available. (There appeared to be a slowdown when max = 7 GB, so I edged off from that value, but it might have been just perceptual.) To verify, I checked the following levels in PerfMon:

    Process(sqlserv): Working Set: 76386304
    SQL Server (Memory Manager): Total Server Memory: 3538944 (I saw a doc that noted that this wasn't the full memory used by SQL Server, so I'm not sure whether to trust it)

    So - my questions: Should my max be around 7 GB? If not, what should it be? Why is total server memory at 3.5 GB, when it's been allocated 5 GB? What is the proper metric for the amount of memory allocated to SQL Server? The Working Set seems a bit large. Am I possibly missing any steps in the setup? Any recommended resources on starting to tune the caching system now? Thanks

    Read the article

  • Known memory leaks in 3ds max?

    - by Denise
    I've set up a script in 3ds max to render a bunch of animations into frames. To do this, I open up a file with all of the materials, load an animation (as a bip) onto the figure, then render. We were seeing a problem where eventually the script would fail because it was unable to open the next file - max had consumed all of the system memory. Closing max, of course, freed the memory, and we were able to continue with the script. I checked the heapfree variable, hoping to find a memory leak within my own (maxscript) code - but the amount of free space was the same after every animation. So it must be 3ds max itself that is consuming all of that memory. Nothing in max needs to be saved from animation to animation - is there some way to get max to free that memory? (I've tried resetMaxFile() and manually deleting all of the objects in the scene.) Are there any known sets of operations that cause max to grow out of control?

    Read the article

  • How to install Flash for the Opera browser? (Ubuntu 12.04)

    - by santosamaru
    In the opera:plugins settings, Flash Player is set to enabled. I also tried to follow the instructions from "I am testing the new Opera 11, but it keeps telling me I need to install flash player", but it still does not help me. What I have after following that link's instructions is:

        root@santos:/home/santos# cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/libflashplayer.so
        cp: cannot create regular file `/root/.opera/plugins/libflashplayer.so': No such file or directory
        root@santos:/home/santos# sudo apt-get gecko-mediaplayer
        E: Invalid operation gecko-mediaplayer
        root@santos:/home/santos# cp /usr/lib/flashplugin-installer/libflashplayer.so ~/.opera/plugins/libflashplayer.so
        cp: cannot create regular file `/root/.opera/plugins/libflashplayer.so': No such file or directory

    Can anyone help me solve this?

    Read the article

  • How to find out the memory layout of your data structure implementation on a 64-bit Linux machine

    - by ajay
    In the article http://cacm.acm.org/magazines/2010/7/95061-youre-doing-it-wrong/fulltext the author talks about the memory layouts of two data structures - the binary heap and the B-heap - and compares how one has a better memory layout than the other.

    http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f5.jpg
    http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f6.jpg

    I want to get hands-on experience with this. I have an implementation of an N-ary tree and I want to find out the memory layout of my data structure. What is the best way to come up with a memory layout like the one in the article? Secondly, I think it is easier to identify the memory layout if it is an array-based implementation. If the implementation of a tree uses pointers, then what tools do we have, or what kind of approach is required, to map its memory layout? Thanks!
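
    For a pointer-based tree, one low-tech approach is to print each node's address during a traversal and bucket the addresses by 4 KiB page: nodes that share a page are cheap to visit together, which is the locality the article's figures (f5, f6) visualize. A minimal C++ sketch with a hypothetical node type:

        #include <cstdint>
        #include <cstdio>

        // Hypothetical 4-ary node; substitute the real node type.
        struct Node {
            int key;
            Node* child[4];
        };

        // Print address, page number, and offset within the page for each node.
        void dump_layout(const Node* n, int depth) {
            if (n == nullptr) return;
            std::uintptr_t a = reinterpret_cast<std::uintptr_t>(n);
            std::printf("%*snode %d at %p (page %llu, offset %llu)\n",
                        depth * 2, "", n->key, static_cast<const void*>(n),
                        static_cast<unsigned long long>(a >> 12),
                        static_cast<unsigned long long>(a & 0xfff));
            for (const Node* c : n->child)
                dump_layout(c, depth + 1);
        }

    Nodes allocated one by one with new tend to scatter; an array- or arena-backed allocation makes the clustering (or lack of it) visible immediately in this output. On Linux, valgrind's cachegrind can then quantify the resulting cache-miss rates.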

    Read the article

  • Any large USB sticks with integrated card readers?

    - by Al
    I have one of Kingston's DataTraveller Micro Reader USB sticks, a fantastic memory stick with an integrated microSD and M2 card reader. However, I've gradually filled it to the brim and am looking for a larger stick. Unfortunately, Kingston doesn't make them any bigger than the 4 GB one that I currently have, and I was hoping to go to 16 GB now that they've come down in price. Does anyone know if any manufacturer makes something similar: a 16 GB stick with an integrated microSD card reader? (I'm not bothered about the M2 reader.)

    Read the article

  • Does the speed of memory define its voltage?

    - by Zak
    What I'm asking here is: if I order PC3200 memory, is it all going to be 1.8 V, or will some be 2.5 V and some 1.8 V, etc.? I don't mean variation within a specific part number, but rather across part numbers: is there variation such that some PC3200 memory would be incompatible with other PC3200 memory because it is 2.5 V versus 1.8 V?

    Read the article

  • Avoid Linux out-of-memory application teardown

    - by Eddie Parker
    I'm finding that on occasion my Linux box runs out of memory, and it starts tearing down random processes to deal with it. I'm curious what administrators do to avoid this. Is the only real solution to increase the amount of physical memory (will increasing swap alone help?), or are there better ways to set up the box with software to avoid this (e.g., quotas or some such)? I'd appreciate some feedback. Cheers, -e-

    Read the article
