Search Results

Search found 69840 results on 2794 pages for 'swap file'.


  • How do I configure a swap partition using swapspace?

    - by jcalfee314
    I finally have the swapspace project installed and running (via init.d). The purpose is to have a dynamically re-sizing swap partition. I'm clueless, however, about how to use it. It has good documentation but just does not go into that last step. How do I configure a swap partition using swapspace? The process is probably the same for any 3rd-party program that provides a swap space implementation to the kernel. I know this was intended to run as a process because the project provides an init.d script.

    Read the article

  • Typing filename in standard open file dialog (Windows 7) - file name suggestion

    - by bybor
    When you use the standard Windows open file dialog and start typing, it puts files whose names start with what you type into a drop-down list. But on another PC with the same Windows 7, it also puts the first of them into the input box you are typing in - like Firefox does with URLs - allowing you to immediately press Enter (without pressing 'Down' to select the file). I don't know why this behavior is different, but I want the suggested file name shown in the input box. How can this be achieved? Thanks.

    Read the article

  • Use a Free Tool to Edit, Delete, or Restore the Default Hosts File in Windows

    - by Lori Kaufman
    The hosts file in Windows contains mappings of IP addresses to host names, like an address book for your computer. Your PC uses IP addresses to find websites, so it needs to translate the host names into IP addresses to access websites. When you enter a host name in a browser to visit a website, that host name is looked up in DNS servers to find the IP address. If you enter IP addresses and host names for websites you visit often, these websites will load faster, because the hosts file is loaded into memory when Windows starts and overrides DNS server queries, creating a shortcut to the sites. Because the hosts file is checked first, you can also use it to block websites from tracking your activities on the internet, as well as block ads, banners, third-party cookies, and other intrusive elements on webpages. Your computer has its own host address, known as its "localhost" address. The IP address for localhost is 127.0.0.1. To block sites and website elements, you can enter the host name for the unwanted site in the hosts file and associate it with the localhost address. Blocking ads and other undesirable webpage elements can also speed up the loading of websites, since you don't have to wait for all those items to load. The default hosts file that comes with Windows does not contain any host name/IP address mappings. You can add mappings manually, such as the IP address 74.125.224.72 for www.google.com. As an example of blocking an ad server website, you can enter the following line in your hosts file to block doubleclick.net from serving you ads.
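
    The example line itself did not survive in this listing; based on the localhost-blocking technique the article describes, an entry of that kind would look like this (the host name is just the article's own example):

      127.0.0.1    doubleclick.net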

    Read the article

  • Swap implications in Linux and ways to increase it

    - by vimalnath
    I used the top command to print this on a Linux box:

      [root@localhost ~]# top
      top - 23:38:38 up 361 days, 12:16, 2 users, load average: 0.09, 0.06, 0.01
      Tasks: 129 total, 2 running, 126 sleeping, 1 stopped, 0 zombie
      Cpu(s): 0.0% us, 0.2% sy, 0.0% ni, 96.5% id, 3.4% wa, 0.0% hi, 0.0% si
      Mem:  2074712k total, 1996948k used, 77764k free, 16632k buffers
      Swap: 1052248k total, 1052248k used, 0k free, 331540k cached

    I am not sure what "Swap: ... 0k free" in the last line means. Is it normal behavior for a Linux box to have a value of 0? Thanks
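
    For reference, a common way to add more swap to a box like this is to create and enable a swap file (a hedged sketch; the 1 GB size and the /swapfile path are illustrative placeholders, not taken from the question):

      # create a 1 GB file and restrict its permissions
      dd if=/dev/zero of=/swapfile bs=1M count=1024
      chmod 600 /swapfile
      # format it as swap and enable it
      mkswap /swapfile
      swapon /swapfile
      # to make it persistent, add this line to /etc/fstab:
      # /swapfile  none  swap  sw  0  0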

    Read the article

  • How to disable Mac OS X from using swap when there still is "Inactive" memory?

    - by Motin
    A common phenomenon in my day-to-day usage of OS X (and in several other people's, according to various posts throughout the internet) is that the system seems to become slow whenever there is no more "Free" memory available. Supposedly this is due to swapping, since heavy disk activity is apparent and vm_stat reports many pageouts. (Correct me if I'm wrong.) However, the amount of "Inactive" RAM is typically around 12.5%-25% of all available memory (^1.) when swapping starts/occurs/ends.

    According to http://support.apple.com/kb/ht1342 : "Inactive memory: This information in memory is not actively being used, but was recently used. For example, if you've been using Mail and then quit it, the RAM that Mail was using is marked as Inactive memory. This Inactive memory is available for use by another application, just like Free memory. However, if you open Mail before its Inactive memory is used by a different application, Mail will open quicker because its Inactive memory is converted to Active memory, instead of loading Mail from the slower hard disk."

    And according to http://developer.apple.com/library/mac/#documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html : "The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be released from memory at any time."

    So, basically: when a program has quit, its memory becomes marked as Inactive and should be claimable at any time. Still, OS X will prefer to start swapping out memory to the swap file instead of just claiming this memory whenever the "Free" memory gets too low. Why? What is the advantage of this behavior over, say, instantly releasing Inactive memory and not even touching the swap file? Some sources (^2.) indicate that OS X would page out the "Inactive" memory to swap before releasing it, but that doesn't make sense if the memory may be released at any time. Swapping is expensive, releasing is cheap, right? Can this behavior be changed using some preference or known hack? (Preferably one that doesn't involve disabling swap/dynamic_pager altogether and restarting...)

    I do appreciate the purge command, as well as the concept of repairing disk permissions to force some Free memory, but those are ways to painfully force more Free memory rather than ways of actually fixing the swap/release decision logic... Btw, a similar question was asked here: http://forums.macnn.com/90/mac-os-x/434650/why-does-os-x-swap-when/ and here: http://hintsforums.macworld.com/showthread.php?t=87688 but even though the OPs re-asked the core question, none of the replies addresses an answer to it...

    ^1. UPDATE 17-mar-2012: Since I first posted this question, I have gone from 4 GB to 8 GB of installed RAM, and the problem remains. The amount of "Inactive" RAM was 0.5-1.0 GB before and is now typically around 1.0-2.0 GB when swapping starts/occurs/ends, i.e. it seems that around 12.5%-25% of the RAM is preserved as Inactive by OS X kernel logic.

    ^2. For instance http://apple.stackexchange.com/questions/4288/what-does-it-mean-if-i-have-lots-of-inactive-memory-at-the-end-of-a-work-day : "Once all your memory is used (free memory is 0), the OS will write out inactive memory to the swapfile to make more room in active memory."

    UPDATE 17-mar-2012: Here is a round-up of the methods that have been suggested to help so far:

    The purge command: "Used to approximate initial boot conditions with a cold disk buffer cache for performance analysis. It does not affect anonymous memory that has been allocated through malloc, vm_allocate, etc." This is useful to prevent OS X from swapping out the disk cache (which is ridiculous that it does in the first place), but with the downside that the disk cache is released, meaning that if the disk cache was not about to be swapped out, one simply ends up with a cold disk buffer cache, probably affecting performance negatively.

    The FreeMemory app and/or repairing disk permissions to force some Free memory: This doesn't help release any memory; it only moves some gigabytes of memory contents from RAM to the HD. In the end, it causes lots of swap-ins when I attempt to use the applications that were open while freeing memory, as a lot of their vm is now on swap.

    Speeding up swap allocation using dynamicpagerwrapper: Seems a good thing to do in order to speed up swap usage, but does not address the problem of OS X swapping in the first place while there is still Inactive memory.

    Disabling swap by disabling dynamicpager and restarting: This will force OS X not to use swap, at the price of the system hanging when all memory is used. Not a viable alternative...

    Disabling swap using a hacked dynamicpager: Similar to disabling dynamicpager above; some excerpts from the comments to the blog post indicate that this is not a viable solution: "The Inactive Memory is high as usual", "when your system is running out of memory, the whole os hangs...", "if you consume the whole amount of memory of the mac, the machine will likely hang".

    To sum up, I am still unaware of a way to stop Mac OS X from using swap when there is still "Inactive" memory. If it isn't possible, maybe at least there is an explanation somewhere of why OS X prefers to swap out memory that may be released from memory at any time?

    Read the article

  • How do I get 12.04 to recognize swap partition so that I can hibernate?

    - by Kayla
    I just installed 12.04 and used GParted to erase and enlarge my swap partition. When I rebooted, GParted said that the file system for the swap was unknown. GParted doesn't let me change the file system to "linux-swap". It does let me change it to NTFS, but when I reboot, it goes back to "unknown". Thanks in advance for your help.

    Output from sudo swapon -s:

      Filename                  Type        Size     Used  Priority
      /dev/mapper/cryptswap1    partition   9025532  0     -1

    Output from sudo fdisk -l:

      Disk /dev/sda: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x9d63ac84

      Device Boot      Start        End     Blocks  Id  System
      /dev/sda1   *     2048     2459647    1228800   7  HPFS/NTFS/exFAT
      /dev/sda2      2459648   197836472  97688412+   7  HPFS/NTFS/exFAT
      /dev/sda3    466890752   488395119   10752184   7  HPFS/NTFS/exFAT
      /dev/sda4    197836798   466890751  134526977   5  Extended
      /dev/sda5    197836800   448837631  125500416  83  Linux
      /dev/sda6    448839680   466890751    9025536  82  Linux swap / Solaris

      Partition table entries are not in disk order

      Disk /dev/mapper/cryptswap1: 9242 MB, 9242148864 bytes
      255 heads, 63 sectors/track, 1123 cylinders, total 18051072 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x951b7f53

      Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

    Read the article

  • Specifying a file name for the FTP and File based transports in OSB

    - by [email protected]
    A common question I receive is how to incorporate a variable value into a file name when using the FTP, SFTP, or File transports in Oracle Service Bus.  For example, if one of the fields in a message being put down to a file by the File transport is an order number variable, how can you make the order number become part of the file name?  Another example might be if you want to specify the date in the file name.  The transport configuration wizard in OSB does not have an option to allow for this, other than letting you specify a static prefix or suffix.

    Read the article

  • Moving the OS X swap file to a faster drive

    - by Milky Joe
    I have a new Mac Mini that's running the latest version of Snow Leopard. The internal drive is a bit of a slouch. I'd like to move the swap file (or whatever it's called in OS X) to my faster external drive (FireWire 800, permanently connected). Is this possible? I've read that the old solutions aren't working in 10.6. My Mac has 2GB of RAM, so the swap file is used quite a bit when I'm doing intensive work (Photoshop etc.).

    Read the article

  • Limiting memory usage and minimizing swap thrashing on Unix / Linux

    - by camelccc
    I have a few machines that I use for running large numbers of jobs, where I try to limit the number of jobs so as not to exceed the available RAM of the machine. Occasionally I mis-estimate how much memory some of the jobs will take, and the machine starts thrashing the swap file. I resolve this by sending kill -s STOP to one of the jobs so that it can get swapped out. Does anyone know of a utility that will monitor a server for processes with a specific name, and then pause the one with the smallest memory footprint if the total memory consumption reaches a desired threshold, so that the larger ones can run and complete with a minimum of swap file thrashing? Paused processes then need to be resumed once some existing processes have completed.
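
    A minimal sketch of such a watchdog (this assumes the third-party psutil package; the process name, threshold and polling interval are placeholders, not anything specified in the question):

      import time
      import psutil

      TARGET_NAME = "myjob"     # hypothetical name of the job processes to watch
      THRESHOLD = 90.0          # pause something once 90% of RAM is in use
      CHECK_INTERVAL = 10       # seconds between checks

      paused = []

      while True:
          if psutil.virtual_memory().percent >= THRESHOLD:
              # candidate processes, smallest resident set size first
              procs = sorted(
                  (p for p in psutil.process_iter(["name", "memory_info"])
                   if p.info["name"] == TARGET_NAME),
                  key=lambda p: p.info["memory_info"].rss)
              if procs:
                  procs[0].suspend()       # sends SIGSTOP, like kill -s STOP
                  paused.append(procs[0])
          elif paused:
              paused.pop().resume()        # SIGCONT once memory pressure eases
          time.sleep(CHECK_INTERVAL)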

    Read the article

  • yum update failed

    - by Nemanja Djuric
    I have problem doint yum update on my OpenVZ VPS i get this error message : (56/69): glibc-devel-2.5-81.el5_8.7.x86_64.rpm | 2.4 MB 00:00 (57/69): libstdc++-devel-4.1.2-52.el5_8.1.x86_64.rpm | 2.8 MB 00:00 (58/69): binutils-2.17.50.0.6-20.el5_8.3.x86_64.rpm | 2.9 MB 00:00 (59/69): cpp-4.1.2-52.el5_8.1.x86_64.rpm | 2.9 MB 00:00 (60/69): device-mapper-multipath-0.4.7-48.el5_8.1.x86_64 | 3.0 MB 00:00 (61/69): mysql-5.1.58-jason.1.x86_64.rpm | 3.5 MB 00:03 (62/69): coreutils-5.97-34.el5_8.1.x86_64.rpm | 3.6 MB 00:00 (63/69): gcc-c++-4.1.2-52.el5_8.1.x86_64.rpm | 3.8 MB 00:00 (64/69): glibc-2.5-81.el5_8.7.x86_64.rpm | 4.8 MB 00:01 (65/69): gcc-4.1.2-52.el5_8.1.x86_64.rpm | 5.3 MB 00:01 (66/69): glibc-2.5-81.el5_8.7.i686.rpm | 5.4 MB 00:01 (67/69): python-libs-2.4.3-46.el5_8.2.x86_64.rpm | 5.9 MB 00:01 (68/69): mysql-server-5.1.58-jason.1.x86_64.rpm | 13 MB 00:07 (69/69): glibc-common-2.5-81.el5_8.7.x86_64.rpm | 16 MB 00:03 -------------------------------------------------------------------------------- Total 2.4 MB/s | 106 MB 00:44 Running rpm_check_debug Running Transaction Test Finished Transaction Test Transaction Check Error: file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/charsets/Index.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/charsets/cp1250.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/charsets/cp1251.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/czech/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/danish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/dutch/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with 
file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/english/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/estonian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/french/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/german/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/greek/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/italian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/japanese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/korean/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/norwegian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/polish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/romanian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/russian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/serbian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/slovak/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/spanish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/swedish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.77-4.el5_6.6.i386 file /etc/my.cnf from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/bin/mysql_find_rows from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/bin/mysqlaccess from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/my_print_defaults.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql.1.gz 
from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_config.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_find_rows.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysql_waitpid.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlaccess.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqladmin.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqldump.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/man/man1/mysqlshow.1.gz from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/Index.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/cp1250.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/charsets/cp1251.xml from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/czech/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/danish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/dutch/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/english/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/estonian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/french/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/german/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/greek/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/hungarian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/italian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/japanese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/korean/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/norwegian-ny/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/norwegian/errmsg.sys from install of 
mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/polish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/portuguese/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/romanian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/russian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/serbian/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/slovak/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/spanish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/swedish/errmsg.sys from install of mysql-5.1.58-jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 file /usr/share/mysql/ukrainian/errmsg.sys from install of mysql-5.1.58- jason.1.x86_64 conflicts with file from package mysql-5.0.95-1.el5_7.1.i386 Error Summary Thank you for help, Best regards, Nemanja

    Read the article

  • Disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0

    - by pjv
    What are the disadvantages of using a swap file/partition on an SSD, even when swappiness is set to 0? I'm particularly interested in the /proc/sys/vm/swappiness=0 case. How many writes are still done, in practice, to that swap file, and does it have a negative impact on the SSD or any other disadvantage? Or would it be nearly the same as not having a swap file at all? I am pretty aware of what swappiness=0 means, just not of what it amounts to in practice. My question stems from a problem I am experiencing without a swap: http://stackoverflow.com/questions/4567972/error-executing-aapt-all-of-the-sudden. There are similar questions regarding SSD and swap but they don't go in-depth into the swappiness=0 case: Disadvantages of not having a swap partition, Should I keep my swap file on an SSD drive?
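
    For context, this is how swappiness is normally inspected and pinned (a sketch of the standard sysctl mechanism; the value 0 mirrors the case discussed in the question):

      # check the current value
      cat /proc/sys/vm/swappiness
      # set it for the running system
      sudo sysctl vm.swappiness=0
      # make it persistent by adding this line to /etc/sysctl.conf:
      # vm.swappiness = 0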

    Read the article

  • Folder/File permission transfer between alike file structure

    - by Tyler Benson
    So my company has recently upgraded to a new SAN, but the person who copied all the data over must have done a drag n' drop or basic copy to move everything. Apparently Xcopy is not something he cared to use. So now I am left with the task of duplicating all the permissions over. The structure has changed a bit (as in, more files/folders have been added) but for the most part has stayed unchanged. I'm looking for suggestions to help automate this process. Can I use XCopy to transfer ONLY permissions from one tree to another? Would it just ignore any folders/permissions that don't line up correctly? Thanks a ton in advance, Tyler

    Read the article

  • C# File Exception: cannot access the file because it is being used by another process

    - by Lirik
    I'm trying to download a file from the web and save it locally, but I get an exception: "The process cannot access the file 'blah' because it is being used by another process." This is my code:

      File.Create("data.csv"); // create the file
      request = (HttpWebRequest)WebRequest.CreateDefault(new Uri(url));
      request.Timeout = 30000;
      response = (HttpWebResponse)request.GetResponse();
      using (Stream file = File.OpenWrite("data.csv"), // <-- Exception here
                    input = response.GetResponseStream())
      {
          // Save the file using Jon Skeet's CopyStream method
          CopyStream(input, file);
      }

    I've seen numerous other questions with the same exception, but none of them seem to apply here. Any help?
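
    For what it's worth, the usual culprit in code shaped like this is that File.Create returns an open FileStream which keeps the file locked until it is disposed. A sketch of the common fix (same hypothetical url and file name as above) is to drop the separate Create call and let OpenWrite create the file:

      var request = (HttpWebRequest)WebRequest.Create(url);
      request.Timeout = 30000;
      using (var response = (HttpWebResponse)request.GetResponse())
      using (Stream input = response.GetResponseStream())
      using (Stream file = File.OpenWrite("data.csv")) // creates the file if it doesn't exist
      {
          // Stream.CopyTo (available since .NET 4) replaces a hand-rolled CopyStream helper
          input.CopyTo(file);
      }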

    Read the article

  • AtomicSwap instead of AtomicCompareAndSwap ?

    - by anon
    I know that on Mac OS X / POSIX systems, there is atomic compare-and-swap for C/C++ code via g++. However, I don't need the compare - I just want to atomically swap two values. Is there an atomic swap operation available? [Everything I can find is atomic_compare_and_swap ... and I just want to do the swap, without comparing]. Thanks!
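
    For reference, both the GCC builtins and C++11 atomics expose a plain exchange with no comparison involved (a sketch; the variable names are illustrative):

      #include <atomic>

      std::atomic<int> shared{0};

      int swap_in(int new_value) {
          // Atomically stores new_value and returns the previous value.
          return shared.exchange(new_value);
      }

      // Pre-C++11, the GCC builtin __sync_lock_test_and_set(&plain_int, new_value)
      // performs a similar atomic exchange (with an acquire barrier only).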

    Read the article

  • How to Customize the File Open/Save Dialog Box in Windows

    - by Lori Kaufman
    Generally, there are two kinds of Open/Save dialog boxes in Windows. One kind looks like Windows Explorer, with the tree on the left containing Favorites, Libraries, Computer, etc. The other kind contains a vertical toolbar, called the Places Bar. The Windows Explorer-style Open/Save dialog box can be customized by adding your own folders to the Favorites list. You can then click the arrows to the left of the main items, except the Favorites, to collapse them, leaving only the list of default and custom Favorites. The Places Bar is located along the left side of the File Open/Save dialog box and contains buttons providing access to frequently-used folders. The default buttons on the Places Bar are links to Recent Places, Desktop, Libraries, Computer, and Network. However, you can change these links into links to custom folders of your choice. We will show you how to customize the Places Bar using the registry, and using a free tool in case you are not comfortable making changes in the registry.

    Read the article

  • How to recover data from NTFS partition that was made into a Swap partition?

    - by Raghav Mehta
    I have extremely important stuff on my Windows partition, which, during the Ubuntu 10.10 installation, when it said that I should create something called swap space, I selected to be a swap space (without even knowing what it actually meant). GRUB 2 doesn't show up, so I don't get a choice to boot Ubuntu or Windows. I don't get my Windows partition as a removable device in Ubuntu either. When I go to Disk Utility, select sda2 (i.e. my Windows partition), click Edit Partition, select HPFS/NTFS for the type, tick bootable and click OK, the small processing sign keeps rotating at the bottom right of sda2 in the chart, and after about 10 to 15 minutes it gives an unknown error, so I am still unable to use my Windows. I am even worse than a beginner who doesn't know a thing about Ubuntu, so please be patient and help me out.

    Read the article

  • How large of a swap partition is needed to hibernate?

    - by Closure Cowboy
    I've read this question, but it doesn't definitively answer my question. If I want my computer to be able to hibernate, do I need to have a swap partition as large as my RAM, or will Ubuntu wisely be able to hibernate if the swap partition can fit the currently-in-use RAM? I'm about to install Ubuntu on a computer with a lot of RAM, and a relatively small hard drive, so I don't want to use more hard drive space than necessary. I wanted to avoid giving my actual specifications to keep this question more general, though I'll give them if necessary.

    Read the article

  • Opening a file opens the folder the file is in, not the file itself

    - by Pepe Lebuntu
    Whenever I try to open a file (such as an .odt or .doc) from, say, the Dash or the Firefox Downloads, Ubuntu 11.10 opens Nautilus at the folder where the file is, rather than just going to the application and loading the file straight away. In previous releases, when I clicked on a downloaded file, it just went straight to LibreOffice, and it was fine. This is adding a superfluous step to the process. How do I associate the correct extensions?

    Read the article

  • Windows Swap (Page File): Enable or Disable?

    - by d03boy
    From my personal experience I've noticed that disabling the page file in Windows XP has given me, in general, the most speed gain out of any other software change I can make. Obviously this has to be done when a significant amount of RAM is available. Typically I find that it works nicely with +2GB of RAM. The only issues I've ever really had were loading up Adobe Photoshop. Is this really a speed improvement or am I imagining it? Note: In order to actually turn it off, you must not just set it to 0MB, but disable it. Otherwise Windows will just expand it when it needs to in order to meet its needs.

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
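
    One common heuristic, since the receiving side usually cannot see an scp transfer's state directly, is to wait until the file's size has stopped changing before processing it (a sketch only; the path and settle time are hypothetical):

      import os
      import time

      def transfer_complete(path, settle_seconds=10):
          """Treat the file as complete once its size is unchanged for settle_seconds."""
          last_size = -1
          while True:
              size = os.path.getsize(path)
              if size == last_size:
                  return True
              last_size = size
              time.sleep(settle_seconds)

      # usage: block until the incoming file looks stable, then process it
      # transfer_complete(r"C:\incoming\data_20130101.csv")

    Another frequently used convention is to have the sender upload to a temporary name and rename the file only when the transfer finishes, since the rename is effectively atomic.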

    Read the article

  • PHP File Downloading Questions

    - by nsearle
    Hey All! I am currently running into some problems with users downloading a file stored on my server. I have code set up to auto-download a file once the user hits the download button. It is working for all files, but when the size gets larger than 30 MB it is having issues. Is there a limit on user downloads? Also, I have supplied my example code and am wondering if there is a better practice than using the PHP function 'file_get_contents'. Thank you all for the help!

      $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
      $filename = 'filename.zip';
      $filesize = filesize($path . $filename);
      @header("Content-type: application/zip");
      @header("Content-Disposition: attachment; filename=$filename");
      @header("Content-Length: $filesize");
      echo file_get_contents($path . $filename);
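
    A sketch of the usual lower-memory alternative: readfile() streams the file to the client instead of building the whole string in memory as file_get_contents() does (the paths are the same placeholders used above):

      <?php
      $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
      $filename = 'filename.zip';
      $filesize = filesize($path . $filename);

      header("Content-Type: application/zip");
      header("Content-Disposition: attachment; filename=$filename");
      header("Content-Length: $filesize");

      // Drop any output buffers so the data is not accumulated in PHP's memory,
      // then stream the file straight to the client.
      while (ob_get_level()) {
          ob_end_clean();
      }
      readfile($path . $filename);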

    Read the article

  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file, and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with 2 file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

      IN1 = open("myfile.dat", "r")
      IN2 = open("myfile.dat", "r")
      for line1 in IN1:
          for line2 in IN2:
              print line1.strip(), line2.strip()
      IN1.close()
      IN2.close()

    The result:

      Hello Hello
      Hello World
      Hello This
      Hello is
      Hello an
      Hello Example
      Hello of
      Hello Using
      Hello Two
      Hello File
      Hello Pointers
      Hello to
      Hello Read
      Hello One
      Hello File

    The output should contain 15^2 lines.
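
    For reference, the truncated output comes from IN2 being exhausted after the first pass through the outer loop; a sketch of the usual fix is to rewind the inner handle on every outer iteration:

      IN1 = open("myfile.dat", "r")
      IN2 = open("myfile.dat", "r")
      for line1 in IN1:
          IN2.seek(0)  # rewind so the inner loop sees the whole file again
          for line2 in IN2:
              print line1.strip(), line2.strip()
      IN1.close()
      IN2.close()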

    Read the article

  • Why do I get swap space related errors when I still have lots of free memory in Solaris 10?

    - by Tom Duckering
    I am seeing a few of my services suffering/crashing with errors along the lines of "Error allocating memory" or "Can't create new process" etc. I'm slightly confused by this, since logs show that at the time the system has lots of free memory available (around 26GB in one case) and is not particularly stressed in any other way. After noting a JVM crash with a similar error, with the added query of "Out of swap space?", I dug a little deeper. It turns out that someone has configured our zone with a 2GB swap file. Our zone doesn't have capped memory and currently has access to as much of the 128GB of RAM as it needs. Our SAs are planning to cap this at 32GB when they get the chance. My current thinking is that whilst there is memory aplenty for the OS to allocate, the swap space seems grossly undersized (based on other answers here). It seems as though Solaris wants to make sure there's enough swap space in case things have to swap out (i.e. it's reserving the swap space). Is this thinking right, or is there some other reason that I get memory allocation errors with this large amount of memory free and seemingly undersized swap space?
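
    For reference, additional swap can be added to a Solaris system without repartitioning (a hedged sketch; the size and path are illustrative placeholders, and in a shared zone this is normally something the SAs would do):

      # show reserved/available swap and the configured swap devices
      swap -s
      swap -l
      # create and add a 16 GB swap file (size chosen only for illustration)
      mkfile 16g /export/swapfile
      swap -a /export/swapfile
      # to make it persistent, add an entry like this to /etc/vfstab:
      # /export/swapfile  -  -  swap  -  no  -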

    Read the article

  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux webservers with enough memory. My server has 12 GB and currently uses 4GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an OutOfMemory situation is due to some bug/ddos/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually result in the same crash, but before that it will offload crucial processes like sshd to swap and start to do a lot of swap operations, resulting in a major slowdown. This way, when under ddos, the system may go into a completely unusable condition due to huge lags, and I probably will not be able to log in and kill the webserver process or deny all incoming traffic (all but ssh). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images, and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount them as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, which means: once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK.  But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

      1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
      2: {
      3:     <input type="file" name="file" />
      4:     <input type="submit" value="upload" />
      5: }

    And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        // initialize the storage account and blob client from the connection string in configuration
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}

Now if we select a file in the browser, our application will upload the chunks, in the size we specified, to the server through background ajax calls, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage, and a quick check like the sketch below can confirm the commit succeeded.
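If you prefer to verify the result in code rather than in the portal, the committed block list of the blob can be inspected. The snippet below is only an illustrative sketch and is not part of the original sample; it assumes the same _client field and blob name used above:

// Hypothetical check: list the committed blocks of the blob and sum their sizes.
// The total should match the size of the file that was uploaded.
var container = _client.GetContainerReference("test");
var blob = container.GetBlockBlobReference(name);
var committedBlocks = blob.DownloadBlockList(BlockListingFilter.Committed);
var totalBytes = committedBlocks.Sum(b => b.Length);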
Optimized by Parallel Upload

In the previous example we just uploaded our file in chunks. This solved the problems of the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce a performance problem, since the chunks are uploaded in sequence. In order to improve the upload performance we could modify our client-side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks utilized "async.series", which means all functions will be executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel
        // then invoke the commit ajax call to put blocks into the blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

In this way all chunks will be uploaded to the server at the same time to maximize bandwidth usage. This works well if the file is not very large and the chunk size is not very small. But for a large file it might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. Note that parallel uploading does not change the content of the final blob, because the block order is determined by the ID list we pass to PutBlockList (which was built in index order before uploading), not by the order in which the blocks arrive. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be running at the same time.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel with a concurrency limit of 4
        // then invoke the commit ajax call to put blocks into the blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML 5:
- File.slice: read part of the file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element that we can use to pass the chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide the maximum upload speed, while unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limitation value there is no "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than series.
- There is no significant performance improvement between parallel with 4 threads and 8 threads.
- Transferring the chunks as binaries provides better performance than base64 strings.
- In all cases, a chunk size of 1MB - 2MB gets better performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

