Search Results

Search found 85897 results on 3436 pages for 'flat file source'.

Page 44/3436 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Best Solution for Load Balancing NFS File Access?

    - by DairyKnight
    I'm trying to find an optimal solution for accessing the NFS file share in my company. We have a central file server in North America which receives 30GB~50GB of updated data every day, and it's very slow for our Europe and Asia branches to access directly. Therefore, I'm trying to set up two replica servers on those continents. I'm currently using rsync, but I wonder if there is a better solution that acts more like a distributed RAID, one which allows users to transparently access a file whether it is synced or not, with the request dispatched to the remote server if the file is not yet synced. I'm now looking into DRBD, but it doesn't seem to have this auto-dispatching functionality. Does anyone know of a better solution?

    Read the article

  • Corrupt file indicative of corrupt hard drive?

    - by Elipsicon
    I have noticed that two files on my (almost full) 2 TB hard drive have been corrupted. One file has 20 kB (!) corrupted, i.e. 20 consecutive kB have changed, even though the modification date of the file hasn't changed and I haven't worked with this file for over a year. This tells me that something "below" the file system level has messed with the data, and the only thing I can think of is hardware failure, most likely hard disk failure. I've tested my RAM already and it works flawlessly. I'm using ext4 on Linux, if that is of any help. Is this normal? Is it time to replace my hard drive before something worse happens? What can I do to prevent that from happening in the future? Is there some built-in feature of, or extension to, ext4 that includes additional error correcting code and/or watches files for changes that haven't been caused by the OS?
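
    Not an answer about ext4 itself, but one user-space approach to the last question is to keep a checksum manifest and re-verify it periodically; a mismatch on a file whose modification time hasn't changed points to corruption below the file system. A minimal sketch in Python (the manifest location is a placeholder):

        #!/usr/bin/env python3
        """Build and verify a SHA-256 manifest to detect silent file corruption."""
        import hashlib, json, os, sys

        MANIFEST = "checksums.json"  # placeholder path for the manifest

        def sha256(path, chunk=1024 * 1024):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(chunk), b""):
                    h.update(block)
            return h.hexdigest()

        def build(root):
            # Record a digest for every file under root.
            manifest = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    p = os.path.join(dirpath, name)
                    manifest[p] = sha256(p)
            with open(MANIFEST, "w") as f:
                json.dump(manifest, f, indent=2)

        def verify():
            # Report files whose current digest no longer matches the manifest.
            with open(MANIFEST) as f:
                manifest = json.load(f)
            for p, digest in manifest.items():
                if not os.path.exists(p):
                    print("MISSING", p)
                elif sha256(p) != digest:
                    print("CHANGED", p)  # corrupt, or legitimately modified since build

        if __name__ == "__main__":
            build(sys.argv[2]) if sys.argv[1] == "build" else verify()

    Run it once in "build" mode and then in "verify" mode from cron. File systems such as btrfs and ZFS checksum data natively, which is another option if changing file systems is on the table.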

    Read the article

  • Puppet file transfer slow

    - by Noodles
    I have a puppet master and slaves in different datacenters. The latency between them is ~40ms. When I run "puppet agent --test" on a slave to apply the latest manifest it takes ~360 seconds to finish. After doing some digging I can see the main cause of the slowdown is file transfers: it seems to be taking ~10 seconds to transfer each file. The files are only small (configuration files) so I can't understand why they would take so long. This is an example of a file in my manifest:
        file { "/etc/rsyncd.conf":
          owner  => "root",
          group  => "root",
          mode   => 644,
          source => "puppet:///files/rsyncd/rsyncd.conf",
        }
    Running puppet-profiler I see this:
        10.21s - File[/etc/rsyncd.conf]
    It also seems I cannot update more than one server at once using puppet; if I run two servers at the same time then puppet takes twice as long. I have changed the puppet master from using WEBrick to Mongrel, but this doesn't seem to help. This is making deploying changes painful. A simple config change can take an hour to roll out to all servers.

    Read the article

  • Windows File System Analysis

    - by bouvierr
    I am looking for a FREE tool to perform analyses on the NTFS file system of my Windows 7 PC. I want to easily see how data is distributed throughout the entire file system. The following applications seem very good, but they are not free and are probably overkill for my requirements: FolderSizes 5 and the MailMeter Windows File System Reporting Tool. I am aware that some applications (like Folder Size 2.5) can add a column in Windows Explorer to show the size of each folder, but I am looking for something more like a reporting tool. Thank you for your suggestions.
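
    If a small script counts as a reporting tool, the sketch below (a rough example; the root path is a placeholder) prints the total size of each top-level folder, which covers the basic "where did the space go" report:

        #!/usr/bin/env python3
        """Print the total size of each top-level folder under a given root."""
        import os, sys

        def folder_size(path):
            total = 0
            for dirpath, _, files in os.walk(path, onerror=lambda e: None):
                for name in files:
                    try:
                        total += os.path.getsize(os.path.join(dirpath, name))
                    except OSError:
                        pass  # skip files we cannot stat (permissions, reparse points)
            return total

        root = sys.argv[1] if len(sys.argv) > 1 else "C:\\"  # placeholder default
        sizes = [(folder_size(os.path.join(root, d)), d)
                 for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))]
        for size, name in sorted(sizes, reverse=True):
            print(f"{size / 1024**3:8.2f} GB  {name}")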

    Read the article

  • OSX: Selecting default application for all unknown and different file types (extensions)

    - by Leo
    I work in cluster computing and am using Mac OS X 10.6. I send off hundreds of computing jobs a day, and each one comes back with a different extension. For example, svmGeneSelect.o12345, which is the standard output of my svmGeneSelect job, job number 12345. I don't control the extensions. All files are plain text. I want OSX to open any file extension that it hasn't seen before with my favorite text editor when I click on it. Or, even better, set up file association defaults for extension patterns, i.e. TextEdit for extensions matching *.o*. I do NOT want to create file associations for individual files, since each extension will only ever exist once, and I do not want to go through the process of selecting the application to use for each file. Thanks for any help you can offer.

    Read the article

  • Farseer Physics Engine and the Ms-PL License

    - by Stephen Tierney
    Am I able to produce code for a game which uses the Farseer engine and release my code under an open source license other than the Ms-PL? My concern is with the following section from the license: "If you distribute any portion of the software in source code form, you may do so only under this license by including a complete copy of this license with your distribution. If you distribute any portion of the software in compiled or object code form, you may only do so under a license that complies with this license." If I do not include Farseer in my source code distribution, does this give me an exemption from this clause, as I am not distributing the software? My code merely uses its functions. Nowhere in the license does it force you to provide source code for derivative works or linking works; it simply gives you the option of "if you distribute".

    Read the article

  • Difference between sh file.sh and file.sh

    - by RAS
    I have two questions: What is the difference between executing sh filename.sh and filename.sh? How can I make both of them give me the same output? I'm asking because right now I'm facing a problem. I'm trying to run a Java + SWT application from the terminal. When I do filename.sh, it gives me the desired output. But when I do sh filename.sh or bash filename.sh, it throws an error:
        Exception in thread "main" java.lang.NoClassDefFoundError: MainForm/java
        Caused by: java.lang.ClassNotFoundException: MainForm.java
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
        Could not find the main class: MainForm.java. Program will exit.
    I know this question has already been asked here, but I'm still not clear about it. I have gone through the following links: "What is the difference between ./ and sh to run a script?" and "Can scripts run even when they are not set as executable?" Can anyone help me with this?

    Read the article

  • Unable to write into character device file in Ubuntu

    - by Surjya Narayana Padhi
    I have just written a Linux character driver. I created one character device file named X, and I can see that file in the /dev folder. Now I want to do some read/write operations on this file. I opened the file in the vi editor, wrote some text into it, then used :wq and exited. It didn't show any error. But now when I do cat on that same file I am not able to see any content. I have tried it several times with the same result. Please let me know if I am doing something wrong.
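
    One thing worth trying before digging into the driver: take the editor out of the loop, since editors can save through temporary files or truncate in ways that don't exercise the driver's write and read entry points directly. A minimal sketch (assuming the node is /dev/X and the driver is expected to hand back what was written):

        #!/usr/bin/env python3
        """Write to a character device node and read the data back."""
        DEVICE = "/dev/X"  # placeholder: the node created for the driver

        message = b"hello from user space\n"

        # Goes through the driver's write() file operation.
        with open(DEVICE, "wb", buffering=0) as dev:
            dev.write(message)

        # Goes through the driver's read() file operation.
        with open(DEVICE, "rb", buffering=0) as dev:
            print(dev.read(len(message)))

    If that round-trip also fails, adding printk() calls to the driver's read/write handlers and watching dmesg should show which side is losing the data.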

    Read the article

  • How (recipe) to build only one kernel module?

    - by Pro Backup
    I have a bug in a Linux kernel module that causes the stock Ubuntu 14.04 kernel to oops (crash). That is why I want to edit/patch the source of only that single kernel module to add some extra debug output. The kernel module in question is mvsas, which is not needed to boot; for that reason I don't see any need to update any initrd images. I have read a lot of information (as shown below) and find the setup and build process confusing. I need two recipes: (1) how to set up/configure the build environment once, and (2) the steps to follow after editing any source file of this kernel module (.c and .h) to turn that edit into a new kernel module (.ko). The sources that have been used are: build one kernel module - Google search http://www.linuxquestions.org/questions/linux-kernel-70/rebuilding-a-single-kernel-module-595116/ http://stackoverflow.com/questions/8744087/how-to-recompile-just-a-single-kernel-module http://www.pixelbeat.org/docs/rebuild_kernel_module.html How do I build a single in-tree kernel module? http://ubuntuforums.org/showthread.php?t=1153067 http://ubuntuforums.org/showthread.php?t=2112166 http://ubuntuforums.org/showthread.php?t=1115593 build one kernel module ubuntu - Google search 'make +single +kernel +module' - Ask Ubuntu 'make +kernel +module' - Ask Ubuntu My makefile results in: No rule to make target `arch/x86/tools/relocs.c', needed '"Invalid module format"' - Ask Ubuntu Driver installation: compiling source code for newer kernel Modprobe: 'Invalid nodule format', yet works after insmod "Symbol version dump" "is missing" - Google search http://stackoverflow.com/questions/9425523/should-i-care-that-the-symbol-version-dump-is-missing-how-do-i-get-one Where can I find the corresponding Module.symvers and .config files for 12.04.3 i386 server? "no symbol version for module_layout" when trying to load usbhid.ko Broken links inside Linux header file folder 'make modules_install' - Ask Ubuntu 'modules_install' - Ask Ubuntu Empty build directory in custom compiled kernel Not able to see pr_info output In which directory are the kernel source files and how can I recompile it? How can I compile and install that patched libata-eh.c file? 'modules_install +depmod' - Ask Ubuntu modules_install depmod - Google search "make modules_install" - Google search http://www.csee.umbc.edu/courses/undergraduate/CMSC421/fall02/burt/projects/howto_build_kernel.html http://unix.stackexchange.com/questions/20864/what-happens-in-each-step-of-the-linux-kernel-building-process https://wiki.ubuntu.com/KernelCustomBuild http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html http://www.linuxforums.org/forum/kernel/170617-solved-make-modules_install-different-path.html "make prepare" - Google search "make prepare" "scripts/kconfig/conf --silentoldconfig Kconfig" - Google search http://ubuntuforums.org/showthread.php?t=1963515 ubuntu "make prepare" version - Google search http://stackoverflow.com/questions/8276245/how-to-compile-a-kernel-module-against-a-new-source https://help.ubuntu.com/community/Kernel/Compile How do I compile a kernel module? How to add a custom driver to my kernel? Compile and loading kernel module without compiling the kernel

    Read the article

  • Enable File sharing in Windows Vista

    - by LiveEn
    There seems to be a problem with my Windows Vista. In the Network and Sharing Center only network discovery is visible; I can't find an option for file sharing as mentioned on other websites. There is also no sharing option on any folder. Can someone please tell me how to enable file sharing in my Windows Vista? I can't share any of my files on the network.

    Read the article

  • AFP, SMB, NFS: which is the best data transfer protocol?

    - by Kami
    I have a computer with large hard disks running Gentoo. I have to serve medium-to-large files over a wired network to Apple devices (all of them running OS X). Which protocol is best for the following needs: speed, ease of use (by the clients and the server), fewest limitations (maximum file size, restricted character sets for filenames), and security?

    Read the article

  • 'No such file or directory' error in bash, but the file exists?

    - by michael
    On Ubuntu, I get a 'No such file or directory' error when I try to execute a command. I have checked with 'ls -la': the file 'adb' is there and it has the 'x' flag. So why am I getting 'No such file or directory'?
        ~/Programs/android-sdk-linux_x86/platform-tools$ ./adb
        bash: ./adb: No such file or directory
        ~/Programs/android-sdk-linux_x86/platform-tools$ ls -la
        total 34120
        drwxrwxr-x 3 silverstri silverstri     4096 2011-10-08 18:50 .
        drwxrwxr-x 8 silverstri silverstri     4096 2011-10-08 18:51 ..
        -rwxrwxr-x 1 silverstri silverstri  3764858 2011-10-08 18:50 aapt
        -rwxrwxr-x 1 silverstri silverstri   366661 2011-10-08 18:50 adb
        -rwxrwxr-x 1 silverstri silverstri   906346 2011-10-08 18:50 aidl
        -rwxrwxr-x 1 silverstri silverstri   328445 2011-10-08 18:50 dexdump
        -rwxrwxr-x 1 silverstri silverstri     2603 2011-10-08 18:50 dx
        drwxrwxr-x 2 silverstri silverstri     4096 2011-10-08 18:50 lib
        -rwxrwxr-x 1 silverstri silverstri 14269620 2011-10-08 18:50 llvm-rs-cc
        -rwxrwxr-x 1 silverstri silverstri 14929076 2011-10-08 18:50 llvm-rs-cc-2
        -rw-rw-r-- 1 silverstri silverstri      241 2011-10-08 18:50 llvm-rs-cc.txt
        -rw-rw-r-- 1 silverstri silverstri   332494 2011-10-08 18:50 NOTICE.txt
        -rw-rw-r-- 1 silverstri silverstri      291 2011-10-08 18:50 source.properties

    Read the article

  • Slow File Copy observed copying 40GB files across network to iSCSI device

    - by Rick
    Here's a curious one for the gurus. Setup:
    Source machine: Windows Server 2003 R2 with a local hard drive holding a 40GB VHD file. 1 x 1Gbps network card, Cat6 cable, switch.
    Target machine: Windows Server 2008 R2 with an iSCSI connection to an iSCSI target on a separate machine (1TB, RAID5). 1 x 1Gbps network card, Cat6 cable, connected to the same switch as the source machine. A second 1Gbps network card, Cat6 cable, connected via an isolated switch to the iSCSI target. Switches are the Netgear JGS524 model (web managed).
    If I copy from the Win2003R2 machine to the Win2008R2 machine's local drive I get 40GB in 45 minutes, 36 seconds. If I copy from the Win2008R2 machine to the iSCSI target (local drive to iSCSI target) I get 40GB in 37 minutes, 56 seconds. If I copy from the Win2003R2 machine to the iSCSI target via the Win2008R2 machine I get 40GB in 3 hours, 50 minutes, 24 seconds. All copies were done via the following command issued on the Win2008R2 box:
        XCOPY <source> <target> /J
    (XCOPY /J copies using unbuffered I/O; recommended for very large files.) So, what's the bit I'm missing here? Why does a back-to-back copy take in total 1 hour, 23 minutes, 32 seconds when a "straight through" copy takes almost 3 times as long? The switches show no errors, and the network hovers around the 3% utilisation mark for the duration of the copy (whereas the "back-to-back" copies are around the 25% utilisation mark). What have I missed?

    Read the article

  • SQL SERVER – New Look for CodePlex Project – Hosting for Open Source Software

    - by pinaldave
    Codeplex is my favorite site. CodePlex is Microsoft's free open source project hosting site. You can create projects to share with the world, collaborate with others on their projects, and download open source software. It is a great place to find many open source projects to explore. All of the software is free and open source. I often go there to check what is new in the SQL Server field as well as in other technologies. Yesterday when I visited, I had a nice surprise: it has had a total makeover and now looks decent and elegant at the same time. I have noticed that when I talk about CodePlex in the user community, not everybody knows about it. The quickest way I explain what CodePlex is: I start naming a few of the projects available there, and suddenly I notice a few hands going up in recognition of those projects. This is an indirect way to show that many of us use CodePlex but do not pay special attention to what it actually is. Let me name a few popular CodePlex projects here. SQL Server Sample Database [link] Image Resizer for Windows [link] Ajax Control Toolkit [link] Skype Voice Changer [link] Silverlight Toolkit [link] Windows 7 USB/DVD Download Tool [link] Orchard Project [link] There are also very interesting SQL Server projects available on CodePlex. I am listing a few of them here for reference, in no particular order. SQL Server Sample Database [link] SQL Server Compact ToolBox [link] Microsoft Drivers for PHP for SQL Server [link] Internals Viewer for SQL Server [link] SQL Server Spatial Tools [link] SQL Monitor – managing sql server performance [link] SQL Server 2008 Extended Events SSMS Addin [link] How many of the above-mentioned projects have you come across before? Leave a comment; it will be interesting to know what our community is familiar with. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Game works on netbeans but not outside netbeans in jar file?

    - by Michael Haywood
    I am creating a basic game of 'Pong'. I have finished the game apart from a few glitches I need to remove. The game runs perfectly in NetBeans, but if I create a jar file, errors come up and it does not work. I am quite new to Java, but I believe it has something to do with my code looking for the images before the images have been loaded. Here is the error. How can I get this to work outside of NetBeans in a jar file?
        C:\Users\michael>java -jar "C:\Users\michael\Documents\NetBeansProjects\Pong\dist\Pong.jar"
        Exception in thread "main" java.lang.NullPointerException
            at javax.swing.ImageIcon.<init>(Unknown Source)
            at pong.BallMainMenu.<init>(BallMainMenu.java:19)
            at pong.Board.gameInit(Board.java:93)
            at pong.Board.addNotify(Board.java:86)
            at java.awt.Container.addNotify(Unknown Source)
            at javax.swing.JComponent.addNotify(Unknown Source)
            at java.awt.Container.addNotify(Unknown Source)
            at javax.swing.JComponent.addNotify(Unknown Source)
            at java.awt.Container.addNotify(Unknown Source)
            at javax.swing.JComponent.addNotify(Unknown Source)
            at javax.swing.JRootPane.addNotify(Unknown Source)
            at java.awt.Container.addNotify(Unknown Source)
            at java.awt.Window.addNotify(Unknown Source)
            at java.awt.Frame.addNotify(Unknown Source)
            at java.awt.Window.show(Unknown Source)
            at java.awt.Component.show(Unknown Source)
            at java.awt.Component.setVisible(Unknown Source)
            at java.awt.Window.setVisible(Unknown Source)
            at pong.Pong.<init>(Pong.java:16)
            at pong.Pong.main(Pong.java:23)

    Read the article

  • Reproducible file corruption for files on windows share

    - by bbuser
    We have about 40 file servers in our intranet to distribute software packages. The servers have names like example01, example02 etc. Every name resolves to a single IP address (A record) and the IP resolves back to that name (PTR) for every single server. The thing is that for a certain file (mypackage.cab) I get different results depending on whether I use:
        \\192.0.2.01\fs\pkg\X12345678 or \\example01.foo\fs\pkg\X12345678
    While in one case the file is correct, in the other case the file has exactly the right size, but it is all zeros. For a certain combination of client and server I can reproduce this reliably. It doesn't matter if I download in Windows Explorer, via robocopy or even from Linux with smbclient. It's always the same: one file corrupt, the other OK. It happens only for certain combinations of clients and servers, not others. For example:
        client01 example01.foo -> OK (192.0.2.01 is also OK)
        client01 example02.foo -> broken (but 192.0.2.02 is OK)
        client02 example01.foo -> broken (but 192.0.2.01 is OK)
        client02 example02.foo -> OK (192.0.2.02 is also OK)
        client03 example06.foo -> OK (but 192.0.2.06 is broken)
        client03 example07.foo -> OK (192.0.2.07 is also OK)
        etc...
    In some cases I get the broken file when I use the IP address, in other cases when I use the name. For every client the majority of servers is OK, but from every client I tested I have at least 4 cases of broken files. All this happens only for mypackage.cab (about 5k in size); it has never happened for any of the other files in the same directory. Confused? Certainly I am. Any idea what can cause this, or any idea what to try in order to figure it out, is welcome. Clients are Windows XP. Servers are NetApp filers I don't have access to. I can (and will) contact the filer team again, but first I have to have an idea of what is going on.
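
    One way to gather evidence before going back to the filer team: from each client, hash the same file fetched by IP and by hostname and log the result, so you have a reproducible record of which combinations return the all-zero copy. A quick sketch (the UNC paths are placeholders taken from the examples above):

        #!/usr/bin/env python3
        """Fetch the same SMB file by IP and by name; report size, hash and zero-fill."""
        import hashlib

        PATHS = [
            r"\\192.0.2.01\fs\pkg\X12345678\mypackage.cab",     # by IP (placeholder)
            r"\\example01.foo\fs\pkg\X12345678\mypackage.cab",  # by name (placeholder)
        ]

        for path in PATHS:
            with open(path, "rb") as f:
                data = f.read()                       # the file is only ~5 kB
            all_zero = len(data) > 0 and data.count(0) == len(data)
            print(path, len(data), hashlib.md5(data).hexdigest(),
                  "ALL ZEROS" if all_zero else "ok")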

    Read the article

  • Is using something other than XML advisable for my configuration file?

    - by Earlz
    I have a small tool I'm designing which requires a configuration file of some sort. The configuration file in my case is really more of a database, but it needs to be lightweight, and if needed the end user should find it easily editable. However, it will also contain a lot of things (depending on certain factors, it could be 1MB or more). I've decided I'd rather use plain old text than try to use SQLite or some such. However, with text I also have to deal with the variety of formats. So far, my options are XML, JSON, or a custom format. The data in my file is quite simple, consisting for the most part of key-value pairs, so a custom format wouldn't be that difficult... but I'd rather not have to worry about writing the support for it. I've never seen JSON used for configuration files, and XML would bloat the file size substantially, I think (I also just have a dislike of XML in general). What should I do in this case? Factors to consider:
        This configuration file can be uploaded to a web service (so size matters)
        Users must be able to edit it by hand if necessary (ease of editing and reading matters)
        It must be possible to generate and process it automatically (speed doesn't matter a lot, but it shouldn't be excessively slow)
        The "keys" and "values" are plain strings, but must be escaped because they can contain anything (unicode and escaping have to work easily)
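
    For what it's worth, JSON covers the escaping and unicode requirements out of the box, and a flat key-value store round-trips in a few lines. A rough sketch of what load/save could look like, assuming Python and a simple dict of string keys and values (the file name is a placeholder):

        #!/usr/bin/env python3
        """Minimal JSON-backed key-value configuration store."""
        import json

        CONFIG_PATH = "tool.config.json"  # placeholder file name

        def load(path=CONFIG_PATH):
            try:
                with open(path, encoding="utf-8") as f:
                    return json.load(f)
            except FileNotFoundError:
                return {}

        def save(settings, path=CONFIG_PATH):
            with open(path, "w", encoding="utf-8") as f:
                # ensure_ascii=False keeps unicode readable; json escapes quotes,
                # backslashes and control characters in keys and values for us.
                json.dump(settings, f, indent=2, ensure_ascii=False, sort_keys=True)

        settings = load()
        settings['greeting = "weird" key\n'] = 'värde with unicode and \t tabs'
        save(settings)
        print(load())

    The indent keeps the file hand-editable, and the size overhead over a custom format is small compared to XML's start/end tags.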

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HA-Proxy to load balance a file share between servers. I've done some remedial packet traces and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I've always thought for many years that UDP 139, 135 etc were also involved in at least establishing the connection - but apparently not! So I set up a basic test:
        listen SMBTest *:445
            mode tcp
            server Smb1 172.16.61.201:445
            server Smb2 172.16.61.202:445
    And you'll never guess what... it works??? (!) Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy script. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?

    Read the article

  • Automated method to convert .reg registry file to reg.exe commands

    - by nhinkle
    Occasionally I need to put registry entries into batch files to use in login scripts, unattended installers, etc. While it's pretty easy to add one or two registry commands to a batch file using reg.exe, when there is a large amount of registry data, it becomes tedious. I usually just end up merging an external reg file in those cases, which I would like to avoid, since it ruins the self-contained nature of the batch file. Does anybody know of any tools which can automatically convert a .reg file to a series of REG ADD and REG DELETE commands? This would make life a lot easier! Thanks.
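
    I don't know of a stock tool for this, but the .reg format is regular enough that a small script can emit the equivalent commands for the common cases. A rough sketch that only handles whole-key adds/deletes, plain string values and dword values (multi-line hex(...) data, REG_EXPAND_SZ and escaped quotes would need more work):

        #!/usr/bin/env python3
        """Convert a simple .reg export into REG ADD / REG DELETE commands."""
        import sys

        def convert(lines):
            key = None
            for raw in lines:
                line = raw.strip()
                if not line or line.startswith(";") or line.startswith("Windows Registry Editor"):
                    continue                                      # comments and header
                if line.startswith("[-") and line.endswith("]"):
                    print(f'REG DELETE "{line[2:-1]}" /f')        # delete a whole key
                    key = None
                elif line.startswith("[") and line.endswith("]"):
                    key = line[1:-1]
                    print(f'REG ADD "{key}" /f')                  # make sure the key exists
                elif key and "=" in line:
                    name, _, value = line.partition("=")
                    name = name.strip().strip('"')
                    name_arg = "/ve" if name == "@" else f'/v "{name}"'
                    if value == "-":
                        print(f'REG DELETE "{key}" {name_arg} /f')
                    elif value.startswith("dword:"):
                        print(f'REG ADD "{key}" {name_arg} /t REG_DWORD /d 0x{value[6:]} /f')
                    elif value.startswith('"') and value.endswith('"'):
                        data = value[1:-1].replace("\\\\", "\\")
                        print(f'REG ADD "{key}" {name_arg} /t REG_SZ /d "{data}" /f')

        with open(sys.argv[1], encoding="utf-16") as f:           # regedit exports are usually UTF-16
            convert(f.readlines())

    Redirecting the output (for example, python reg2cmd.py export.reg > regcmds.txt, where reg2cmd.py is whatever you name the script) gives you lines you can paste straight into the batch file.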

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes… this will be detailed separately.

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is included in the original post.

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    The first chart illustrates some interesting points about the results:
        When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size)
        For some of the moderately-sized source files, small blocks (256KB) are best
        As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs)
        Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant
        The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    The second chart is another view of the same data, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but it also highlights the benefits of some of the other block sizes at different source file sizes. The last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:
        Source code for upload test application
        Source code for random file generator
        OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
        OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
        Excel worksheet showing summarizations and comparisons
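
    For anyone wanting to replicate the approach outside of .NET, the shape of the optimization is just "split, upload the blocks concurrently, then commit the block list". A language-neutral sketch of that flow in Python (upload_block and commit_block_list are placeholders standing in for whatever storage API you use; they are not real SDK calls):

        #!/usr/bin/env python3
        """Sketch of a blocked, parallel upload: split -> upload blocks -> commit."""
        import base64
        import hashlib
        import os
        from concurrent.futures import ThreadPoolExecutor

        BLOCK_SIZE = 1024 * 1024      # 1MB blocks gave the best average improvement above
        MAX_WORKERS = 8               # degree of parallelism; tune for your uplink

        def upload_block(block_id, data, md5):
            """Placeholder for the storage call (e.g. a PUT of one block with its MD5)."""
            pass

        def commit_block_list(block_ids):
            """Placeholder for the final 'assemble these blocks into the file' call."""
            pass

        def upload_file(path):
            block_ids = []
            with open(path, "rb") as f, ThreadPoolExecutor(MAX_WORKERS) as pool:
                futures, index = [], 0
                while True:
                    data = f.read(BLOCK_SIZE)
                    if not data:
                        break
                    block_id = base64.b64encode(f"{index:08d}".encode()).decode()
                    md5 = base64.b64encode(hashlib.md5(data).digest()).decode()
                    futures.append(pool.submit(upload_block, block_id, data, md5))
                    block_ids.append(block_id)
                    index += 1
                for fut in futures:
                    fut.result()               # surface any per-block failure
            commit_block_list(block_ids)       # nothing is visible until the commit
            return os.path.getsize(path), len(block_ids)

        if __name__ == "__main__":
            print(upload_file("bigfile.bin"))  # placeholder path

    The per-block MD5 mirrors the integrity check described above, and the separate commit step is what makes the assembled file visible as a single object.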

    Read the article

  • Right Clicking Network File Very Slow Over VPN

    - by Reafidy
    I am having real trouble with a VPN connection. The VPN connection is not used for an internet connection, just for file browsing. File browsing is slow, taking about 3-4 seconds to bring up a list of folders. I can live with this; however, the problem is when I right-click on a file. Sometimes the right-click menu comes up instantly, but sometimes it brings up the wait icon for anywhere between 30 seconds and a few minutes before displaying the menu. I ran speedtest.net and the results were 3.08 down / 0.13 up (Mbps); 0.13 Mbps is roughly 16 kB/s upload, so I am not expecting miracles when opening files. A 120kB file can take anywhere from 5 to 30 seconds. Sometimes transferring/opening files happens as expected, at other times it's slow, but the real issue is the right-click as mentioned. Anyone have any ideas? Using PPTP. Clients are all Windows 7 Pro.

    Read the article

  • Outlook 2010 + Move IMAP PST file = Outlook data file cannot be accessed.

    - by GWB
    I set up a new IMAP account in Outlook 2010. It works, but it creates its IMAP PST file in C:\Users\User\AppData\Local\Microsoft\Outlook. I want the file on my data drive in D:\Users\User\Documents\Outlook Files (the same folder where Outlook automatically creates the local Outlook PST). I followed the instructions here to move the IMAP PST. Testing the account (send/receive) works fine, but if I try to manually send an email I get error 0x8004010F: "Outlook data file cannot be accessed." I've tried repairing the PST using SCANPST (it always finds errors), and deleting and recreating the account, but I get the same error. If I move the PST file back, it works again, but this is not ideal. Note: I don't think this is a duplicate of this question, as the cause is different and the solution does not help.

    Read the article

  • Batch file to create many files with special characters

    - by MollyO
    Essential info: I have a file "DB_OUTPUT.TXT" with 304 lines that I need to turn into 304 files (one per line). Each line contains many special characters and may be up to tens of thousands of characters long. For these reasons, I'm having difficulty using a cmd.exe batch file (which limits the amount of input) and the echo command (which would try to execute each special character, short of me having to escape them all). I also have a file "DB_OUTPUT_FILENAMES.TXT" containing a distinct filename for each line-soon-to-be-file from "db_output.txt". So line 1 of DB_OUTPUT.TXT needs to be the body of a new file with a name equal to line 1 of DB_OUTPUT_FILENAMES.TXT. Extra info: As you may have guessed, DB_OUTPUT.TXT is output from a database; it contains 304 records with 6 or 7 columns at a fixed width with the last column being a SQL query. Each of these lines (db records) will be used as a script to create new database objects, which is why the special characters need to be preserved. Question: Is there a way to do this in a batch-like fashion? I'd be happy with either a Windows solution or a Linux one.
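
    If a Linux solution (or anything scriptable) is acceptable, a short script sidesteps cmd.exe's input limits and escaping entirely, because the record text is never re-interpreted by a shell. A sketch assuming the two files line up one-to-one and the filenames are already valid (the UTF-8 encoding is an assumption; adjust it to match the database export):

        #!/usr/bin/env python3
        """Write each line of DB_OUTPUT.TXT to the file named on the matching line of DB_OUTPUT_FILENAMES.TXT."""
        with open("DB_OUTPUT.TXT", encoding="utf-8") as bodies, \
             open("DB_OUTPUT_FILENAMES.TXT", encoding="utf-8") as names:
            for body, name in zip(bodies, names):
                # The record is written verbatim, so nothing needs to be escaped.
                with open(name.strip(), "w", encoding="utf-8", newline="") as out:
                    out.write(body)

    Each output file ends up containing exactly one record, special characters and all.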

    Read the article
