Search Results

Search found 40663 results on 1627 pages for 'huge files'.


  • How to Sync Specific Folders With Dropbox

    - by Matthew Guay
    Would you like to sync specific folders with Dropbox instead of automatically syncing all of your folders to all of your computers? Here's how, using the Selective Sync feature available in the latest beta version of Dropbox.

    Dropbox is a great tool for keeping your important files synced between your computers, and we have covered many interesting things you can do with your Dropbox account. But until now, there was no way to sync only certain folders with each computer; it was all or nothing. This could be frustrating if you wanted to store large files from one computer but didn't want them on a computer with a smaller hard drive. The latest beta version of Dropbox allows you to choose which folders to sync to each computer. Please note: this feature is currently only available in the 0.8 beta version of Dropbox.

    Set up the new beta

    Download the new beta version of Dropbox 0.8 (link below), choose the correct download for your system, and run the installer as normal. It only took a couple of seconds to install, though it made the taskbar disappear briefly at the end of the installation in our tests. Strangely, the installer doesn't let you know it's finished installing; if you already had a previous version of Dropbox installed, it will simply start working from your system tray as before. If this is a new installation of Dropbox, you will be asked to enter your Dropbox account info or create a new account.

    Selectively sync folders

    By default, Dropbox will still sync all of your Dropbox folders to all of your computers. Once this beta is installed, you can choose individual folders or subfolders you don't want to sync. Right-click the Dropbox icon in your system tray and select Preferences. Click the Advanced tab at the top, and then click the new Selective Sync button. Now uncheck any folders you don't want to sync to this computer. These folders will still exist on your other machines and in the Dropbox web interface, but they will not be downloaded to this computer.

    The default view only shows the top-level folders in your Dropbox account. If you wish to sync certain folders but exclude their subfolders, click the Switch to Advanced View button. Expand any folder and uncheck any subfolders you don't want to sync. Notice that the parent folder's check box is now partially filled, showing that it is partially synced.

    Click OK when you've made the changes you want. Dropbox will then make sure you know these folders will stop syncing to this computer; click OK again if you're sure you don't want to sync them. Dropbox will clean up your folder and remove the files and folders you don't want synced.

    Next time you open your Dropbox folder, you'll notice that the folders we unchecked are no longer in this computer's Dropbox folder. They are still in our Dropbox online account, and on any other computers we're syncing with. If you add a new folder with the same name as a folder you stopped syncing, you'll notice a grey minus icon over the folder. This folder will not sync with your other computers or your online Dropbox account. If you want to add these folders back to this computer's Dropbox, just repeat the steps, this time checking the folders you want to sync. If you have any folders that were not syncing before, their names will have (Selective Sync Conflict) added to the end, and they will sync with all of your computers.

    Conclusion

    We're excited that we can now choose exactly which folders we want synced on each computer. Since everything is still synced with the online Dropbox, we can still access any of the folders from anywhere. This makes your Dropbox much more versatile and can help you keep your folders synced exactly the way you want.

    Links: Download the new Dropbox 0.8.64 beta | Sign up for Dropbox


  • Web Site Performance and Assembly Versioning – Part 2 Versioning Combined Files Using Subversion

    - by capgpilk
    Ok, so it took a while to post this second part; many apologies, we had a big roll-out of a new platform at work and many things had to get sidelined. This is the second part in a short series on website performance and using versioning to help improve it:

    1. Minification and Concatenation of JavaScript and CSS Files
    2. Versioning Combined Files Using Subversion – this post
    3. Versioning Combined Files Using Mercurial – published shortly

    In the previous post we used AjaxMin to shrink js and css files, then concatenated them into one file each, named site-script.combined.min.js and site-style.combined.min.css. These file names are fine, but you can also configure IIS 7 to cache these static files and so lower the amount of data transferred between server and client. This is done by editing the response headers in IIS.

    1. In IIS 7 Manager, choose the directory where these files are located and select HTTP Response Headers.
    2. Check Expire Web Content and set a time period well into the future.
    3. When refreshing the web page, the server will respond with HTTP 304, so the browser retrieves the file from its cache.
    4. As can be seen in Firebug, the Cache-Control header has a max-age of 31536000 seconds, which equates to 365 days.

    The server will always send this HTTP 304 response unless the file changes, which forces it to send new content. To help force this, we can change the file name on each build using the SVN revision number. So we have lowered data transfer on content that hasn't changed, but forced it to be sent when you have made a change to the css or js files. Now to get the SVN revision number into the file name:

    1. Import the MSBuildCommunityTasks targets, which can be downloaded from here.

        <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets" />

    2. Edit the BeforeBuild target to call out to svn and get the latest revision:

        <SvnVersion LocalPath="$(MSBuildProjectDirectory)"
                    ToolPath="$(ProgramFiles)\VisualSVN Server\bin">
          <Output TaskParameter="Revision" PropertyName="Revision" />
        </SvnVersion>

    3. Set it to update the project AssemblyInfo.cs file with the svn revision:

        <FileUpdate Files="Properties\AssemblyInfo.cs"
                    Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
                    ReplacementText="$1.$2.$3.$(Revision)" />

    4. Now edit the AfterBuild target to get the full dll version. You could combine these two steps and just get the version from svn; I am working on one project that updates the AssemblyInfo file and another that allows manual editing of the file but needs that version within the file name, so I just combined the two for this post.

        <MSBuild.ExtensionPack.Framework.Assembly
            TaskAction="GetInfo"
            NetAssembly="$(OutputPath)\mydll.dll">
          <Output TaskParameter="OutputItems" ItemName="Info" />
        </MSBuild.ExtensionPack.Framework.Assembly>
        <Message Text="Version: %(Info.AssemblyVersion)"
                 Importance="High" />

    5. Use this Info.AssemblyVersion to write out the combined css and js files as described in the last post:

        <WriteLinesToFile File="Scripts\site-%(Info.AssemblyVersion).combined.min.js"
                          Lines="@(JSLinesSite)" Overwrite="true" />

    In the next post I will cover doing the same, but for a Mercurial repository.


  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table that I need to speed up some computation by several orders of magnitude (creating that file took several multi-core computers several days, using an optimized and multi-threaded algorithm... I really do need that file). Now that it has been computed once, that 800 MB of data is read only. I cannot hold it in memory. As of now it is one big 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file, many times. I don't know beforehand where I'll need to read the data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with memory-mapped files: I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
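    For what it's worth, a hedged sketch of one standard answer (the class and method names are mine, and it assumes 4-byte values at byte offsets you compute): java.nio's FileChannel offers positional reads that leave the channel's own file position untouched, and FileChannel is documented as safe for use by multiple concurrent threads, so several reader threads can share a single channel.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        // Concurrent random reads over a large read-only file.
        // FileChannel.read(dst, position) does not move the channel's own
        // file pointer, so many threads may call it on the same channel.
        class RandomReader implements AutoCloseable {
            private final FileChannel ch;

            RandomReader(String path) throws IOException {
                ch = FileChannel.open(Paths.get(path), StandardOpenOption.READ);
            }

            /** Reads the 4-byte int stored at the given byte offset (assumed in range). */
            int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (ch.read(buf, offset + buf.position()) < 0)
                        throw new IOException("offset past end of file: " + offset);
                }
                buf.flip();
                return buf.getInt();
            }

            public void close() throws IOException { ch.close(); }
        }

    Since the reads are tiny and uniformly distributed, the seek dominates; memory-mapping the file in a few windows is the main alternative (the 800 MB is mapped into address space, not copied into the Java heap) and tends to win once the file is mostly resident in the OS page cache.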


  • LNK2005: delete already defined error in VC++

    - by user333422
    Hi, I am newbie to VC++, so please forgive me if this is stupid. I have got 7 solutions under one project. Six of these build static libraries which is linked inside 7th to produce an exe. The runtime configuration of all the projects is MultiThreaded Debug. sln which is used to produce exe is using MFC other slns use standard runtiem library. I tried changing those to MFC, but still I am getting the same error. All the six slns build successfully. When I try to build exe - get the follwoing error: nafxcwd.lib(afxmem.obj) : error LNK2005: "void __cdecl operator delete(void *,int,char const *,int)" (??3@YAXPAXHPBDH@Z) already defined in tara_common.lib(fileStream.obj) This is weird because tara_common is one of the libs that I generated and fileStream.cpp is file that is just using delete on a pointer. I built it in verbose mod, so I am attaching the output. ENVIRONMENT SPACE _ACP_ATLPROV=C:\Program Files\Microsoft Visual Studio 9.0\VC\Bin\ATLProv.dll _ACP_INCLUDE=C:\Program Files\Microsoft Visual Studio 9.0\VC\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include _ACP_LIB=C:\fta\tara\database\build\Debug;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib\i386;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft Visual Studio 9.0\;C:\Program Files\Microsoft Visual Studio 9.0\lib _ACP_PATH=C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\tools;C:\Program Files\Microsoft Visual Studio 9.0\Common7\ide;C:\Program Files\HTML Help Workshop;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\;C:\WINDOWS\SysWow64;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\GnuWin32\bin;C:\Python26 ALLUSERSPROFILE=C:\Documents and Settings\All Users CLIENTNAME=Console CommonProgramFiles=C:\Program Files\Common Files ComSpec=C:\WINDOWS\system32\cmd.exe FP_NO_HOST_CHECK=NO HOMEDRIVE=C: INCLUDE=C:\Program Files\Microsoft Visual Studio 9.0\VC\include;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include;C:\Program Files\Microsoft SDKs\Windows\v6.0A\include LIB=C:\fta\tara\database\build\Debug;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib\i386;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft SDKs\Windows\v6.0A\lib;C:\Program Files\Microsoft Visual Studio 9.0\;C:\Program Files\Microsoft Visual Studio 9.0\lib LIBPATH=c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\VC\atlmfc\lib;C:\Program Files\Microsoft Visual Studio 9.0\VC\lib LOGONSERVER=\xxx NUMBER_OF_PROCESSORS=1 OS=Windows_NT PATH=C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;C:\Program Files\Microsoft Visual Studio 
9.0\Common7\Tools\bin;C:\Program Files\Microsoft Visual Studio 9.0\Common7\tools;C:\Program Files\Microsoft Visual Studio 9.0\Common7\ide;C:\Program Files\HTML Help Workshop;C:\Program Files\Microsoft SDKs\Windows\v6.0A\bin;c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727;C:\Program Files\Microsoft Visual Studio 9.0\;C:\WINDOWS\SysWow64;;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;c:\Program Files\Microsoft SQL Server\90\Tools\binn\;C:\Program Files\TortoiseSVN\bin;C:\Program Files\Microsoft Visual Studio 9.0\VC\bin;C:\Program Files\GnuWin32\bin;C:\Python26 PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH PROCESSOR_ARCHITECTURE=x86 PROCESSOR_IDENTIFIER=x86 Family 6 Model 23 Stepping 10, GenuineIntel PROCESSOR_LEVEL=6 PROCESSOR_REVISION=170a ProgramFiles=C:\Program Files SESSIONNAME=Console SystemDrive=C: SystemRoot=C:\WINDOWS VisualStudioDir=C:\Documents and Settings\sgupta\My Documents\Visual Studio 2008 VS90COMNTOOLS=C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\ WecVersionForRosebud.710=2 windir=C:\WINDOWS COMMAND LINES: Creating temporary file "c:\fta\tools\channel_editor\IvoDB\Debug\RSP00011018082288.rsp" with contents [ /VERBOSE /OUT:"C:\fta\tools\channel_editor\Builds\IvoDB_1_35_Debug.exe" /INCREMENTAL /LIBPATH:"......\3rdparty\boost_1_42_0\stage\lib" /MANIFEST /MANIFESTFILE:"Debug\IvoDB_1_35_Debug.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DELAYLOAD:"OleAcc.dll" /DEBUG /PDB:"C:\fta\tools\channel_editor\Builds\IvoDB_1_35_Debug.pdb" /SUBSYSTEM:WINDOWS /DYNAMICBASE /NXCOMPAT /MACHINE:X86 ......\tara\database\build\Debug\tara_database.lib ......\tara\common\build\Debug\tara_common.lib ......\3rdparty\sqliteWrapper\Debug\sqliteWrapper.lib ......\3rdparty\sqlite_3_6_18\Debug\sqlite.lib ......\stsdk\modules\win32\Debug\modules.lib ......\stsdk\axeapi\win32\Debug\axeapi.lib DelayImp.lib ".\Debug\AntennaSettings.obj" ".\Debug\AudioVideoSettings.obj" ".\Debug\CMDatabase.obj" ".\Debug\CMSettings.obj" ".\Debug\ColorFileDialog.obj" ".\Debug\ColorStatic.obj" ".\Debug\DragDropListCtrl.obj" ".\Debug\DragDropTreeCtrl.obj" ".\Debug\FavouriteEdit.obj" ".\Debug\FavTab.obj" ".\Debug\FindProgram.obj" ".\Debug\HyperLink.obj" ".\Debug\IvoDB.obj" ".\Debug\IvoDBDlg.obj" ".\Debug\IvoDBInfo.obj" ".\Debug\IvoDBInfoTab.obj" ".\Debug\IvoDbStruct.obj" ".\Debug\LayoutHelper.obj" ".\Debug\MainTab.obj" ".\Debug\OperTabCtrl.obj" ".\Debug\ParentalLock.obj" ".\Debug\ProgramEdit.obj" ".\Debug\ProgramTab.obj" ".\Debug\PVRSettings.obj" ".\Debug\SatTab.obj" ".\Debug\SettingsBase.obj" ".\Debug\SettingsTab.obj" ".\Debug\STBSettings.obj" ".\Debug\stdafx.obj" ".\Debug\TimeDate.obj" ".\Debug\TransponderEdit.obj" ".\Debug\TreeTab.obj" ".\Debug\UserPreferences.obj" ".\Debug\Xmodem.obj" ".\Debug\IvoDB.res" ".\Debug\IvoDB_1_35_Debug.exe.embed.manifest.res" ] Creating command line "link.exe @c:\fta\tools\channel_editor\IvoDB\Debug\RSP00011018082288.rsp /NOLOGO /ERRORREPORT:PROMPT" **Processed /DEFAULTLIB:atlsd.lib Processed /DEFAULTLIB:ws2_32.lib Processed /DEFAULTLIB:mswsock.lib Processed /DISALLOWLIB:mfc90d.lib Processed /DISALLOWLIB:mfcs90d.lib Processed /DISALLOWLIB:mfc90.lib Processed /DISALLOWLIB:mfcs90.lib Processed /DISALLOWLIB:mfc90ud.lib Processed /DISALLOWLIB:mfcs90ud.lib Processed /DISALLOWLIB:mfc90u.lib Processed /DISALLOWLIB:mfcs90u.lib Processed /DISALLOWLIB:uafxcwd.lib Processed /DISALLOWLIB:uafxcw.lib Processed /DISALLOWLIB:nafxcw.lib Found "void __cdecl operator delete(void *)" (??3@YAXPAX@Z) Referenced in axeapi.lib(ipcgeneric.obj) Referenced in 
axeapi.lib(ipccommon.obj) Referenced in axeapi.lib(activeobject.obj) Referenced in nafxcwd.lib(nolib.obj) Referenced in sqliteWrapper.lib(DbConnection.obj) Referenced in sqliteWrapper.lib(Statement.obj) Referenced in axeapi.lib(nvstorage.obj) Referenced in axeapi.lib(avctrler.obj) Referenced in tara_common.lib(trace.obj) Referenced in tara_common.lib(ssPrintf.obj) Referenced in tara_common.lib(taraConfig.obj) Referenced in tara_common.lib(stream.obj) Referenced in tara_common.lib(STBConfigurationStorage.obj) Referenced in tara_common.lib(STBConfiguration.obj) Referenced in tara_common.lib(configParser.obj) Referenced in tara_common.lib(fileStream.obj) Referenced in tara_database.lib(SatStream.obj) Referenced in tara_database.lib(Service.obj) Referenced in tara_database.lib(ServiceList.obj) Referenced in tara_common.lib(playerConfig.obj) Referenced in UserPreferences.obj Referenced in Xmodem.obj Referenced in tara_database.lib(init.obj) Referenced in tara_database.lib(Satellite.obj) Referenced in TransponderEdit.obj Referenced in TreeTab.obj Referenced in TreeTab.obj Referenced in UserPreferences.obj Referenced in stdafx.obj Referenced in TimeDate.obj Referenced in TimeDate.obj Referenced in TransponderEdit.obj Referenced in SettingsTab.obj Referenced in SettingsTab.obj Referenced in STBSettings.obj Referenced in STBSettings.obj Referenced in SatTab.obj Referenced in SatTab.obj Referenced in SettingsBase.obj Referenced in SettingsBase.obj Referenced in ProgramTab.obj Referenced in ProgramTab.obj Referenced in PVRSettings.obj Referenced in PVRSettings.obj Referenced in ParentalLock.obj Referenced in ParentalLock.obj Referenced in ProgramEdit.obj Referenced in ProgramEdit.obj Referenced in MainTab.obj Referenced in MainTab.obj Referenced in OperTabCtrl.obj Referenced in OperTabCtrl.obj Referenced in IvoDBInfoTab.obj Referenced in IvoDbStruct.obj Referenced in LayoutHelper.obj Referenced in LayoutHelper.obj Referenced in IvoDBDlg.obj Referenced in IvoDBInfo.obj Referenced in IvoDBInfo.obj Referenced in IvoDBInfoTab.obj Referenced in HyperLink.obj Referenced in IvoDB.obj Referenced in IvoDB.obj Referenced in IvoDBDlg.obj Referenced in FavTab.obj Referenced in FavTab.obj Referenced in FindProgram.obj Referenced in FindProgram.obj Referenced in DragDropTreeCtrl.obj Referenced in DragDropTreeCtrl.obj Referenced in FavouriteEdit.obj Referenced in FavouriteEdit.obj Referenced in ColorFileDialog.obj Referenced in ColorStatic.obj Referenced in DragDropListCtrl.obj Referenced in DragDropListCtrl.obj Referenced in CMDatabase.obj Referenced in CMDatabase.obj Referenced in CMSettings.obj Referenced in CMSettings.obj Referenced in AntennaSettings.obj Referenced in AntennaSettings.obj Referenced in AudioVideoSettings.obj Referenced in AudioVideoSettings.obj Loaded nafxcwd.lib(afxmem.obj) nafxcwd.lib(afxmem.obj) : error LNK2005: "void __cdecl operator delete(void *,int,char const *,int)" (??3@YAXPAXHPBDH@Z) already defined in tara_common.lib(fileStream.obj)** Please help me out, I have already wasted 3 days looking on net and trying all the possible solutions I found. thanks in advance, SG


  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my WordPress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put directly there. After a while this was changed so that files are uploaded to \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders), like:

    +wp-content
    | +2010
    | | +02
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | | +01
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | +2009
    | | +12
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | | +11
    | | | File-1
    | | | File-2
    | | | ..
    | | | File-n
    | +..
    | | | ..
    | Unstructured-file-1
    | Unstructured-file-2
    | ...
    | Unstructured-file-n

    Based on the dates of the unstructured files, I would like to move them into the structured hierarchy (based on its date, each file moves to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are: Where do I write and execute a script to do this move (I don't have full access to the server, just to cPanel and to the WordPress admin page)? What must be updated so that the links in posts that reference the unstructured files point to their new locations? Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads? PS: Moving the files/updating the database manually is not an option :)


  • Recently "exposed" to Clicksor

    - by I take Drukqs
    Previous information concerning my issue can be found here: Virus...? Tons of people talking. Not sure where else to ask. I heard the loud sounds and whatnot from one of the ads but nothing was harmed and other than having the life scared out of me nothing was immediately affected. Literally nothing has changed. I really don't know what to do. My PC seems fine and from what Malwarebytes and Spybot tells me none of my files have been infected with anything. If you need more information I will be glad to supply it. Thanks in advance. Malwarebytes quick and full scan: Clean. Spybot S&D scan: Clean. HijackThis log: Logfile of Trend Micro HijackThis v2.0.4 Scan saved at 5:23:33 PM, on 2/2/2011 Platform: Windows 7 (WinNT 6.00.3504) MSIE: Internet Explorer v8.00 (8.00.7600.16700) Boot mode: Normal Running processes: C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCU.exe C:\Program Files (x86)\NEC Electronics\USB 3.0 Host Controller Driver\Application\nusb3mon.exe C:\Program Files (x86)\Common Files\InstallShield\UpdateService\issch.exe C:\Program Files (x86)\Common Files\Java\Java Update\jusched.exe C:\Program Files (x86)\Pidgin\pidgin.exe C:\Program Files (x86)\Steam\Steam.exe C:\Program Files (x86)\Mozilla Firefox\firefox.exe C:\Program Files (x86)\uTorrent\uTorrent.exe C:\Fraps\fraps.exe C:\Program Files (x86)\Mozilla Firefox\plugin-container.exe C:\Program Files (x86)\Trend Micro\HiJackThis\HiJackThis.exe R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896 R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896 R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157 R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant = R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch = R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\Windows\SysWOW64\blank.htm R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName = R3 - URLSearchHook: SearchHook Class - {BC86E1AB-EDA5-4059-938F-CE307B0C6F0A} - C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\AddressBarSearch.dll F2 - REG:system.ini: UserInit=userinit.exe O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll O2 - BHO: Windows Live ID Sign-in Helper - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll O4 - HKLM\..\Run: [BCU] "C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCU.exe" O4 - HKLM\..\Run: [JMB36X IDE Setup] C:\Windows\RaidTool\xInsIDE.exe O4 - HKLM\..\Run: [NUSB3MON] "C:\Program Files (x86)\NEC Electronics\USB 3.0 Host Controller Driver\Application\nusb3mon.exe" O4 - HKLM\..\Run: [ISUSScheduler] "C:\Program Files (x86)\Common Files\InstallShield\UpdateService\issch.exe" -start O4 - HKLM\..\Run: [ISUSPM Startup] 
C:\PROGRA~2\COMMON~1\INSTAL~1\UPDATE~1\ISUSPM.exe -startup O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files (x86)\Adobe\Reader 10.0\Reader\Reader_sl.exe" O4 - HKLM\..\Run: [Adobe ARM] "C:\Program Files (x86)\Common Files\Adobe\ARM\1.0\AdobeARM.exe" O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files (x86)\Common Files\Java\Java Update\jusched.exe" O4 - HKLM\..\RunOnce: [Malwarebytes' Anti-Malware] C:\Program Files (x86)\Malwarebytes' Anti-Malware\mbamgui.exe /install /silent O4 - HKCU\..\Run: [ISUSPM Startup] C:\PROGRA~2\COMMON~1\INSTAL~1\UPDATE~1\ISUSPM.exe -startup O4 - HKUS\S-1-5-19\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-19\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'LOCAL SERVICE') O4 - HKUS\S-1-5-20\..\Run: [Sidebar] %ProgramFiles%\Windows Sidebar\Sidebar.exe /autoRun (User 'NETWORK SERVICE') O4 - HKUS\S-1-5-20\..\RunOnce: [mctadmin] C:\Windows\System32\mctadmin.exe (User 'NETWORK SERVICE') O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll O10 - Unknown file in Winsock LSP: c:\program files (x86)\common files\microsoft shared\windows live\wlidnsp.dll O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing) O23 - Service: AppleChargerSrv - Unknown owner - C:\Windows\system32\AppleChargerSrv.exe (file missing) O23 - Service: Browser Configuration Utility Service (BCUService) - DeviceVM, Inc. - C:\Program Files (x86)\DeviceVM\Browser Configuration Utility\BCUService.exe O23 - Service: DES2 Service for Energy Saving. (DES2 Service) - Unknown owner - C:\Program Files (x86)\GIGABYTE\EnergySaver2\des2svr.exe O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing) O23 - Service: InstallDriver Table Manager (IDriverT) - Macrovision Corporation - C:\Program Files (x86)\Common Files\InstallShield\Driver\11\Intel 32\IDriverT.exe O23 - Service: JMB36X - Unknown owner - C:\Windows\SysWOW64\XSrvSetup.exe O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing) O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: NVIDIA Driver Helper Service (NVSvc) - Unknown owner - C:\Windows\system32\nvvsvc.exe (file missing) O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing) O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing) O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing) O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing) O23 - Service: Steam Client Service - Valve Corporation - C:\Program 
Files (x86)\Common Files\Steam\SteamService.exe O23 - Service: NVIDIA Stereoscopic 3D Driver Service (Stereo Service) - NVIDIA Corporation - C:\Program Files (x86)\NVIDIA Corporation\3D Vision\nvSCPAPISvr.exe O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing) O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing) O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing) O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing) O23 - Service: @%SystemRoot%\system32\Wat\WatUX.exe,-601 (WatAdminSvc) - Unknown owner - C:\Windows\system32\Wat\WatAdminSvc.exe (file missing) O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing) O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing) O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing) -- End of file - 7889 bytes


  • Why does my system slow down or freeze when there is heavy disk activity?

    - by user72270
    I'm a first-time user of Ubuntu 12.04 with a Wubi installation. My notebook: Dell Vostro 3450, i5 2410M, 3 GB RAM, Intel HD 3000 / AMD 6630M hybrid graphics. Surfing and playing games work flawlessly; however, I'm having huge problems when installing applications and generally when copying and moving files. While doing so, the system gets significantly slower and freezes quite often (Firefox goes bluish, sometimes even black and white). I would say that Ubuntu allocates too many resources to file transfers and installs, but even those tasks are very slow. Here is a very specific example: today I tried to move a 6 GB file from the Windows 7 installation. It was fine at first; I switched to Firefox, but after a while Firefox started to randomly turn bluish and the mouse randomly stopped working. It got gradually worse until Firefox went black and white and the mouse wasn't working at all. I raged and went to get some food, and when I got back the screen was black. It had probably logged me out due to inactivity; when I pushed a random button to bring the screen back to life, I had to wait a few minutes for it to show me only my desktop background. No login screen, just the background and a working mouse. The notebook fan was running at 100%, so I assumed the file transfer was still going on and left it to work. Nothing changed for a full hour, so I hard-rebooted. The file transfer was unsuccessful; it had transferred hardly 2 GB. Is this normal? What should I do in these situations? It didn't let me load the system monitor, or even a terminal. Thanks.


  • Books and resources for Java Performance tuning - when working with databases, huge lists

    - by Arvind
    Hi all, I am relatively new to working on huge applications in Java. I am working on a Java web service which is pretty heavily used by various clients. The service basically queries the database (Hibernate) and then works with a lot of Lists (there are adapters to convert the list returned from the DB to the interface which the service publishes), and I am seeing a lot of issues with the service, like high CPU usage or high heap usage. While I can troubleshoot the performance issues using a profiler, I want to actually learn what I need to take care of when I write the code in the first place: things like which kind of List to use, or using StringBuilder instead of String, etc. Are there any books or blogs I can refer to which will help me while I write new services? Also, my application is multithreaded (each service call from a client is a new thread) and I want to know some best practices around that area as well. I did search the web, but I found many tips which are no longer relevant in the latest Java 6 releases, so I wanted to know what kind of resources would help a developer starting out now on Java for heavily used applications. Arvind
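    As a small taste of the kind of advice such books give, here is the String-versus-StringBuilder case the question mentions, as a generic sketch (the method names are illustrative, not from the service):

        import java.util.List;

        class Concat {
            // Each += on a String copies everything built so far, so the
            // loop does O(n^2) work and churns the heap with throwaway
            // buffers; a classic source of high CPU and GC load.
            static String slow(List<String> names) {
                String s = "";
                for (String n : names) s += n + ",";
                return s;
            }

            // StringBuilder appends in place; pre-sizing it avoids most
            // internal array copies as the buffer grows.
            static String fast(List<String> names) {
                StringBuilder sb = new StringBuilder(names.size() * 16);
                for (String n : names) sb.append(n).append(',');
                return sb.toString();
            }
        }

    The same reasoning drives the List choice: ArrayList gives cheap indexed reads and appends, LinkedList cheap insertion at the ends, and sizing a collection up front (new ArrayList with the expected size) avoids repeated resizing, which matters when adapters copy large DB result lists. For the multithreading side, "Java Concurrency in Practice" (Goetz et al.) and "Effective Java" (Bloch) are the standard references and both remain relevant for Java 6.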


  • jsf and huge web site

    - by darko petreski
    Hi all, I need to build a very big web site. The site should contain a public part with a search engine, user registration forms, a cart and all the other usual stuff. The private part should contain an administrative web application with a lot of data grids, complex edit forms, security checks, services and so on. I also need very good support for HTML mailing and PDF exports. Web services are also required. I am considering using JSF. Can you recommend books or other resources where I can find all the information needed for building this? Is Seam a good option for it? I have looked at their site and at the "Seam in production" page; the sites listed there are far from professional and enterprise-grade. Is there any book that explains how to build a complete web site with JSF technology, from login, security, sending mails, PDF conversion and time-dependent background processes to everything else that big sites need? I do not have time to read 20 books. Every PHP book explains all of this, but I have not seen a single Java book that, at least at a high level, explains all of it. Regards


  • Collision detection of huge number of circles

    - by Tomek Tarczynski
    What is the best way to check for collisions among a huge number of circles? It's very easy to detect a collision between two circles, but if we check every combination then it is O(n^2), which is definitely not an optimal solution. We can assume that a circle object has the following properties: coordinates, radius, velocity and direction. Velocity is constant, but direction can change. I've come up with two solutions, but maybe there are better ones.

    Solution 1: Divide the whole space into overlapping squares and check for collisions only between circles that are in the same square. The squares need to overlap so there won't be a problem when a circle moves from one square to another.

    Solution 2: At the beginning, the distances between every pair of circles need to be calculated. If the distance is small, the pair is stored in a list and we check for a collision on every update. If the distance is big, we store after which update a collision could first happen (this can be calculated because we know the distance and the velocities). It needs to be stored in some kind of priority queue. After the previously calculated number of updates, the distance needs to be checked again, and then we repeat the procedure: put the pair on the list or back into the priority queue.
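    For reference, here is a minimal Java sketch of Solution 1's idea, with one common twist swapped in: non-overlapping grid cells plus a neighbour-cell pass instead of overlapping squares (a pair straddling a border is caught when a cell is compared with its neighbours, so the overlap becomes unnecessary). Cell size and class names are assumptions for the sketch.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        class Circle { double x, y, r; }

        // Broad phase: hash each circle into a square cell whose side is at
        // least 2 * max radius; only circles in the same or an adjacent cell
        // can possibly overlap, so the work per update is roughly O(n) for
        // evenly spread circles instead of O(n^2).
        class UniformGrid {
            private final double cell;
            private final Map<Long, List<Circle>> cells = new HashMap<>();

            UniformGrid(double cellSide) { this.cell = cellSide; }

            private static long key(int cx, int cy) {
                return ((long) cx << 32) | (cy & 0xffffffffL);
            }

            void insert(Circle c) {
                int cx = (int) Math.floor(c.x / cell);
                int cy = (int) Math.floor(c.y / cell);
                cells.computeIfAbsent(key(cx, cy), k -> new ArrayList<>()).add(c);
            }

            private static boolean overlaps(Circle a, Circle b) {
                double dx = a.x - b.x, dy = a.y - b.y, rr = a.r + b.r;
                return dx * dx + dy * dy < rr * rr;   // no sqrt needed
            }

            List<Circle[]> collisions() {
                List<Circle[]> hits = new ArrayList<>();
                int[][] fwd = { {1, 0}, {-1, 1}, {0, 1}, {1, 1} };
                for (Map.Entry<Long, List<Circle>> e : cells.entrySet()) {
                    List<Circle> here = e.getValue();
                    int cx = (int) (e.getKey() >> 32), cy = e.getKey().intValue();
                    for (int i = 0; i < here.size(); i++)      // pairs inside the cell
                        for (int j = i + 1; j < here.size(); j++)
                            if (overlaps(here.get(i), here.get(j)))
                                hits.add(new Circle[] { here.get(i), here.get(j) });
                    for (int[] d : fwd) {                      // pairs across borders: only
                        List<Circle> nb = cells.get(key(cx + d[0], cy + d[1]));
                        if (nb == null) continue;              // 4 of the 8 neighbours, so
                        for (Circle a : here)                  // each pair is reported once
                            for (Circle b : nb)
                                if (overlaps(a, b)) hits.add(new Circle[] { a, b });
                    }
                }
                return hits;
            }
        }

    Rebuilding the grid each update (clear and re-insert) is cheap next to the pairwise tests, and Solution 2's scheduling idea can still be layered on top if the broad phase alone isn't enough.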


  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory-mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory-mapped files work. If anyone knows the O/S implementation details behind memory-mapped files, I would appreciate comments on the following theory: if two processes share a memory-mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. Also, the number of soft page faults is therefore directly proportional to the size of the data written. My experiments seem to bear out this theory. When I share data I write one contiguous block of data; in other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory-mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose a memory-mapped file instead of a TCP socket connection because I thought it would be more efficient. Note, if the soft page faults are harmless, please say so. I've heard that at some point, if the number is excessive, the system's performance can be marred. If soft page faults are not intrinsically harmful, then if anyone has any guidelines as to what number per second is "excessive", I'd like to hear that. Thanks.


  • finding ALL cycles in a huge sparse matrix

    - by Andy
    Hi there, First of all, I'm quite a Java beginner, so I'm not sure if this is even possible! Basically I have a huge (3+ million entry) data source of relational data (i.e. A is friends with B+C+D, B is friends with D+G+Z (but not A, i.e. unmutual) etc.) and I want to find every cycle within this (not necessarily connected) directed graph. I've found this thread (http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402) which pointed me to Donald Johnson's (elementary) cycle-finding algorithm which, superficially at least, looks like it'll do what I'm after (I'm going to try it when I'm back at work on Tuesday; thought it wouldn't hurt to ask in the meanwhile!). I had a quick scan through the code of the Java implementation of Johnson's algorithm (in that thread) and it looks like a matrix of relations is the first step, so I guess my questions are:

    a) Is Java capable of handling a 3-million-by-3-million matrix? (I was planning on representing A-friends-with-B by a binary sparse matrix.)
    b) Do I need to find every connected subgraph as my first problem, or will cycle-finding algorithms handle disjoint data?
    c) Is this actually an appropriate solution for the problem? My understanding of "elementary" cycles is that in the graph below, rather than picking out A-B-C-D-E-F it'll pick out A-B-F, B-C-D etc., but that's not the end of the world given the task.

        E
       / \
      D---F
     / \ / \
    C---B---A

    d) If necessary, I can simplify the problem by enforcing mutuality in relations, i.e. A-friends-with-B <== B-friends-with-A, and if really necessary I can maybe cut down the data size, but realistically it is always going to be around the 1-million mark.
    z) Is this a P or NP task?! Am I biting off more than I can chew?

    Thanks all, any help appreciated! Andy
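    On (a), a quick size check: a dense 3,000,000 x 3,000,000 matrix is about 9x10^12 cells, which is over a terabyte even at one bit per cell, so it cannot live in a Java array regardless of heap settings. The usual substitute is an adjacency list, whose memory grows with the number of friendship edges instead; a minimal sketch (names are illustrative):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Sparse directed graph: "A is friends with B" is one edge A -> B,
        // so memory is proportional to the edge count, not |V|^2.
        class FriendGraph {
            private final Map<Integer, List<Integer>> adj = new HashMap<>();

            void addEdge(int from, int to) {
                adj.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
            }

            List<Integer> out(int v) {   // out-neighbours of v
                return adj.getOrDefault(v, Collections.emptyList());
            }
        }

    On (b): Johnson's algorithm already works per strongly connected component (it runs an SCC computation such as Tarjan's internally), so disconnected data needs no separate pass. On (z): each elementary cycle is emitted in polynomial time, but the number of elementary cycles can grow exponentially with the graph, so "find ALL cycles" may be intractable at this scale no matter which algorithm is used; counting or bounding them first is worth considering.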


  • How to manipulate *huge* amounts of data

    - by Alejandro
    Hi there! I'm having the following problem: I need to store a huge amount of information (~32 GB) and be able to manipulate it as fast as possible. I'm wondering what's the best way to do it (combinations of programming language + OS + whatever you think is important). The structure of the information I'm using is a 4D array (NxNxNxN) of double-precision floats (8 bytes each). Right now my solution is to slice the 4D array into 2D arrays and store them in separate files on the HDD of my computer. This is really slow and the manipulation of the data is unbearable, so it's no solution at all! I'm thinking of moving to a supercomputing facility in my country and storing all the information in RAM, but I'm not sure how to implement an application to take advantage of it (I'm not a professional programmer, so any book/reference will help me a lot). An alternative solution I'm thinking of is to buy a dedicated server with lots of RAM, but I don't know for sure if that will solve the problem. So right now my ignorance doesn't let me choose the best way to proceed. What would you do if you were in this situation? I'm open to any idea. Thanks in advance!
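    For scale: 32 GB of 8-byte doubles is 2^32 elements, i.e. N is only around 256 here. A middle road before buying hardware is to keep the array in one flat binary file and memory-map it, letting the OS page the hot parts in and out. A hedged Java sketch (the file name, row-major layout and 1 GB window size are my assumptions; it needs a 64-bit JVM for the address space):

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        // An NxNxNxN array of doubles stored row-major in a single file and
        // memory-mapped in 1 GB windows (Java buffers are int-indexed, so one
        // mapping cannot cover all 32 GB). A read costs a page fault only
        // when the page is not already cached by the OS.
        class Disk4D {
            private static final long WINDOW = 1L << 30;   // 1 GB, a multiple of 8
            private final MappedByteBuffer[] maps;
            private final long n;

            Disk4D(String path, long n) throws Exception {
                this.n = n;
                long bytes = n * n * n * n * 8L;
                FileChannel ch = new RandomAccessFile(path, "r").getChannel();
                maps = new MappedByteBuffer[(int) ((bytes + WINDOW - 1) / WINDOW)];
                for (int i = 0; i < maps.length; i++)
                    maps[i] = ch.map(FileChannel.MapMode.READ_ONLY, i * WINDOW,
                                     Math.min(WINDOW, bytes - i * WINDOW));
            }

            double get(long a, long b, long c, long d) {   // 0-based indices
                long off = (((a * n + b) * n + c) * n + d) * 8L;
                return maps[(int) (off / WINDOW)].getDouble((int) (off % WINDOW));
            }
        }

    Whether this is fast enough depends entirely on the access pattern: loops that walk the last index fastest stay inside cached pages, while truly random access over all 32 GB degenerates into disk seeks, and then the RAM-heavy server or the supercomputing facility really is the answer.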


  • How can I split my conkeror-rc config over multiple files?

    - by Ryan Thompson
    Short version: can you help me fill in this code?

        var conkeror_settings_dir = ".conkeror.mozdev.org/settings";
        function load_all_js_files_in_dir (dir) {
            var full_path = get_home_directory().appendRelativePath(dir);
            // YOUR CODE HERE
        }
        load_all_js_files_in_dir(conkeror_settings_dir);

    Background

    I'm trying out Conkeror for web browsing. It's an emacs-like browser running on Mozilla's rendering engine, using JavaScript as its configuration language (filling the role that elisp plays for emacs). In my emacs config, I have split my customizations into a series of files, where each file is a single unit of related options (for example, all my Perl-related settings might be in perl-settings.el). All these settings files are loaded automatically by a function in my .emacs that simply loads every elisp file under my "settings" directory. I am looking to structure my Conkeror config in the same way, with my main conkeror-rc file basically being a stub that loads all the js files under a certain directory relative to my home directory. Unfortunately, I am much less literate in JavaScript than I am in elisp, so I don't even know how to "source" a file.


  • Read/Write/Find/Replace huge csv file

    - by notapipe
    I have a huge (4.5 GB) csv file. I need to perform basic cut, paste and replace operations on some columns. The data is pretty well organized; the only problem is I cannot work with it in Excel because of its size (2,000 rows, 550,000 columns). Here is some part of the data:

    ID,Affection,Sex,DRB1_1,DRB1_2,SENum,SEStatus,AntiCCP,RFUW,rs3094315,rs12562034,rs3934834,rs9442372,rs3737728
    D0024949,0,F,0101,0401,SS,yes,?,?,A_A,A_A,G_G,G_G
    D0024302,0,F,0101,7,SN,yes,?,?,A_A,G_G,A_G,?_?
    D0023151,0,F,0101,11,SN,yes,?,?,A_A,G_G,G_G,G_G

    I need to: remove the 4th, 5th, 6th, 7th, 8th and 9th columns; find every _ character from column 10 onwards and replace it with a space character; replace every ? with zero (0); replace every comma with a tab; remove the first row (the column names); replace every 0 with 1, every 1 with 2 and every ? with 0 in the 2nd column; and replace F with 2, M with 1 and ? with 0 in the 3rd column; so that in the resulting file the output reads:

    D0024949 1 2 A A A A G G G G
    D0024302 1 2 A A G G A G 0 0
    D0023151 1 2 A A G G G G G G

    (Both input and output should read one line per row, with no extra blank rows.) Is there a memory-efficient way of doing this with Java (and I need code to do it), or a usable tool for playing with such large data so that I can easily apply Excel-like functionality?
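    A memory-efficient approach is to stream the file row by row, so only one line is ever in memory at a time. Here is a sketch implementing exactly the rules above (file names are placeholders, and the genotype columns are assumed to start at index 9 as in the sample header):

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.io.FileReader;
        import java.io.FileWriter;
        import java.io.IOException;
        import java.io.PrintWriter;

        // One-pass streaming rewrite: a 4.5 GB input never has to fit in RAM.
        public class CsvRewrite {
            public static void main(String[] args) throws IOException {
                try (BufferedReader in = new BufferedReader(new FileReader("in.csv"));
                     PrintWriter out = new PrintWriter(
                             new BufferedWriter(new FileWriter("out.txt")))) {
                    String line = in.readLine();                   // drop the header row
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split(",", -1);
                        StringBuilder sb = new StringBuilder(f[0]); // column 1: ID
                        sb.append('\t').append(recode(f[1], "1", "2"));  // 0->1, 1->2, ?->0
                        sb.append('\t').append(f[2].equals("F") ? "2"
                                             : f[2].equals("M") ? "1" : "0"); // F/M/?
                        for (int i = 9; i < f.length; i++)          // skip cols 4-9
                            sb.append('\t').append(f[i].replace('_', ' ')
                                                       .replace('?', '0'));
                        out.println(sb);                            // one row per line
                    }
                }
            }

            private static String recode(String v, String ifZero, String ifOne) {
                return v.equals("0") ? ifZero : v.equals("1") ? ifOne : "0";
            }
        }

    Even with 550,000 columns a single row is only a few megabytes, so the per-line split stays comfortably within a default heap.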


  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep performance and initial loading acceptable I have built some helper classes that load in the background all the pages and user controls that might be used later on. On Silverlight 3.0 everything ran smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute, and sometimes for more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB, and sometimes to 1.5 GB. If the process doesn't take that long, the layout is rendered properly and the memory falls back to 50 MB. Otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution for this tricky issue?


  • Where to create/keep secret files for license information/trials on Windows/Mac OS X/Linux?

    - by BastiBense
    I'm writing a commercial product which uses a simple registration mechanism and allows the user to use the application for a demo period before purchasing. My application must store the registration information (if entered) somewhere, and/or the date of the first launch, to calculate whether the user is still within the demo/trial period. While I'm pretty much finished with the registration mechanism itself, I now have to find a good way to store the registration information on the user's disk. The most obvious idea would be to store the trial period in the preferences file, but since users tend to delete or tinker with those from time to time, it might be a good idea to keep the registration information in a separate, more hidden file. So here's my question: what is the best place/strategy for creating and keeping such hidden files on Windows, Mac OS X and Linux? Here is what came to my mind so far:

    Linux/Mac OS X: Most Unix-like systems are rather locked down when it comes to places a user can write files to. In most cases this is only the /tmp directory and the user's home directory. I guess the easiest approach here is probably to create a file with a dot prefix to make it less visible, then give it a name that won't make it obvious that it's associated with my application.

    Windows: Probably much like Linux/Mac OS X; more recent Windows versions have become more restrictive when it comes to file system permissions. Anyway, I'd like to hear your ideas and thoughts. Even better if you have already implemented something similar in the past. Thanks!

    Update: For me, where to put such files is more relevant than a discussion of whether this way of doing copy protection is good or bad.


  • How to transform huge xml files in java?

    - by fx42
    As the title says, I have a huge xml file (GBs):

    <root>
      <keep>
        <stuff> ... </stuff>
        <morestuff> ... </morestuff>
      </keep>
      <discard>
        <stuff> ... </stuff>
        <morestuff> ... </morestuff>
      </discard>
    </root>

    and I'd like to transform it into a much smaller one which retains only a few of the elements. My parser should do the following:

    1. Parse through the file until a relevant element starts.
    2. Copy the whole relevant element (with children) to the output file. Go to 1.

    Step 1 is easy with SAX and impossible for DOM parsers. Step 2 is annoying with SAX, but easy with a DOM parser or XSLT. So what? Is there a neat way to combine SAX and a DOM parser to do the task?
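    One more option worth naming: StAX, Java's pull parser (javax.xml.stream), streams like SAX but makes the "copy this subtree verbatim" part trivial, so the SAX+DOM combination becomes unnecessary. A hedged sketch that echoes every <keep> subtree plus the document envelope and drops everything else (file names are placeholders):

        import java.io.FileInputStream;
        import java.io.FileOutputStream;
        import javax.xml.stream.XMLEventReader;
        import javax.xml.stream.XMLEventWriter;
        import javax.xml.stream.XMLInputFactory;
        import javax.xml.stream.XMLOutputFactory;
        import javax.xml.stream.events.XMLEvent;

        // Streams the input once; <discard> subtrees are never built as a
        // tree in memory, so the file size doesn't matter.
        public class KeepFilter {
            public static void main(String[] args) throws Exception {
                XMLEventReader in = XMLInputFactory.newInstance()
                        .createXMLEventReader(new FileInputStream("huge.xml"));
                XMLEventWriter out = XMLOutputFactory.newInstance()
                        .createXMLEventWriter(new FileOutputStream("small.xml"));
                int depth = 0;   // element nesting depth in the input
                int keep = 0;    // > 0 while inside a <keep> subtree
                while (in.hasNext()) {
                    XMLEvent e = in.nextEvent();
                    if (e.isStartElement()) {
                        depth++;
                        if ("keep".equals(e.asStartElement().getName().getLocalPart()))
                            keep++;
                    }
                    // pass through <keep> subtrees, plus the start/end of the
                    // document and of the root element
                    if (keep > 0 || (depth <= 1 && !e.isCharacters()))
                        out.add(e);
                    if (e.isEndElement()) {
                        if ("keep".equals(e.asEndElement().getName().getLocalPart()))
                            keep--;
                        depth--;
                    }
                }
                out.close();
                in.close();
            }
        }

    If you really do need a DOM for each kept element (say, to transform it), the same reader can hand one subtree at a time to a transformer via javax.xml.transform's StAXSource, keeping only that element in memory.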


  • Local server updates for the network

    - by Brendon
    I have set up one computer on our network as the file server. Because Internet access here in Tanzania is both slow and expensive, I would like that one system to download all the updates, and the other 10 computers on the network to then get those update files from the server. I'm a bit of a newbie with Ubuntu, but I really want to learn how to get this working smoothly so as to help other NGOs and schools here in Tanzania. Brendon
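    One common way to get this behaviour (a suggestion on my part, not something from the question) is a caching package proxy such as apt-cacher-ng: the server fetches each package from the Internet once, and every other machine pulls it from the local cache at LAN speed. After `sudo apt-get install apt-cacher-ng` on the server, each client needs one small APT configuration file, sketched here with a placeholder address:

        // /etc/apt/apt.conf.d/01proxy, on every client machine
        // (replace 192.168.0.10 with the server's LAN address;
        // 3142 is apt-cacher-ng's default port)
        Acquire::http::Proxy "http://192.168.0.10:3142";

    The first machine to request a given update is slow; every later machine gets it at local network speed, which is usually exactly what a shared slow link needs.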


  • How to clean and add options to the Open With list of apps

    - by Luis Alvarado
    After installing several PPAs (Wine, PoL) and opening several files with other apps (like changing from Totem to VLC), I discovered that the Open With option had 2 problems: many items on the list are duplicated (as seen in the image for "A Wine Program"), and sometimes the app I want to use to open the file is not shown there (for example, VirtualBox or VLC). So how can I edit this list to clean out the duplicates and add the missing apps?
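    Some background that usually explains both symptoms (standard freedesktop.org behaviour, so treat this as a pointer rather than a guarantee): the Open With list is assembled from the .desktop files in /usr/share/applications and ~/.local/share/applications, and Wine typically drops duplicate wine-extension-*.desktop entries into the per-user directory. Deleting the stale files there and running `update-desktop-database ~/.local/share/applications` removes the doubles. An application is offered for a file type when its .desktop file declares that MIME type, roughly like this:

        # ~/.local/share/applications/vlc-custom.desktop (illustrative)
        [Desktop Entry]
        Type=Application
        Name=VLC media player
        Exec=vlc %U
        MimeType=video/mp4;video/x-matroska;audio/mpeg;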


  • "Ghost" output from locate?

    - by Hailwood
    I deleted some files, but they seem to still exist. Can anyone please explain the output of this:

    m@work:~$ locate cfx.css | xargs rm
    m@work:~$ locate cfx.css
    /var/www/wfox/hbr.co.nz/cfx/a/c/cfx.css
    /var/www/wfox/modules/gallery/cfx/a/c/cfx.css
    /var/www/wfox/phoenix/fp.co.nz/cfx/a/c/cfx.css
    /var/www/wfox/tmp.co.nz/cfx/a/c/cfx.css
    m@work:~$ cat /var/www/wfox/hbr.co.nz/cfx/a/c/cfx.css
    cat: /var/www/wfox/hbr.co.nz/cfx/a/c/cfx.css: No such file or directory
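    For anyone hitting the same thing: locate does not search the live filesystem; it queries a database that updatedb rebuilds periodically (typically once a day from cron). Files deleted since the last rebuild therefore still appear in locate's output, while cat, which touches the real filesystem, correctly reports them gone. Refreshing the database should make the "ghosts" disappear:

        m@work:~$ sudo updatedb          # rebuild the locate database now
        m@work:~$ locate cfx.css         # no output expected any more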


  • Application Crash cleared the content of the Folder

    - by Ameya
    Recently, while working with LinuxDC++ over the network, the application crashed while downloading files. Now my Downloads folder, which had at least 60-80 GB of data, is completely empty, but the system is not reporting the correct amount of free space. Is there a way to restore the contents of just that folder? The solutions I have found are for a whole partition; I just want to recover the contents of one folder.


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
    }

Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the chunk index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...

        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;

            // calculate the start and end byte index for each block (chunk)
            // along with the index, file name and index list for future use
            var blockSizeInKB = $("#block_size").val();
            var blockSize = blockSizeInKB * 1024;
            var blocks = [];
            var offset = 0;
            var index = 0;
            var list = "";
            while (offset < fileSize) {
                var start = offset;
                var end = Math.min(offset + blockSize, fileSize);

                blocks.push({
                    name: fileName,
                    index: index,
                    start: start,
                    end: end
                });
                list += index + ",";

                offset = end;
                index++;
            }
        }
    });

Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, each received chunk will be uploaded as a block into Blob Storage, and finally they will be committed with the index list through PutBlockList. But all of these invocations are ajax calls, which means they are not synchronous, so we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library in depth in this post. We will put all the procedures we want to execute into a function array and pass it into the proper function defined in async.js, which controls the execution sequence for us, either in series or in parallel. Hence we define an array and push the chunk upload functions into it.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...

        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // along with the index, file name and index list for future use
            ... ...

            // define the function array and push all chunk upload operations into it
            blocks.forEach(function (block) {
                putBlocks.push(function (callback) {
                });
            });
        }
    });
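For readers who have not used async.js before, here is a minimal, self-contained sketch of the pattern we rely on (the task values are illustrative only): each task function receives a callback(error, result), and async.series runs the tasks one after another, collecting the results.

    // minimal async.js usage sketch - two dummy tasks run in sequence
    var tasks = [
        function (callback) { callback(null, "first"); },
        function (callback) { callback(null, "second"); }
    ];
    async.series(tasks, function (error, results) {
        console.log(results); // ["first", "second"]
    });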
In each upload function we use the File.slice method to read a chunk based on the start and end byte indexes we calculated previously, construct a temporary HTML form containing the file name, chunk index and chunk data through another new HTML5 feature named FormData, and then post this form to the backend server through jQuery.ajax. This is the key part of our solution.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // along with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into it
            blocks.forEach(function (block) {
                putBlocks.push(function (callback) {
                    // load a blob based on the start and end index of this chunk
                    var blob = file.slice(block.start, block.end);
                    // put the file name, index and blob into a temporary form
                    var fd = new FormData();
                    fd.append("name", block.name);
                    fd.append("index", block.index);
                    fd.append("file", blob);
                    // post the form to the backend service (asp.net mvc controller action)
                    $.ajax({
                        url: "/Home/UploadInFormData",
                        data: fd,
                        processData: false,
                        contentType: "multipart/form-data",
                        type: "POST",
                        success: function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            callback(null, block.index);
                        }
                    });
                });
            });
        }
    });

Then we invoke these functions one by one using async.js, and once all of them have executed successfully we make another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // along with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into it
            ... ...
            // invoke the functions one by one
            // then invoke the commit ajax call to put the blocks into a blob in azure storage
            async.series(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });

That's all on the client side. The outline of our logic is:
- Calculate the start and end byte index for each chunk based on the block size.
- Define the functions that read a chunk from the file and upload its content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks by invoking the backend service, which assembles them into a blob in Windows Azure Storage.

Save Chunks as Blocks into Blob Storage

Above, we finished the client-side code, which uploads the file in chunks to the backend service that we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by making ajax calls to the URL "/Home/UploadInFormData", I created a new action on the Home controller which only accepts HTTP POST requests.

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

Then I retrieved the file name, index and chunk content from the Request.Form object, which were passed from our client side.
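A note on the block ID format before we look at the storage code: Blob Storage requires that all block IDs within a blob be Base64 strings of the same encoded length, which is why the index is first turned into a fixed 4-byte array before encoding. For reference, a JavaScript equivalent of the server-side Convert.ToBase64String(BitConverter.GetBytes(index)) would look like this (a hedged sketch, not part of the original code):

    // mirrors BitConverter.GetBytes(int) on a little-endian machine, then Base64-encodes it
    function blockIdFromIndex(index) {
        var buffer = new ArrayBuffer(4);
        new DataView(buffer).setInt32(0, index, true); // true = little-endian
        var bytes = new Uint8Array(buffer);
        var binary = "";
        for (var i = 0; i < bytes.length; i++) {
            binary += String.fromCharCode(bytes[i]);
        }
        return btoa(binary); // e.g. index 0 -> "AAAAAA=="
    }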
Then I used the Windows Azure SDK to create the blob container (in this case we use the container named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with the index. In Blob Storage each block must have an ID associated with it, so that at the end we can assemble the blocks into one blob by specifying the list of their block IDs.

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from Request.Form, along with the chunk ID list - the block ID list - as a comma-separated string. I split it into a list and invoked the blob's PutBlockList method. After that, our blob appears in the container and is ready to be downloaded.

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

Now we have all the code we need. Below is the full client-side JavaScript code.
    <script type="text/javascript" src="~/Scripts/async.js"></script>
    <script type="text/javascript">
        $(function () {
            $("#upload_button_blob").click(function () {
                // assert the browser supports html5
                if (window.File && window.Blob && window.FormData) {
                    alert("Your browser is awesome, let's rock!");
                }
                else {
                    alert("Please update to a modern browser before trying this cool stuff out.");
                    return;
                }

                // start to upload each file in chunks
                var files = $("#upload_files")[0].files;
                for (var i = 0; i < files.length; i++) {
                    var file = files[i];
                    var fileSize = file.size;
                    var fileName = file.name;

                    // calculate the start and end byte index for each block (chunk)
                    // along with the index, file name and index list for future use
                    var blockSizeInKB = $("#block_size").val();
                    var blockSize = blockSizeInKB * 1024;
                    var blocks = [];
                    var offset = 0;
                    var index = 0;
                    var list = "";
                    while (offset < fileSize) {
                        var start = offset;
                        var end = Math.min(offset + blockSize, fileSize);

                        blocks.push({
                            name: fileName,
                            index: index,
                            start: start,
                            end: end
                        });
                        list += index + ",";

                        offset = end;
                        index++;
                    }

                    // define the function array and push all chunk upload operations into it
                    var putBlocks = [];
                    blocks.forEach(function (block) {
                        putBlocks.push(function (callback) {
                            // load a blob based on the start and end index of this chunk
                            var blob = file.slice(block.start, block.end);
                            // put the file name, index and blob into a temporary form
                            var fd = new FormData();
                            fd.append("name", block.name);
                            fd.append("index", block.index);
                            fd.append("file", blob);
                            // post the form to the backend service (asp.net mvc controller action)
                            $.ajax({
                                url: "/Home/UploadInFormData",
                                data: fd,
                                processData: false,
                                contentType: "multipart/form-data",
                                type: "POST",
                                success: function (result) {
                                    if (!result.success) {
                                        alert(result.error);
                                    }
                                    callback(null, block.index);
                                }
                            });
                        });
                    });

                    // invoke the functions one by one
                    // then invoke the commit ajax call to put the blocks into a blob in azure storage
                    async.series(putBlocks, function (error, result) {
                        var data = {
                            name: fileName,
                            list: list
                        };
                        $.post("/Home/Commit", data, function (result) {
                            if (!result.success) {
                                alert(result.error);
                            }
                            else {
                                alert("done!");
                            }
                        });
                    });
                }
            });
        });
    </script>
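One robustness note on the listing above (a hedged suggestion; this is not part of the original code): the $.ajax call only wires up the success handler, so a failed HTTP request would leave async.js waiting forever. Wiring the error handler to the task callback lets async.series abort the remaining uploads cleanly. Many jQuery + FormData examples also pass contentType: false so the browser can supply the multipart boundary itself.

    $.ajax({
        // ... same url, data, processData and type options as above ...
        success: function (result) {
            if (!result.success) {
                callback(result.error, block.index); // surface the server-side failure
                return;
            }
            callback(null, block.index);
        },
        error: function (xhr, status) {
            callback(status || "upload failed", block.index); // abort the remaining tasks
        }
    });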
And below is the full ASP.NET MVC controller code.

    public class HomeController : Controller
    {
        private CloudStorageAccount _account;
        private CloudBlobClient _client;

        public HomeController()
            : base()
        {
            _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
            _client = _account.CreateCloudBlobClient();
        }

        public ActionResult Index()
        {
            ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

            return View();
        }

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }
    }

Now, if we select a file in the browser, our application uploads chunks of the size we specified to the server through background ajax calls, then commits all chunks into one blob, and we can find that blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we simply uploaded our file in chunks. This solved both the ASP.NET MVC request size limitation and the Windows Azure load balancer timeout, but it might introduce a performance problem, since the chunks are uploaded in sequence. To improve the upload performance we can modify our client-side code a bit so that the upload operations are invoked in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we change this call to "async.parallel", which invokes all functions in parallel.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // along with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into it
            ... ...
            // invoke the functions in parallel
            // then invoke the commit ajax call to put the blocks into a blob in azure storage
            async.parallel(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });
In this way all chunks are uploaded to the server side at the same time to maximize the bandwidth usage. This works well if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

    $("#upload_button_blob").click(function () {
        // assert the browser supports html5
        ... ...
        // start to upload each file in chunks
        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
            // calculate the start and end byte index for each block (chunk)
            // along with the index, file name and index list for future use
            ... ...
            // define the function array and push all chunk upload operations into it
            ... ...
            // invoke the functions in parallel with a concurrency limit of 4
            // then invoke the commit ajax call to put the blocks into a blob in azure storage
            async.parallelLimit(putBlocks, 4, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        }
    });
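Chunked upload also makes retries cheap: if one chunk fails, only that chunk needs to be resent rather than the whole file. As a hedged sketch (assuming a version of async.js that provides async.retry, and an uploadBlock helper standing in for the $.ajax call shown earlier), each task can be wrapped so a transient failure is retried a few times before the whole upload is aborted:

    blocks.forEach(function (block) {
        putBlocks.push(function (callback) {
            // retry this chunk up to 3 times before failing the whole upload
            async.retry(3, function (retryCallback) {
                uploadBlock(block, retryCallback); // hypothetical helper wrapping the $.ajax call
            }, callback);
        });
    });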
Summary

In this post we discussed how to upload files in chunks to a backend service and then into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: reads part of the file by specifying the start and end byte index.
- Blob: a file-like interface that contains part of the file content.
- FormData: a temporary form element through which we can pass each chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might overload the browser and the server if there are too many chunks. So we finally arrived at the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to control the asynchronous calls and the parallelism limit.

Regarding the chunk size and the parallelism limit there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core 1.7GHz CPU, 1.75GB memory, 100Mbps bandwidth).

The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential; parallel (unlimited); parallel with a limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

From the test result chart I took away some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- There is no significant performance difference between parallel with 4 threads and with 8 threads.
- Transferring binary chunks provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gets the best performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Getting huge lags when downloading files via Service

    - by Copa
I have a Service which receives URLs to download. The Service then downloads these URLs and saves them to a file on the SD card. When I put more than 2 items in the download queue my device becomes unusable: it nearly freezes, with huge lags and so on. Any idea? Source code:

    private static boolean downloading = false;
    private static ArrayList<DownloadItem> downloads = new ArrayList<DownloadItem>();

    /**
     * called once when the service started
     */
    public static void start() {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                // this loop spins continuously without sleeping while it waits for work
                while (true) {
                    if (downloads.size() > 0 && !downloading) {
                        downloading = true;
                        DownloadItem item = downloads.get(0);
                        downloadSingleFile(item.getUrl(), item.getFile());
                    }
                }
            }
        });
        thread.start();
    }

    public static void addDownload(DownloadItem item) {
        downloads.add(item);
    }

    private static void downloadSuccessfullFinished() {
        if (downloads.size() > 0)
            downloads.get(0).setDownloaded(true);
        downloadFinished();
    }

    private static void downloadFinished() {
        // remove the first entry; it has been downloaded
        if (downloads.size() > 0)
            downloads.remove(0);
        downloading = false;
    }

    private static void downloadSingleFile(String url, File output) {
        final int maxBufferSize = 4096;
        HttpResponse response = null;
        try {
            response = new DefaultHttpClient().execute(new HttpGet(url));
            if (response != null && response.getStatusLine().getStatusCode() == 200) {
                // request is ok
                InputStream is = response.getEntity().getContent();
                BufferedInputStream bis = new BufferedInputStream(is);
                RandomAccessFile raf = new RandomAccessFile(output, "rw");
                ByteArrayBuffer baf = new ByteArrayBuffer(maxBufferSize);
                long current = 0;
                long i = 0;
                // read one byte at a time and flush to the file every 4096 bytes
                while ((current = bis.read()) != -1) {
                    baf.append((byte) current);
                    if (++i == maxBufferSize) {
                        raf.write(baf.toByteArray());
                        baf = new ByteArrayBuffer(maxBufferSize);
                        i = 0;
                    }
                }
                if (i > 0) // write the last bytes to the file
                    raf.write(baf.toByteArray());
                baf.clear();
                raf.close();
                bis.close();
                is.close();
                // download finished; start the next download
                downloadSuccessfullFinished();
                return;
            }
        } catch (Exception e) {
            // not successfully downloaded
            downloadFinished();
            return;
        }
        // not successfully downloaded
        downloadFinished();
        return;
    }

