Search Results

Search found 10860 results on 435 pages for 'bad blocks'.


  • Is there a way to change the root password while still logged in? I did something bad by accident -_-

    - by Robert
    So I was trying to add my printer, and I wasn't able to make any changes because CUPS was not accepting my root password. I was Googling some fixes and trying them out when one of the commands CHANGED MY SUDO PASSWORD! Can someone please tell me which one of these is the culprit? I was trying these commands:

        cat /etc/group | grep root
        cat /etc/group | grep myUserName
        usermod -a -G lpadmin myUserName
        sudo usermod -a -G lpadmin myUserName
        sudo gedit /etc/cups/cupsd.conf
        lppasswd -a myUserName
        lppasswd -a root
        sudo lppasswd -a myUserName

    I think it was that last one, but I know which passwords I put in! There was nothing I typed besides my strong password or my easy temporary password. Unless I made a typo... please no. I also ran:

        restart cups
        sudo password root

    This is so not cool, I was just trying to add a printer :'( Please help my stupidity!
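
    If sudo still accepts the account's password (or there is still a cached sudo session in an open terminal), the system password can simply be set again from a root shell. A minimal sketch follows; myUserName is a placeholder, and as far as I know lppasswd only ever touches the CUPS printing password, not the system one:

        sudo -i                    # get a root shell ("sudo su -" works too)
        passwd myUserName          # reset the user account's system password
        passwd root                # optionally set/reset root's password as well

        # If sudo no longer accepts anything, reboot, pick recovery mode from the
        # GRUB menu, choose the root shell, then:
        mount -o remount,rw /
        passwd myUserName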

    Read the article

  • Would it be bad to put <span> tags within the <head> for grouping metadata in schema.org format?

    - by hdavis84
    Alright, I'm currently practicing schema.org microdata and trying to find the best route for every site I build. I have found that I can piggyback itemprops on Open Graph meta tags, and I would like to piggyback more of them. However, schema.org requires you to change itemtypes to define all aspects of a "thing". Say I'm defining a LocalBusiness. Open Graph has street address, locality, and region properties I'd like to piggyback on. I'd have to do something like:

        <html lang="en" itemscope itemtype="http://schema.org/LocalBusiness">
        <head>
        ...
        <meta itemprop="name" content="Business Name" />
        <meta property="og:url" itemprop="url" content="http://example.com" />
        <meta property="og:image" itemprop="image" content="http://example.com/logo.png" />
        <span itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
          <meta property="og:street-address" itemprop="streetAddress" content="1234 Amazing Rd." />
          <meta property="og:locality" itemprop="addressLocality" content="Greenfield" />
          <meta property="og:region" itemprop="addressRegion" content="IN" />
        </span>
        </head>

    Although there's more that could be added, this is enough of an example to show what I'm trying to achieve. I've searched the web to see whether it is an issue to use spans in the head, because I don't want invalid markup. I know I can mark up the address information in the body of the pages, but the route above would be more efficient. Does anyone have an answer for this?

    Read the article

  • Is it bad practice for a module to contain more information than it needs?

    - by gekod
    I just wanted to ask for your opinion on a situation that occurs sometimes and for which I don't know the most elegant way to solve it. Here it goes: we have module A, which reads an entry from a database and sends a request to module B containing ONLY the information from the entry that module B needs to accomplish its job (to keep modularity I just give it the information it needs; module B has nothing to do with the rest of the information from the read DB entry). Now, after finishing its job, module B has to reply to module C whether it succeeded or failed. To do this, module B replies with the information it got from module A and some variable meaning success or fail. Now here comes the problem: module C needs to find that entry again, BUT the information it got from module B is not enough to uniquely find the exact same entry. I don't think that module A giving module B more information, which it doesn't need to do its job but which it could then pass on to module C, would be good practice, because this would mean giving a module information it doesn't really need. What do you think?

    Read the article

  • Linking to a competitor with the same keyword I am targeting: good or bad for SEO?

    - by Badal Surana
    I am linking to one of my competitors from my site for the same keyword I am targeting for my own site (my competitor is paying me for that). For example: my competitor and I are both targeting the keyword "foo", and my competitor is paying me to link to his site from my site with the keyword "foo". What I want to know is: if I do that, will my site's position go down in Google search results, or will it make no difference?

    Read the article

  • Multiple classes in a single .cs file - good or bad?

    - by Sergio
    Is it advisable to create multiple classes within a .cs file or should each .cs file have an individual class? For example: public class Items { public class Animal { } public class Person { } public class Object { } } Dodging the fact for a minute that this is a poor example of good architecture, is having more than a single class in a .cs file a code smell?

    Read the article

  • Why do I have to add a PPA twice (once to add it to the list of repos, and a second time to fix a BAD GPG error)?

    - by Luis Alvarado
    I notice the following: I add a PPA using add-apt-repository, for example the Wine PPA, Mozilla security, NVIDIA drivers, etc. When I go to the Update Manager and tell it to CHECK for updates, it throws me a PPA error. To solve the error I add the same PPA again. Why do I have to add the PPA again? (This can also be fixed by adding the missing key on its own with apt-key.) But why does this problem happen in the first place?
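
    For context, the usual fix for the BAD GPG / NO_PUBKEY error is to fetch the missing signing key once rather than re-adding the whole PPA. A small sketch; the key ID below is only an example and should be taken from the actual error message:

        # The key ID appears at the end of the "NO_PUBKEY ..." line in the error
        sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1234ABCD5678EF90
        sudo apt-get update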

    Read the article

  • I don't program in my spare time. Does that make me a bad developer?

    - by not-my-real-name
    A lot of blogs and advice on the web seem to suggest that in order to become a great developer, doing just your day job is not enough. For example, you should contribute to open source projects in your spare time, write smartphone apps, etc. In fact, a lot of this advice seems to suggest that if you don't love programming enough to do it all day long then you're probably in the wrong career. That doesn't ring true with me. I enjoy my work, but when I come home from the office I'm not in the mood to jump straight back onto the computer and start coding away until bedtime. I only have a certain number of hours of free time each day, and I'd rather spend them on other hobbies, seeing friends or going outside than in front of the computer. I do get a kick out of programming, and do hack around outside of work occasionally. I'm committed to my personal development and spend time reading tech blogs and books as a way to keep learning and becoming better. But that doesn't extend as far as wanting to use all my spare time for coding. Does this mean I'm not a 'true' software developer at heart? Is it possible to become a good software developer without doing extra work outside your job? I'd be very interested to hear what you think.

    Read the article

  • Is using SVN for development and CM a bad practice?

    - by GatorGuy
    I have a bit of experience with SVN as a pure programmer/developer. Within my company, however, we use SVN as our configuration management tool. I thought using SVN for development at the same time was OK since we could use branches and the trunk for dev, and tags for releases. To me, the tags were the CM part, and the branches/trunk were the dev part. Recently a person who develops high-level code (but outside of the "pure SW" group) mentioned that the existing philosophy (mixing SVN for dev and CM) was wrong... in his opinion. His reasoning is that he thinks the company's CM tool should always link to runnable SW (so branches would break this rule). He also mentioned that a CM tool shouldn't be a backup utility for daily or incremental commits. Finally, he doesn't like the idea of having to jump from revision 143 to 89 in order to get a working copy... and further that CM tools shouldn't allow reversion to a broken state. In general he wants to separate the CM and backup/dev utilities that SVN offers. Honestly, I am new and the person with this perspective is one of seniority, experience, and success, so I want to field this dilemma with the stackoverflow userbase to see if his approach has merit. My question: Should SVN be purely used for development, and another tool for CM (or vice versa)? Why? If so, what tools would you suggest for this combo? Or do you think that integrating both CM and dev into SVN is the best approach? Why? Thanks.
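
    For what it's worth, the "CM should only ever point at runnable software" concern can often be met inside SVN itself by tagging only known-good revisions and letting the CM/deployment side touch nothing but tags. A rough sketch; repository URLs and revision numbers here are illustrative:

        # Cut a release tag from a trunk revision that is known to build and run
        svn copy -r 143 http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/tags/release-1.2.0 \
                 -m "Tag release 1.2.0 from trunk@143"

        # A deployment or CM checkout then only ever uses tags, never trunk or branches
        svn checkout http://svn.example.com/repo/tags/release-1.2.0 release-1.2.0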

    Read the article

  • OO Software Architecture - base class that everything inherits from. Bad/good idea?

    - by ale
    I am reviewing a proposed OO software architecture that looks like this: Base Foo Something Bar SomethingElse Where Base is a static class. My immediate thought was that every object in any class will inherit all the methods in Base which would create a large object. Could this cause problems for a large system? The whole architecture is hierarchical.. the 'tree' is much bigger than this really. Does this sort of architecture have a name (hierarchical?!). What are the known pros and cons?

    Read the article

  • Is it a bad idea to ask an interviewer what the greatest strength and weakness of their development team is?

    - by epignosisx
    I was wondering if this was a good question to ask a possible employer when interviewing for a developer position: What is the greatest strength and weakness of your development team? We all get this question when we are in an interview, so why not ask them in return? I think it is a very good question because we could find out about the team, and how this strength or weakness could affect us, but I don't want to annoy the interviewer. Is there any downside to asking this question when interviewing for a developer position?

    Read the article

  • Is C++ really a bad language for beginners? [duplicate]

    - by Chris
    This question already has an answer here: Is C++ suitable as a first language? 24 answers I'm learning C++ right now, and it's the first language I'm learning. I keep seeing on stackexchange and other forums (Reddit, etc.) that I should drop C++ and learn a higher level language like Python or Java. The only arguments I see are that "C++ is harder to learn and is more low-level than others." which don't really give a reason NOT to learn it. I want to know if there are any actual reasons for dropping C++ and taking up another, "easier" language. Or if I should keep focusing on it, and just learn others later (which is what I plan on doing).

    Read the article

  • Space in an img's "alt" attribute: good or bad for search engines?

    - by Camran
    I am trying to make it easier for search engines to crawl my website, as it is almost 100% dynamic. I have a couple of transparent images which are actually links to sections of my page. I wonder: if I add an "alt" attribute containing space characters to explain the target, will this improve SE rankings, etc.? For example: <img src="blabla.png" alt="post new classified"> Or will this just result in errors? And, what should I put in the alt attribute if I can't use spaces? PS: Another, shorter question: will JavaScript-rich content make a page less important to crawlers? Thanks

    Read the article

  • Why did this command ":(){ :|: & };:" make my system lag so badly that I had to reboot?

    - by blade19899
    Danger! Do not run this command to "test" it unless you are prepared for a crash and/or force-rebooting your system. I was in my VirtualBox running 12.04 trying to compile an app, and while waiting I happened to chance upon a forum where a comment said: Try :(){ :|: & };: Fun, too, and doesn't need root. Without thinking, I ran it in my gnome-terminal, and it made my 12.04 VirtualBox lag so badly that I had to shut it down. My question is: what does this command do? :(){ :|: & };:
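
    For readers wondering what they are looking at: this is the classic fork bomb, just written with a function named ":". Below is a rough, annotated equivalent (do not run this one either), plus one common way to limit the damage; the process limit shown is only an example value:

        # The one-liner spelled out with a readable function name (still a fork bomb!)
        bomb() {
            bomb | bomb &    # the function calls itself twice, in the background,
        }                    # so the number of processes doubles until the system chokes
        bomb

        # Damage limitation: cap the number of processes a user may have.
        # This applies to the current shell; /etc/security/limits.conf makes it permanent.
        ulimit -u 2000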

    Read the article

  • Bad previous code. To fix or not to fix?

    - by Viniyo Shouta
    As a freelance programmer I am often asked to edit part of an application's source code in order to add functionality, fix bugs, etc. While I'm on my adventure journey studying the source to do what I'm asked correctly, I run into code like:

        World::User* GetWorld() {
            map<DWORD,World*>::iterator it = mapWld.find( m_userWorldId );
            if( it != mapWld.end() )
                return &it->second;
            return NULL;
        }

        if( pUser->GetWorld()->GetId() == 250 )

    If I investigate further I end up finding that the DWORD class member of User, userWorldId, can be a value not found in the map mapWld, which will lead to a casualty, also known as a crash! The obviously valid way to do it is:

        World* pWorld = pUser->GetWorld();
        if( pWorld && pWorld->GetId() == 250 ) //...

    Sometimes when it's something 'small' I end up sort of 'fixing' it. But sometimes when I'm in a 500 thousand line source code and this kind of code is everywhere, there is not much I can do. The question is whether it's politically correct to fix some of these things. Think of it: You are not paid to fix it. Perhaps you think it's right, but it was necessarily done that way for some reason and you should not be messing with it. You do not have authorization, you do not own the source, and none of the copyrights belong to you. You have authorization to edit issues according to the owners, but you're in a hurry, you have many other projects to do, it's the end of the month, you must pay the bills. Sincerely, I think of it as seeing an animal die from a disease in front of you: you have the cure in your hands but you do nothing. What is the best thing to do in this scenario?

    Read the article

  • Storing a pass-by-reference parameter as a pointer - Bad practice?

    - by Karl Nicoll
    I recently came across the following pattern in an API I've been forced to use: class SomeObject { public: // Constructor. SomeObject(bool copy = false); // Set a value. void SetValue(const ComplexType &value); private: bool m_copy; ComplexType *m_pComplexType; ComplexType m_complexType; }; // ------------------------------------------------------------ SomeObject::SomeObject(bool copy) : m_copy(copy) { } // ------------------------------------------------------------ void SomeObject::SetValue(const ComplexType &value) { if (m_copy) m_complexType.assign(value); else m_pComplexType = const_cast<ComplexType *>(&value); } The background behind this pattern is that it is used to hold data prior to it being encoded and sent to a TCP socket. The copy weirdness is designed to make the class SomeObject efficient by only holding a pointer to the object until it needs to be encoded, but also provide the option to copy values if the lifetime of the SomeObject exceeds the lifetime of a ComplexType. However, consider the following: SomeObject SomeFunction() { ComplexType complexTypeInstance(1); // Create an instance of ComplexType. SomeObject encodeHelper; encodeHelper.SetValue(complexTypeInstance); // Okay. return encodeHelper; // Uh oh! complexTypeInstance has been destroyed, and // now encoding will venture into the realm of undefined // behaviour! } I tripped over this because I used the default constructor, and this resulted in messages being encoded as blank (through a fluke of undefined behaviour). It took an absolute age to pinpoint the cause! Anyway, is this a standard pattern for something like this? Are there any advantages to doing it this way vs overloading the SetValue method to accept a pointer that I'm missing? Thanks!

    Read the article

  • Is there ever a time when creating a level/world editor with your game is a bad idea?

    - by Borgel
    I have created a few smaller games on my own in the past. My approach has always been to create a complete editor that has all the functionality needed to save a level file and load it into the game. This has always made the most sense to me, but I keep hearing from people that a game is never fully done in the editor. I have never worked on a game development team and so I don't have first-hand experience, but not adding everything needed to make the game into the editor just seems wrong. Am I missing something? Is there ever a reason not to add a tool to the editor?

    Read the article

  • Is it bad practice to run Node.js and Apache in parallel?

    - by Camil Staps
    I have an idea in mind and would like to know if that's the way to go for my end application. Think of my application as a social networking system in which I want to implement chat functionality. For that, I'd like to push data from the server to the client, and I have heard I could use Node.js for that. In the meanwhile, I want a 'regular' system for posting status updates and such, for which I'd like to use PHP and an Apache server. The only way I can think of is having Node.js and Apache running in parallel. But is that the way to go here? I'd think there would be a somewhat neater solution for this.

    Read the article

  • Using <strong> for introductory paragraph to a post - a bad idea?

    - by user1515699
    I have a news website, and on most posts the first paragraph is in bold. Currently the authors just use <strong> to bold the paragraph; would it be better from an SEO point of view to use a paragraph class instead, styled with p.bold {font-weight:bold;} and <p class="bold">? Does <strong> on the first paragraph send the wrong message to search engines? The text is important, but the main reason it is in bold is that it is the opening paragraph. I realise <strong> is used to emphasise certain words on a page.

    Read the article

  • Does the direction of storage make us bad data citizens?

    - by simonsabin
      My career started at a company where we hardly had email, the network was a 10base2 affair with cables running all around the office. You used floppy disks and the thought of a GB of data was absurd. You had to look after every byte and only keep what you really needed. Whilst the cost of the spinning disks gradually falls the cost and size of flash storage continues to plummet. The new Crucial SSD is £380 for 1TB I can now keep 128GB of data on a SD card the size of my finger. It only costs...(read more)

    Read the article

  • Can an employer turn you down if you have said that your current work culture is bad? [closed]

    - by MansonRix
    I recently had an interview where I scored well in the first two rounds of technical interviews. Then the third round was the managerial round, where the interviewer asked about my experience and whether I had captured any requirements and handled or trained any teams. This went pretty well for around 50 minutes. Then there was the awkward question. Interviewer: why am I looking for a change? Me: because I want to explore my career options. Interviewer: but your current company is big enough and you can explore options over there? (This was supposedly the trap.) Me: apart from that, I am missing the flexibility of working with a US- and Europe-based company, as my current company is not that flexible. Interviewer: what exactly do you not find flexible? Me: the login time. Even if you are late by one second you might have to explain. Though this is not a big problem, I would still prefer flexibility, as we are working really hard. Interviewer: alright (then a couple more questions), hope to see you. And that's pretty much it. Now I called up HR and they say they are yet to get the feedback from the interviewer. Did I screw it up? I mean, does one really always have to pretend by saying positive things about the company and manager and never saying anything negative?

    Read the article

  • Is `catch(...) { throw; }` a bad practice?

    - by ereOn
    While I agree that catching ... without rethrowing is indeed wrong, I believe that using constructs like this: try { // Stuff } catch (...) { // Some cleanup throw; } is acceptable in cases where RAII is not applicable. (Please, don't ask... not everybody in my company likes object-oriented programming, and RAII is often seen as "useless school stuff"...) My coworkers say that you should always know which exceptions are to be thrown and that you can always use constructs like: try { // Stuff } catch (exception_type1&) { // Some cleanup throw; } catch (exception_type2&) { // Some cleanup throw; } catch (exception_type3&) { // Some cleanup throw; } Is there a well-established good practice regarding these situations?

    Read the article

  • Resizing a LUKS encrypted volume

    - by mgorven
    I have a 500GiB ext4 filesystem on top of LUKS on top of an LVM LV. I want to resize the LV to 100GiB. I know how to resize ext4 on top of an LVM LV, but how do I deal with the LUKS volume? mgorven@moab:~% sudo lvdisplay /dev/moab/backup --- Logical volume --- LV Name /dev/moab/backup VG Name moab LV UUID nQ3z1J-Pemd-uTEB-fazN-yEux-nOxP-QQair5 LV Write Access read/write LV Status available # open 1 LV Size 500.00 GiB Current LE 128000 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 2048 Block device 252:3 mgorven@moab:~% sudo cryptsetup status backup /dev/mapper/backup is active and is in use. type: LUKS1 cipher: aes-cbc-essiv:sha256 keysize: 256 bits device: /dev/mapper/moab-backup offset: 3072 sectors size: 1048572928 sectors mode: read/write mgorven@moab:~% sudo tune2fs -l /dev/mapper/backup tune2fs 1.42 (29-Nov-2011) Filesystem volume name: backup Last mounted on: /srv/backup Filesystem UUID: 63877e0e-0549-4c73-8535-b7a81eb363ed Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 32768000 Block count: 131071616 Reserved block count: 0 Free blocks: 112894078 Free inodes: 32044830 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 992 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stride: 128 RAID stripe width: 128 Flex block group size: 16 Filesystem created: Sun Mar 11 19:24:53 2012 Last mount time: Sat May 19 13:29:27 2012 Last write time: Fri Jun 1 11:07:22 2012 Mount count: 0 Maximum mount count: 100 Last checked: Fri Jun 1 11:03:50 2012 Check interval: 31104000 (12 months) Next check after: Mon May 27 11:03:50 2013 Lifetime writes: 118 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 383bcbc5-fde9-4720-b98e-2d6224713ecf Journal backup: inode blocks
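
    For the record, one commonly used order of operations for shrinking this kind of stack is filesystem first, then LV, then the LUKS mapping. A rough sketch based on the names in the question; the 90G safety margin is illustrative, and a backup beforehand is strongly advised:

        umount /srv/backup
        e2fsck -f /dev/mapper/backup
        resize2fs /dev/mapper/backup 90G      # shrink the fs below the final size first
        lvreduce -L 100G /dev/moab/backup     # then shrink the LV to the target size
        cryptsetup resize backup              # shrink the LUKS mapping to fit the smaller LV
        resize2fs /dev/mapper/backup          # grow the fs back out to fill the mapping
        e2fsck -f /dev/mapper/backup
        mount /dev/mapper/backup /srv/backup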

    Read the article

  • Ext3 fs: Block bitmap for group 1 not in group (block 0). Is the fs dead?

    - by ip
    My company has a server with one big partition holding a MySQL database and PHP files. Now this partition seems to be corrupted, as reported by kernel messages when I tried to mount it manually: [329862.817837] EXT3-fs error (device loop1): ext3_check_descriptors: Block bitmap for group 1 not in group (block 0)! [329862.817846] EXT3-fs: group descriptors corrupted! I've tried to recover it by running tools from a PLD live CD. These are the tools I have tested, without any success: e2retrieve, testdisk, photorec, dd_rescue/dd_rhelp, ddrescue, fsck.ext2, e2salvage. dumpe2fs 1.41.3 (12-Oct-2008) Filesystem volume name: /dev/sda3 Last mounted on: <not available> Filesystem UUID: dd51610b-6de0-4392-a6f3-67160dbc0343 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal filetype sparse_super Default mount options: (none) Filesystem state: not clean with errors Errors behavior: Continue Filesystem OS type: Linux Inode count: 9502720 Block count: 18987570 Reserved block count: 949378 Free blocks: 11555345 Free inodes: 11858398 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 16384 Inode blocks per group: 512 Last mount time: Wed Mar 24 09:31:03 2010 Last write time: Mon Apr 12 11:46:32 2010 Mount count: 10 Maximum mount count: 30 Last checked: Thu Jan 1 01:00:00 1970 Check interval: 0 (<none>) Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 128 Journal inode: 8 Journal backup: inode blocks dumpe2fs: A block group is missing an inode table while reading journal inode e2fsck 1.41.3 (12-Oct-2008) fsck.ext3: Group descriptors look bad... trying backup blocks... fsck.ext3: A block group is missing an inode table while checking ext3 journal for /dev/sda3 I also tried the backup superblocks, with the same error. Are there any other tools I should try before considering this disk definitely unrecoverable? Many thanks, ip
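
    In case it helps later readers, the standard sequence for trying the backup superblocks (which the asker says already failed) is to let mke2fs list where they live and then point e2fsck at each one in turn; a sketch assuming the 4096-byte block size reported above:

        # -n only simulates mkfs and prints where the superblock backups would be placed
        mke2fs -n /dev/sda3

        # Then try fsck against each of those backup locations, e.g.:
        e2fsck -b 32768 -B 4096 /dev/sda3
        e2fsck -b 98304 -B 4096 /dev/sda3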

    Read the article

  • How can I move an ext3 partition to the beginning of the drive without losing data?

    - by Felipe Alvarez
    I have a 500GB external drive. It had two partitions, each around 250GB. I removed the first partition. I'd like to move the 2nd to the left, so it consumes 100% of the drive. How can this be accomplished without any GUI tools (CLI only)? fdisk Disk /dev/sdd: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xc80b1f3d Device Boot Start End Blocks Id System /dev/sdd2 29374 60801 252445410 83 Linux parted Model: ST350032 0AS (scsi) Disk /dev/sdd: 500GB Sector size (logical/physical): 512B/512B Partition Table: msdos Number Start End Size Type File system Flags 2 242GB 500GB 259GB primary ext3 type=83 dumpe2fs Filesystem volume name: extstar Last mounted on: <not available> Filesystem UUID: f0b1d2bc-08b8-4f6e-b1c6-c529024a777d Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal dir_index filetype needs_recovery sparse_super large_file Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 15808608 Block count: 63111168 Reserved block count: 0 Free blocks: 2449985 Free inodes: 15799302 First block: 0 Block size: 4096 Fragment size: 4096 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8208 Inode blocks per group: 513 Filesystem created: Mon Feb 15 08:07:01 2010 Last mount time: Fri May 21 19:31:30 2010 Last write time: Fri May 21 19:31:30 2010 Mount count: 5 Maximum mount count: 29 Last checked: Mon May 17 14:52:47 2010 Check interval: 15552000 (6 months) Next check after: Sat Nov 13 14:52:47 2010 Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: d0363517-c095-4f53-baa7-7428c02fbfc6 Journal backup: inode blocks Journal size: 128M
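
    One CLI-only approach, sketched below, is to copy the filesystem toward the front of the disk with dd and then rewrite the partition table. The start sector and the new partition's first sector are illustrative and must be recomputed from your own fdisk -lu output; anything going wrong here destroys the data, so image the drive first if at all possible:

        umount /dev/sdd2
        e2fsck -f /dev/sdd2

        fdisk -lu /dev/sdd        # note /dev/sdd2's start in *sectors*
        START=471877245           # placeholder -- substitute the real start sector

        # Destination (sector 63) is below the source, so a forward copy never
        # overwrites data it has not yet read. bs=512 keeps skip/seek in sectors.
        dd if=/dev/sdd of=/dev/sdd bs=512 skip=$START seek=63 conv=notrunc

        # In fdisk (with -u for sector units): delete partition 2, create a new
        # primary partition 1 starting at sector 63 and ending at the end of the disk.
        fdisk -u /dev/sdd

        e2fsck -f /dev/sdd1
        resize2fs /dev/sdd1       # grow the fs into the now-larger partition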

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (eg, uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion but the data rates are significantly different as well as the tipping points for the various block sizes… this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files for the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially and thereby spreading the affects of periodic Internet delays across the collection of results.  We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent ensuring that the bits that arrived matched was was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the affects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and is supported in the raw data provided in the linked worksheet) the charts and dialog below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size) For some of the moderately-sized source files, small blocks (256KB) are best As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant The 1MB block size gives the best average improvement (~16x) but the optimal approach would be to vary the block size based on the size of the source file.    (click chart for full size image) The above is another view of the same data as the prior chart just with the axis changed (x-axis represents file size and plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size but highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.   Summary What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.   Related Resources Source code for upload test application Source code for random file generator OData feed of raw data from non-optimized transfer tests Experiment Metadata Experiment Datasets 2KB Uploads 32KB Uploads 64KB Uploads 128KB Uploads 256KB Uploads 512KB Uploads 1MB Uploads 5MB Uploads 10MB Uploads 25MB Uploads 50MB Uploads 100MB Uploads 250MB Uploads 500MB Uploads 750MB Uploads 1GB Uploads Raw Data OData feeds of raw data from blocked/parallelized transfer tests Experiment Metadata Experiment Datasets Raw Data 256KB Blocks 512KB Blocks 1MB Blocks 2MB Blocks 4MB Blocks Excel worksheet showing summarizations and comparisons
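
    To make the core recommendation concrete, a very rough shell-level analogue of "block the file and upload the pieces in parallel" might look like the following; the endpoint and its query parameter are purely illustrative, and the real Azure PutBlock/PutBlockList calls additionally need authentication, base64 block IDs, and the final commit step described in the post:

        # Split the source file into 1 MiB blocks (the post's best average block size)
        split -b 1M -d -a 5 largefile.bin block_

        # Push the blocks with up to 8 transfers in flight at a time
        ls block_* | xargs -P 8 -I{} \
            curl -fsS -X PUT --data-binary @{} "https://example.invalid/upload?block={}"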

    Read the article
