Search Results

Search found 22986 results on 920 pages for 'allocation unit size'.


  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basic pros and cons of larger and smaller sizes (performance boost vs. space conservation), but it seems the characteristics of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the main disadvantage of SSDs (massively higher price per GB). Is there a way to determine the 'cost' of smaller allocation sizes, specifically as it relates to seek times? Are there any studies or articles recommending a change from the default based on this newer technology? (Assume an average scattering of file sizes: program files, OS files, data, mp3s, text files, etc.)

    Read the article

  • GMP variable's bit size

    - by kishorebjv
    How can I find out the size of a declared variable in GMP, or how can I decide the size of an integer in GMP? For mpz_random(temp, 1); the manual says this function allocates one limb (= 32 bits on my computer) to temp, but temp only ever holds a 9-digit number. I don't believe a 32-bit number can hold only 9 digits, so please help me understand how to determine the size of an integer variable in GMP. Thanks in advance.
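
    For what it's worth, GMP can report a value's size directly via mpz_sizeinbase(); a minimal sketch (not from the original post):

        #include <stdio.h>
        #include <gmp.h>

        int main(void)
        {
            mpz_t temp;
            mpz_init_set_str(temp, "123456789", 10);   /* a 9-digit example value */

            /* mpz_sizeinbase() reports how many digits the value needs in a given
               base: base 2 gives the bit length, base 10 the decimal digit count. */
            printf("bits:   %zu\n", mpz_sizeinbase(temp, 2));
            printf("digits: %zu\n", mpz_sizeinbase(temp, 10));

            mpz_clear(temp);
            return 0;
        }

    Note that a limb is the unit of allocation, not the magnitude of the stored value: a one-limb mpz_t on a 32-bit machine can hold anything from 0 up to 2^32 - 1, so a 9-digit result fits comfortably.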

    Read the article

  • Text box size is different in IE6 and Firefox 3.6

    - by user299873
    I am facing issues with text box size when viewing in Firefox 3.6:

        <input class="dat" type="text" name="rejection_reason" size="51" maxlength="70" onchange="on_change();">

    The style is:

        .dat {
            font-family: verdana, arial, helvetica;
            font-size: 8pt;
            font-weight: bold;
            text-align: left;
            vertical-align: middle;
            background-color: White;
        }

    The text box in Firefox is a bit smaller than in IE6. I am not sure why IE6 and Firefox display the text box at different sizes.
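
    A commonly suggested workaround (an assumption on my part, not from the original thread): each browser computes the size attribute against its own default font metrics, so the rendered widths differ. Giving the input an explicit CSS width takes the size attribute out of the equation:

        .dat {
            width: 340px;  /* hypothetical value; an explicit width renders the same in both browsers */
        }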

    Read the article

  • Changing text size on a ggplot bump plot

    - by Tom Liptrot
    Hi, I'm fairly new to ggplot. I have made a bump plot using the code posted below; I got the code from someone's blog, but I've lost the link. I want to be able to increase the size of the labels (here, letters, which are very small on the left and right of the plot) without affecting the width of the lines (this will only really make sense after you have run the code). I have tried changing the size parameter, but that always alters the line width as well. Any suggestions appreciated. Tom

        require(ggplot2)  # this vintage of ggplot2 also pulls in reshape's melt()
        df <- matrix(rnorm(520), 5, 10)  # makes a random example
        colnames(df) <- letters[1:10]
        Price.Rank <- t(apply(df, 1, rank))
        dfm <- melt(Price.Rank)
        names(dfm) <- c("Date", "Brand", "value")
        p <- ggplot(dfm, aes(factor(Date), value, group = Brand, colour = Brand, label = Brand))
        p1 <- p + geom_line(aes(size = 2.2, alpha = 0.7)) +
          geom_text(data = subset(dfm, Date == 1), aes(x = Date, size = 2.2, hjust = 1, vjust = 0)) +
          geom_text(data = subset(dfm, Date == 5), aes(x = Date, size = 2.2, hjust = 0, vjust = 0)) +
          theme_bw() + opts(legend.position = "none", panel.border = theme_blank())
        p1 + theme_bw() + opts(legend.position = "none", panel.border = theme_blank())
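
    The likely culprit (my diagnosis, not from the original post): putting size inside aes() maps it as an aesthetic rather than setting it, so one size scale ends up driving both lines and text. A sketch of the fix: set size and alpha outside aes(), and give geom_text its own fixed size (6 here is an arbitrary choice):

        p + geom_line(size = 2.2, alpha = 0.7) +
          geom_text(data = subset(dfm, Date == 1), size = 6, hjust = 1, vjust = 0) +
          geom_text(data = subset(dfm, Date == 5), size = 6, hjust = 0, vjust = 0) +
          theme_bw() + opts(legend.position = "none", panel.border = theme_blank())

    The labels inherit label = Brand from the main aes(), and the line width now stays at 2.2 regardless of the text size.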

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, I get 32 bytes of data output. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of padding. If I use Zeros padding for 16 bytes of data, I get out 16 bytes of data; so for Zeros padding, if the data matches the block size, it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size?
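
    For the record, the PKCS7 behavior is by design: the last byte of the decrypted plaintext must always encode the padding length so the padding can be removed unambiguously, so a plaintext that exactly fills a block gets a complete extra block of 0x10 bytes (Zeros padding skips this, which is also why it cannot be stripped reliably). A minimal sketch demonstrating the difference:

        using System;
        using System.Security.Cryptography;

        class PaddingDemo
        {
            static int CipherLength(PaddingMode mode)
            {
                // Key and IV are auto-generated on first use.
                using (var aes = new RijndaelManaged { Padding = mode })
                using (var enc = aes.CreateEncryptor())
                {
                    byte[] data = new byte[16];  // exactly one block of plaintext
                    return enc.TransformFinalBlock(data, 0, data.Length).Length;
                }
            }

            static void Main()
            {
                Console.WriteLine(CipherLength(PaddingMode.PKCS7));  // 32: one block + one full padding block
                Console.WriteLine(CipherLength(PaddingMode.Zeros));  // 16: nothing appended
            }
        }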

    Read the article

  • GWT - Retrieve size of a widget that is not displayed

    - by Garagos
    I need to set the size of an AbsolutePanel according to its child's size, but the getOffset* methods return 0 because (I think) the child has not been displayed yet. A quick example:

        AbsolutePanel aPanel = new AbsolutePanel();
        HTML text = new HTML(/* variable length text */);
        int xPosition = 20; // actually variable
        aPanel.add(text, xPosition, 0);
        aPanel.setSize(xPosition + text.getOffsetWidth() + "px", "50px"); // 20px 50px

    I could also solve my problem by using the AbsolutePanel's size to set the child's position and size:

        AbsolutePanel aPanel = new AbsolutePanel();
        aPanel.setSize("100%", "50px");
        HTML text = new HTML(/* variable length text */);
        int xPosition = aPanel.getOffsetWidth() / 3; // once again, getOffsetWidth() returns 0
        aPanel.add(text, xPosition, 0);

    In both cases, I have to find a way to either retrieve the size of a widget that has not been displayed, or be notified when a widget is displayed.
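
    One workaround I have seen for this (a hedged sketch, not from the original post): offset sizes only become available once the element is attached to the DOM, so attach the widget invisibly, measure it, then place it for real:

        // imports: com.google.gwt.user.client.ui.RootPanel, com.google.gwt.dom.client.Style
        HTML text = new HTML(/* variable length text */);

        // Attach it invisibly so the browser performs layout.
        text.getElement().getStyle().setVisibility(Style.Visibility.HIDDEN);
        RootPanel.get().add(text);
        int width = text.getOffsetWidth();   // non-zero now that it is in the DOM

        // Detach, restore visibility, and position it where it belongs.
        text.removeFromParent();
        text.getElement().getStyle().setVisibility(Style.Visibility.VISIBLE);
        aPanel.add(text, xPosition, 0);
        aPanel.setSize((xPosition + width) + "px", "50px");

    The alternative is to defer the measurement until after the widget is attached, e.g. from a deferred command once the event loop has rendered it.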

    Read the article

  • Unit testing a controller in ASP.NET MVC 2 with RedirectToAction

    - by Rob Walker
    I have a controller that implements a simple Add operation on an entity and redirects to the Details page:

        [HttpPost]
        public ActionResult Add(Thing thing)
        {
            // ... do validation, db stuff ...
            return this.RedirectToAction(c => c.Details(thing.Id));
        }

    This works great (using the RedirectToAction from the MvcContrib assembly). When I'm unit testing this method, I want to access the ViewData that is returned from the Details action (so I can get the newly inserted thing's primary key and prove it is now in the database). The test has:

        var result = controller.Add(thing);

    But result here is of type System.Web.Mvc.RedirectToRouteResult (which is a System.Web.Mvc.ActionResult); it hasn't yet executed the Details method. I've tried calling ExecuteResult on the returned object, passing in a mocked-up ControllerContext, but the framework wasn't happy with the lack of detail in the mocked object. I could try filling in the details, etc., but then my test code is way longer than the code I'm testing and I feel I need unit tests for the unit tests! Am I missing something in the testing philosophy? How do I test this action when I can't get at its returned state?
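
    For what it's worth, the common pattern (a sketch, assuming xUnit.net asserts; not from the original post) is to treat the redirect itself as the unit under test, asserting on its route values, and then invoke Details directly as a second step rather than executing the redirect:

        var redirect = controller.Add(thing) as RedirectToRouteResult;
        Assert.NotNull(redirect);
        Assert.Equal("Details", redirect.RouteValues["action"]);

        // Follow the redirect by hand: call Details with the id the redirect carries.
        var id = (int)redirect.RouteValues["id"];
        var details = controller.Details(id) as ViewResult;
        Assert.NotNull(details.ViewData.Model);  // the newly inserted Thing

    This keeps each action's behavior covered without mocking up a full ControllerContext.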

    Read the article

  • Zend Framework: How to start PHPUnit testing Forms?

    - by Andrew
    I am having trouble getting my filters/validators to work correctly on my form, so I want to create a unit test to verify that the data I am submitting to my form is being filtered and validated correctly. I started by auto-generating a PHPUnit test in Zend Studio, which gives me this:

        <?php
        require_once 'PHPUnit/Framework/TestCase.php';

        /**
         * Form_Event test case.
         */
        class Form_EventTest extends PHPUnit_Framework_TestCase
        {
            /**
             * @var Form_Event
             */
            private $Form_Event;

            /**
             * Prepares the environment before running a test.
             */
            protected function setUp()
            {
                parent::setUp();
                // TODO Auto-generated Form_EventTest::setUp()
                $this->Form_Event = new Form_Event(/* parameters */);
            }

            /**
             * Cleans up the environment after running a test.
             */
            protected function tearDown()
            {
                // TODO Auto-generated Form_EventTest::tearDown()
                $this->Form_Event = null;
                parent::tearDown();
            }

            /**
             * Constructs the test case.
             */
            public function __construct()
            {
                // TODO Auto-generated constructor
            }

            /**
             * Tests Form_Event->init()
             */
            public function testInit()
            {
                // TODO Auto-generated Form_EventTest->testInit()
                $this->markTestIncomplete("init test not implemented");
                $this->Form_Event->init(/* parameters */);
            }

            /**
             * Tests Form_Event->getFormattedMessages()
             */
            public function testGetFormattedMessages()
            {
                // TODO Auto-generated Form_EventTest->testGetFormattedMessages()
                $this->markTestIncomplete("getFormattedMessages test not implemented");
                $this->Form_Event->getFormattedMessages(/* parameters */);
            }
        }

    So then I open up a terminal, navigate to the directory, and try to run the test:

        $ cd my_app/tests/unit/application/forms
        $ phpunit EventTest.php
        Fatal error: Class 'Form_Event' not found in .../tests/unit/application/forms/EventTest.php on line 19

    So then I add a require_once at the top to include my Form class and try it again. Now it says it can't find another class. I include that one and try it again; then it says it can't find another class, and another class, and so on, through all of these dependencies on other Zend_Form classes. What should I do? How should I go about testing my form to make sure my validators and filters are attached correctly, and that it does what I expect it to do? Or am I thinking about this the wrong way?
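
    The usual way out of the require_once chain (a sketch, assuming a standard ZF1 layout; the paths here are hypothetical) is a test bootstrap that registers Zend_Loader_Autoloader, so Zend_Form and its dependencies resolve automatically:

        <?php
        // tests/bootstrap.php
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(dirname(__FILE__) . '/../library'),            // Zend library
            realpath(dirname(__FILE__) . '/../application/forms'),  // Form_Event et al.
            get_include_path(),
        )));

        require_once 'Zend/Loader/Autoloader.php';
        // Fallback mode lets the autoloader resolve non-Zend_ prefixes like Form_.
        Zend_Loader_Autoloader::getInstance()->setFallbackAutoloader(true);

    Then point PHPUnit at it:

        $ phpunit --bootstrap ../../../bootstrap.php EventTest.php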

    Read the article

  • How to implement or emulate an "abstract" OCUnit test class?

    - by Quinn Taylor
    I have a number of Objective-C classes organized in an inheritance hierarchy. They all share a common parent which implements all the behaviors shared among the children. Each child class defines a few methods that make it work, and the parent class raises an exception for the methods designed to be implemented/overridden by its children. This effectively makes the parent a pseudo-abstract class (since it's useless on its own) even though Objective-C doesn't explicitly support abstract classes. The crux of this problem is that I'm unit testing this class hierarchy using OCUnit, and the tests are structured similarly: one test class that exercises the common behavior, with a subclass corresponding to each of the child classes under test. However, running the test cases on the (effectively abstract) parent class is problematic, since the unit tests will fail in spectacular fashion without the key methods. (The alternative of repeating the common tests across 5 test classes is not really an acceptable option.) The non-ideal solution I've been using is to check (in each test method) whether the instance is the parent test class, and bail out if it is. This leads to repeated code in every test method, a problem that becomes increasingly annoying if one's unit tests are highly granular. In addition, all such tests are still executed and reported as successes, skewing the number of meaningful tests that were actually run. What I'd prefer is a way to signal to OCUnit "Don't run any tests in this class, only run them in its child classes." To my knowledge, there isn't (yet) a way to do that, something similar to a +(BOOL)isAbstractTest method I can implement/override. Any ideas on a better way to solve this problem with minimal repetition? Does OCUnit have any ability to flag a test class in this way, or is it time to file a Radar? Edit: Here's a link to the test code in question. Notice the frequent repetition of if (...) return; to start a method, including use of the NonConcreteClass() macro for brevity.
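
    One approach I've seen for exactly this situation (a hedged sketch; AbstractThingTest is a hypothetical name, and it assumes SenTestSuite's -initWithName: initializer): override +defaultTestSuite in the abstract class so that it contributes an empty suite for itself but behaves normally in subclasses:

        + (id)defaultTestSuite
        {
            if (self == [AbstractThingTest class]) {
                // Abstract base: contribute no tests of its own.
                return [[[SenTestSuite alloc] initWithName:NSStringFromClass(self)] autorelease];
            }
            return [super defaultTestSuite];
        }

    Concrete subclasses inherit the common test methods and run them as usual, while the base class drops out of the run, so no per-method guard (and no skewed success count) is needed.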

    Read the article

  • Where do you take mocking - immediate dependencies, or do you grow the boundaries...?

    - by Peter Mounce
    So, I'm reasonably new to both unit testing and mocking in C# and .NET; I'm using xUnit.net and Rhino Mocks respectively. I'm a convert, and I'm focussing on writing behaviour specifications, I guess, instead of being purely TDD. Bah, semantics; I want an automated safety net to work above, essentially. A thought struck me though. I get programming against interfaces, and the benefits as far as breaking apart dependencies goes there. Sold. However, in my behaviour verification suite (aka unit tests ;-) ), I'm asserting behaviour one interface at a time. As in, one implementation of an interface at a time, with all of its dependencies mocked out and expectations set up. The approach seems to be that if we verify that a class behaves as it should against its collaborating dependencies, and in turn relies on each of those collaborating dependencies to have signed that same quality contract, we're golden. Seems reasonable enough. Back to the thought, though. Is there any value in semi-integration tests, where a test-fixture is asserting against a unit of concrete implementations that are wired together, and we're testing its internal behaviour against mocked dependencies? I just re-read that and I think I could probably have worded it better. Obviously, there's going to be a certain amount of "well, if it adds value for you, keep doing it", I suppose - but has anyone else thought about doing that, and reaped benefits from it outweighing the costs?

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the Logic and Business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage and I believe that we have a pretty solid product. As we have been working through some of the more complex logic I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock object setup process that must take place, just right, to allow for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design or lack of composition to blame for my overly complex tests? I've tried base classing tests, creating commonly used mock objects and ensuring that I utilize the container as much as possible to ease this issue but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons, and I am trying to avoid using no_plan: it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # , ...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();  # same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;  # must do!!! since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {  # same as before
            run_test($test);
        }

        sub run_test { }  # same as before

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration, in BEGIN {} and after it.
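
    For what it's worth, plan() does not have to run at compile time; it only has to be called before the first assertion, which works on Test::More 0.80. That removes both the BEGIN block and the duplicated our declaration (a sketch, with count_tests() and run_test() as in the question):

        use Test::More;  # no plan yet

        my @test_set = (
             [ "Test #1", "param1", "param2" ]
            ,[ "Test #2", "param1", "param2" ]
        );

        # Declare the plan at runtime, before the first test runs.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;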

    Read the article

  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer, currently adding unit tests to an existing C++ project that started years ago. For a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. I.e. for a method Ping() which checks if another module is active, I want to have it return true or false based on what kind of test I'm doing. I've been looking into Google Test and Google Mock, and gmock does support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods. Basically, the methods I want to mock are in a base class, and the modules I want to unit test and create mocks of are derived classes of that base class; there are intermediate modules in between my base Module class and the modules that I want to test. I would appreciate any advice! Thanks, JW. EDIT: a more concrete example. My base class is, let's say, rootModule, and the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from that intermediate module. In leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?
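
    One technique that fits the "can't touch production code" constraint is a link seam (the term is from Michael Feathers): compile the test binary against the real headers, but link a test-only stub in place of the production rootModule.cpp, so every call to the non-virtual method lands in code the test controls. A hedged sketch, assuming a signature like bool GetStatus(const std::string&):

        // test_stubs/rootModule_stub.cpp -- linked into the test binary INSTEAD of
        // the production rootModule.cpp. (Hypothetical file name and signature;
        // adjust to match the real declaration in rootModule.h.)
        #include <string>
        #include "rootModule.h"

        namespace test_seam {
            bool canned_status = true;  // the test sets this before calling doStuff()
        }

        bool rootModule::GetStatus(const std::string& moduleName) {
            (void)moduleName;           // ignored; return the canned value
            return test_seam::canned_status;
        }

    A test then sets test_seam::canned_status = false; and asserts that leafModule::doStuff() reacts accordingly. No production source changes, only a different link line for the test target.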

    Read the article

  • Implementing a musical instrument using Audio Unit

    - by Develop.Kim
    (I posted the same question on the Apple developer forum, too.) Hi, first, sorry that my English is poor. I want to develop an iPhone application that plays a musical instrument, like 'Ocarina', but without the blow-into-the-mic feature. So I first tried to find out how to implement a 'virtual musical instrument' in iPhone development, and after reading this article (link) I decided to implement it using Audio Unit. I have two questions: 1. I understand that an instrument's sound can be divided into three phases: 'attack', 'sustain', and 'release' ('decay' may be included as well) (link). How do I implement each of these phases in an audio unit based on AUInstrumentBase? 2. I downloaded the 'SinSynth' sample (link). I want to play notes with this instrument unit, in order to analyze and study its source. Is there a way to do this using AU Lab? I expect the usual way is MIDI input, but I don't have a MIDI device. In addition, I wonder whether I'm thinking about this the right way. Thanks in advance for any advice, and for reading my poor English.

    Read the article

  • Unit testing a 'legacy' WPF Application

    - by sc_ray
    The product I have been working on has been in development for the past six years. It started as a generic data-entry portal into an insanely complex part-WPF, part-legacy application. The system has been developed all these years without a single unit test in its fold. Now the point has been raised for a comprehensive unit testing framework. I was recruited recently to work on this product and have been tasked with getting the 'testing' in order. Since the team that worked on the product for the last six years adopted 'Agile', the project lacks any documentation of the business rules or any design documents. I have been trying to write unit tests for some of the modules, but I am not sure what to mock, how to set up my test fixtures, and eventually what to test for, since a casual glance at the methods does not reveal their intentions. Also, it has come to my attention that the code was not developed with a particular methodology in mind. Given the situation, I was wondering if the good people of Stack Overflow could provide me with some advice on how to salvage this situation. I have heard that the book 'Working Effectively with Legacy Code' has something to say about this general situation, but I was hoping to get some pointers from individuals who have encountered similar situations within the technology stack (C#, VB, C++, .NET 3.5, WCF, SQL Server 2005).

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    (For example, see this Python project howto.) My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail: the module is not on the path. I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way; it's fine if you're the developer, but not realistic to expect your users to do it if they just want to check that the tests pass. The other alternative is to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests, do X."
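
    For reference, the conventional answer (hedged: test discovery needs Python 2.7+ or 3.2+) is to run the tests from the project root, where the antigravity package is importable, after giving test/ an empty __init__.py so it is importable too:

        $ cd new_project
        $ python -m unittest discover               # finds and runs test/test_antigravity.py
        $ python -m unittest test.test_antigravity  # or run just one module

    So the user-facing instruction becomes: "To run the unit tests, run python -m unittest discover from the project root."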

    Read the article

  • How can this Ambient Context become null?

    - by Mark Seemann
    Can anyone help me explain how TimeProvider.Current can become null in the following class?

        public abstract class TimeProvider
        {
            private static TimeProvider current = DefaultTimeProvider.Instance;

            public static TimeProvider Current
            {
                get { return TimeProvider.current; }
                set
                {
                    if (value == null)
                    {
                        throw new ArgumentNullException("value");
                    }
                    TimeProvider.current = value;
                }
            }

            public abstract DateTime UtcNow { get; }

            public static void ResetToDefault()
            {
                TimeProvider.current = DefaultTimeProvider.Instance;
            }
        }

    Observations:

    - All unit tests that directly reference TimeProvider also invoke ResetToDefault() in their fixture teardown.
    - There is no multithreaded code involved.
    - Once in a while, one of the unit tests fails because TimeProvider.Current is null (a NullReferenceException is thrown). This only happens when I run the entire suite, not when I run a single unit test, suggesting to me that there is some subtle test interdependence going on.
    - It happens approximately once every five or six test runs.
    - When a failure occurs, it seems to occur in the first executed tests that involve TimeProvider.Current. More than one test can fail, but only one fails in a given test run.

    FWIW, here's the DefaultTimeProvider class as well:

        public class DefaultTimeProvider : TimeProvider
        {
            private readonly static DefaultTimeProvider instance = new DefaultTimeProvider();

            private DefaultTimeProvider() { }

            public override DateTime UtcNow
            {
                get { return DateTime.UtcNow; }
            }

            public static DefaultTimeProvider Instance
            {
                get { return DefaultTimeProvider.instance; }
            }
        }

    I suspect that there's some subtle interplay going on with static initialization, where the runtime is actually allowed to access TimeProvider.Current before all static initialization has finished, but I can't quite put my finger on it. Any help is appreciated.

    Read the article

  • Quick, Linux-compatible unit-aware calculator

    - by endolith
    I want to be able to press a keyboard combination, start typing a mathematical expression that includes units and slightly advanced math (not just a four-function calculator), and get a result immediately, in units that I specify, that I can copy and paste. Currently I open Firefox and press Ctrl+K, type in the search box, and it usually gives me a result in the drop-down from Google Calculator. It doesn't always, though, so I press "=" at the end, wait for a result, remove the equals, wait for a result, realize it doesn't understand the way I typed a unit, open the result in a new tab, etc. it sucks. Wolfram Alpha is smarter, but very much slower, and the output is all images, not text, and I don't have a quick widget for it, if such a thing could even exist. GNU units has a ton of units, which is great, and I can define my own units, which is great, but they have to be written in specific, unintuitive ways, it doesn't handle much advanced math, and I'd need to open a terminal, start units, etc. I hate the command line. I wasted a lot of time trying to make front-ends for units in Deskbar and Launchy, but I'm not a real coder and I don't use either of those anymore. Any other solutions or enhancements of these?

    Read the article

  • Optimal Networking Setup for a 2-Story unit?

    - by user29336
    I am moving into a 4-bedroom, two-story unit, roughly 2,200 sq ft. I want the absolute maximum throughput possible at all focal points. We're all in internet-related industries; between gaming and web development, latency and throughput are major factors for us. Here are our main focal points: 1) garage (office), downstairs; 2) each bedroom (x4), upstairs; 3) living room, downstairs. The fastest line we can get is Comcast 50mb down/5mb up (Wideband). I am looking for the best way to achieve wireless and wired performance for our setup. Our gaming computers may be in our bedrooms, and we may also bring them down to the office every now and then for "LAN" sessions. Most wireless use will happen downstairs with our laptops, but since we may do LAN sessions, hard-wired latency may be important there too. My concerns: if we go wireless-only, there would be too much latency for gaming. I don't know if placing one D-Link DGL-4500 on the top floor would be enough; I currently own one (http://dlink.com/us/en/home-solutions/support/product/dgl-4500-xtreme-n-gaming-router). As far as I'm aware, wireless signals transfer best top-down. Would this wireless router be enough on the top floor, and that's it? My second strategy was a combination of wiring and wireless, but I'm not sure of the easiest way to do this. This is a place we're renting, so I'm not sure how much leeway we have with wiring, but we're all pretty competent... if we can't drill through a wall, we can probably "stitch" cables along the edges wherever needed. Thoughts on the optimal way to do this?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross-posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains: upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and 1.86 minutes afterwards).

    Detail: for those of you who want more detail, or think that the claims in the preceding paragraph are over-reaching, what follows is information and code supporting those claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results. (We have performed intra-cloud tests; the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes. This will be detailed separately.)

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data; it is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following; less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). [Process diagram omitted; see the original post.]

    We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set, doing a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded/non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.

    [Chart omitted: transfer-rate improvement by block size for each source file size.] The chart illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    [Chart omitted: the same data with the axes swapped; the x-axis represents file size and the plotted series show improvement by block size.] It again highlights that the 1MB block size is probably the best overall size, while also showing the benefits of some of the other block sizes at different source file sizes.

    [Chart omitted: change in total upload duration by block size for each source file size.] This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary: what we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources:

    - Source code for the upload test application
    - Source code for the random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata; Experiment Datasets: 2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads; Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata; Experiment Datasets; Raw Data: 256KB, 512KB, 1MB, 2MB, and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons
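
    For readers who want the shape of the blocked/parallel upload without opening the linked source, here is a condensed sketch (my reconstruction, not the post's actual code; it assumes the 1.x StorageClient's CloudBlockBlob):

        using System;
        using System.IO;
        using System.Security.Cryptography;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockUploader
        {
            public static void UploadInBlocks(CloudBlockBlob blob, string path, int blockSize)
            {
                using (var fs = File.OpenRead(path))
                {
                    int blockCount = (int)((fs.Length + blockSize - 1) / blockSize);
                    var ids = new string[blockCount];

                    Parallel.For(0, blockCount, i =>
                    {
                        var buffer = new byte[blockSize];
                        int read;
                        lock (fs)  // single shared reader: seek to this block's offset
                        {
                            fs.Position = (long)i * blockSize;
                            read = fs.Read(buffer, 0, buffer.Length);
                        }

                        // Fixed-width, base64-encoded block ids, as the blob service requires.
                        string id = Convert.ToBase64String(BitConverter.GetBytes(i));

                        using (var ms = new MemoryStream(buffer, 0, read))
                        using (var md5 = MD5.Create())
                        {
                            string hash = Convert.ToBase64String(md5.ComputeHash(ms));
                            ms.Position = 0;
                            blob.PutBlock(id, ms, hash);  // the service rejects the block if the MD5 mismatches
                        }
                        ids[i] = id;
                    });

                    blob.PutBlockList(ids);  // commit: assemble the blocks into the final blob
                }
            }
        }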

    Read the article

  • Is a bigger display (monitor) always better for development?

    - by Jitendra Vyas
    Is a bigger display (monitor) always better for development? I'm going to buy a new LCD monitor. I mostly work in Adobe Photoshop, HTML, CSS, jQuery, and WordPress. Budget is not a problem, and there are many options for monitor size. My questions are: 1. Is the maximum size always better, or are very large monitors not always a good choice? 2. Would it be better to buy two 21.5-inch monitors rather than one 30-inch monitor? 3. Which monitor size between 21.5 and 30 inches would you prefer, if budget is not a problem?

    Read the article

  • How to determine the size of a package in terminal prior to downloading?

    - by user14590
    When using apt-get install <package_name>, if there are dependencies that need to be downloaded, the terminal outputs the names of the additional packages and the total download size, and asks for confirmation before downloading. But when dependencies are already satisfied and nothing but the named package needs to be downloaded, there is no size output and no confirmation. In Synaptic, I can see the total size that new packages will use after installation, but there is no way to see the size that needs to be downloaded, except by going from package to package and using Properties to see the compressed size. Is there a way to see the download size of a package (or packages) in the terminal and in Synaptic prior to downloading and installing?
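
    One terminal option worth noting (the field values below are illustrative, not real output): apt-cache show prints the package metadata, which includes both the compressed download size and the unpacked size, before anything is downloaded:

        $ apt-cache show <package_name> | grep -E '^(Size|Installed-Size):'
        Size: 1234567          # compressed .deb download size, in bytes
        Installed-Size: 4321   # unpacked size, in KiB

    A simulated install (apt-get -s install <package_name>) also shows what would happen, including which dependencies would be pulled in, without downloading anything.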

    Read the article
