Search Results

Search found 16467 results on 659 pages for 'request filtering'.

Page 81 of 659

  • If an Inner Join can be thought of as a Cross Join that keeps only the records satisfying the condition

    - by Jian Lin
    If an Inner Join can be thought of as a cross join filtered down to the records that satisfy the join condition, then a LEFT OUTER JOIN can be thought of as that, plus one extra record for each left-table record that satisfies the condition for no right-table record. In other words, it is not a cross join that "goes easy" on the left records (keeping every pair even when the condition is not satisfied), because then an unmatched left record could appear many times (as many times as there are records in the right table). So the LEFT OUTER JOIN is the cross join restricted to the records satisfying the condition, plus ONE record for each left-table record that satisfies the condition for no record on the right.
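    That reasoning can be sanity-checked mechanically. Below is a minimal JavaScript sketch (with made-up two-column tables) that builds the cross join, filters it down to the inner join, and then extends it to the left outer join; note that the unmatched left record contributes exactly one row:

    ```javascript
    var left  = [{ id: 1, v: "a" }, { id: 2, v: "b" }, { id: 3, v: "c" }];
    var right = [{ id: 1, w: "x" }, { id: 1, w: "y" }, { id: 2, w: "z" }];

    // Cross join: every (left, right) pair.
    var cross = left.flatMap(function (l) {
        return right.map(function (r) { return [l, r]; });
    });

    // Inner join: keep only the pairs satisfying the condition.
    var inner = cross.filter(function (pair) { return pair[0].id === pair[1].id; });

    // Left outer join: the inner-join rows, plus exactly ONE null-extended row
    // per unmatched left record (id 3 here), not one per right-table row.
    var unmatched = left.filter(function (l) {
        return !right.some(function (r) { return l.id === r.id; });
    });
    var leftOuter = inner.concat(unmatched.map(function (l) { return [l, null]; }));
    // leftOuter has 4 rows: (1,x), (1,y), (2,z), (3,null)
    ```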

  • Is it legal to have different SOAP namespaces/versions between the request and response?

    - by Lord Torgamus
    THIRD EDIT: I now believe that this problem is due to a SOAP version mismatch (1.1 request, 1.2 response) masquerading as a namespace problem. Is it illegal to mix versions, or just bad style? Am I completely out of luck if I can't change my SOAP version or the service's?

    SECOND EDIT: Clarified the error message, and tried to reduce "tl;dr"-ness.

    EDIT: [Link deleted, not related]

    Using soapUI, I'm sending a request that starts with:

    ```xml
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" ...
    ```

    and getting a response that starts with:

    ```xml
    <soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope" ...
    ```

    I know the service is getting the info, because processes down the line are working. However, my soapUI test step fails. It has two active assertions: "SOAP Response" and "Not SOAP Fault." The failure marker is next to "SOAP Response," with the following message:

    line -1: Element Envelope@http://www.w3.org/2003/05/soap-envelope is not a valid Envelope@http://schemas.xmlsoap.org/soap/envelope/ document or a valid substitution.

    I have tried mixing and matching the namespace prefixes and schema URLs. Changing prefixes seems to have no effect; changing URLs causes a VersionMismatch error. I have also tried to use a substitution group, but that doesn't seem to be legal.
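    For reference, the two URIs above are the standard envelope namespaces of SOAP 1.1 (schemas.xmlsoap.org) and SOAP 1.2 (www.w3.org/2003/05), so inspecting the Envelope element's namespace is enough to tell which version a message uses. A small browser-side JavaScript check (DOMParser is assumed to be available):

    ```javascript
    var SOAP_11 = "http://schemas.xmlsoap.org/soap/envelope/";
    var SOAP_12 = "http://www.w3.org/2003/05/soap-envelope";

    function soapVersion(xmlText) {
        var doc = new DOMParser().parseFromString(xmlText, "text/xml");
        var ns = doc.documentElement.namespaceURI; // namespace of the root Envelope
        if (ns === SOAP_11) { return "1.1"; }
        if (ns === SOAP_12) { return "1.2"; }
        return "unknown";
    }

    var reply = '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope"></soap:Envelope>';
    console.log(soapVersion(reply)); // "1.2": the response really is SOAP 1.2
    ```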

  • How to post a SOAP request from a browser?

    - by understack
    Is it possible to send a SOAP request directly from a browser to the service provider, and then parse the output in JavaScript to show the result? For example, suppose I have a SOAP request like this:

    ```
    POST /InStock HTTP/1.1
    Host: www.example.org
    Content-Type: application/soap+xml; charset=utf-8
    Content-Length: nnn

    <?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope"
                   soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">
      <soap:Body xmlns:m="http://www.example.org/stock">
        <m:GetStockPrice>
          <m:StockName>IBM</m:StockName>
        </m:GetStockPrice>
      </soap:Body>
    </soap:Envelope>
    ```

    Can I then get the IBM stock price by clicking a link on a web page, and show the result after XML processing?

    EDIT: Can I send the whole envelope as POST data?
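    For what it's worth, the envelope itself can be sent as the POST body with XMLHttpRequest; in practice the usual obstacle is the browser's same-origin policy (the stock service would need to allow cross-origin requests). A sketch along those lines; the <Price> element in the reply is a made-up assumption, since the question doesn't show the response format:

    ```javascript
    var envelope =
        '<?xml version="1.0"?>' +
        '<soap:Envelope xmlns:soap="http://www.w3.org/2001/12/soap-envelope"' +
        ' soap:encodingStyle="http://www.w3.org/2001/12/soap-encoding">' +
        '<soap:Body xmlns:m="http://www.example.org/stock">' +
        '<m:GetStockPrice><m:StockName>IBM</m:StockName></m:GetStockPrice>' +
        '</soap:Body></soap:Envelope>';

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "http://www.example.org/InStock");
    xhr.setRequestHeader("Content-Type", "application/soap+xml; charset=utf-8");
    xhr.onload = function () {
        // responseXML is populated when the reply comes back with an XML content type.
        var doc = xhr.responseXML;
        var price = doc && doc.getElementsByTagName("Price")[0]; // hypothetical element name
        console.log(price ? price.textContent : "no <Price> element found in reply");
    };
    xhr.send(envelope); // the whole envelope as POST data
    ```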

  • Outlook 2007 add-in for filtering attachments according to recipients

    - by Susanta
    I need to send mail with an attachment to both domain users and non-domain users. Domain users should receive a .lnk (shortcut) to the attached file, whereas non-domain users should receive the physical file. Currently I do this by capturing Outlook's send event and internally splitting the mail in two: for domain users I create a .lnk to the file, attach it, and send it; for non-domain users I attach the physical file and send it. But because this sends two mails internally, I am not able to maintain the CC and BCC information. I need to do this in one mail. So, is it possible for an Outlook add-in to filter attachments according to recipients?

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is in memory, so we build up the request, read the stream into ExpertPDF, and then write the bits to a file. All the files we have been requesting so far are just plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but oftentimes things get cached. For example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broken because the CSS doesn't exist; but if I request that page through the PDF generator, it still looks OK, which means the CSS is cached somewhere. Here's the relevant PDF creation code:

    ```csharp
    // Create a request
    HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
    request.UserAgent = "IE 8.0";
    request.ContentType = "application/x-www-form-urlencoded";
    request.Method = "GET";

    // Send the request
    HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
    if (resp.IsFromCache)
    {
        System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!");
    }
    else
    {
        System.Web.HttpContext.Current.Trace.Write("not from cache");
    }

    // Read the response
    pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf");
    ```

    When I check the trace file, nothing is being loaded from the cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to force a hard refresh of all resources?

    UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to use this inelegant solution?
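    One way to make that "?foo" workaround systematic, assuming the page can run a little script before it is handed to the PDF generator, is to stamp every stylesheet link with a fresh token so that any cache keyed on the URL always misses. A sketch (whether ExpertPDF's renderer executes script is an assumption to verify):

    ```javascript
    // Append a unique version token to each stylesheet URL, e.g.
    // "site.css" becomes "site.css?v=1372648800000".
    var links = document.querySelectorAll('link[rel="stylesheet"]');
    for (var i = 0; i < links.length; i++) {
        var url = new URL(links[i].href);
        url.searchParams.set("v", Date.now().toString());
        links[i].href = url.toString();
    }
    ```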

  • How can we post video and image with ASIHTTPRequest?

    - by GhostRider
    I want to post one image to my web service and one video too, but the problem is that when it gets to the video part it gives me an EXC_BAD_ACCESS error.

    ```objc
    NSString *url = [NSString stringWithFormat:@"http://example.com/add_videoxml.php"];
    networkQueue = [[ASINetworkQueue alloc] init];
    [networkQueue cancelAllOperations];
    [networkQueue setShowAccurateProgress:YES];
    //[networkQueue setUploadProgressDelegate:progressBar];
    [networkQueue setDelegate:self];
    [networkQueue setRequestDidFinishSelector:@selector(requestFinished:)];
    [networkQueue setRequestDidFailSelector:@selector(requestFailed:)];

    request = [[ASIFormDataRequest alloc] initWithURL:[NSURL URLWithString:url]];
    [request setPostValue:@"284" forKey:@"id"];
    [request setPostValue:@"show" forKey:@"show"];
    [request addRequestHeader:@"Content-Type"
                        value:@"multipart/form-data;boundary=---------------------------1842378953296356978857151853"];

    NSData *imgData = UIImageJPEGRepresentation(userImage, 0.9);
    if (imgData != nil) {
        [request setFile:imgData withFileName:@"Loveatnight" andContentType:@"image/jpeg" forKey:@"image"];
    }

    //[request addRequestHeader:@"Content-Type"
    //                    value:@"multipart/form-data;boundary=---------------------------1842378953296356978857151853"];
    if (videoData != nil) {
        [request setFile:videoData withFileName:@"Loveishard" andContentType:@"image/jpeg" forKey:@"uploadfile"];
    } // the error occurs on this line

    [request setTimeOutSeconds:500];
    //NSLog(@"%@", request);
    [networkQueue addOperation:request];
    [networkQueue go];
    ```

    Added by the OP:

    ```objc
    [request setFile:videoData withFileName:@"Loveishard" andContentType:@"video/quicktime" forKey:@"uploadfile"];
    ```

    I used this because my video format is .mov, but it again gives the error.

  • With Maven, how would I prevent Maven from filtering certain properties but allowing others?

    - by Benny
    The problem is that I'm trying to build a project that has a build.xml file in its resources. Basically, I package my project as a jar with Maven 2, and then use the Ant installer to install my project. There is a property in the build.xml file that I need filtered, called build.date, but there are other properties that I don't want filtered, like ${basedir}, because it's used by the Ant installer but gets replaced by Maven's basedir variable. So I need to somehow tell Maven to filter ${build.date} but not ${basedir}. I tried creating a properties file as a filter with "basedir=${basedir}" as one of the properties, but I get the following error:

    Resolving expression: '${basedir}': Detected the following recursive expression cycle: [basedir]

    Any suggestions would be much appreciated. Thanks, B.J.

  • Unable to send '+' through AJAX post?

    - by Harish Kurup
    I am using the Ajax POST method to send data, but I am not able to send a '+' to the server; i.e. if I want to send "1+" or "20k+", it only sends "1" or "20k" and just wipes out the '+'. The HTML code goes here:

    ```html
    <form method='post' onsubmit='return false;' action='#'>
      <input type='input' name='salary' id='salary' />
      <input type='submit' onclick='submitVal();' />
    </form>
    ```

    and the JavaScript code goes here:

    ```javascript
    function submitVal() {
        var sal = document.getElementById("salary").value;
        alert(sal);
        var request = getHttpRequest();
        request.open('post', 'updateSal.php', false);
        request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        request.send("sal=" + sal);
        if (request.readyState == 4) {
            alert("update");
        }
    }

    function getHttpRequest() {
        var request = false;
        if (window.XMLHttpRequest) {
            request = new XMLHttpRequest();
        } else if (window.ActiveXObject) {
            try {
                request = new ActiveXObject("Msxml2.XMLHTTP");
            } catch (e) {
                try {
                    request = new ActiveXObject("Microsoft.XMLHTTP");
                } catch (e) {
                    request = false;
                }
            }
        }
        return request;
    }
    ```

    In the function submitVal() it first alerts the salary value as-is (if "1+" then it alerts "1+"), but when it is posted it just posts the value without the '+' operator, which is needed. Is it a problem with the query string? The PHP backend code is working fine.
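    For context: in an application/x-www-form-urlencoded body, a bare '+' is the standard encoding of a space, so the server-side decoder turns "sal=20k+" into "20k ". Percent-encoding the value before appending it to the body (encodeURIComponent turns '+' into "%2B") preserves it. A small illustration:

    ```javascript
    var sal = "20k+";
    console.log(encodeURIComponent(sal));                      // "20k%2B"
    console.log(new URLSearchParams("sal=20k+").get("sal"));   // "20k " : '+' decoded as a space
    console.log(new URLSearchParams("sal=20k%2B").get("sal")); // "20k+" : survives the round trip
    // so: request.send("sal=" + encodeURIComponent(sal));
    ```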

  • A '+' sign in an email address

    - by d.andreykiv
    Hi. I need to submit an email address containing a "+" sign and validate it on the server. But the server receives an email like "[email protected]" as "aaa [email protected]". I send all the data as a POST request with the following code:

    ```objc
    NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"%@%@", url, @"/signUp"]];
    NSString *post = [NSString stringWithFormat:@"&email=%@&userName=%@&password=%@",
                      user.email, user.userName, user.password];
    NSData *postData = [post dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:NO];
    NSData *data = [self sendRequest:url postData:postData];
    ```

    The post variable before encoding has the value

    &[email protected]&userName=Asdfasdfadsfadsf&password=sdfasdf

    and after encoding it is the same:

    &[email protected]&userName=Asdfasdfadsfadsf&password=sdfasdf

    The method I use to send the request looks like the following code:

    ```objc
    - (id)sendRequest:(NSURL *)url postData:(NSData *)postData {
        // Create request
        NSMutableURLRequest *request = [[NSMutableURLRequest alloc] init];
        NSString *postLength = [NSString stringWithFormat:@"%d", [postData length]];
        [request setURL:url];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setHTTPBody:postData];

        NSURLResponse *urlResponse;
        NSData *data = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&urlResponse
                                                         error:nil];
        [request release];
        return data;
    }
    ```

  • Zend Framework: The form filter I am using is not filtering!

    - by Andrew
    So I have a form that is using the Zend_Filter_Null filter. When I call it directly, it works:

    ```php
    $makeZeroNull = new Zend_Filter_Null();
    $null = $makeZeroNull->filter('0'); // $null === null
    ```

    However, when I try to add it to an element in my form, it doesn't filter the value when I call getValue():

    ```php
    class My_Form extends Zend_Form
    {
        public function init()
        {
            $makeZeroNull = new Zend_Filter_Null();
            $this->addElement('text', 'State_ID', array('filters' => array($makeZeroNull)));
        }
    }

    // in the controller
    if ($form->isValid($_POST)) {
        $zero = $form->State_ID->getValue();
        // getValue() should return null, but it is returning 0
    }
    ```

    What is going on? What am I doing wrong?

  • Is it possible to filter data used by a pivot table based on filtering the rows in a source table in Excel?

    - by Geoffrey Stoel
    I have developed a dashboard in Excel 2007 that uses one source table in a sheet (filled by a query on our data warehouse) and multiple pivot tables making different cross-sections of this data. I use GETPIVOTDATA in almost a hundred formulas to give me the right value for a specific indicator in my dashboard. This all works fine. However, I have now been asked to make the dashboard for 5 different segments. As you can imagine, I don't want to create 5 different workbooks for this and maintain the dashboard logic in all of them. So my question is the following: is it possible to automatically (through VBA or any other means) filter the results in my source table, which is the source for my pivot tables and thus for my dashboard values? So, schematically:

    DATABASE_VIEW -> SOURCE_TABLE -> 12 pivot tables -> 100 GETPIVOTDATA functions

    Preferably I would like to load all the segments into the source table (one view on my database) and then filter the data in the source table, which results in filtered source data for my pivots. This way I can (without re-querying the database) quickly switch between segments in the dashboards (refreshing pivots only). Data in the source table has the column CUSTOMER_SEGMENT available to filter on. Any help is appreciated. Geoffrey

  • Ajax Request not working. onSuccess and onFailure not triggering

    - by Kye
    Hi all, I'm trying to make a page which will recursively call a function until a limit has been reached, and then stop. It uses an ajax query to call an external script (which just echoes "done" for now); however, with neither onSuccess nor onFailure triggering, I'm finding it hard to locate the problem. Here is the JavaScript for it. In the header of the web page there is a reference to an ajax.js document which contains the request code. I know the ajax.js works, as I've used it on another website.

    ```javascript
    var Rooms = "1";
    var Items = "0";
    var ccode = "9999/1";
    var x = 0;

    function echo(string, start) {
        var ajaxDisplay = document.getElementById('ajaxDiv');
        if (start) { ajaxDisplay.innerHTML = string; }
        else { ajaxDisplay.innerHTML = ajaxDisplay.innerHTML + string; }
    }

    function locations() {
        echo("Uploading location " + x + " of " + Rooms, true);
        Ajax.Request("Perform/location.php", {
            method: 'get',
            parameters: { ccode: ccode, x: x },
            onSuccess: function (reply) {
                alert("worked");
                if (x < Rooms) {
                    x++;
                    locations();
                }
                else {
                    x = 0;
                    echo("Done", true);
                }
            },
            onFailure: function () {
                alert("not worked");
                echo("not done");
            }
        });
        alert("boo");
    }
    ```

    Any help or advice will be most appreciated.
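    One detail worth ruling out, for comparison: in Prototype's documented usage, Ajax.Request is a constructor invoked with new, while the snippet above calls it as a plain function. A minimal working shape under that assumption:

    ```javascript
    new Ajax.Request("Perform/location.php", {
        method: 'get',
        parameters: { ccode: ccode, x: x },
        onSuccess: function (response) { alert("worked: " + response.responseText); },
        onFailure: function () { alert("not worked"); }
    });
    ```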

  • What do I gain by filtering URLs through Perl's URI module?

    - by sid_com
    Do I gain something when I transform my $url like this: $url = URI->new( $url )?

    ```perl
    #!/usr/bin/env perl
    use warnings;
    use strict;
    use 5.012;
    use URI;
    use XML::LibXML;

    my $url = 'http://stackoverflow.com/';
    $url = URI->new( $url );

    my $doc = XML::LibXML->load_html( location => $url, recover => 2 );
    my @nodes = $doc->getElementsByTagName( 'a' );
    say scalar @nodes;
    ```
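    In general, what a URI object buys over a plain string is up-front parsing and validation plus structured accessors, rather than any change to how the string is later used. The same idea illustrated with JavaScript's standard URL class, purely as an analogy to Perl's URI module:

    ```javascript
    var url = new URL("http://stackoverflow.com/");
    console.log(url.protocol); // "http:"
    console.log(url.host);     // "stackoverflow.com"

    try {
        new URL("not a url");  // malformed input fails here, at parse time...
    } catch (e) {
        console.log("rejected up front: " + e.message); // ...not somewhere downstream
    }
    ```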

  • Creating an HTTP handler for IIS that transparently forwards requests to a different port?

    - by Lasse V. Karlsen
    I have a public web server with the following software installed:

    - IIS7 on port 80
    - Subversion over Apache on port 81
    - TeamCity over Apache on port 82

    Unfortunately, both Subversion and TeamCity come with their own web server installations, and they work flawlessly, so I don't really want to try to move them all to run under IIS, if that is even possible. However, I was looking at IIS and noticed the HTTP redirect part, and I was wondering: would it be possible for me to create an HTTP handler and install it on a sub-domain under IIS7, so that all requests to, say, http://svn.vkarlsen.no/anything/here are passed to my HTTP handler, which then subsequently creates a request to http://localhost:81/anything/here, retrieves the data, and passes it on to the original requester? In other words, I would like IIS to handle transparent forwards to ports 81 and 82, without using the redirection features. For instance, Subversion doesn't like HTTP redirects and just says that the repository has been moved and that I need to relocate my working copy. That's not what I want. If anyone thinks this can be done, does anyone have links to topics I need to read up on? I think I can manage the actual request parts, even with authentication, but I have no idea how to create an HTTP handler. Also bear in mind that I need to handle sub-paths and documents beneath the top-level domain, so http://svn.vkarlsen.no/whatever/here needs to be handled by a single handler; I cannot create copies of the handler for all sub-directories, since paths are created from time to time.
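    The transparent forward being described is essentially a small reverse proxy: accept the request on the public side, replay it against the internal port, and stream the response back untouched. A sketch of that shape in Node.js, purely as an illustration of the mechanics (an IIS handler would do the same in .NET); port 81 matches the Subversion setup above:

    ```javascript
    var http = require("http");

    http.createServer(function (req, res) {
        var upstream = http.request({
            host: "localhost",
            port: 81,               // the internal Apache/Subversion port
            path: req.url,          // preserve sub-paths and documents
            method: req.method,     // GET, PROPFIND, REPORT, ... (SVN uses DAV verbs)
            headers: req.headers    // pass authentication headers through
        }, function (upRes) {
            res.writeHead(upRes.statusCode, upRes.headers); // relay status and headers
            upRes.pipe(res);                                // relay the body unmodified
        });
        req.pipe(upstream);         // relay the request body as well
    }).listen(80);
    ```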

  • Where should my "filtering" logic reside with Linq-2-SQL and ASP.NET MVC: in the View or the Controller?

    - by Nate Bross
    I have a main table with several "child" tables: TableA, TableAChild1 and TableAChild2. I have a view which shows the information in TableA and then has two columns listing all items in TableAChild1 and TableAChild2 respectively; they are rendered with partial views. Both child tables have a bit field VisibleToAll, and depending on the user's role, I'd like to display either all related rows, or only the related rows where VisibleToAll = true. This code feels like it should be in the controller, but I'm not sure how it would look, because as it stands, the controller (limited version) looks like this:

    ```csharp
    return View("TableADetailView", repos.GetTableA(id));
    ```

    Would something like this even work? And would it be bad? What if my DataContext gets submitted; would that delete all the rows that have VisibleToAll == false?

    ```csharp
    var tblA = repos.GetTableA(id);
    tblA.TableAChild1 = tblA.TableAChild1.Where(tmp => tmp.VisibleToAll == true);
    tblA.TableAChild2 = tblA.TableAChild2.Where(tmp => tmp.VisibleToAll == true);
    return View("TableADetailView", tblA);
    ```

    It would also be simple to add that logic to the RenderPartial call from the main view:

    ```csharp
    <% Html.RenderPartial("TableAChild1", Model.TableAChild1.Where(tmp => tmp.VisibleToAll == true)); %>
    ```

  • How can I make a Prototype ajax request with an array of values as a parameter?

    - by andresbravog
    I'm trying to make an ajax update in Prototype with some values from a multirecordselect that sends a request like:

    Parameters: {"action"=>"use_campaign", "campaigns"=>["27929","27932"], "advertiser_id"=>"", "controller"=>"admin/reporting", "ad_id"=>""}

    As you can see, the request sends the "campaigns" element as an array of values. I'm trying to do the same with this JS code over Prototype 1.7:

    ```javascript
    // get the campaigns
    var campaign_ids = {};
    var campaigns = $('filter_form').getInputs("hidden", "report[campaigns][]");
    campaigns.each(function (field) {
        campaign_ids.push(field.value);
    });

    new Ajax.Updater('ad_filter', '/admin/reporting/use_campaign', {
        method: 'get',
        asynchronous: true,
        evalScripts: true,
        parameters: {
            'advertiser_id': $('filter_form')['report[advertiser_id]'].value,
            'ad_id': $('filter_form')['report[ad_id]'].value,
            'campaigns': campaign_ids
        }
    });
    ```

    The campaign_ids variable is getting the correct info as an array, like ["27929", "27932"], but it seems that Prototype's ajax update is sending a request like:

    http://my_domain/admin/reporting/use_campaign?ad_id=&advertiser_id=&campaigns=27929&campaigns=27932

    which sends parameters like:

    Parameters: {"action"=>"use_campaign", "campaigns"=>"27929", "advertiser_id"=>"", "controller"=>"admin/reporting", "ad_id"=>""}

    I also tried Object.toJSON(campaign_ids), but I only get an escaped string like:

    Parameters: {"action"=>"use_campaign", "campaigns"=>"[\"27929\",\"27932\"]", "advertiser_id"=>"", "controller"=>"admin/reporting", "ad_id"=>""}

    Is there any way to do this as I wish? Thanks for all.
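    For context: the working request in the question has Rails-style array semantics, where repeated "campaigns[]" pairs are collected into one array server-side. Under that assumption, two details in the snippet stand out: campaign_ids is initialised as {} rather than [], and the parameter key lacks the [] suffix. A sketch of the adjusted shape (the 'campaigns[]' key is the assumption here):

    ```javascript
    var campaign_ids = [];   // an Array, so that push() exists
    $('filter_form').getInputs("hidden", "report[campaigns][]").each(function (field) {
        campaign_ids.push(field.value);
    });

    new Ajax.Updater('ad_filter', '/admin/reporting/use_campaign', {
        method: 'get',
        parameters: {
            'advertiser_id': $('filter_form')['report[advertiser_id]'].value,
            'ad_id': $('filter_form')['report[ad_id]'].value,
            'campaigns[]': campaign_ids  // serialises as repeated campaigns[]=... pairs
        }
    });
    ```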

  • Showing a loading screen during a REST service request in an Android app?

    - by sat
    Here is what I am currently doing. As soon as my app is launched, I have to send a request to a REST service. It will take a little time, so I thought of showing a loading screen. In onCreate() of my Activity, the first thing I do is show the loading screen (a progress dialog), and I kick off the background work using AsyncTask, i.e. requesting the REST service; in onPostExecute() I close the dialog, then call setContentView(myxml) and update the UI. Can this approach be improved? The problem I faced was: sometimes the garbage collector may start (for various reasons) and my app hangs at the loading screen forever. Because of the garbage collector, even the request to the REST service is not sent, and then some wake-up call comes and the rest is a disaster ending in a force close. But sometimes even the force close does not come quickly, maybe because of GC, so I cannot even go back and am stuck on the loading screen. The only thing I can do at that point is to go back HOME. After that, if I come back to my app, it is still loading, so this approach definitely seems to be a bad design. What's the right approach?

  • Zend Framework: My custom form filter is not filtering!

    - by Andrew
    So I have a form that is using a custom filter (which is really just a copy of Zend_Filter_Null). When I call it directly, it works:

    ```php
    $makeZeroNull = new My_Filter_MakeZeroNull();
    $null = $makeZeroNull->filter('0'); // $null === null
    ```

    However, when I try to add it to an element in my form, it doesn't filter the value when I call getValue():

    ```php
    class My_Form extends Zend_Form
    {
        public function init()
        {
            $makeZeroNull = new My_Filter_MakeZeroNull();
            $this->addElement('text', 'State_ID', array('filters' => array($makeZeroNull)));
        }
    }

    // in the controller
    if ($form->isValid($_POST)) {
        $zero = $form->State_ID->getValue();
        // getValue() should return null, but it is returning 0
    }
    ```

    What is going on? What am I doing wrong?

  • How to write a simple Where clause for dynamic filtering in LINQ when we use groups in a join

    - by malik
    I have a simple search page and I want to filter the results.

    ```csharp
    var TransactionStats = from trans in context.ProductTransactionSet.Include("SPL")
                           select new
                           {
                               trans.InvoiceNo,
                               ProductGroup = from tranline in trans.ProductTransactionLines
                                              group tranline by tranline.ProductTransaction.TransactionID into ProductGroupDetil
                                              select new
                                              {
                                                  TransactionDateTime = ProductGroupDetil.Select(Content => Content.TransactionDateTime)
                                              }
                           };
    ```

    I want to use TransactionDateTime in a where clause when required:

    ```csharp
    if (_FilterCrieteria.DateFrom.HasValue)
    {
        TransactionStats.Where
        (
            a => a.ProductGroup.Where
            (
                dt => dt.DateofTransaction >= _FilterCrieteria.DateFrom &&
                      dt.DateofTransaction >= _FilterCrieteria.DateFrom
            )
        )
    }
    ```

    Can anyone correct the syntax?

  • Tag filtering query using T-SQL and Linq-to-SQL?

    - by EdenMachine
    I'm trying to figure out how to allow a user to enter a string of tags (keywords separated by spaces) in a textbox to filter a grid of results. Here are the tables:

    PACKETS: *PacketID, Name
    PACKETTAGS: *PacketTagID, PacketID, TagID
    TAGS: *TagID, Name

    Here is the basic query without the WHERE parameters:

    ```sql
    SELECT Packets.Name, Tags.Name AS Tag, PacketTags.PacketTagID
    FROM Packets
    INNER JOIN PacketTags ON Packets.PacketID = PacketTags.PacketID
    INNER JOIN Tags ON PacketTags.TagID = Tags.TagID
    ```

    I need to filter out all the Packets that don't have tags matching any of the words, but also only include the Packets that have all the tags entered in the string of text (spaces separate the tags when entered into the textbox). I'm starting with the basics by figuring this out in T-SQL first, but ultimately I need to be able to do this in Linq-to-SQL.
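    The filtering rule in miniature, outside of SQL: given a packet's tag set and the entered keywords, "match any" and "match all" are different predicates, and the description above needs the all-tags version. A small JavaScript illustration with made-up data:

    ```javascript
    var packets = [
        { name: "P1", tags: ["sql", "linq"] },
        { name: "P2", tags: ["sql", "linq", "tsql"] }
    ];
    var entered = "sql tsql".split(/\s+/); // tags typed into the textbox

    var matchAny = packets.filter(function (p) {
        return entered.some(function (t) { return p.tags.indexOf(t) >= 0; });
    }); // P1 and P2

    var matchAll = packets.filter(function (p) {
        return entered.every(function (t) { return p.tags.indexOf(t) >= 0; });
    }); // P2 only
    ```

    In T-SQL, the "match all" variant is commonly expressed by grouping on the packet and comparing COUNT(DISTINCT Tags.Name) over the matched tags against the number of entered keywords.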

  • How to ensure nginx serves a request from an external IP?

    - by Matt
    I have a strange situation where my nginx setup stopped handling external requests, and I'm pretty stuck. If I hit the domain without a subdomain, I properly get redirected; however, if I request the full URL, that fails and doesn't log anything, anywhere. I am able to curl localhost on the server itself, but when I attempt to curl from an external machine, it fails with:

    curl: (7) couldn't connect to host

    I've also noticed that bots can get through; I've seen Google hit the log every now and then. My nginx.conf file:

    ```nginx
    upstream mongrels {
        server 127.0.0.1:5000;
    }

    server {
        listen 80;
        server_name culini.com;
        rewrite ^/(.*) http://www.culini.com/$1 permanent;
    }

    # the server directive is nginx's virtual host directive.
    server {
        # port to listen on. Can also be set to an IP:PORT
        listen 80;

        # Set the max size for file uploads to 50Mb
        client_max_body_size 50M;

        # sets the domain[s] that this vhost serves requests for
        server_name www.culini.com;

        # doc root
        root /var/www/culini/current/public;

        log_format app '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent "$http_referer" '
                       '"$http_user_agent" "$http_x_forwarded_for" [$upstream_addr $upstream_response_time $upstream_status]';

        # vhost specific access log
        access_log /var/www/culini/current/log/nginx.access.log app;
        error_log /var/www/culini/current/log/nginx.error.log debug;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect false;
            proxy_max_temp_file_size 0;
            proxy_intercept_errors on;
            proxy_ignore_client_abort on;

            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                proxy_pass http://mongrels;
                break;
            }
        }
    }
    ```

    Please, please, any help would be greatly appreciated.

  • How to terminate a request in JSP (not with "return;")

    - by Genom
    I am programming a website with JSP. There are pages where the user must be logged in to see them; if they are not logged in, they should see a login form. I have seen in PHP code that you can make a single page which checks whether the user is logged in or not. If not, it shows the login form; if the user is logged in, nothing is done. So in order to do that, I use this structure in my JSPs: headers, menus, etc., the normal stuff such as body and footer, which would be shown to a logged-in user. This structure is very easy to apply to all web pages, so I don't have to apply a checking algorithm to each page! I can simply add this "" and the page is secure! So my problem is: if the user is not logged in, then only the login form and the footer should be shown, so the code should bypass the body. Therefore I structured my checklogin.jsp like this: if the user is not logged in, show the login form and the footer, and terminate the request. The problem is that I don't know how to terminate the request... If I use "return;" then only checklogin.jsp stops, but the server continues to process the parent page! Therefore the page has 2 footers (one from the parent page and one from checklogin.jsp). How can I avoid this? (There is exit(); in PHP for this, by the way!) Thanks for any suggestions!

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature.

There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.

Traditional Upload, Works with Limitation

The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

```html
@using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
{
    <input type="file" name="file" />
    <input type="submit" value="upload" />
}
```

And then in the backend controller we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
```csharp
[HttpPost]
public ActionResult About(HttpPostedFileBase file)
{
    var container = _client.GetContainerReference("test");
    container.CreateIfNotExists();
    var blob = container.GetBlockBlobReference(file.FileName);
    var blockDataList = new Dictionary<string, byte[]>();
    using (var stream = file.InputStream)
    {
        var blockSizeInKB = 1024;
        var offset = 0;
        var index = 0;
        while (offset < stream.Length)
        {
            var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
            var blockData = new byte[readLength];
            offset += stream.Read(blockData, 0, readLength);
            blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

            index++;
        }
    }

    Parallel.ForEach(blockDataList, (bi) =>
    {
        blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
    });
    blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

    return RedirectToAction("About");
}
```

This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file, and it is a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer terminates the connection if it exceeds the timeout period; from my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements.

Besides the limitations mentioned above, simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

Upload in Chunks through HTML5 and JavaScript

In order to break through the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:

- No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
- No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.

In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) reads the part of the file from the 512nd byte to the 768th byte and returns a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload.
We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop.

Assume we have a web page as below: the user can select a file, an input box to specify the block size in KB, and a button to start the upload.

```html
<div>
    <input type="file" id="upload_files" name="files[]" /><br />
    Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
    <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
</div>
```

Then we can have a JavaScript function to upload the file in chunks when the user clicks the button.

```html
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
        });
    });
</script>
```

Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later: it generates a temporary form for us. We will use this interface to create a form with the chunk and its associated metadata when we invoke the service through ajax.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    if (window.File && window.Blob && window.FormData) {
        alert("Your browser is awesome, let's rock!");
    }
    else {
        alert("Oh man plz update to a modern browser before trying this cool stuff out.");
        return;
    }
});
```

Each browser supports these interfaces in its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

```javascript
var files = $("#upload_files")[0].files;
for (var i = 0; i < files.length; i++) {
    var file = files[i];
    var fileSize = file.size;
    var fileName = file.name;
}
```

Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time, we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    // ... ...

    // start to upload each files in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;

        // calculate the start and end byte index for each blocks(chunks)
        // with the index, file name and index list for future using
        var blockSizeInKB = $("#block_size").val();
        var blockSize = blockSizeInKB * 1024;
        var blocks = [];
        var offset = 0;
        var index = 0;
        var list = "";
        while (offset < fileSize) {
            var start = offset;
            var end = Math.min(offset + blockSize, fileSize);

            blocks.push({
                name: fileName,
                index: index,
                start: start,
                end: end
            });
            list += index + ",";

            offset = end;
            index++;
        }
    }
});
```

Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; at the server side, each received chunk is uploaded as a block into Blob Storage, and finally the blocks are committed with the index list through BlockBlobClient.PutBlockList. But since all of these invocations are ajax calls, i.e. not synchronized calls, we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, to let it control the execution sequence, in series or in parallel. Hence we define an array and push the function for each chunk upload into this array.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    // ... ...

    // start to upload each files in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;

        // calculate the start and end byte index for each blocks(chunks)
        // with the index, file name and index list for future using
        // ... ...

        // define the function array and push all chunk upload operation into this array
        blocks.forEach(function (block) {
            putBlocks.push(function (callback) {
            });
        });
    }
});
```

As you can see, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    // ... ...
    // start to upload each files in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each blocks(chunks)
        // with the index, file name and index list for future using
        // ... ...
        // define the function array and push all chunk upload operation into this array
        blocks.forEach(function (block) {
            putBlocks.push(function (callback) {
                // load blob based on the start and end index for each chunks
                var blob = file.slice(block.start, block.end);
                // put the file name, index and blob into a temporary from
                var fd = new FormData();
                fd.append("name", block.name);
                fd.append("index", block.index);
                fd.append("file", blob);
                // post the form to backend service (asp.net mvc controller action)
                $.ajax({
                    url: "/Home/UploadInFormData",
                    data: fd,
                    processData: false,
                    contentType: "multipart/form-data",
                    type: "POST",
                    success: function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        callback(null, block.index);
                    }
                });
            });
        });
    }
});
```

Then we invoke these functions one by one using async.js, and once all functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    // ... ...
    // start to upload each files in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each blocks(chunks)
        // with the index, file name and index list for future using
        // ... ...
        // define the function array and push all chunk upload operation into this array
        // ... ...
        // invoke the functions one by one
        // then invoke the commit ajax call to put blocks into blob in azure storage
        async.series(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});
```

That's all on the client side. The outline of our logic is:

- Calculate the start and end byte index for each chunk based on the block size.
- Define the functions that read each chunk from the file and upload the content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks by invoking the backend service in Windows Azure Storage.

Save Chunks as Blocks into Blob Storage

Above, we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it receives the chunks, uploads them into Windows Azure Blob Storage in blocks, then finally commits them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller that only accepts HTTP POST requests.

```csharp
[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}
```

Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side.
And then I used the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and create a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list.

```csharp
[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var index = int.Parse(Request.Form["index"]);
        var file = Request.Files[0];
        var id = Convert.ToBase64String(BitConverter.GetBytes(index));

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlock(id, file.InputStream, null);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}
```

Next, I created another action to commit the blocks into the blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that our blob is shown in the container and is ready to be downloaded.

```csharp
[HttpPost]
public JsonResult Commit()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var list = Request.Form["list"];
        var ids = list
            .Split(',')
            .Where(id => !string.IsNullOrWhiteSpace(id))
            .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
            .ToArray();

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlockList(ids);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}
```

Now we have finished all the code we need. Below is the full client-side JavaScript code.
```html
<script type="text/javascript" src="~/Scripts/async.js"></script>
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
            // assert the browser support html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man plz update to a modern browser before trying this cool stuff out.");
                return;
            }

            // start to upload each files in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each blocks(chunks)
                // with the index, file name and index list for future using
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }

                // define the function array and push all chunk upload operation into this array
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load blob based on the start and end index for each chunks
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary from
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });

                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });
    });
</script>
```

And below is the full ASP.NET MVC controller code.
```csharp
public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}
```

And if we select a file in the browser, we will see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve upload performance, we can modify our client-side code a bit to invoke the upload operations in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code where we invoked the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    // ... ...
    // start to upload each files in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each blocks(chunks)
        // with the index, file name and index list for future using
        // ... ...
        // define the function array and push all chunk upload operation into this array
        // ... ...
        // invoke the functions in parallel
        // then invoke the commit ajax call to put blocks into blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});
```

In this way all chunks are uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. The code below sets the concurrency limitation to 4, which means at most 4 ajax calls can be in flight at the same time.

```javascript
$("#upload_button_blob").click(function () {
    // assert the browser support html5
    // ... ...
    // start to upload each files in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each blocks(chunks)
        // with the index, file name and index list for future using
        // ... ...
        // define the function array and push all chunk upload operation into this array
        // ... ...
        // invoke the functions in parallel with a concurrency limit of 4
        // then invoke the commit ajax call to put blocks into blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});
```

Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:

- File.slice: read part of the file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element with which we can pass the chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide maximal upload speed, but unlimited parallel upload might crash the browser and server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limitation.

Regarding the chunk size and the parallel limitation value, there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1.7GHz 1-core CPU, 1.75GB memory, 100Mbps bandwidth).
The test cases were:

- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:

- Parallel gets better performance than sequential.
- There is no significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring binaries provides better performance than base64.
- In all cases, a chunk size of 1MB to 2MB gets the best performance.

Hope this helps,
Shaun

All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
