Search Results

Search found 48340 results on 1934 pages for 'microsoft test manager'.


  • Structure map and generics (in XML config)

    - by James D
    Hi, I'm using the latest StructureMap (2.5.4.264), and I need to define some instances in the XML configuration for StructureMap using generics. However, I get the following 103 error:

        Unhandled Exception: StructureMap.Exceptions.StructureMapConfigurationException: StructureMap configuration failures:
        Error: 103
        Source: Requested PluginType MyTest.ITest`1[[MyTest.Test,MyTest]] configured in Xml cannot be found
        Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]'
        System.ApplicationException: Could not create a Type for 'MyTest.ITest`1[[MyTest.Test,MyTest]]'
        ---> System.TypeLoadException: Could not load type 'MyTest.ITest`1' from assembly 'StructureMap, Version=2.5.4.264, Culture=neutral, PublicKeyToken=e60ad81abae3c223'.
           at System.RuntimeTypeHandle._GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, Boolean loadTypeFromPartialName)
           at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.RuntimeType.PrivateGetType(String typeName, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark)
           at System.Type.GetType(String typeName, Boolean throwOnError)
           at StructureMap.Graph.TypePath.FindType()
        --- End of inner exception stack trace ---
           at StructureMap.Graph.TypePath.FindType()
           at StructureMap.Configuration.GraphBuilder.ConfigureFamily(TypePath pluginTypePath, Action`1 action)

    A simple replication of the code is as follows:

        public interface ITest<T> { }
        public class Test { }
        public class Concrete : ITest<Test> { }

    which I then wish to define in the XML configuration as follows:

        <DefaultInstance PluginType="MyTest.ITest`1[[MyTest.Test,MyTest]],MyTest" PluggedType="MyTest.Concrete,MyTest" Scope="Singleton" />

    I've been racking my brain, but I can't see what I'm doing wrong. I've used Type.GetType to verify that the type actually is valid, which it is. Anyone have any ideas? Thanks!
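    A minimal sketch of one possible workaround, assuming the StructureMap 2.5.x code-configuration API: registering the closed generic type in code sidesteps the XML type-name parsing that produces error 103.

        // Hypothetical bootstrap code, not from the question: register the
        // closed generic ITest<Test> in code rather than in XML.
        ObjectFactory.Initialize(x =>
        {
            x.ForRequestedType<ITest<Test>>()
             .TheDefaultIsConcreteType<Concrete>()
             .CacheBy(InstanceScope.Singleton);
        });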

    Read the article

  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer currently adding unit tests to an existing C++ project that started years ago. For a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. I.e. for a method Ping() which checks if another module is active, I want to have it return true or false based on what kind of test I'm doing.

    I've been looking into Google Test and Google Mock, and Google Mock does support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take in either real or mock objects. I can't templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods.

    Basically, the methods I want to mock are in some base class; the modules I want to unit test and create mocks of are derived classes of that base class. There are intermediate modules in between my base Module class and the modules that I want to test. I would appreciate any advice! Thanks, JW

    EDIT: A more concrete example. My base class is, let's say, rootModule, and the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module. In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?
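    One hedged possibility that needs no production-code changes is a "link seam": compile the production rootModule.cpp out of the unit-test target and link a test stub that returns canned answers instead. The exact signatures below are assumptions, since the real base class isn't shown.

        // test_stubs.cpp -- linked into the unit-test binary only.
        // Hypothetical signatures mirroring the question's description.
        #include "rootModule.h"

        namespace {
            bool g_cannedStatus = true;   // canned value for the current test
        }

        void SetCannedStatus(bool value) { g_cannedStatus = value; }

        // Same non-virtual method the production rootModule.cpp defines; the
        // linker resolves leafModule::doStuff()'s call to this definition.
        bool rootModule::GetStatus(const std::string& moduleName) {
            (void)moduleName;             // ignored by the stub
            return g_cannedStatus;
        }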

    Read the article

  • Insert Registration Data in MySQL using PHP

    - by J M 4
    I may not be asking this in the best way possible, but I will try my hardest. Thank you ahead of time for your help. I am creating an enrollment website which allows an individual OR a manager to enroll for medical testing services for professional athletes. The site will NOT be used to query the database; the information is simply stored and passed along in CSV format to our network provider, who uses it as needed after the fact. There are two possible scenarios:

    Scenario 1 - Individual Enrollment: If an individual athlete chooses to enroll him/herself, they enter their personal information, submit their payment information (credit/bank account) for processing, and their information is stored in an online database as Athlete1.

    Scenario 2 - Manager Enrollment: If a manager chooses to enroll several athletes he manages/promotes for, he enters his personal information, then enters the personal information for each athlete he wishes to pay for (name, address, SSN, DOB, etc.), then submits payment information for ALL athletes he is enrolling. This can range from 1 single athlete up to 20 athletes per single enrollment (he can return and complete a follow-up enrollment for additional athletes).

    Initially, I was building the database to house ALL information, regardless of enrollment type, in a single table with over 400 columns (think 20 athletes with over 10 fields per athlete, such as name, DOB, SSN, etc.). Now that I think about it more, I believe creating multiple tables (managers, athletes) may be a better idea, but I am still not quite sure how to go about it, for the following very important reasons:

    Issue 1: If I list the manager as the parent table, I am afraid an individually enrolling athlete will not show up in the primary table and will not be included in the overall registration file which needs to be sent on to the network providers.

    Issue 2: All athletes being enrolled by a manager are stored in SESSION as F1FirstName, F2FirstName, where F1 and F2 relate to the id of the fighter. I am not sure, technically speaking, how to store multiple pieces of information within the same table as separate rows using PHP. For example, all athletes will have a first name. The very basic theory of what I am trying to do is:

        If number_of_athletes > 1:
            store F1FirstName in row 1, column 1 of table "Athletes";
            store F1LastName  in row 1, column 2 of table "Athletes";
            store F2FirstName in row 2, column 1 of table "Athletes";
            store F2LastName  in row 2, column 2 of table "Athletes";

    Does this make sense? I know this question is very long and probably difficult, so I appreciate the guidance.
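    A minimal sketch of the insert loop, with hypothetical table and column names: one athletes table with one row per athlete, keyed to the enrollment (manager or individual) that paid. A self-enrolling athlete is then just an enrollment with a single row, so both scenarios land in the same table.

        // Assumes $pdo, $enrollmentId and $numberOfAthletes already exist, and
        // session keys follow the question's F1FirstName, F2FirstName pattern.
        $stmt = $pdo->prepare(
            'INSERT INTO athletes (enrollment_id, first_name, last_name, dob, ssn)
             VALUES (?, ?, ?, ?, ?)'
        );
        for ($i = 1; $i <= $numberOfAthletes; $i++) {
            $stmt->execute([
                $enrollmentId,
                $_SESSION["F{$i}FirstName"],
                $_SESSION["F{$i}LastName"],
                $_SESSION["F{$i}DOB"],   // hypothetical session keys
                $_SESSION["F{$i}SSN"],
            ]);
        }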

    Read the article

  • Xcode Unit Testing - Accessing Resources from the application's bundle?

    - by Ben Scheirman
    I'm running into an issue and I wanted to confirm that I'm doing things the correct way. I can test simple things with my SenTestingKit tests, and that works okay. I've set up a Unit Test Bundle and set it as a dependency of the main application target. It successfully runs all tests whenever I press cmd+B.

    Here's where I'm running into issues. I have some XML files that I need to load from the resources folder as part of the application. Being a good unit tester, I want to write unit tests around this to make sure that they are loading properly. So I have some code that looks like this:

        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"foo" ofType:@"xml"];

    This works when the application runs, but during a unit test mainBundle points to the wrong bundle, so this line of code returns nil. So I changed it to utilize a known class, like this:

        NSString *filePath = [[NSBundle bundleForClass:[Config class]] pathForResource:@"foo" ofType:@"xml"];

    This doesn't work either, because for the test to even compile code like this, Config needs to be part of the Unit Test target. If I add that, then the bundle for that class becomes the Unit Test bundle. (Ugh!) Am I approaching this the wrong way?
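    A minimal sketch of one common workaround: add foo.xml to the unit-test target's Copy Bundle Resources phase as well, then resolve the bundle from the test class itself so the lookup works in the test context.

        // Inside the SenTestingKit test case: [self class] lives in the test
        // bundle, so bundleForClass: finds resources copied into that bundle.
        NSString *filePath = [[NSBundle bundleForClass:[self class]]
                              pathForResource:@"foo" ofType:@"xml"];
        STAssertNotNil(filePath, @"foo.xml should be copied into the test bundle");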

    Read the article

  • C++ iterator and const_iterator problem for own container class

    - by BaCh
    Hi there, I'm writing my own container class and have run into a problem I can't get my head around. Here's the bare-bones sample that shows the problem. It consists of a container class and two test classes: one test class using a std::vector, which compiles nicely, and a second test class which tries to use my own container class in exactly the same way but fails miserably to compile.

        #include <vector>
        #include <algorithm>
        #include <iterator>
        using namespace std;

        template <typename T>
        class MyContainer {
        public:
            class iterator {
            public:
                typedef iterator self_type;
                inline iterator() { }
            };
            class const_iterator {
            public:
                typedef const_iterator self_type;
                inline const_iterator() { }
            };
            iterator begin() { return iterator(); }
            const_iterator begin() const { return const_iterator(); }
        };

        // This one compiles ok, using std::vector
        class TestClassVector {
        public:
            void test() { vector<int>::const_iterator I = myc.begin(); }
        private:
            vector<int> myc;
        };

        // this one fails to compile. Why?
        class TestClassMyContainer {
        public:
            void test() { MyContainer<int>::const_iterator I = myc.begin(); }
        private:
            MyContainer<int> myc;
        };

        int main(int argc, char ** argv) { return 0; }

    gcc tells me:

        test2.C: In member function 'void TestClassMyContainer::test()':
        test2.C:51: error: conversion from 'MyContainer<int>::iterator' to non-scalar type 'MyContainer<int>::const_iterator' requested

    I'm not sure where and why the compiler wants to convert an iterator to a const_iterator for my own class, but not for the STL vector class. What am I doing wrong?
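    The conversion happens because myc is a non-const member, so the non-const begin() overload is chosen and returns an iterator, which has no conversion to const_iterator. A minimal sketch of the usual fix: give const_iterator a converting constructor, as std::vector's iterators effectively provide.

        class const_iterator {
        public:
            typedef const_iterator self_type;
            const_iterator() { }
            // Converting constructor: lets a (mutable) iterator, as returned
            // by the non-const begin(), initialize a const_iterator.
            const_iterator(const iterator& /*it*/) { }
        };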

    Read the article

  • Getting a senior job without the yrs of experience asked for

    - by dotnetdev
    Hi, my manager at my current company feels that I am selling myself short by taking another junior position rather than a senior one, and missing out on a good salary, given how much faith he has in my development skills. However, I have not worked long enough for a typical senior role (5+ years), and yet the senior developer we do have at my current company isn't given senior-level tasks (judged by difficulty). How would I get a senior job if I lack the commercial experience? My manager feels that even without it, I have the ability and knowledge (I help him with C# too). Thanks

    Read the article

  • Drawing up a session value within SQL query? PHP and MySQL

    - by Derek
    Hi, on one of my web pages I want my manager user to view all activities assigned to them (personally). In order to do this, I need something like this:

        $sql = "SELECT * FROM activities WHERE manager = $_SESSION['SESS_FULLNAME']";

    Now obviously this syntax is all wrong, but because I am new to this stuff: is there a way I can use the full name from the user's session within a query? The aim is that when I call up the database values to be displayed within the web page, only the activities for the manager who is logged in are displayed. For example, the activities table has a manager column containing full-name entries. Any help is much appreciated. Thanks.
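    A minimal sketch, assuming a PDO connection in $pdo: binding the session value through a prepared statement both fixes the quoting problem and avoids injecting the string into the SQL directly.

        $stmt = $pdo->prepare('SELECT * FROM activities WHERE manager = :name');
        $stmt->execute([':name' => $_SESSION['SESS_FULLNAME']]);
        $activities = $stmt->fetchAll(PDO::FETCH_ASSOC);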

    Read the article

  • encode array to JSON in PHP to get [elm1,elm2,elm3] instead of {elm1:{},elm2:{},elm3:{}}

    - by Itay Moav
    I am trying to json_encode an array, which is what returns from a Zend_Db query. var_dump gives (manually adding a 0 member does not change the picture):

        array(3) {
          [1]=> array(3) { ["comment_id"]=> string(1) "1" ["erasable"]=> string(1) "1" ["comment"]=> string(6) "test 1" }
          [2]=> array(3) { ["comment_id"]=> string(1) "2" ["erasable"]=> string(1) "1" ["comment"]=> string(6) "test 1" }
          [3]=> array(3) { ["comment_id"]=> string(1) "3" ["erasable"]=> string(1) "1" ["comment"]=> string(6) "jhghjg" }
        }

    The encoded string looks like:

        {"1":{"comment_id":"1","erasable":"1","comment":"test 1"},
         "2":{"comment_id":"2","erasable":"1","comment":"test 1"},
         "3":{"comment_id":"3","erasable":"1","comment":"jhghjg"}}

    While I need it as:

        [{"comment_id":"1","erasable":"1","comment":"test 1"},
         {"comment_id":"2","erasable":"1","comment":"test 1"},
         {"comment_id":"3","erasable":"1","comment":"jhghjg"}]

    which is what the json_encode documentation says it should produce. Any ideas?
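    A minimal sketch of the usual fix: json_encode emits a JSON array only when the PHP array's keys are sequential integers starting at 0. Since the rows here are keyed 1..3, reindexing with array_values() produces the bracketed form ($rows stands for the array shown above).

        echo json_encode(array_values($rows));
        // [{"comment_id":"1","erasable":"1","comment":"test 1"}, ...]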

    Read the article

  • UML binary association aggregatee has access to aggregator

    - by user314172
    Firstly, I'd like to thank those who answered my previous question ages ago. Currently I'm engaged more in design-phase UML, as this is the first medium-scale deployment I've been entrusted with. This is extremely simple, but it bugs me. If (Component) owns (Manager of Component), and (Manager of Component) has a reference to (Component) through which it manages it, how do you fully describe the relationship? I know it is aggregative, but how do you describe (Manager of Component) possessing a reference/pointer to the very (Component) that owns the (Manager of Component)? Example: a Lidar owns a LidarManager.

    Read the article

  • How can I use CssResources in UiBinder a generated Cell?

    - by confile
    I want to generate a Cell for a CellWidget with the UiBinder (UiRenderer). What I did to generate the cell is in MyCell.java:

        public class MyCell implements AbstractCell<MyDto> {

            public interface Resources extends ClientBundle {
                @Source({Css.DEFAULT_CSS })
                Css css();
            }

            public interface Css extends CssResource {
                String DEFAULT_CSS = "test/MyStyle.css";
                String test();
            }

            interface MyUiRenderer extends UiRenderer {
                void render(SafeHtmlBuilder sb, String name, SafeStyles styles);
            }

            private static MyUiRenderer renderer = GWT.create(MyUiRenderer.class);
            Resources resources = GWT.create(Resources.class);

            @Override
            public void render(SafeHtmlBuilder safeHtmlBuilder, MyDto model) {
                SafeStyles style = SafeStylesUtils.fromTrustedString(resources.css().test().toString());
                renderer.render(safeHtmlBuilder, model.getName(), style);
            }
        }

    My MyCell.ui.xml file looks like this:

        <!DOCTYPE ui:UiBinder SYSTEM "http://dl.google.com/gwt/DTD/xhtml.ent">
        <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'>
            <ui:with field="name" type="java.lang.String" />
            <ui:with field='styles' type='com.google.gwt.safecss.shared.SafeStyles'/>
            <div style="{styles}"><ui:text from="{name}" /></div>
        </ui:UiBinder>

    MyStyle.css:

        .test {
            background-color: red;
            font-size: 20px;
            display: flex;
            ...
        }

    When I run my code I get the following error:

        [DEBUG] [mobile] - Rebinding test.client.app.MyCell.MyUiRenderer
        [DEBUG] [mobile] - Invoking generator com.google.gwt.uibinder.rebind.UiBinderGenerator
        [ERROR] [mobile] - java.lang.String required, but {styles} returns com.google.gwt.safecss.shared.SafeStyles: <div style='{styles}'> (:9)
        [ERROR] [mobile] - Deferred binding failed for 'test.client.app.MyCell.MyUiRenderer'; expect subsequent failures
        [ERROR] [mobile] - (GWT.java:72) 2014-06-08 17:15:05,214 [FATAL] Uncaught Exception:

    Then I tried this in my UiBinder, but it does not work. How can I use a CSS style from a CssResource in my UiRenderer?
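    One hedged alternative, assuming standard UiBinder resource binding: instead of passing a SafeStyles object into the style attribute (which the renderer rejects as a non-String), import the ClientBundle into the template and set the class attribute from the CssResource. The 'res' field name below is hypothetical.

        <!-- MyCell.ui.xml -->
        <ui:UiBinder xmlns:ui='urn:ui:com.google.gwt.uibinder'>
            <ui:with field='name' type='java.lang.String' />
            <ui:with field='res' type='test.client.app.MyCell.Resources' />
            <div class='{res.css.test}'><ui:text from='{name}' /></div>
        </ui:UiBinder>

    With a UiRenderer, the extra ui:with field typically becomes an additional render() parameter; the bundle's CSS must also be injected once (e.g. via resources.css().ensureInjected()) so the obfuscated class name carries the rules from MyStyle.css.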

    Read the article

  • How To Include Transitive Dependencies

    - by Brad Rhoads
    I have 2 Gradle projects: an Android app and a RoboSpock test. My build.gradle for the Android app has

        dependencies {
            compile fileTree(dir: 'libs', include: '*.jar')
            compile ('com.actionbarsherlock:actionbarsherlock:4.4.0@aar') {
                exclude module: 'support-v4'
            }
        }

    and builds correctly by itself, e.g. assembleRelease works. I'm stuck getting the test to work. I get lots of errors such as:

        package com.google.zxing does not exist

    Those seem to indicate that the .jar files aren't being picked up. Here's my build.gradle for the test project:

        buildscript {
            repositories {
                mavenLocal()
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.9.+'
                classpath 'org.robospock:robospock-plugin:0.4.0'
            }
        }

        repositories {
            mavenLocal()
            mavenCentral()
        }

        apply plugin: 'groovy'

        dependencies {
            compile "org.codehaus.groovy:groovy-all:1.8.6"
            compile 'org.robospock:robospock:0.4.4'
        }

        dependencies {
            compile fileTree(dir: ':android:libs', include: '*.jar')
            compile (project(':estanteApp')) {
                transitive = true
            }
        }

        sourceSets.test.java.srcDirs = ['../android/src/', '../android/build/source/r/debug']

        test {
            testLogging {
                lifecycle {
                    exceptionFormat "full"
                }
            }
        }

        project.ext {
            robospock = ":estanteApp" // project to test
        }

        apply plugin: 'robospock'

    As that shows, I've tried adding transitive = true and including the .jar files explicitly. But no matter what I try, I end up with the "package does not exist" error.
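    A minimal sketch of one likely culprit, assuming the Android module lives in a sibling 'android' directory (as the srcDirs suggest): fileTree takes a filesystem path, not a Gradle project path, so dir: ':android:libs' silently matches nothing and the jars (zxing among them) never reach the test compile classpath.

        dependencies {
            // Point at the real directory on disk, relative to this build file.
            compile fileTree(dir: '../android/libs', include: '*.jar')
            compile (project(':estanteApp')) {
                transitive = true
            }
        }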

    Read the article

  • MySQL updating a field to result of a function

    - by jdborg
        mysql> CREATE FUNCTION test ()
            -> RETURNS CHAR(16)
            -> NOT DETERMINISTIC
            -> BEGIN
            -> RETURN 'IWantThisText';
            -> END$$
        Query OK, 0 rows affected (0.00 sec)

        mysql> SELECT test();
        +------------------+
        | test()           |
        +------------------+
        | IWantThisText    |
        +------------------+
        1 row in set (0.00 sec)

        mysql> UPDATE `table`
            -> SET field = test()
            -> WHERE id = 1
        Query OK, 1 row affected, 1 warning (0.01 sec)
        Rows matched: 1  Changed: 1  Warnings: 1

        mysql> SHOW WARNINGS;
        +---------+------+-------------------------------------------+
        | Level   | Code | Message                                   |
        +---------+------+-------------------------------------------+
        | Warning | 1265 | Data truncated for column 'test' at row 1 |
        +---------+------+-------------------------------------------+
        1 row in set (0.00 sec)

        mysql> SELECT field FROM table WHERE id = 1;
        +------------------+
        | field            |
        +------------------+
        | NULL             |
        +------------------+
        1 row in set (0.00 sec)

    What am I doing wrong? I just want field to be set to the returned value of test(). Forgot to mention: field is VARCHAR(255).
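    A hedged guess at a fix: the 1265 warning points at an implicit conversion involving the function's declared CHAR(16) return type, so declaring the return type to match the VARCHAR(255) column removes that conversion from the UPDATE path.

        DROP FUNCTION IF EXISTS test;
        DELIMITER $$
        CREATE FUNCTION test () RETURNS VARCHAR(255)
            NOT DETERMINISTIC
        BEGIN
            RETURN 'IWantThisText';
        END$$
        DELIMITER ;

        UPDATE `table` SET field = test() WHERE id = 1;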

    Read the article

  • Why won't the following PDO transaction work in PHP?

    - by jfizz
    I am using PHP version 5.4.4 and a MySQL database using InnoDB. I had been using PDO for a while without utilizing transactions, and everything was working flawlessly. Then I decided to try to implement transactions, and I keep getting Internal Server Error 500. The following code worked for me (it doesn't contain transactions):

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $dbh = $DB->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
        } catch(Exception $e){
            $dbh->rollback();
            echo "an error has occured";
        }

    Then I attempted to utilize transactions with the following code (which doesn't work):

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $dbh = $DB->beginTransaction();
            $dbh->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
            $dbh->commit();
        } catch(Exception $e){
            $dbh->rollback();
            echo "an error has occured";
        }

    When I run the previous code, I get an Internal Server Error 500. Any help would be greatly appreciated! Thanks!
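    A minimal sketch of the likely fix: PDO::beginTransaction() returns a bool, not a statement handle, so the code above goes on to call prepare() on true and fatals. Keeping the transaction on the connection and the statement in its own variable restores the working shape.

        try {
            $DB = new PDO('mysql:host=localhost;dbname=database', 'root', 'root');
            $DB->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
            $DB->beginTransaction();            // returns bool; don't reassign
            $dbh = $DB->prepare("SELECT * FROM user WHERE username = :test");
            $dbh->bindValue(':test', $test, PDO::PARAM_STR);
            $dbh->execute();
            $DB->commit();                      // commit on the connection
        } catch (Exception $e) {
            $DB->rollBack();                    // roll back on the connection
            echo "an error has occurred";
        }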

    Read the article

  • Troubleshooting multiple GET variables in PHP

    - by V413HAV
    This may be a very simple question, but I don't know what I'm doing wrong here... To explain clearly, I've set up a really simple example below:

        <ul>
            <li><a href="test.php?link1=true">Link 1</a></li>
            <li><a href="test.php?link2=true">Link 2</a></li>
            <li><a href="test.php?link3=true">Link 3</a></li>
        </ul>
        <?php
        if(isset($_GET['link1'])) {
            if(($_GET['link1']) == 'true') {
                echo 'This Is Link 1';
            }
        }
        if(isset($_GET['link2'])) {
            if(($_GET['link2']) == 'true') {
                echo 'This Is Link 2';
            }
        }
        if(isset($_GET['link3'])) {
            if(($_GET['link3']) == 'true') {
                echo 'This Is Link 3';
            }
        }
        ?>

    This is a test.php page. Here I've set 3 different arguments for $_GET and show content accordingly. Everything works perfectly: if the user clicks on Link 1, the URL will be

        http://localhost/test.php?link1=true

    and the output of this URL is "This Is Link 1". The one thing I'm not understanding is how to block this kind of URL:

        http://localhost/test.php?link3=true&link2=true&link1=true

    whose output is "This Is Link 1This Is Link 2This Is Link 3". This is okay here, but it's very annoying if someone types this and sees the sections stacked one below the other. Is there any way I can stop this tampering?
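    A minimal sketch of one way to make the links mutually exclusive: collapse them into a single parameter so stacked query strings like ?link1=true&link2=true have nothing to stack (the links then become test.php?page=link1 and so on).

        <?php
        $page = isset($_GET['page']) ? $_GET['page'] : '';
        switch ($page) {
            case 'link1': echo 'This Is Link 1'; break;
            case 'link2': echo 'This Is Link 2'; break;
            case 'link3': echo 'This Is Link 3'; break;
        }
        ?>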

    Read the article

  • Ways to support manually executed tests? (that can be used on a Mac)

    - by Rinzwind
    Are there any tools that can be used on a Mac to support manually executed tests? I have a number of tests that I'm executing manually and which I'm currently documenting using merely a plain-text file. "Tools" can be interpreted rather loosely here; anything that's a step up from the plain-text file would be useful: a template for some suitable application, supporting AppleScript scripts, a web-based system, a full-blown application...

    Some things that would be great to have better support for (see also the example below):
    - Checking off each step while you're manually executing the test.
    - Showing the next step(s) in a small window that is always kept in front of all other windows.
    - Automatically updating the 'last tested' and 'using SVN revision' info.
    - Keeping a record of all previous testing rounds (not just the last one).

    Any suggestions for any such "tools" that can be used on a Mac? An example (faked) entry from the plain-text file to give you a better idea of what I'm looking for:

        - Check that exported web pages render properly in Safari.
          Last tested: 2010-03-24
          Using SVN revision: 1000
          Steps:
          - Open a new document.
          - Add some items to the document.
          - Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop.
          - Open the web page in Safari, script:
              tell application "Finder"
                  open file "Test.html" of folder "Export Test" of desktop
              end tell
          Expected results:
          - The web page should appear properly with all items shown.
          Clean up steps:
          - Remove the folder "Export Test" from the Desktop.

    (Note: for those unaware, the snippet of AppleScript in the above can be executed from most text editing applications through the Services menu, by selecting the snippet and using the application menu > Services > Script Editor > Run as AppleScript. This is quite useful to automate some steps for tests that are difficult to automate as a whole.)

    Read the article

  • Zend Framework - How do I make a hierarchy without it being a module?

    - by Josh
    Here is my specific issue. I want to make an api level, under which you can select which protocol you will use. For example:

        test.com/api/rest
        test.com/api/xmlrpc

    Currently I have api mapping to a module directory. I then set up a route to make it a REST route. test.com/api is a REST route, but I would rather have it be test.com/api/rest. This would allow me to add others later. In Bootstrap.php:

        $front = Zend_Controller_Front::getInstance();
        $router = $front->getRouter();
        $route = new Zend_Controller_Router_Route(
            'api/:module/:controller/:id/*',
            array('module' => 'default')
        );
        $router->addRoute('api', $route);
        $restRoute = new Zend_Rest_Route($front, array(), array('rest'));
        $router->addRoute('rest', $restRoute);
        return $router;

    In application.ini:

        resources.frontController.moduleDirectory = APPLICATION_PATH "/modules"

    I know it will involve routes, but sometimes I find the Zend Framework documentation a little hard to follow. When I go to test.com/rest/controller/ it works how it should, but if I go to test.com/api/rest/ it tells me that my module is api and my controller is rest.
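    A minimal sketch with hypothetical route names, assuming Zend Framework 1 routing: defining one route per protocol with a literal prefix keeps test.com/api/rest distinct from a future test.com/api/xmlrpc, instead of letting 'api/rest' be parsed as module/controller.

        $router->addRoute('api-rest', new Zend_Controller_Router_Route(
            'api/rest/:controller/:action/*',
            array('module' => 'api', 'controller' => 'index', 'action' => 'index')
        ));
        $router->addRoute('api-xmlrpc', new Zend_Controller_Router_Route(
            'api/xmlrpc/:controller/:action/*',
            array('module' => 'api', 'controller' => 'index', 'action' => 'index')
        ));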

    Read the article

  • Scan file contents into an array of a structure.

    - by ZaZu
    Hello, I have a structure in my program that contains a particular array. I want to scan a random file with numbers and put the contents into that array. This is my code (NOTE: this is a sample from a bigger program, so I need the structure and arrays as declared). The contents of the file are basically:

        5 4 3 2 5 3 4 2

    Code:

        #include <stdio.h>
        #define first 500
        #define sec 500

        struct trial {
            int f;
            int r;
            float what[first][sec];
        };

        int trialtest(trial *test);

        main(){
            trial test;
            trialtest(&test);
        }

        int trialtest(trial *test){
            int z,x,i;
            FILE *fin;
            fin=fopen("randomfile.txt","r");
            for(i=0;i<5;i++){
                fscanf(fin,"%5.2f\t",(*test).what[z][x]);
            }
            fclose(fin);
            return 0;
        }

    But whenever I run this code, I get this error:

        (25) : warning 508 - Data of type 'double' supplied where a pointer is required

    I tried adding

        do{
            for(i=0;i<5;i++){
                q=fscanf(fin,"%5.2f\t",(*test).what[z][x]);
            }
        }while(q!=EOF);

    but that didn't work either; it gives the same error. Does anyone have a solution to this problem?
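    A minimal sketch of the two fixes the warning implies: scanf-family conversions need a pointer (the & is missing) and take no precision, so "%5.2f" is invalid for input; the indices z and x were also never initialized. Reading the eight sample values into the first row is an assumption about the intended layout.

        int trialtest(trial *test){
            FILE *fin = fopen("randomfile.txt", "r");
            if (fin == NULL)
                return 1;
            /* The sample file holds 8 numbers; read them into row 0. */
            for (int i = 0; i < 8; i++) {
                if (fscanf(fin, "%f", &test->what[0][i]) != 1)
                    break;   /* stop on EOF or a malformed value */
            }
            fclose(fin);
            return 0;
        }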

    Read the article

  • Adding an element to a multidimensional array

    - by stef
    How can I loop through the array below and add an element to each sub-array, with key "url_slug" and value "foo"? I tried array_push, but that gets rid of the key names (it seems?). Doing a foreach($array as $k => $v) doesn't do it either, I think. The new array should be exactly the same, only having 4 elements per sub-array instead of 3, with the key/value above.

        Array
        (
            [0] => Array
                (
                    [name_en] => Test 5
                    [url_name_nl] => test-5
                    [cat_name] => mobile
                )
            [1] => Array
                (
                    [name_en] => Test 10
                    [url_name_nl] => test-10
                    [cat_name] => mobile
                )
            [2] => Array
                (
                    [name_en] => Test 25
                    [url_name_nl] => test-25
                    [cat_name] => mobile
                )
        )

    EDIT: full working solution. A little more complex than originally described:

        foreach ($prods as $key => &$value) {
            if($key == "cat_name") $slug = $value['cat_name'];
            $url_slug = $this->lang->line($slug);
            $value['url_slug'] = $url_slug;
        }
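    A minimal sketch of the in-place approach: modifying each sub-array by reference adds the new key without disturbing the existing string keys, which array_push would discard.

        foreach ($array as &$row) {
            $row['url_slug'] = 'foo';   // becomes the 4th element of each sub-array
        }
        unset($row);                    // break the reference after the loop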

    Read the article

  • How to access property in sub method after it runs?

    - by Warren
    I'm having a hard time working this one out, but I think it should be pretty simple. Basically, I have this method which talks to a web service, and I need to return some data from the inner block: the "authCode". What am I doing wrong? How can I get the authCode out of the manager? Can I create a block or something to ensure that the manager's block runs first? Am I even using the right words (blocks, sub-methods)? Please help :)

        - (NSString *)getAuthCodeEXAMPLE
        {
            __block NSString *returnString = @"nothing yet!";
            NSURL *baseURL = [NSURL URLWithString:BaseURLString];
            NSDictionary *parametersGetAuthCode = @{@"req": @"getauth"};
            AFHTTPSessionManager *manager = [[AFHTTPSessionManager alloc] initWithBaseURL:baseURL];
            manager.responseSerializer = [AFJSONResponseSerializer serializer];
            [manager POST:APIscript parameters:parametersGetAuthCode success:^(NSURLSessionDataTask *task, id responseObject) {
                if ([task.response isKindOfClass:[NSHTTPURLResponse class]]) {
                    NSHTTPURLResponse *r = (NSHTTPURLResponse *)task.response;
                    if ([r statusCode] == 200) {
                        self.returnedData = responseObject;
                        NSString *authCode = [self.returnedData authcode];
                        NSLog(@"Authcode: %@", authCode);
                        returnString = authCode;
                    }
                }
            } failure:^(NSURLSessionDataTask *task, NSError *error) {
                UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Error"
                                                                    message:[error localizedDescription]
                                                                   delegate:nil
                                                          cancelButtonTitle:@"Ok"
                                                          otherButtonTitles:nil];
                [alertView show];
            }];
            // this currently returns "nothing yet!"
            return returnString;
        }
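    A minimal sketch of the usual pattern (the "authcode" response key is an assumption based on the log line): because the POST completes asynchronously, the method cannot return the value; it hands it to a completion block instead, and the caller does its work inside that block.

        - (void)getAuthCodeWithCompletion:(void (^)(NSString *authCode, NSError *error))completion
        {
            NSURL *baseURL = [NSURL URLWithString:BaseURLString];
            AFHTTPSessionManager *manager = [[AFHTTPSessionManager alloc] initWithBaseURL:baseURL];
            manager.responseSerializer = [AFJSONResponseSerializer serializer];
            [manager POST:APIscript
               parameters:@{@"req": @"getauth"}
                  success:^(NSURLSessionDataTask *task, id responseObject) {
                      // Assumed response shape: a JSON object with an "authcode" field.
                      completion(responseObject[@"authcode"], nil);
                  }
                  failure:^(NSURLSessionDataTask *task, NSError *error) {
                      completion(nil, error);
                  }];
        }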

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause/resume feature.

    There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binary, how to upload it into blob storage in blocks through the .NET SDK. But the question is, how can we upload these large files from the client side, for example a browser? This question came up when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

    1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
    2: {
    3: <input type="file" name="file" />
    4: <input type="submit" value="upload" />
    5: }

    Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential; parallel (unlimited); parallel with a limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:
    - Parallel gets better performance than sequential.
    - There is no significant performance difference between parallel with 4 threads and with 8 threads.
    - Transferring binary chunks provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets the best performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Understanding Request Validation in ASP.NET MVC 3

    - by imran_ku07
    Introduction:

    A fact that you must always remember: never, ever trust user input. An application that trusts user input may easily be vulnerable to XSS, XSRF, SQL injection and other attacks. XSS and XSRF are very dangerous attacks, so to mitigate them ASP.NET introduced request validation in ASP.NET 1.1. During request validation, ASP.NET will throw HttpRequestValidationException: 'A potentially dangerous XXX value was detected from the client' if it finds < followed by an exclamation mark (like <!), < followed by the letters a through z (like <s), or & followed by a pound sign (like &#123) as part of the query string, posted form or cookie collection. In ASP.NET 4.0, request validation became extensible, which means that you can extend request validation. Also in ASP.NET 4.0, request validation is by default enabled before the BeginRequest phase of an HTTP request. ASP.NET MVC 3 moves one step further by making request validation granular: you can disable request validation for some properties of a model while maintaining request validation for all other cases. In this article I will show you the use of request validation in ASP.NET MVC 3, and then briefly explain the internal workings of granular request validation.

    Description:

    First of all, create a new ASP.NET MVC 3 application. Then create a simple model class called MyModel:

        public class MyModel
        {
            public string Prop1 { get; set; }
            public string Prop2 { get; set; }
        }

    Then just update the Index action method as follows:

        public ActionResult Index(MyModel p)
        {
            return View();
        }

    Now run this application; you will find that everything works just fine. Next, append the query string ?Prop1=<s to the URL of this application, and you will get the HttpRequestValidationException exception. Now decorate the Index action method with [ValidateInput(false)]:

        [ValidateInput(false)]
        public ActionResult Index(MyModel p)
        {
            return View();
        }

    Run this application again with the same query string. You will find that your application runs without any unhandled exception.

    Up to now there is nothing new in ASP.NET MVC 3, because ValidateInputAttribute was present in previous versions of ASP.NET MVC. Is there any problem with this approach? Yes: users can now send HTML for both the Prop1 and Prop2 properties (e.g., ?Prop1=<s&Prop2=<s), and a lot of developers are not aware of this. So the ValidateInput attribute does not give you any guarantee that your application is safe from XSS or XSRF. This is the reason the ASP.NET MVC team introduced granular request validation in ASP.NET MVC 3. Let's see this feature.

    Remove [ValidateInput(false)] from the Index action and update the MyModel class as follows:

        public class MyModel
        {
            [AllowHtml]
            public string Prop1 { get; set; }
            public string Prop2 { get; set; }
        }

    Note that the AllowHtml attribute is applied only to the Prop1 property. Run this application again with the query string ?Prop1=<s, and your application will run just fine. Run it again with ?Prop1=<s&Prop2=<s, and you will get the HttpRequestValidationException exception. This shows that granular request validation in ASP.NET MVC 3 only allows users to send HTML for properties decorated with the AllowHtml attribute.
Sometimes you may need to access Request.QueryString or Request.Form directly. You might change your code as follows:

    [ValidateInput(false)]
    public ActionResult Index()
    {
        var prop1 = Request.QueryString["Prop1"];
        return View();
    }

Run the application again and you will get the HttpRequestValidationException again, even though you have [ValidateInput(false)] on your Index action. The reason is that the request's validation flags are still set, so direct access to the collection triggers validation; I will explain this later. To make this work you need to use the Unvalidated extension method:

    public ActionResult Index()
    {
        var q = Request.Unvalidated().QueryString;
        var prop1 = q["Prop1"];
        return View();
    }

The Unvalidated extension method is defined in the System.Web.Helpers namespace, so you need to add using System.Web.Helpers; to this class file. Run the application again and it runs just fine.

There you have it. If you are not curious about the internal working of granular request validation, you can stop reading here. If you are interested, carry on.

Create a new ASP.NET MVC 2 application, then open the global.asax.cs file and add the following lines:

    protected void Application_BeginRequest()
    {
        var q = Request.QueryString;
    }

Then make the Index action method:

    [ValidateInput(false)]
    public ActionResult Index(string id)
    {
        return View();
    }

Please note that the Index action method takes a parameter and is decorated with [ValidateInput(false)]. Run this application again, but now with the ?id=<s query string; you will get an HttpRequestValidationException in the Application_BeginRequest method. Now add the following entry to web.config:

    <httpRuntime requestValidationMode="2.0"/>

Run the application again. This time it runs just fine. Now consider the following quote from ASP.NET 4 Breaking Changes:

    In ASP.NET 4, by default, request validation is enabled for all requests, because it is enabled before the BeginRequest phase of an HTTP request. As a result, request validation applies to requests for all ASP.NET resources, not just .aspx page requests. This includes requests such as Web service calls and custom HTTP handlers. Request validation is also active when custom HTTP modules are reading the contents of an HTTP request.

This clearly states that request validation is enabled before the BeginRequest phase of an HTTP request. To understand what "enabled" means here, we need to look at the HttpRequest.ValidateInput, HttpRequest.QueryString and HttpRequest.Form methods/properties in the System.Web assembly.
Here is their implementation:

    public NameValueCollection Form
    {
        get
        {
            if (this._form == null)
            {
                this._form = new HttpValueCollection();
                if (this._wr != null)
                {
                    this.FillInFormCollection();
                }
                this._form.MakeReadOnly();
            }
            if (this._flags[2])
            {
                this._flags.Clear(2);
                this.ValidateNameValueCollection(this._form, RequestValidationSource.Form);
            }
            return this._form;
        }
    }

    public NameValueCollection QueryString
    {
        get
        {
            if (this._queryString == null)
            {
                this._queryString = new HttpValueCollection();
                if (this._wr != null)
                {
                    this.FillInQueryStringCollection();
                }
                this._queryString.MakeReadOnly();
            }
            if (this._flags[1])
            {
                this._flags.Clear(1);
                this.ValidateNameValueCollection(this._queryString, RequestValidationSource.QueryString);
            }
            return this._queryString;
        }
    }

    public void ValidateInput()
    {
        if (!this._flags[0x8000])
        {
            this._flags.Set(0x8000);
            this._flags.Set(1);
            this._flags.Set(2);
            this._flags.Set(4);
            this._flags.Set(0x40);
            this._flags.Set(0x80);
            this._flags.Set(0x100);
            this._flags.Set(0x200);
            this._flags.Set(8);
        }
    }

The above code shows that HttpRequest.QueryString and HttpRequest.Form only validate the query string and form collections if certain flags are set, and these flags are set automatically when you call the HttpRequest.ValidateInput method. Now run the above application again (don't forget to append the ?id=<s query string to the URL) with the same settings (requestValidationMode="2.0" in web.config and the Application_BeginRequest method in global.asax.cs); it runs just fine. Next, update the Application_BeginRequest method as follows:

    protected void Application_BeginRequest()
    {
        Request.ValidateInput();
        var q = Request.QueryString;
    }

Note that I am calling the Request.ValidateInput method before using the Request.QueryString property. ValidateInput internally sets the flags discussed above, and those flags then tell the Request.QueryString (and Request.Form) property to validate the query string (or form) when it is accessed. So running this application again with the ?id=<s query string will throw an HttpRequestValidationException. I hope it is now clear what requestValidationMode does: it simply tells ASP.NET not to invoke the Request.ValidateInput method internally before the BeginRequest phase of an HTTP request when requestValidationMode is set to a value less than 4.0 in web.config. Here is the implementation of the HttpRequest.ValidateInputIfRequiredByConfig method, which proves this statement (don't confuse the HttpRequest class with Request; Request is simply a property that returns the current HttpRequest instance):

    internal void ValidateInputIfRequiredByConfig()
    {
        ...............................................................
        ...............................................................
        ...............................................................
        ...............................................................
        if (httpRuntime.RequestValidationMode >= VersionUtil.Framework40)
        {
            this.ValidateInput();
        }
    }

Hopefully the above discussion clarifies how requestValidationMode works in ASP.NET 4. It is also interesting to note that HttpRequest.QueryString and HttpRequest.Form only throw the exception the first time you access them; any subsequent access does not throw.
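Before moving on, recall from the introduction that ASP.NET 4.0 also made request validation extensible. For completeness, here is a minimal sketch of that extensibility point; the type name and the "comment" rule below are hypothetical, while the RequestValidator base class, its IsValidRequestString override, and the requestValidationType attribute are the actual framework surface:

    using System.Web;
    using System.Web.Util;

    public class CommentRequestValidator : RequestValidator
    {
        protected override bool IsValidRequestString(
            HttpContext context, string value,
            RequestValidationSource requestValidationSource,
            string collectionKey, out int validationFailureIndex)
        {
            // Hypothetical rule: let a single "comment" form field through
            // untouched, and defer to the default checks for everything else.
            if (requestValidationSource == RequestValidationSource.Form &&
                collectionKey == "comment")
            {
                validationFailureIndex = -1;
                return true;
            }

            return base.IsValidRequestString(context, value,
                requestValidationSource, collectionKey, out validationFailureIndex);
        }
    }

Such a validator is registered in web.config, for example:

    <httpRuntime requestValidationType="MyApp.CommentRequestValidator, MyApp" />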
Returning to the global.asax example, update the Application_BeginRequest method in the global.asax.cs file as follows:

    protected void Application_BeginRequest()
    {
        try
        {
            var q = Request.QueryString;
            var f = Request.Form;
        }
        catch // swallow this exception
        {
        }
        var q1 = Request.QueryString;
        var f1 = Request.Form;
    }

Without setting requestValidationMode to 2.0 and without decorating the Index action with the ValidateInput attribute, the application works just fine, because both HttpRequest.QueryString and HttpRequest.Form clear their flags after being read for the first time (see their implementations above).

Now let's look at the internal working of ASP.NET MVC 3's granular request validation. First we need to look at the types of the HttpRequest.QueryString and HttpRequest.Form properties. Both are of type NameValueCollection, which derives from the NameObjectCollectionBase class. NameObjectCollectionBase contains the _entriesArray and _entriesTable fields and the NameObjectEntry.Key and NameObjectEntry.Value fields, all of which granular request validation uses internally. In addition, granular request validation uses the _queryString, _form and _flags fields, the ValidateString method, and the indexer of the HttpRequest class. Let's see when and how granular request validation uses these members.

Create a new ASP.NET MVC 3 application. Put a breakpoint in the Application_BeginRequest method and another in the HomeController.Index method, then run the application. When the breakpoint inside Application_BeginRequest is hit, add the expression System.Web.HttpContext.Current.Request.QueryString to the Quick Watch window. You will see the following screen:

    [screenshot]

Press F5 so that the second breakpoint, inside HomeController.Index, is hit, then add the same expression, System.Web.HttpContext.Current.Request.QueryString, to the Quick Watch window again. You will see the following screen:

    [screenshot]

The first screen shows that during the BeginRequest phase of the HTTP request, the _entriesTable field is of type System.Collections.Hashtable and the _entriesArray field is of type System.Collections.ArrayList. The second screen shows that while the Index action method executes, the _entriesTable type has changed to Microsoft.Web.Infrastructure.DynamicValidationHelper.LazilyValidatingHashtable and the _entriesArray type has changed to Microsoft.Web.Infrastructure.DynamicValidationHelper.LazilyValidatingArrayList. In addition to these members, ASP.NET MVC 3 also operates internally on _flags, _form, _queryString and other members of the HttpRequest class. This shows that ASP.NET MVC 3 manipulates members of the HttpRequest class to make granular request validation possible.

Both the LazilyValidatingArrayList and LazilyValidatingHashtable classes are defined in the Microsoft.Web.Infrastructure assembly. You may wonder why their names start with "Lazily": the point is that with ASP.NET MVC 3, request validation is performed lazily. In simple words, the Microsoft.Web.Infrastructure assembly now takes over the responsibility for request validation from the System.Web assembly. See the screens below.
The first screen shows the HttpRequestValidationException in an ASP.NET MVC 2 application, while the second shows the HttpRequestValidationException in an ASP.NET MVC 3 application.

In MVC 2:

    [screenshot]

In MVC 3:

    [screenshot]

The stack trace in the second screenshot shows that the Microsoft.Web.Infrastructure assembly (instead of the System.Web assembly) now performs request validation in ASP.NET MVC 3. Now you may ask: where does the Microsoft.Web.Infrastructure assembly operate on the members of the HttpRequest class? There are at least two such places: the Microsoft.Web.Infrastructure.DynamicValidationHelper.GranularValidationReflectionUtil.GetInstance method and the Microsoft.Web.Infrastructure.DynamicValidationHelper.ValidationUtility.CollectionReplacer.ReplaceCollection method. Here is the implementation of these methods:

    private static GranularValidationReflectionUtil GetInstance()
    {
        try
        {
            if (DynamicValidationShimReflectionUtil.Instance != null)
            {
                return null;
            }
            GranularValidationReflectionUtil util = new GranularValidationReflectionUtil();
            Type containingType = typeof(NameObjectCollectionBase);
            string fieldName = "_entriesArray";
            bool isStatic = false;
            Type fieldType = typeof(ArrayList);
            FieldInfo fieldInfo = CommonReflectionUtil.FindField(containingType, fieldName, isStatic, fieldType);
            util._del_get_NameObjectCollectionBase_entriesArray = MakeFieldGetterFunc<NameObjectCollectionBase, ArrayList>(fieldInfo);
            util._del_set_NameObjectCollectionBase_entriesArray = MakeFieldSetterFunc<NameObjectCollectionBase, ArrayList>(fieldInfo);
            Type type6 = typeof(NameObjectCollectionBase);
            string str2 = "_entriesTable";
            bool flag2 = false;
            Type type7 = typeof(Hashtable);
            FieldInfo info2 = CommonReflectionUtil.FindField(type6, str2, flag2, type7);
            util._del_get_NameObjectCollectionBase_entriesTable = MakeFieldGetterFunc<NameObjectCollectionBase, Hashtable>(info2);
            util._del_set_NameObjectCollectionBase_entriesTable = MakeFieldSetterFunc<NameObjectCollectionBase, Hashtable>(info2);
            Type targetType = CommonAssemblies.System.GetType("System.Collections.Specialized.NameObjectCollectionBase+NameObjectEntry");
            Type type8 = targetType;
            string str3 = "Key";
            bool flag3 = false;
            Type type9 = typeof(string);
            FieldInfo info3 = CommonReflectionUtil.FindField(type8, str3, flag3, type9);
            util._del_get_NameObjectEntry_Key = MakeFieldGetterFunc<string>(targetType, info3);
            Type type10 = targetType;
            string str4 = "Value";
            bool flag4 = false;
            Type type11 = typeof(object);
            FieldInfo info4 = CommonReflectionUtil.FindField(type10, str4, flag4, type11);
            util._del_get_NameObjectEntry_Value = MakeFieldGetterFunc<object>(targetType, info4);
            util._del_set_NameObjectEntry_Value = MakeFieldSetterFunc(targetType, info4);
            Type type12 = typeof(HttpRequest);
            string methodName = "ValidateString";
            bool flag5 = false;
            Type[] argumentTypes = new Type[] { typeof(string), typeof(string), typeof(RequestValidationSource) };
            Type returnType = typeof(void);
            MethodInfo methodInfo = CommonReflectionUtil.FindMethod(type12, methodName, flag5, argumentTypes, returnType);
            util._del_validateStringCallback = CommonReflectionUtil.MakeFastCreateDelegate<HttpRequest, ValidateStringCallback>(methodInfo);
            Type type = CommonAssemblies.SystemWeb.GetType("System.Web.HttpValueCollection");
            util._del_HttpValueCollection_ctor = CommonReflectionUtil.MakeFastNewObject<Func<NameValueCollection>>(type);
            Type type14 = typeof(HttpRequest);
            string str6 = "_form";
            bool flag6 = false;
            Type type15 = type;
            FieldInfo info6 = CommonReflectionUtil.FindField(type14, str6, flag6, type15);
            util._del_get_HttpRequest_form = MakeFieldGetterFunc<HttpRequest, NameValueCollection>(info6);
            util._del_set_HttpRequest_form = MakeFieldSetterFunc(typeof(HttpRequest), info6);
            Type type16 = typeof(HttpRequest);
            string str7 = "_queryString";
            bool flag7 = false;
            Type type17 = type;
            FieldInfo info7 = CommonReflectionUtil.FindField(type16, str7, flag7, type17);
            util._del_get_HttpRequest_queryString = MakeFieldGetterFunc<HttpRequest, NameValueCollection>(info7);
            util._del_set_HttpRequest_queryString = MakeFieldSetterFunc(typeof(HttpRequest), info7);
            Type type3 = CommonAssemblies.SystemWeb.GetType("System.Web.Util.SimpleBitVector32");
            Type type18 = typeof(HttpRequest);
            string str8 = "_flags";
            bool flag8 = false;
            Type type19 = type3;
            FieldInfo flagsFieldInfo = CommonReflectionUtil.FindField(type18, str8, flag8, type19);
            Type type20 = type3;
            string str9 = "get_Item";
            bool flag9 = false;
            Type[] typeArray4 = new Type[] { typeof(int) };
            Type type21 = typeof(bool);
            MethodInfo itemGetter = CommonReflectionUtil.FindMethod(type20, str9, flag9, typeArray4, type21);
            Type type22 = type3;
            string str10 = "set_Item";
            bool flag10 = false;
            Type[] typeArray6 = new Type[] { typeof(int), typeof(bool) };
            Type type23 = typeof(void);
            MethodInfo itemSetter = CommonReflectionUtil.FindMethod(type22, str10, flag10, typeArray6, type23);
            MakeRequestValidationFlagsAccessors(flagsFieldInfo, itemGetter, itemSetter, out util._del_BitVector32_get_Item, out util._del_BitVector32_set_Item);
            return util;
        }
        catch
        {
            return null;
        }
    }

    private static void ReplaceCollection(HttpContext context, FieldAccessor<NameValueCollection> fieldAccessor, Func<NameValueCollection> propertyAccessor, Action<NameValueCollection> storeInUnvalidatedCollection, RequestValidationSource validationSource, ValidationSourceFlag validationSourceFlag)
    {
        NameValueCollection originalBackingCollection;
        ValidateStringCallback validateString;
        SimpleValidateStringCallback simpleValidateString;
        Func<NameValueCollection> getActualCollection;
        Action<NameValueCollection> makeCollectionLazy;
        HttpRequest request = context.Request;
        Func<bool> getValidationFlag = delegate {
            return _reflectionUtil.GetRequestValidationFlag(request, validationSourceFlag);
        };
        Func<bool> func = delegate {
            return !getValidationFlag();
        };
        Action<bool> setValidationFlag = delegate (bool value) {
            _reflectionUtil.SetRequestValidationFlag(request, validationSourceFlag, value);
        };
        if ((fieldAccessor.Value != null) && func())
        {
            storeInUnvalidatedCollection(fieldAccessor.Value);
        }
        else
        {
            originalBackingCollection = fieldAccessor.Value;
            validateString = _reflectionUtil.MakeValidateStringCallback(context.Request);
            simpleValidateString = delegate (string value, string key) {
                if (((key == null) || !key.StartsWith("__", StringComparison.Ordinal)) && !string.IsNullOrEmpty(value))
                {
                    validateString(value, key, validationSource);
                }
            };
            getActualCollection = delegate {
                fieldAccessor.Value = originalBackingCollection;
                bool flag = getValidationFlag();
                setValidationFlag(false);
                NameValueCollection col = propertyAccessor();
                setValidationFlag(flag);
                storeInUnvalidatedCollection(new NameValueCollection(col));
                return col;
            };
            makeCollectionLazy = delegate (NameValueCollection col) {
                simpleValidateString(col[null], null);
                LazilyValidatingArrayList array = new LazilyValidatingArrayList(_reflectionUtil.GetNameObjectCollectionEntriesArray(col), simpleValidateString);
                _reflectionUtil.SetNameObjectCollectionEntriesArray(col, array);
                LazilyValidatingHashtable table = new LazilyValidatingHashtable(_reflectionUtil.GetNameObjectCollectionEntriesTable(col), simpleValidateString);
                _reflectionUtil.SetNameObjectCollectionEntriesTable(col, table);
            };
            Func<bool> hasValidationFired = func;
            Action disableValidation = delegate {
                setValidationFlag(false);
            };
            Func<int> fillInActualFormContents = delegate {
                NameValueCollection values = getActualCollection();
                makeCollectionLazy(values);
                return values.Count;
            };
            DeferredCountArrayList list = new DeferredCountArrayList(hasValidationFired, disableValidation, fillInActualFormContents);
            NameValueCollection target = _reflectionUtil.NewHttpValueCollection();
            _reflectionUtil.SetNameObjectCollectionEntriesArray(target, list);
            fieldAccessor.Value = target;
        }
    }

Hopefully the above code helps you understand the internal working of granular request validation. It is also important to note that the Microsoft.Web.Infrastructure assembly invokes the HttpRequest.ValidateInput method internally. For further details, see the Microsoft.Web.Infrastructure assembly source. Finally you may ask: at which stage does ASP.NET MVC 3 invoke these methods? You will find the answer by looking at the source of the following members:

- The Unvalidated extension method for the HttpRequest class, defined in the System.Web.Helpers.Validation class.
- The System.Web.Mvc.MvcHandler.ProcessRequestInit method.
- The System.Web.Mvc.ControllerActionInvoker.ValidateRequest method.
- The System.Web.WebPages.WebPageHttpHandler.ProcessRequestInternal method.

Summary:

ASP.NET helps prevent XSS attacks using a feature called request validation. In this article I showed you how to use granular request validation in ASP.NET MVC 3 and explained its internal working. I hope you enjoyed this article too.

    Read the article

  • CodePlex Daily Summary for Tuesday, December 14, 2010

CodePlex Daily Summary for Tuesday, December 14, 2010

Popular Releases

FlickrNet API Library: 3.1.4000: Newest release. Now contains dedicated Windows Phone 7 DLL as well as all previous DLLs. Also contains Windows Help file documentation now as standard.

mojoPortal: 2.3.5.8: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2358-released.aspx Note that we have separate deployment packages for .NET 3.5 and .NET 4.0. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code. To download the source code see the Source Code tab. I recommend getting the latest source code using TortoiseHG; you can get the source code corresponding to this release here.

Microsoft All-In-One Code Framework: Visual Studio 2010 Code Samples 2010-12-13: Code samples for Visual Studio 2010

SuperWebSocket: SuperWebSocket Drop 2: Changes: based on SuperSocket 1.3; supported sub protocol; supported SSL/TLS encryption (wss) in Sync socket mode; fixed some data communication bugs

SSH.NET Library: 2010.12.13: Fixes SFTP issue when you try to upload or download multiple files simultaneously. Usage example can be found here

Request Tracker Data Access: 1.0.0.0: First release

SQL Monitor: SQL Monitor 2.4: 1. auto adjust datagrids in query 2. disable activities related commands until activities tab is active.

SuperSocket, an extensible socket application framework: SuperSocket 1.3 beta 1: SuperSocket 1.3 is built on .NET 4.0 framework. Bug fixes: fixed a potential bug that the running state hadn't been updated after socket server stopped; fixed a synchronization issue when clearing timeout session; fixed a bug in ArraySegmentList; fixed a bug on getting configuration value. Third-party library upgrades: upgraded SuperSocket to .NET 4.0; upgraded EntLib 4.1 to 5.0. New features: supported UDP socket; support custom protocol (can support binary protocol and other complecate...

Wii Backup Fusion: Wii Backup Fusion 0.9 Beta: - Aqua or brushed metal style for Mac OS X - Shows selection count beside ID - Game list selection mode via settings - Compare Files <-> WBFS game lists - Verify game images/DVD/WBFS - WIT command line for log (via settings) - Cancel possibility for loading games process - Progress infos while loading games - Localization for dates - UTF-8 support - Shortcuts added - View game infos in browser - Transfer infos for log - All transfer routines rewritten - Extract image from image/WBFS - Support....

NETTER Code Starter Pack: v1.0.beta: '.NETTER Code Starter Pack' contains a gallery of Visual Studio 2010 solutions leveraging latest and new technologies and frameworks based on Microsoft .NET Framework. Each Visual Studio solution included here is focused to provide a very simple starting point for cutting edge development technologies and framework, using well known Northwind database (for database driven scenarios). The current release of this project includes starter samples for the following technologies: ASP.NET Dynamic...

NuGet (formerly NuPack): NuGet 1.0 Release Candidate: NuGet is a free, open source developer focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. This new build targets the newer feed (http://go.microsoft.com/fwlink/?LinkID=206669) and package format.
See http://nupack.codeplex.com/documentation?title=Nuspe...

Free Silverlight & WPF Chart Control - Visifire: Visifire Silverlight, WPF Charts v3.6.5 Released: Hi, Today we are releasing final version of Visifire, v3.6.5 with the following new feature: New property AutoFitToPlotArea has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble chart. You can visit Visifire documentation to know more: http://www.visifire.com/visifirechartsdocumentation.php Also this release includes few bug fixes: Chart threw exception while adding new Axis in Chart using Vi...

PHPExcel: PHPExcel 1.7.5 Production: Donations: Donate via PayPal. If you want to, we can also add your name / company on our Donation Acknowledgements page. PEAR channel: We now also have a full PEAR channel! Here's how to use it: New installation: pear channel-discover pear.pearplex.net pear install pearplex/PHPExcel Or if you've already installed PHPExcel before: pear upgrade pearplex/PHPExcel The official page can be found at http://pearplex.net. Want to contribute? Please refer the Contribute page.

SwapWin: SwapWin 0.2: Updates: Bring all windows that are swapped to foreground. Make the window sent to primary screen active.

??????????: All-In-One Code Framework ??? 2010-12-10: ?????All-In-One Code Framework(??) 2010?12??????!!http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ?????release?,???????ASP.NET, WinForm, Silverlight????12?Sample Code。???,??????????sample code。 ?????:http://blog.csdn.net/sjb5201/archive/2010/12/13/6072675.aspx ??,??????MSDN????????????。 http://social.msdn.microsoft.com/Forums/zh-CN/codezhchs/threads ?????????????????,??Email ????

UOB & ME: UOB_ME 2.5: latest version

AutoLoL: AutoLoL v1.4.3: AutoLoL now supports importing the build pages from Mobafire.com as well! Just insert the url to the build and voila. (For example: http://www.mobafire.com/league-of-legends/build/unforgivens-guide-how-to-build-a-successful-mordekaiser-24061) Stable release of AutoChat (It is still recommended to use with caution and to read the documentation) It is now possible to associate *.lolm files with AutoLoL to quickly open them The selected spells are now displayed in the masteries tab for qu...

SubtitleTools: SubtitleTools 1.2: - Added auto insertion of RLE (RIGHT-TO-LEFT EMBEDDING) Unicode character for the RTL languages. - Fixed delete rows issue.

PHP Manager for IIS: PHP Manager 1.1 for IIS 7: This is a final stable release of PHP Manager 1.1 for IIS 7. This is a minor incremental release that contains all the functionality available in 53121 plus additional features listed below: Improved detection logic for existing PHP installations. Now PHP Manager detects the location to php.ini file in accordance to the PHP specifications. Configuring date.timezone. PHP Manager can automatically set the date.timezone directive which is required to be set starting from PHP 5.3. Ability to ...

Algorithmia: Algorithmia 1.1: Algorithmia v1.1, released on December 8th, 2010.

New Projects

Augmented Reality system in Soccer video: Augmented reality system and camera calibration system in soccer videos based on homography and vanishing points. Code generated with Visual C++ (best compiler is .net)

Database Schema Provider: Database Schema Provider gets a database schema in unified format independent on the type of database. It uses ADO.NET data provider for Entity Framework.

Dicke Bertha: Many many cool features...
DNN Bookmark: DNN Bookmarks is a DNN module that aggregates the most popular social bookmarking tools and also allows you to bookmark your DNN web site

Dough: Dough is a UI starter kit built using ASP.Net MVC and ExtJS. Its name comes from the concept of Amish friendship bread, a type of bread or cake made from a sourdough starter that is often shared in a manner similar to a chain letter.

Garra - Gerenciador Financeiro: Garra is a complete financial management system: accounts payable, accounts receivable, investments, etc...

ghcwp7: ghcwp7

Hackathon - DotNetNuke Razor User Locator: The DotNetNuke Razor User Locator module demonstrates how Razor can be used to author DNN modules. This module shows where recent users to a web site came from based on their IP address.

Hackathon. DotNetNuke Razor. Flickr Badge: Flickr badge desktop module allows you to display image thumbnails from Flickr and preview them inside DotNetNuke or on Flickr (controlled by module settings). Image thumbnails can be loaded by tag, user id, user group id, user set id and more.

Hackathon: Razor Youtube Gallery: This is a DotNetNuke module which allows a website admin to add several relevant Youtube videos to a pane. The end user watches the selected Youtube video play, while scrolling through thumbnails of other videos to play those without refreshing the page.

jQuery UI MVC3 Demo: Demo and possibly a skeleton for using jQuery UI in MVC3 (currently RC2).

Microsoft Office Communicator History manager: Needs to save conversation history just on the local workstation (folder); the application can then restore it and show it in a simple window (mode), or the user can open the folder and look at it manually

MSTest Extensions - Msbuild: This project contains various msbuild tasks that help with test execution using the Microsoft testing framework

RayCharlesTracer: this is a school project for a raytracer.

Refunctor: F# interactive inside Reflector.

Request Tracker Data Access: Best Practical RT (Request Tracker) data access .NET library for REST interface.

SurfzApp: An application that does data mining on web resources of interest for Swedish windsurfers...

Tesseract Solutions Corp. Data Access Base: Tesseract Data Access speeds up data access in .Net projects. Developed in C# .Net 4. It is a C#, class based ORM.

TimBazinga EVoting: Undergrad project - designing an e-voting software system.

Tiny Library CQRS: Tiny Library CQRS is a small demonstration project which demonstrates the concept of Domain Driven Design and the CQRS architecture pattern. This project relies on the Apworks DDD framework.

Toptoys: toptoys

WebGroup: WebGroup makes it easier for your website members to comunicate online. It works like Web IM + Forum + Twitter. It can be easily used in your current project. Developed in C#.

WpfCustomChromeLibrary: WpfCustomChromeLibrary makes it easier to create WPF applications with custom chrome and caption buttons (min/max/close). You'll no longer have to do all the dirty work yourself in each application where you want a custom chrome. It's developed in XAML and C#.

    Read the article

  • CodePlex Daily Summary for Sunday, November 11, 2012

CodePlex Daily Summary for Sunday, November 11, 2012

Popular Releases

ZXMAK2: Version 2.7.2.0: show extended rzx error info; fix reset lag for PROFI ULA 5.xx; fix reset behavior; fix PROFI ULA timings (thanks to solegstar); fix #FF port for PROFI ULA; add ATM710 memory module; add new predefined machine configs: ATM Turbo 2, PROFI 3.XX

???????: Monitor 2012-11-11: This is the first release

VidCoder: 1.4.5 Beta: Removed the old Advanced user interface and moved x264 preset/profile/tune there instead. The functionality is still available through editing the options string. Added ability to specify the H.264 level. Added ability to choose VidCoder's interface language. If you are interested in translating, we can get VidCoder in your language! Updated WPF text rendering to use the better Display mode. Updated HandBrake core to SVN 5045. Removed logic that forced the .m4v extension in certain ...

ImageGlass: Version 1.5: http://i1214.photobucket.com/albums/cc483/phapsuxeko/ImageGlass/1.png v1.5.4401.3015 Thumbnail bar: Increase loading speed; Thumbnail image with ratio; Support personal customization: mouse up, mouse down, mouse hover, selected item... Scroll to show all items. Image viewer: Zoom by scroll, or selected rectangle; Speed up loading; Zoom to cursor point; New background design and customization and others... v1.5.4430.483 Thumbnail bar: Auto move scroll bar to selected image; Show / Hi...

Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 002: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes: A fix for the Netflix example from Chapter 6 that was missing a service reference. A fix for the ImageHelper issue (images were not being saved) - this was due to the buffer being inadequate and required streaming the writeable bitmap to a buffer first before encoding and saving.

myCollections: Version 2.3.2.0: New in this version: Added TheGamesDB.net API for Games and NDS; Added Support for Windows Media Center; Added Support for myMovies; Added Support for XBMC; Added Support for Dune HD; Added Support for Mede8er; Added Support for WD HDTV; Added Fast search options; Added order by Artist/Album for music; You can now create covers and background for games; You can now update your ID3 tag with the info of myCollections; Fixed several providers; Performance improvement; New Splash ...

Draw: Draw 1.0: Drawing Pad

Player Framework by Microsoft: Player Framework for Windows 8 (v1.0): IMPORTANT: List of breaking changes from preview 7. Ability to move control panel or individual elements outside media player. more info... New Entertainment app theme for out of the box support for Windows 8 Entertainment app guidelines. more info... VSIX reference names shortened. Allows seeing plugin name from "Add Reference" dialog without resizing. FreeWheel SmartXML now supports new "Standard" event callback type. Other minor misc fixes and improvements. ADDITIONAL DOWNLOADS: Smo...

WebSearch.Net: WebSearch.Net 3.1: WebSearch.Net is an open-source research platform that provides uniform data source access, data modeling, feature calculation, data mining, etc. It facilitates the experiments of web search researchers due to its high flexibility and extensibility. The platform can be used or extended by any language compatible for .Net 2 framework, from C# (recommended), VB.Net to C++ and Java.
Thanks to the large coverage of knowledge in web search research, it is necessary to model the techniques and main...

Umbraco CMS: Umbraco 4.10.0: NuGet Blog: Read the release blog post for 4.10.0. What's new: MVC support; New request pipeline; Many, many bugfixes (see the issue tracker for a complete list). Read the documentation for the MVC bits. Breaking changes: We have done all we can not to break backwards compatibility, but we had to do some minor breaking changes: Removed graphicHeadlineFormat config setting from umbracoSettings.config (an old relic from the 3.x days); U4-690 DynamicNode ChildrenAsList was fixed, altering it'...

SQL Server Partitioned Table Framework: Partitioned Table Framework Release 1.0: SQL Server 2012 Release

SharePoint Manager 2013: SharePoint Manager 2013 Release ver 1.0.12.1106: SharePoint Manager 2013 Release (ver: 1.0.12.1106) is now ready for SharePoint 2013. The new version has an expanded view of the SharePoint object model and has been tested on SharePoint 2013 RTM. As a bonus, the new version is also available for SharePoint 2010 as a separate download.

D3D9Client: D3D9Client R7: New release for Orbiter 2010-P1 - Added horizon/sun angle for night-lights into the configuration file (default 10deg) - Some runway lights related bugs are fixed - Added more configuration options for runway lights

Fiskalizacija za developere: FiskalizacijaDev 1.2: Version 1.2 is, above all, a response to the new version of the Technical Specification (v1.1) that was published a few days ago. In addition to the changes related to the (minor) modifications in that new version of the Technical Documentation, we have extended the project with some additional features, most of which came from your suggestions - thanks :) New in v1.2: - Request mismatch (http://fiskalizacija.codeplex.com/workitem/645) - Sample project - amounts are multiplied by 100 (http://fiskalizacija.codeplex.c...

MFCMAPI: October 2012 Release: Build: 15.0.0.1036 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook Badge

DictationTool: DictationCool-WPF: • Open a media file to start a new dictation. • Open a dct file to continue a dictation. • Compare your dictation with the original text if it exists. • Save your dictation to dct file, and restore it to continue later. • Save the compared result to html file.

MCEBuddy 2.x: MCEBuddy 2.3.7: Changelog for 2.3.7 (32bit and 64bit) 1. Improved performance of MP4 Fast and M4V Fast Profiles (no deinterlacing, removed --decomb) 2. Improved priority handling 3. Added support for Pausing and Resume conversions 4. Added support for fallback to source directory if network destination directory is unavailable 5. MCEBuddy now installs ShowAnalyzer during installation 6.
Added support for long description atom in iTunes

Dyanamic Reports (RDLC) - SharePoint 2010 Visual WebPart: Initial Release: This is an Initial Release.

HTML Renderer: HTML Renderer 1.0.0.0 (3): Major performance improvement (http://theartofdev.wordpress.com/2012/10/25/how-i-optimized-html-renderer-and-fell-in-love-with-vs-profiler/) Minor fixes raised in issue tracker and discussions.

Window Manager: Window Manager 1.0: First release

New Projects

arteytex: this is a test

blockworld: An implementation of a goal stack planner.

Customer Note: Customer Note is a Windows Store application

Draw: ?????????????:??????、CAD??、????。

Football Management: Football Management System is web management system for football (soccer) leagues, teams and players.

Hijri Converter API: This project is aimed to create a simple RESTful API using VB and ASP.NET to do Hijri-to-Gregorian and Gregorian-to-Hijri conversion.

httpclient?????????: httpclient?????????(1)??????????(2)?????????(3)??2012-11-06??,???????。

Imagine Cup 2013: Development project for Imagine Cup 2013

MyAppReji: MyApp

N2F Request: The N2F Request object is used to handle interactions between N2F and the global $_REQUEST variable, sanitizing any results which are returned.

Orchard Metro Theme: Orchard Metro Theme is a clean and flexible multi-zone theme.

Poker Clock And Goodies: poker w8

ProjectASPReviewer: Review website for notebooks, tablets and smartphones.

Prototype: It's about making a prototype for the final project.

Prototype - 7COM0207: 7COM0207 web scripting module, Assignment 2

QuickToAD: QuickToAD is a foundational development project for the purpose of jump-starting data-driven application projects.

Release Manager: Release Manager is a project to design and develop Windows based Release Management Software.

ResW File Code Generator: A Visual Studio 2012 Custom Tool for generating a strongly typed helper class for accessing localized resources from a .ResW file.

SEO Tools: This is a website containing some commonly used SEO tools. I have only added a blog ping utility at this time but there is more to come.

Thales communicator: A C# library that helps communicate with Thales HSM

Trivial: A trivia framework: Trivial is a C# framework that helps you creating custom trivia-like applications.

    Read the article

  • 10 Steps to access Oracle stored procedures from Crystal Reports

Requirements to access Oracle stored procedures from CR

The following requirements must be met in order for CR to access an Oracle stored procedure:

1. You must create a package that defines the REF CURSOR. This REF CURSOR must be strongly bound to a static pre-defined structure (see Strongly Bound REF CURSORs vs Weakly Bound REF CURSORs). This package must be created separately and before the creation of the stored procedure. NOTE: Crystal Reports 9 native connections will support Oracle stored procedures created within packages as well as Oracle stored procedures referencing weakly bound REF CURSORs. Crystal Reports 8.5 native connections will support Oracle stored procedures referencing weakly bound REF CURSORs.

2. The procedure must have a parameter that is a REF CURSOR type. CR uses this parameter to access and define the result set that the stored procedure returns.

3. The REF CURSOR parameter must be defined as IN OUT (read/write mode). After the procedure has opened and assigned a query to the REF CURSOR, CR performs a FETCH call for every row in the query's result. This is why the parameter must be defined as IN OUT.

4. Parameters can only be input (IN) parameters. CR is not designed to work with OUT parameters.

5. The REF CURSOR variable must be opened and assigned its query within the procedure.

6. The stored procedure can only return one record set. The structure of this record set must not change based on parameters.

7. The stored procedure cannot call another stored procedure.

8. If using an ODBC driver, it must be the CR Oracle ODBC driver (installed by CR). Other Oracle ODBC drivers (installed by Microsoft or Oracle) may not function correctly.

9. If you are using the CR ODBC driver, you must ensure that in the ODBC Driver Configuration setup, under the Advanced tab, the option 'Procedure Return Results' is checked ON.

10. If you are using the native Oracle driver and hard-coded date selection within the procedure, the date selection must use either a string representation in 'YYYY-MM-DD' format (e.g., WHERE DATEFIELD = '1999-01-01') or the TO_DATE function with the same format specified (e.g., WHERE DATEFIELD = TO_DATE('1999-01-01','YYYY-MM-DD')). For more information, refer to kbase article C2008023.

11. Most importantly, the stored procedure must execute successfully in Oracle's SQL*Plus utility.

If all of these conditions are met, you must next ensure you are using the appropriate database driver. Please refer to the sections in this white paper for a list of acceptable database drivers.
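Before pointing Crystal Reports at a procedure, it can also help to smoke-test it from client code. The following is a minimal C# sketch (not from this white paper) that exercises a procedure meeting the rules above through ODBC. The DSN, credentials, package and parameter names are all hypothetical, and the handling of the REF CURSOR parameter reflects an assumption about the driver's 'Procedure Return Results' behavior, namely that the cursor is returned as the result set rather than bound as a parameter:

    using System;
    using System.Data.Odbc;

    class RefCursorSmokeTest
    {
        static void Main()
        {
            // Hypothetical DSN pointing at the CR Oracle ODBC driver, with
            // 'Procedure Return Results' enabled on the Advanced tab.
            const string connStr = "DSN=CrOracle;UID=scott;PWD=tiger";

            using (var conn = new OdbcConnection(connStr))
            using (var cmd = new OdbcCommand("{call emp_pkg.get_emps_by_dept(?)}", conn))
            {
                // Only the IN parameter is bound; the IN OUT REF CURSOR
                // parameter is omitted from the call escape on the
                // assumption stated above.
                cmd.Parameters.AddWithValue("@deptno", 10);

                conn.Open();
                using (OdbcDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}\t{1}", reader[0], reader[1]);
                    }
                }
            }
        }
    }

If this prints the expected rows, the procedure satisfies the single-result-set shape that CR can consume.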

    Read the article


< Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >