Search Results

Search found 8886 results on 356 pages for 'parse tree'.


  • Ways to dynamically render a real world 3d environment in Unity3D

    - by Jake M
    Using Unity3D and C#, I am attempting to display a 3D version of a real-world location. Inside my Unity3D app, the user will specify the GPS coordinates of a location, and my app will then have to generate a 3D plane (it doesn't have to be a plane; anything will do) of that location. The plane will show a 500 metre by 500 metre 3D snapshot of that location. How would you suggest I achieve this in Unity3D? What methodology would you use? NOTE: I understand that this is a very difficult endeavour (rendering real-world locations dynamically in Unity3D), so I expect to perform many steps to achieve it. I just don't know all of the technologies out there and which would best fit my needs. For example:
    Suggested methodology 1: (1) Prompt the user to specify GPS coords. (2) Use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location (not sure if Google Earth provides that capability, does it?). (3) Unzip the .kmz so I have the .dae file. (4) Convert that file to a .3ds file using some third-party converter (does such a converter exist?). (5) Import the .3ds into Unity3D at runtime as a plane (is this possible?).
    Suggested methodology 2: (1) Prompt the user to specify GPS coords. (2) Use the Google Earth API and HTTP to programmatically obtain a .kmz file describing that location. (3) Unzip the .kmz so I have the .dae file. (4) Parse the .dae file using my own C# parser (do you think it's possible to write a .dae parser that can turn the .dae into an array of Vector3 describing the height map of that location?). (5) Dynamically create a plane in Unity3D and populate it with my array/list of Vector3 points (is it possible to create a plane this way?). Maybe I am meant to create a mesh instead of a plane? Can you think of any other ways I could render a real-world 3D environment in Unity3D?
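
    A minimal sketch of the last step of methodology 2, building a Unity mesh from a grid of height samples (the grid size, cell size and heights array below are illustrative assumptions, not a complete terrain solution):

        using UnityEngine;

        // Sketch: build a mesh from an N x N grid of height samples.
        // "heights" is a hypothetical array filled elsewhere from the parsed location data.
        [RequireComponent(typeof(MeshFilter))]
        public class HeightmapMesh : MonoBehaviour
        {
            public int gridSize = 64;        // samples per side (assumption)
            public float cellSize = 7.8125f; // 500 m / 64 cells (assumption)
            public float[,] heights;         // filled from the parsed height data

            void Start()
            {
                var vertices = new Vector3[gridSize * gridSize];
                var triangles = new int[(gridSize - 1) * (gridSize - 1) * 6];

                // One vertex per grid sample.
                for (int z = 0; z < gridSize; z++)
                    for (int x = 0; x < gridSize; x++)
                        vertices[z * gridSize + x] = new Vector3(
                            x * cellSize, heights != null ? heights[x, z] : 0f, z * cellSize);

                // Two triangles per grid cell.
                int t = 0;
                for (int z = 0; z < gridSize - 1; z++)
                    for (int x = 0; x < gridSize - 1; x++)
                    {
                        int i = z * gridSize + x;
                        triangles[t++] = i;     triangles[t++] = i + gridSize; triangles[t++] = i + 1;
                        triangles[t++] = i + 1; triangles[t++] = i + gridSize; triangles[t++] = i + gridSize + 1;
                    }

                var mesh = new Mesh { vertices = vertices, triangles = triangles };
                mesh.RecalculateNormals();
                GetComponent<MeshFilter>().mesh = mesh;
            }
        }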

    Read the article

  • Defining a function that is both a generator and recursive [on hold]

    - by user96454
    I am new to Python, so this code might not necessarily be clean; it's just for learning purposes. I had the idea of writing a function that would display the tree down the specified path. Then I added the global variable number_of_py to count how many Python files were in that tree. That worked as well. Finally, I decided to turn the whole thing into a generator, but the recursion breaks. My understanding of generators is that once next() is called, Python just executes the body of the function and "yields" a value until we hit the end of the body. Can someone explain why this doesn't work? Thanks.

        import os
        from sys import argv

        script, path = argv
        number_of_py = 0
        lines_of_code = 0

        def list_files(directory, key=''):
            global number_of_py
            files = os.listdir(directory)
            for f in files:
                real_path = os.path.join(directory, f)
                if os.path.isdir(real_path):
                    list_files(real_path, key=key+' ')
                else:
                    if real_path.split('.')[-1] == 'py':
                        number_of_py += 1
                        with open(real_path) as g:
                            yield len(g.read())
                        print key+real_path

        for i in list_files(argv[1]):
            lines_of_code += i

        print 'total number of lines of code: %d' % lines_of_code
        print 'total number of py files: %d' % number_of_py

    Read the article

  • Can JSON be made easily and safely editable by the non-technical Excel crowd?

    - by glitch
    I'm looking for a data storage format that's very intuitive and easy to edit. It should ideally be targeted towards the same crowd as Excel. At the same time I would like the data structure to be a tree. Ideally this would be JSON, since it offers both the tree aspect and allows for more interesting constructs like arrays. On top of that, parsing libraries for JSON are ubiquitous, so I don't have to reinvent the wheel. The problem is that, at least with a non-specialized text editor, JSON is a giant pain to edit for a non-technical user. I'm thinking along the lines of someone who might have used Excel in the past, but never a real text editor - someone who might not be comfortable with the idea of preserving JSON syntax by hand. Are there data formats out there that would fit this profile? I'd very much prefer this to be JSON, actually, but then it would require a solid editing tool that hides the underlying implementation from the user. Think of Excel and how it abstracts CSV syntax away from the user. The reason I'm looking for something like this is that the team has been working with fairly hierarchical data for a while now, and we've hit the limits of how easily it can be represented in simple CSVs without creating complex rules for how hierarchy semantics are represented in each row. Any suggestions?
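
    One pragmatic middle ground, sketched here as an assumption rather than a recommendation, is to keep Excel/CSV as the editing surface and derive the JSON tree from a path column. A minimal sketch using Json.NET (the column layout, separator and file name are invented for illustration):

        using System;
        using System.IO;
        using System.Linq;
        using Newtonsoft.Json.Linq;

        class CsvToJsonTree
        {
            // Each CSV row: a slash-separated path and a value, e.g. "settings/display/width,1024".
            static void Main()
            {
                var root = new JObject();
                foreach (var line in File.ReadLines("tree.csv").Skip(1)) // skip the header row
                {
                    var cells = line.Split(',');
                    var segments = cells[0].Split('/');
                    JObject node = root;
                    // Walk (or create) an intermediate object for every path segment but the last.
                    foreach (var segment in segments.Take(segments.Length - 1))
                    {
                        var child = node[segment] as JObject;
                        if (child == null)
                        {
                            child = new JObject();
                            node[segment] = child;
                        }
                        node = child;
                    }
                    node[segments.Last()] = cells[1]; // leaf value, kept as a string here
                }
                Console.WriteLine(root.ToString()); // indented JSON
            }
        }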

    Read the article

  • How to structure my GUI agnostic project?

    - by Nezreli
    I have a project which loads an XML file from a database; the XML defines a form for some user. The XML is transformed into a collection of objects whose classes derive from a single parent, something like Control - EditControl - TextBox, Control - ContainerControl - Panel. Those classes are responsible for creating GUI controls for three different environments: WinForms, DevExpress XtraReports and WebForms. All three frameworks share mostly the same control tree and each has a common single parent (Windows.Forms.Control, XrControl and WebControl). So, how to do it? Solution a) The Control class has abstract methods Control CreateWinControl(); XrControl CreateXtraControl(); WebControl CreateWebControl(); This could work, but the project has to reference all three frameworks and the classes are going to get fat with methods supporting all three implementations. Solution b) Each framework implementation is done in a separate project and has the same class tree as the Core project. All three implementations are connected to the Core classes through an interface. This seems clean, but I'm having a hard time wrapping my head around it. Does anyone have a simpler solution, or a suggestion for how I should approach this task?
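
    One rough way to shape solution b (all names below are invented for illustration): keep the core tree free of any UI references and let each framework-specific project implement a renderer interface over it.

        using System.Collections.Generic;

        // Core project: no references to WinForms, XtraReports or WebForms.
        public abstract class Control
        {
            public string Name { get; set; }
            // Double dispatch into whatever renderer is plugged in.
            public abstract T Accept<T>(IControlRenderer<T> renderer);
        }

        public class TextBox : Control
        {
            public override T Accept<T>(IControlRenderer<T> renderer) { return renderer.Render(this); }
        }

        public class Panel : Control
        {
            public List<Control> Children = new List<Control>();
            public override T Accept<T>(IControlRenderer<T> renderer) { return renderer.Render(this); }
        }

        // One generic interface; T is the framework's base control type.
        public interface IControlRenderer<T>
        {
            T Render(TextBox textBox);
            T Render(Panel panel);
            // ...one overload per core control type
        }

        // Each framework implementation then lives in its own project, e.g. (illustrative only):
        // class WinFormsRenderer : IControlRenderer<System.Windows.Forms.Control> { ... }
        // class WebFormsRenderer : IControlRenderer<System.Web.UI.WebControls.WebControl> { ... }

    Only the renderer projects reference their UI assemblies; the Core project stays framework-agnostic.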

    Read the article

  • Really impossible to have Gimp 2.8 on Ubuntu 11.10?

    - by ubuntico
    For days, I have been trying to find a solution and repositories to install Gimp 2.8 on Ubuntu 11.10. Each time I get this error: Tried to update pango via sudo apt-get install pango-graphite [sudo] password for xxx: Reading package lists... Done Building dependency tree Reading state information... Done pango-graphite is already the newest version. Then also tried sudo apt-get install libgdk-pixbuf2* Reading package lists... Done Building dependency tree Reading state information... Done Note, selecting 'libgdk-pixbuf2.0-0' for regex 'libgdk-pixbuf2*' Note, selecting 'libgdk-pixbuf2.0-dev' for regex 'libgdk-pixbuf2*' Note, selecting 'libgdk-pixbuf2.0-doc' for regex 'libgdk-pixbuf2*' Note, selecting 'libgdk-pixbuf2-ruby' for regex 'libgdk-pixbuf2*' Note, selecting 'libgdk-pixbuf2-ruby1.8' for regex 'libgdk-pixbuf2*' Note, selecting 'libgdk-pixbuf2-ruby1.8-dbg' for regex 'libgdk-pixbuf2*' Note, selecting 'libgdk-pixbuf2.0-common' for regex 'libgdk-pixbuf2*' libgdk-pixbuf2-ruby is already the newest version. libgdk-pixbuf2-ruby1.8 is already the newest version. libgdk-pixbuf2-ruby1.8-dbg is already the newest version. libgdk-pixbuf2.0-doc is already the newest version. libgdk-pixbuf2.0-0 is already the newest version. libgdk-pixbuf2.0-dev is already the newest version. libgdk-pixbuf2.0-common is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 31 not upgraded. And still error :(. Please help as I do not want to upgrade to Ubuntu 12.04 just to have Gimp. Really a mission impossible????

    Read the article

  • "unresolvable problem" error when upgrading from 12.04 to 14.04

    - by flyingfisch
    So I have solved this issue, but now I have another problem: An unresolvable problem occurred while calculating the upgrade. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. I am not upgrading to a pre-release version of Ubuntu and I am not running a pre-release either. I have unchecked all my 3rd-party packages using Ubuntu Software Manager, EditSoftware Sources... What else might be wrong? UPDATE After doing sudo update-manager -d and sudo apt-get update;sudo apt-get dist-upgrade as per JimB's post, and then running sudo do-release-upgrade, here what I get: Err http://extras.ubuntu.com trusty/main Translation-en Err http://extras.ubuntu.com trusty/main Translation-en_US Err http://extras.ubuntu.com trusty/main Translation-en Ign http://extras.ubuntu.com trusty/main Translation-en_US Ign http://extras.ubuntu.com trusty/main Translation-en Fetched 0 B in 0s (0 B/s) Checking package manager Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done Calculating the changes Calculating the changes Could not calculate the upgrade An unresolvable problem occurred while calculating the upgrade. This can be caused by: * Upgrading to a pre-release version of Ubuntu * Running the current pre-release version of Ubuntu * Unofficial software packages not provided by Ubuntu If none of this applies, then please report this bug using the command 'ubuntu-bug ubuntu-release-upgrader-core' in a terminal. Restoring original system state Aborting Reading package lists... Done Building dependency tree Reading state information... Done Building data structures... Done === Command detached from window (Mon Aug 18 23:53:10 2014) === === Command terminated with exit status 1 (Mon Aug 18 23:53:10 2014) ===

    Read the article

  • Microeconomical simulation: coordination/planning between self-interested trading agents

    - by Milton Manfried
    In a typical perfect-information strategy game like Chess, an agent can calculate its best move by searching the state tree for the best possible move, while assuming that the opponent will also make the best possible move (i.e. Mini-max). I would like to use this approach in a "game" modeling economic activity, where the possible "moves" would be to buy or sell for a given price, and the goal, rather than a specific class of states (e.g. Checkmate), would be to maximize some function F of the agent's state (e.g. F(money, widget) = 10*money + widget). How to handle buy/sell actions that require coordination between both parties, at the very least agreement upon a price? The cheap way out would be to set the price beforehand, maybe based upon the current supply -- but the idea of this simulation is to examine how prices emerge when freely determined by "perfectly rational" agents. A great example of what I do not want is the trading algorithm in SugarScape -- paraphrasing from Growing Artificial Societies p101-102: when a pair of agents interact to trade, they each compute their internal valuations of the goods, then a bargaining process is conducted and a price is agreed to. If this price makes both agents better off, they complete the transaction The protocol itself is beautiful, but what it cannot capture (as far as I can tell) is the ability for an agent to pay more than it might otherwise for a good, because it knows that it can sell it for even more at a later date -- what appears to be called "strategic thinking" in this pape at Google Books Multi-Agent-Based Simulation III: 4th International Workshop, MABS 2003... to get realistic behavior like that, it seems one would either (1) have to build an outrageously-complex internal valuation system which could at best only cover situations that were planned for at compile-time, or otherwise (2) have some mechanism to search the state tree... which would require some way of planning future trades. Note: The chess analogy only works as far as the state-space search goes; the simulation isn't intended to be "zero sum", so a literal mini-max search wouldn't be appropriate -- and ideally, it should work with more than two agents.

    Read the article

  • How to determine whether the end of a list has been reached?

    - by Sweta Dwivedi
    I'm trying to animate my object according to a set of recorded values from the Kinect skeleton stream, by saving the (x,y,z) stream from the skeletal data into a list and then setting my object's x and y position from the x,y values of the list. However, once the end of the list has been reached it starts to animate again from the start. I don't want that - I just want the model position to keep going in the positive X direction. Is there any way I can check whether the end of the list has been reached, and then just keep updating the model position in the X direction? Or is there any other way to continue moving my sprite once the points in the list are used up? I don't want it to start animating all the way from the beginning again. protected override void Update(GameTime gameTime) { //position += spriteSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds; //// TODO: Add your update logic here using (StreamReader r = new StreamReader(f)) { string line; Viewport view = graphics.GraphicsDevice.Viewport; int maxWidth = view.Width; int maxHeight = view.Height; while((line = r.ReadLine()) != null) { string[] temp = line.Split(','); int x = (int) Math.Floor(((float.Parse(temp[0]) * 0.5f) + 0.5f) * maxWidth); int y = (int) Math.Floor(((float.Parse(temp[1]) * -0.5f) + 0.5f) * maxHeight); motion_2.Add(new Point(x, y)); } } position.X = motion_2[i].X; position.Y = motion_2[i].Y; i++; a_butterfly_up.Update(gameTime); a_butterfly_side.Update(gameTime); G_vidPlayer.Play(mossV); base.Update(gameTime); }
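
    A minimal sketch of one way to stop at the end of the list, reusing the motion_2 and i fields from the question (driftSpeed is an invented constant, and the file would be read once outside Update rather than every frame):

        // Advance through the recorded points while any remain,
        // then keep drifting in +X instead of wrapping back to the start.
        if (i < motion_2.Count)
        {
            position.X = motion_2[i].X;
            position.Y = motion_2[i].Y;
            i++;
        }
        else
        {
            // Recorded points exhausted: just keep moving right.
            position.X += driftSpeed * (float)gameTime.ElapsedGameTime.TotalSeconds;
        }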

    Read the article

  • Parsing mathematical expressions with two values that have parentheses and minus signs

    - by user45921
    I'm trying to parse equations like these, which only have two values or the square root of a certain value, from a text file: 100+100 -100-100 -(100)+(-100) sqrt(100) handling the minus signs, the parentheses, the operator symbol in the middle and the square root - and I have no idea how to start off... I've got the file part done and the simple calculation parts, except that I couldn't get my program to solve the equations above. #include <stdio.h> #include <string.h> #include <stdlib.h> #include <math.h> main(){ FILE *fp; char buff[255], sym,sym2,del1,del2,del3,del4; double num1, num2; int ret; fp = fopen("input.txt","r"); while(fgets(buff,sizeof(buff),fp)!=NULL){ char *tok = buff; sscanf(tok,"%lf%c%lf",&num1,&sym,&num2); switch(sym){ case '+': printf("%lf\n", num1+num2); break; case '-': printf("%lf\n", num1-num2); break; case '*': printf("%lf\n", num1*num2); break; case '/': printf("%lf\n", num1/num2); break; default: printf("The input value is not correct\n"); break; } } fclose(fp); } That is what I have written for the other basic operations, without parentheses or a minus sign on the second value, and it works great for the simple ones. I'm using a switch to handle add, subtract, multiply and divide, but I'm not sure how to properly use the sscanf function (if I am even using it properly), or whether there is another way, using a function like strtok, to properly parse the parentheses and the minus signs. Any kind of help would be appreciated.
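
    One way to handle parentheses, unary minus and sqrt is a small recursive-descent parser rather than a single sscanf call. A sketch of the idea (written in C# for brevity; the same structure ports directly to C, and the grammar below is only an assumption about the input format):

        using System;

        // Grammar assumed from the sample lines:
        //   expr    := unary (('+' | '-') unary)*
        //   unary   := '-' unary | primary
        //   primary := number | '(' expr ')' | "sqrt" '(' expr ')'
        class TinyExpr
        {
            static string s; static int pos;

            static double Parse(string input) { s = input.Replace(" ", ""); pos = 0; return Expr(); }

            static double Expr()
            {
                double value = Unary();
                while (pos < s.Length && (s[pos] == '+' || s[pos] == '-'))
                {
                    char op = s[pos++];
                    double rhs = Unary();
                    value = op == '+' ? value + rhs : value - rhs;
                }
                return value;
            }

            static double Unary()
            {
                if (pos < s.Length && s[pos] == '-') { pos++; return -Unary(); }
                return Primary();
            }

            static double Primary()
            {
                if (s[pos] == '(')                        // parenthesised sub-expression
                {
                    pos++; double v = Expr(); pos++;      // consume '(' ... ')'
                    return v;
                }
                if (s.Substring(pos).StartsWith("sqrt"))  // sqrt(expr)
                {
                    pos += 5; double v = Expr(); pos++;   // consume "sqrt(" ... ')'
                    return Math.Sqrt(v);
                }
                int start = pos;                          // a plain number
                while (pos < s.Length && (char.IsDigit(s[pos]) || s[pos] == '.')) pos++;
                return double.Parse(s.Substring(start, pos - start));
            }

            static void Main()
            {
                foreach (var line in new[] { "100+100", "-100-100", "-(100)+(-100)", "sqrt(100)" })
                    Console.WriteLine("{0} = {1}", line, Parse(line));
            }
        }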

    Read the article

  • Unable to install Skype ("Unmet dependencies" error)

    - by Alex Maslakov
    Trying to install Skype on Ubuntu 12, I ran into an issue. When I type: sudo apt-get update sudo apt-get install skype I get an error: Reading package lists... Done Building dependency tree Reading state information... Done skype is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: skype : Depends: lib32stdc++6 (>= 4.1.1-21) but it is not going to be installed Depends: lib32asound2 (> 1.0.14) but it is not going to be installed Depends: ia32-libs but it is not going to be installed Depends: libc6-i386 (>= 2.7-1) but it is not going to be installed Depends: lib32gcc1 (>= 1:4.1.1-21+ia32.libs.1.19) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). How do I solve it? Is this the right way to install Skype? UPDATE: If I try to do sudo apt-get install lib32stdc++6 lib32asound2 ia32-libs libc6-i386 lib32gcc1 skype then I get Reading package lists... Done Building dependency tree Reading state information... Done skype is already the newest version. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch lib32asound2 : Depends: libasound2 (= 1.0.25-1ubuntu10) E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    Read the article

  • Central renderer for a given scene

    - by Loggie
    When creating a central rendering system for all game objects in a given scene I am trying to work out the best way to go about passing the scene to the render system to be rendered. If I have a scene managed by an arbitrary structure, i.e., an octree, bsp trees, quad-tree, kd tree, etc. What is the best way to pass this to the render system? The obvious problem is that if simply given the root node of the structure, the render system would require an intrinsic knowledge of the structure in order to traverse the structure. My solution to this is to clip all objects outside the frustum in the scene manager and then create a list of the objects which are left and pass this simple list to the render system, be it an array, a vector, a linked list, etc. (This would be a structure required by the render system as a means to know which objects should be rendered). The list would of course attempt to minimise OpenGL state changes by grouping objects that require the same rendering operations to be performed on them. I have been thinking a lot about this and started searching various terms on here and followed any additional information/links but I have not really found a definitive answer. The case may be that there is no definitive answer but I would appreciate some advice and tips. My question is, is this a reasonable solution to the problem? Are there any improvements that I could make? Are there any caveats I should know about? Side question: Am I right in assuming that octrees, bsp trees, etc are all forms of BVH?
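
    A sketch of the flat list idea described above (the field names and packed sort key are invented for illustration; the point is only that the renderer receives culled, pre-sorted items and needs no knowledge of the spatial structure):

        using System.Collections.Generic;

        // One entry per object that survived frustum culling; the renderer never
        // needs to know how the scene is partitioned (octree, BSP, quadtree, ...).
        public struct RenderItem
        {
            public uint StateKey;       // packed shader/material/texture ids (invented scheme)
            public int MeshId;          // handle into whatever mesh storage the renderer owns
            public float[] WorldMatrix; // 4x4 transform, kept as raw floats in this sketch
        }

        public static class RenderQueue
        {
            // The scene manager builds this list each frame and hands it to the renderer.
            public static void SortForSubmission(List<RenderItem> visible)
            {
                // Grouping by the packed key makes adjacent items share GL state,
                // so the renderer switches shaders/textures as rarely as possible.
                visible.Sort((a, b) => a.StateKey.CompareTo(b.StateKey));
            }
        }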

    Read the article

  • Parsing a header with two different versions [ID3] while avoiding code duplication?

    - by user66141
    I really hope you can give me some interesting viewpoints on my situation; the ways I have of approaching my issue are not to my liking. I am writing an MP3 parser, starting with an ID3v2 parser. Right now I'm working on the extended header parsing; my issue is that the optional header is defined differently in versions 2.3 and 2.4 of the tag. The 2.3 version optional header is defined as follows: struct ID3_3_EXTENDED_HEADER{ DWORD dwExtHeaderSize; //Extended header size (either 6 or 8 bytes, excluded) WORD wExtFlags; //Extended header flags DWORD dwSizeOfPadding; //Size of padding (size of the tag excluding the frames and headers) }; while the 2.4 version is defined as: struct ID3_4_EXTENDED_HEADER{ DWORD dwExtHeaderSize; //Extended header size (synchsafe int) BYTE bNumberOfFlagBytes; //Number of flag bytes BYTE bFlags; //Flags }; How could I parse the header while minimizing code duplication? Using two different functions to parse each version doesn't sound great, and using a single function with a different flow for each case is much the same. Are there any good practices for this kind of issue? Any tips for avoiding code duplication would be great.
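
    One common shape for this, sketched below: a single entry point reads the size field and confines the version-specific differences to one small branch. The field layout is taken from the structs above; the big-endian and synchsafe readers are small helpers assumed to be shared across the rest of the tag parser.

        using System.IO;

        public sealed class ExtendedHeader
        {
            public uint Size;
            public ushort FlagsV3;      // 2.3 only
            public uint PaddingSizeV3;  // 2.3 only
            public byte FlagBytesV4;    // 2.4 only
            public byte FlagsV4;        // 2.4 only
        }

        public static class ExtendedHeaderParser
        {
            // Shared entry point: only the few differing fields are branched on.
            public static ExtendedHeader Parse(BinaryReader reader, byte majorVersion)
            {
                var header = new ExtendedHeader();
                header.Size = majorVersion >= 4
                    ? ReadSynchsafeInt(reader)  // 2.4 stores the size as a synchsafe int
                    : ReadUInt32BE(reader);     // 2.3 stores it as a plain big-endian int

                if (majorVersion >= 4)
                {
                    header.FlagBytesV4 = reader.ReadByte();
                    header.FlagsV4 = reader.ReadByte();
                }
                else
                {
                    header.FlagsV3 = ReadUInt16BE(reader);
                    header.PaddingSizeV3 = ReadUInt32BE(reader);
                }
                return header;
            }

            // Helpers written once and reused everywhere in the tag parser.
            static uint ReadUInt32BE(BinaryReader r) { var b = r.ReadBytes(4); return (uint)(b[0] << 24 | b[1] << 16 | b[2] << 8 | b[3]); }
            static ushort ReadUInt16BE(BinaryReader r) { var b = r.ReadBytes(2); return (ushort)(b[0] << 8 | b[1]); }
            static uint ReadSynchsafeInt(BinaryReader r) { var b = r.ReadBytes(4); return (uint)((b[0] & 0x7F) << 21 | (b[1] & 0x7F) << 14 | (b[2] & 0x7F) << 7 | (b[3] & 0x7F)); }
        }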

    Read the article

  • Cannot install ia32-lib package

    - by A British Person
    I have several programs that reuquire 32 bit packages (pointing to the ia32-lib package). However, when I try to install it, this happens. spirit@ubuntu:~$ sudo apt-get install ia32-libs Reading package lists... Done Building dependency tree Reading state information... Done Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation: The following packages have unmet dependencies: ia32-libs : Depends: ia32-libs-multiarch but it is not installable E: Unable to correct problems, you have held broken packages. No big whoop, packages die all the time. I tried a month later however and I still got this error, trying to install the specific package produces this error. spirit@ubuntu:~$ sudo apt-get install ia32-libs-multiarch Reading package lists... Done Building dependency tree Reading state information... Done Package ia32-libs-multiarch is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'ia32-libs-multiarch' has no installation candidate I am no Linux whizz-kid, but this seems to be that the package doesn't exist. I searched for Skype in the software centre (I was told this installs the 32-bit packages) and it does not appear in the software centre, and the downloadable from their website produces an error about - funnily enough - no 32-bit packages. Anyone who helps me will get a medal from the gods with the weight of a thousand planets. Just don't wear it for god's sake.

    Read the article

  • Using visitor pattern with large object hierarchy

    - by T. Fabre
    Context: With a hierarchy of objects (an expression tree) I've been using a "pseudo" visitor pattern (pseudo, as in it does not use double dispatch): public interface MyInterface { void Accept(SomeClass operationClass); } public class MyImpl : MyInterface { public void Accept(SomeClass operationClass) { operationClass.DoSomething(); operationClass.DoSomethingElse(); // ... and so on ... } } This design, however questionable, was pretty comfortable, since the number of implementations of MyInterface is significant (~50 or more) and I didn't need to add extra operations. Each implementation is unique (it's a different expression or operator), and some are composites (i.e., operator nodes that contain other operator/leaf nodes). Traversal is currently performed by calling the Accept operation on the root node of the tree, which in turn calls Accept on each of its child nodes, which in turn... and so on. But the time has come when I need to add a new operation, such as pretty printing: public class MyImpl : MyInterface { // Property does not come from MyInterface public string SomeProperty { get; set; } public void Accept(SomeClass operationClass) { operationClass.DoSomething(); operationClass.DoSomethingElse(); // ... and so on ... } public void Accept(SomePrettyPrinter printer) { printer.PrettyPrint(this.SomeProperty); } } I basically see two options: keep the same design, adding a new method for my operation to each derived class, at the expense of maintainability (not an option, IMHO); or use the "true" Visitor pattern, at the expense of extensibility (not an option either, as I expect more implementations to come along the way), with about 50+ overloads of the Visit method, each one matching a specific implementation. Question: Would you recommend using the Visitor pattern? Is there any other pattern that could help solve this issue?
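
    For reference, a minimal sketch of the "true" double-dispatch variant being weighed here (type names are generic placeholders, not the poster's real classes); a generic result type lets each new operation be added as one new visitor class without touching the node classes again:

        // Each new operation becomes one new visitor; node classes are written once.
        public interface IVisitor<TResult>
        {
            TResult Visit(ConstantNode node);
            TResult Visit(AdditionNode node);
            // ... one overload per concrete node type
        }

        public abstract class Node
        {
            public abstract TResult Accept<TResult>(IVisitor<TResult> visitor);
        }

        public class ConstantNode : Node
        {
            public double Value;
            public override TResult Accept<TResult>(IVisitor<TResult> visitor) { return visitor.Visit(this); }
        }

        public class AdditionNode : Node
        {
            public Node Left, Right;
            public override TResult Accept<TResult>(IVisitor<TResult> visitor) { return visitor.Visit(this); }
        }

        // A new operation, e.g. pretty printing, without modifying any node class:
        public class PrettyPrinter : IVisitor<string>
        {
            public string Visit(ConstantNode node) { return node.Value.ToString(); }
            public string Visit(AdditionNode node)
            {
                return "(" + node.Left.Accept(this) + " + " + node.Right.Accept(this) + ")";
            }
        }

    The trade-off from the question still stands: each new node type means touching every visitor, so this pays off mainly when operations are added more often than node types.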

    Read the article

  • Toolset agnostic build server and Silverlight projects

    - by Marko Apfel
    Problem: Normally I try to keep my continuous integration as toolset-free as possible, to ensure that no local setup can have an impact on my build. My Silverlight app references a special compile target in a folder outside my developer tree: <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> So I copied the contents of this folder to a local one and changed the call to this target in my csproj: <Import Project="..\..\..\tools\WebApplications\Microsoft.WebApplication.targets" /> And now the Visual Studio Conversion Wizard welcomes me with this:
    Solution: Regardless of which line I write, the conversion comes back again and again if the line has any other form than <Import Project="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets" /> So it seems there is no simple way to change this behaviour.
    Workaround: I must accept that this line has to stay in the csproj, and that to run the build the toolset must be copied to the build server at the correct location. So go to your development machine where Visual Studio is installed and copy the folder "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications" to your build server at the equivalent location.
    Xmas wishes to Microsoft: please provide technologies that let us developers bundle everything a project needs in one developer tree. It should be possible for one checkout to get us up and running! No additional installations, regardless of whether it is a development machine or a dedicated build or continuous integration server. Silverlight is only one example; code analysis configurations can be just as bad, and there is much more...

    Read the article

  • Need a Quick Sure Method to Produce a Formatted Explain Plan? This will help!

    - by user702295
    Please use the following on the production machine to get formatted explain plan and sql trace using the SLOW sql (e.g. 'T_COMB_LIST.COMB_ID = 216') or any other value that takes longer: -- Open new session is SQL*Plus */ -- Make sure you are using updated PLAN_TABLE -- This can be done by dropping it and recreate it by running: -- SQL> @?/rdbms/admin/utlxplan.sql) set lines 1000 set pages 1000 spool xplan_1.txt EXPLAIN PLAN FOR <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>> @?/rdbms/admin/utlxplp spool off EXIT --Open a second session is SQL*Plus ALTER SESSION SET max_dump_file_size = unlimited; ALTER SESSION SET tracefile_identifier = '10046'; ALTER SESSION SET statistics_level = ALL; ALTER SESSION SET events '10046 trace name context forever, level 12'; <<<<Replace this line with exactly the same query you used above. Force hard parse by modifying the case of a character>>>> select 'verify cursor closed' from dual; ALTER SYSTEM SET EVENTS '10046 trace name context off'; EXIT Make sure spooled file is formatted properly and that the 10046 trace has relevant explain plan in it.  Please Upload both files (10046 trace is generated in udump). Need instructions to find udump?   sqlplus "/ as sysdba" show parameters dump_dest This will show you bdump, cdump and udump locations.

    Read the article

  • What is a better way to retrieve data in an application that handles a limited amount of data?

    - by Milanix
    I just moved this question from Stack Overflow, since adding my code snippets would make it really long. I am mainly interested in better ways to retrieve data in an application that handles a limited amount of data which isn't updated regularly. Let's take this example: I am writing an application which gets a schedule as XML from a server. I have written logic to parse the XML version and update the database only if that version is newer than the local version. Although the update is checked automatically/manually on a daily basis based on user preference, the actual version update happens only once every few months or so, since it is done by some other authority which doesn't provide an API but rather announces its changes publicly. The actual XML contains "(n number of groups) (days in a week) (n number of schedules)". The number of groups is usually 6 and the number of schedules is usually 2, so there would usually be only around 100 strings. I am using SQLite at the moment, and I want to know how to perform the update on the database. Should I show a progress dialog saying the application is updating and exit the app when it's done? Since my updates are infrequent I don't think this will really harm the user experience, but is there any better way to do it? I don't want the update to run while the user is searching, which also uses the database - that causes a "database already open" exception; at least I have faced this problem before. Is it better to parse the XML every time the user wants to view something, or to use SQLite? Since I make a lot of use of adapters in my app to create lists, will that degrade performance?

    Read the article

  • Uninstall ax88179 package

    - by jackbenny
    I've installed the ax88179 package from the PPA (since the ax88179 driver isn't in the 3.8 kernel). But now I'd like to install kernel 3.11.6 and this module is already included here. So I'd like to uninstall the the module from the package but this fails with the following error message The following packages will be REMOVED: ax88179* 0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded. After this operation, 313 kB disk space will be freed. Do you want to continue [Y/n]? (Reading database ... 202833 files and directories currently installed.) Removing ax88179 ... Error! There are no instances of module: ax88179_178a 1.6.0 located in the DKMS tree. rm: cannot remove ‘/usr/src/AX88179_178A_LINUX_DRIVER_v1.6.0_SOURCE’: No such file or directory dpkg: error processing ax88179 (--purge): subprocess installed pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already ln: failed to create symbolic link ‘/usr/src/AX88179_178A_LINUX_DRIVER_v1.7.0_SOURCE’: File exists Error! DKMS tree already contains: ax88179_178a-1.7.0 You cannot add the same module/version combo more than once. Module ax88179_178a/1.7.0 already built for kernel 3.8.0-32-generic/4 Module ax88179_178a/1.7.0 already installed on kernel 3.8.0-32-generic/x86_64 Errors were encountered while processing: ax88179 E: Sub-process /usr/bin/dpkg returned an error code (1) It complains about both version 1.6 and 1.7. I've updated to 1.7 a couple of days ago. --force doesn't help either. I just want to get rid of it since when I'm running 3.11.6 the versions interfere with each other.

    Read the article

  • Is the addition of a duration to a date-time defined in ISO 8601?

    - by Benjamin
    I'm writing a date-time library, and need to implement the addition of a duration to a date-time. If I add a 1-month duration (P1M) to the 31st of March 2012 (2012-03-31), does the standard define what the result is? Because the resulting date (31st April) does not exist, there are at least two options: Fall back to the last day of the resulting month. This is the approach currently taken by the ThreeTen API, the (alpha) reference implementation of JSR-310: ZonedDateTime date = ZonedDateTime.parse("2012-03-31T00:00:00Z"); Period duration = Period.parse("P1M"); System.out.println(date.plus(duration).toString()); // 2012-04-30T00:00Z Carry the extra day over to the next month. This is the approach taken by the DateTime class in PHP: $date = new DateTime('2012-03-31T00:00:00Z'); $duration = new DateInterval('P1M'); echo $date->add($duration)->format('c'); // 2012-05-01T00:00:00+00:00 I'm surprised that two date-time libraries contradict each other on this point, so I'm wondering whether the standard defines the result of this operation.
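
    For comparison only (not part of the question), the .NET BCL takes the first approach as well; a small sketch assuming System.DateTime:

        using System;
        using System.Globalization;

        class MonthAdditionDemo
        {
            static void Main()
            {
                var date = DateTime.ParseExact("2012-03-31", "yyyy-MM-dd", CultureInfo.InvariantCulture);

                // DateTime.AddMonths clamps to the last valid day of the resulting month,
                // matching the ThreeTen/JSR-310 behaviour quoted above.
                Console.WriteLine(date.AddMonths(1).ToString("yyyy-MM-dd")); // 2012-04-30
            }
        }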

    Read the article

  • How can you easily determine the textureRect for tiled maps in SFML 2.0?

    - by ThePlan
    I'm working on a 2D map prototype, and I've come to the rendering part of it. I have a tilesheet where each tile is 30x30 pixels, with a 1px border to delimit them. In SFML the usual method of drawing part of a tilesheet is to declare an IntRect with the rectangle coordinates and then call the setTextureRect() method on a sprite. In a small game that would work, but I have well over 45 tiles and I'm adding more every day; I can't declare 45 IntRects, one for every material, and the map is not optimized yet - it would get even worse if I had to call setTextureRect() for each of them on top of declaring 45 IntRects. How could I simplify this task? All I need is a very simple and flexible solution for extracting a region of the tilesheet. Basically I have a Tile class; I create multiple instances of tiles (in vectors), and each tile has a position and a material. I parse a map file, and as I parse it I set the materials of the map according to the parsed map file; then all I need to do is render. Basically I need to do something like this: switch(tile.getMaterial()) { case GRASS: material_sprite.setTextureRect(something); window.draw(material_sprite); break; case WATER: material_sprite.setTextureRect(something); window.draw(material_sprite); break; // handle more cases }
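
    If the materials are numbered consecutively in the sheet, the rectangle can be computed from the material index instead of declared by hand. The arithmetic, sketched in C# (SFML itself is C++; the left-to-right, top-to-bottom layout and the border sitting only between tiles are assumptions):

        // Compute the pixel rectangle of tile #index in a sheet of 30x30 tiles
        // separated by a 1px border.
        public static class TileSheet
        {
            public struct TileRect { public int Left, Top, Width, Height; }

            public static TileRect RectForTile(int index, int tilesPerRow, int tileSize = 30, int border = 1)
            {
                int column = index % tilesPerRow;
                int row = index / tilesPerRow;
                return new TileRect
                {
                    Left = column * (tileSize + border),
                    Top = row * (tileSize + border),
                    Width = tileSize,
                    Height = tileSize
                };
            }
        }

        // Usage idea: let the material enum values double as tile indices, so the switch disappears;
        // the sprite's texture rectangle is then built from RectForTile((int)tile.getMaterial(), tilesPerRow).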

    Read the article

  • Lubuntu 13.04, kernel still 3.8.0-32-generic

    - by Blue Ice
    My question is very similar to Ubuntu 13.10, kernel still 3.8.0-31-generic. Recently was updating to Saucy and the ethernet cable got unplugged. So I decided to run Software Update again, to reinstall files. It returned that "everything is up to date". But according to these command-line searches, that is incorrect. How can I install Saucy now safely from the command line? sudo apt-get install lubuntu-desktop Reading package lists... Done Building dependency tree Reading state information... Done lubuntu-desktop is already the newest version. The following package was automatically installed and is no longer required: linux-image-extra-3.8.0-19-generic Use 'apt-get autoremove' to remove it. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. sudo apt-get update Get:1 http://extras.ubuntu.com raring Release.gpg [72 B] Hit http://extras.ubuntu.com raring Release Get:2 http://az-1.hpcloud.mirror.websitedevops.com raring Release.gpg Hit http://extras.ubuntu.com raring/main Sources Hit http://extras.ubuntu.com raring/main i386 Packages Get:3 http://az-1.hpcloud.mirror.websitedevops.com raring-updates Release.gpg Get:4 http://az-1.hpcloud.mirror.websitedevops.com raring-backports Release.gpg Get:5 http://az-1.hpcloud.mirror.websitedevops.com raring-security Release.gpg Get:6 http://az-1.hpcloud.mirror.websitedevops.com raring Release Ign http://extras.ubuntu.com raring/main Translation-en_US Ign http://extras.ubuntu.com raring/main Translation-en Hit http://ppa.launchpad.net raring Release.gpg Ign http://az-1.hpcloud.mirror.websitedevops.com raring Release E: GPG error: http://az-1.hpcloud.mirror.websitedevops.com raring Release: The following signatures were invalid: NODATA 1 NODATA 2 lsb_release -rd Description: Ubuntu 13.04 Release: 13.04 uname -r 3.8.0-32-generic

    Read the article

  • Error: No mapping exists from object type....

    - by jakesankey
    Here is the code for my simple parsing application. I am getting an error that states 'No mapping exists from type System.Text.RegularExpressions.Match to a known managed provider native type'. This started to occur when I switched from using Split('_') to RegEx.Match for defining RNumberE, RNumberD, etc. Any guidance is appreciated. using System; using System.Data; using System.Data.SQLite; using System.IO; using System.Text.RegularExpressions; using System.Threading; using System.Collections.Generic; using System.Linq; using System.Data.SqlClient; namespace JohnDeereCMMDataParser { internal class Program { public static List<string> GetImportedFileList() { List<string> ImportedFiles = new List<string>(); using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { connect.Open(); using (SqlCommand fmd = connect.CreateCommand()) { fmd.CommandText = @"SELECT FileName FROM CMMData;"; fmd.CommandType = CommandType.Text; SqlDataReader r = fmd.ExecuteReader(); while (r.Read()) { ImportedFiles.Add(Convert.ToString(r["FileName"])); } } } return ImportedFiles; } private static void Main(string[] args) { Console.Title = "John Deere CMM Data Parser"; Console.WriteLine("Preparing CMM Data Parser... done"); Console.WriteLine("Scanning for new CMM data..."); Console.ForegroundColor = ConsoleColor.Gray; using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { con.Open(); using (SqlCommand insertCommand = con.CreateCommand()) { Console.WriteLine("Connecting to SQL server..."); SqlCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories); List<string> ImportedFiles = GetImportedFileList(); insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Axis", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal)); foreach (string file in files.Except(ImportedFiles)) { var FileNameExt1 = Path.GetFileName(file); cmdd.Parameters.Clear(); cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1)); cmdd.CommandText = @" IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData')) BEGIN SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt; END"; int count = Convert.ToInt32(cmdd.ExecuteScalar()); con.Close(); con.Open(); if (count == 0) { Console.WriteLine("Preparing to parse CMM data for SQL import..."); if (file.Count(c => c == '_') > 5) continue; insertCommand.CommandText = @" INSERT INTO CMMData (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; string FileNameExt = Path.GetFullPath(file); string RNumber = Path.GetFileNameWithoutExtension(file); int index2 = RNumber.IndexOf("~"); Match 
RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase); Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase); Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase); if (RNumberD.Value == @"") continue; if (RNumberE.Value == @"") continue; if (RNumberDate.Value == @"") continue; if (index2 != -1) continue; /* string RNumberE = RNumber.Split('_')[0]; string RNumberD = RNumber.Split('_')[1]; string RNumberDate = RNumber.Split('_')[2]; */ DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SqlParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { SqlParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE)); insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD)); insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt)); insertCommand.ExecuteNonQuery(); insertCommand.Parameters.RemoveAt("@PartNumber"); insertCommand.Parameters.RemoveAt("@CMMNumber"); insertCommand.Parameters.RemoveAt("@Date"); insertCommand.Parameters.RemoveAt("@FileName"); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); } } } }
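
    The error named in the title usually means a parameter value has a type that ADO.NET cannot map. In the code above, RNumberE and RNumberD are Regex Match objects and are passed straight into SqlParameter; a sketch of the likely fix is to pass their string values instead:

        // Match objects cannot be mapped to a SQL type; pass the matched strings instead.
        insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumberE.Value));
        insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumberD.Value));
        insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate));
        insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt));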

    Read the article

  • parseInt and viewflipper layout problems

    - by user1234167
    I have a problem with parseInt it throws the error: unable to parse 'null' as integer. My view flipper is also not working. Hopefully this is an easy enough question. Here is my activity: import javax.xml.parsers.SAXParser; import javax.xml.parsers.SAXParserFactory; import org.xml.sax.InputSource; import org.xml.sax.XMLReader; import android.app.Activity; import android.graphics.Color; import android.os.Bundle; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.LinearLayout; import android.widget.TextView; import android.widget.ViewFlipper; import xml.parser.dataset; public class XmlParserActivity extends Activity implements OnClickListener { private final String MY_DEBUG_TAG = "WeatherForcaster"; // private dataset myDataSet; private LinearLayout layout; private int temp= 0; /** Called when the activity is first created. */ //the ViewSwitcher private Button btn; private ViewFlipper flip; // private TextView tv; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); layout=(LinearLayout)findViewById(R.id.linearlayout1); btn=(Button)findViewById(R.id.btn); btn.setOnClickListener(this); flip=(ViewFlipper)findViewById(R.id.flip); //when a view is displayed flip.setInAnimation(this,android.R.anim.fade_in); //when a view disappears flip.setOutAnimation(this, android.R.anim.fade_out); // String postcode = null; // public String getPostcode { // return postcode; // } //URL newUrl = c; // myweather.setText(c.toString()); /* Create a new TextView to display the parsingresult later. */ TextView tv = new TextView(this); // run(0); //WeatherApplicationActivity postcode = new WeatherApplicationActivity(); try { /* Create a URL we want to load some xml-data from. */ URL url = new URL("http://new.myweather2.com/developer/forecast.ashx?uac=gcV3ynNdoV&output=xml&query=G41"); //String url = new String("http://new.myweather2.com/developer/forecast.ashx?uac=gcV3ynNdoV&output=xml&query="+WeatherApplicationActivity.postcode ); //URL url = new URL(url); //url.toString( ); //myString(url.toString() + WeatherApplicationActivity.getString(postcode)); // url + WeatherApplicationActivity.getString(postcode); /* Get a SAXParser from the SAXPArserFactory. */ SAXParserFactory spf = SAXParserFactory.newInstance(); SAXParser sp = spf.newSAXParser(); /* Get the XMLReader of the SAXParser we created. */ XMLReader xr = sp.getXMLReader(); /* Create a new ContentHandler and apply it to the XML-Reader*/ handler myHandler = new handler(); xr.setContentHandler(myHandler); /* Parse the xml-data from our URL. */ xr.parse(new InputSource(url.openStream())); /* Parsing has finished. */ /* Our ExampleHandler now provides the parsed data to us. */ dataset parsedDataSet = myHandler.getParsedData(); /* Set the result to be displayed in our GUI. */ tv.setText(parsedDataSet.toString()); } catch (Exception e) { /* Display any Error to the GUI. 
*/ tv.setText("Error: " + e.getMessage()); Log.e(MY_DEBUG_TAG, "WeatherQueryError", e); } temp = Integer.parseInt(xml.parser.dataset.getTemp()); if(temp <0){ //layout.setBackgroundColor(Color.BLUE); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.BLUE); } else if(temp > 0 && temp < 9) { //layout.setBackgroundColor(Color.GREEN); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.GREEN); } else { //layout.setBackgroundColor(Color.YELLOW); //layout.setBackgroundColor(getResources().getColor(R.color.silver)); findViewById(R.id.flip).setBackgroundColor(Color.YELLOW); } /* Display the TextView. */ this.setContentView(tv); } @Override public void onClick(View arg0) { // TODO Auto-generated method stub onClick(View arg0) { // TODO Auto-generated method stub flip.showNext(); //specify flipping interval //flip.setFlipInterval(1000); //flip.startFlipping(); } } this is my dataset: package xml.parser; public class dataset { static String temp = null; // private int extractedInt = 0; public static String getTemp() { return temp; } public void setTemp(String temp) { this.temp = temp; } this is my handler: public void characters(char ch[], int start, int length) { if(this.in_temp){ String setTemp = new String(ch, start, length); // myParsedDataSet.setTempUnit(new String(ch, start, length)); // myParsedDataSet.setTemp; } the dataset and handler i only pasted the code that involves the temp as i no they r working when i take out the if statement. However even then my viewflipper wont work. This is my main xml: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/linearlayout1" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:text="Flip Example" /> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:id="@+id/tv" /> <Button android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="25dip" android:text="Flip" android:id="@+id/btn" android:onClick="ClickHandler" /> <ViewFlipper android:layout_width="fill_parent" android:layout_height="fill_parent" android:id="@+id/flip"> <LinearLayout android:orientation="vertical" android:layout_width="fill_parent" android:layout_height="fill_parent" > <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:text="Item1a" /> </LinearLayout> <TextView android:layout_width="fill_parent" android:layout_height="wrap_content" android:textSize="25dip" android:id="@+id/tv2" /> </ViewFlipper> </LinearLayout> this is my logcat: 04-01 18:02:24.744: E/AndroidRuntime(7331): FATAL EXCEPTION: main 04-01 18:02:24.744: E/AndroidRuntime(7331): java.lang.RuntimeException: Unable to start activity ComponentInfo{xml.parser/xml.parser.XmlParserActivity}: java.lang.NumberFormatException: unable to parse 'null' as integer 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1830) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:1851) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.access$1500(ActivityThread.java:132) 
04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1038) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.os.Handler.dispatchMessage(Handler.java:99) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.os.Looper.loop(Looper.java:150) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.main(ActivityThread.java:4293) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.reflect.Method.invokeNative(Native Method) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.reflect.Method.invoke(Method.java:507) 04-01 18:02:24.744: E/AndroidRuntime(7331): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:849) 04-01 18:02:24.744: E/AndroidRuntime(7331): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:607) 04-01 18:02:24.744: E/AndroidRuntime(7331): at dalvik.system.NativeStart.main(Native Method) 04-01 18:02:24.744: E/AndroidRuntime(7331): Caused by: java.lang.NumberFormatException: unable to parse 'null' as integer 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.Integer.parseInt(Integer.java:356) 04-01 18:02:24.744: E/AndroidRuntime(7331): at java.lang.Integer.parseInt(Integer.java:332) 04-01 18:02:24.744: E/AndroidRuntime(7331): at xml.parser.XmlParserActivity.onCreate(XmlParserActivity.java:118) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1072) 04-01 18:02:24.744: E/AndroidRuntime(7331): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:1794) I hope I have given enough information about my problems. I will be extremely grateful if anyone can help me out.

    Read the article

  • SQL error - Cannot convert nvarchar to decimal

    - by jakesankey
    I have a C# application that simply parses all of the txt documents within a given network directory and imports the data to a SQL server db. Everything was cruising along just fine until about the 1800th file when it happend to have a few blanks in columns that are called out as DBType.Decimal (and the value is usually zero in the files, not blank). So I got this error, "cannot convert nvarchar to decimal". I am wondering how I could tell the app to simply skip the lines that have this issue?? Perhaps I could even just change the column type to varchar even tho values are numbers (what problems could this create?) Thanks for any help! using System; using System.Data; using System.Data.SQLite; using System.IO; using System.Text.RegularExpressions; using System.Threading; using System.Collections.Generic; using System.Linq; using System.Data.SqlClient; namespace JohnDeereCMMDataParser { internal class Program { public static List<string> GetImportedFileList() { List<string> ImportedFiles = new List<string>(); using (SqlConnection connect = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { connect.Open(); using (SqlCommand fmd = connect.CreateCommand()) { fmd.CommandText = @"SELECT FileName FROM CMMData;"; fmd.CommandType = CommandType.Text; SqlDataReader r = fmd.ExecuteReader(); while (r.Read()) { ImportedFiles.Add(Convert.ToString(r["FileName"])); } } } return ImportedFiles; } private static void Main(string[] args) { Console.Title = "John Deere CMM Data Parser"; Console.WriteLine("Preparing CMM Data Parser... done"); Console.WriteLine("Scanning for new CMM data..."); Console.ForegroundColor = ConsoleColor.Gray; using (SqlConnection con = new SqlConnection(@"Server=FRXSQLDEV;Database=RX_CMMData;Integrated Security=YES")) { con.Open(); using (SqlCommand insertCommand = con.CreateCommand()) { Console.WriteLine("Connecting to SQL server..."); SqlCommand cmdd = con.CreateCommand(); string[] files = Directory.GetFiles(@"C:\Documents and Settings\js91162\Desktop\CMM WENZEL\", "*_*_*.txt", SearchOption.AllDirectories); List<string> ImportedFiles = GetImportedFileList(); insertCommand.Parameters.Add(new SqlParameter("@FeatType", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@FeatName", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Axis", DbType.String)); insertCommand.Parameters.Add(new SqlParameter("@Actual", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Nominal", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@Dev", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolMin", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@TolPlus", DbType.Decimal)); insertCommand.Parameters.Add(new SqlParameter("@OutOfTol", DbType.Decimal)); foreach (string file in files.Except(ImportedFiles)) { var FileNameExt1 = Path.GetFileName(file); cmdd.Parameters.Clear(); cmdd.Parameters.Add(new SqlParameter("@FileExt", FileNameExt1)); cmdd.CommandText = @" IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA = 'RX_CMMData' AND TABLE_NAME = 'CMMData')) BEGIN SELECT COUNT(*) FROM CMMData WHERE FileName = @FileExt; END"; int count = Convert.ToInt32(cmdd.ExecuteScalar()); con.Close(); con.Open(); if (count == 0) { Console.WriteLine("Preparing to parse CMM data for SQL import..."); if (file.Count(c => c == '_') > 5) continue; insertCommand.CommandText = @" INSERT INTO CMMData (FeatType, FeatName, Axis, Actual, Nominal, Dev, TolMin, TolPlus, OutOfTol, PartNumber, 
CMMNumber, Date, FileName) VALUES (@FeatType, @FeatName, @Axis, @Actual, @Nominal, @Dev, @TolMin, @TolPlus, @OutOfTol, @PartNumber, @CMMNumber, @Date, @FileName);"; string FileNameExt = Path.GetFullPath(file); string RNumber = Path.GetFileNameWithoutExtension(file); int index2 = RNumber.IndexOf("~"); Match RNumberE = Regex.Match(RNumber, @"^(R|L)\d{6}(COMP|CRIT|TEST|SU[1-9])(?=_)", RegexOptions.IgnoreCase); Match RNumberD = Regex.Match(RNumber, @"(?<=_)\d{3}[A-Z]\d{4}|\d{3}[A-Z]\d\w\w\d(?=_)", RegexOptions.IgnoreCase); Match RNumberDate = Regex.Match(RNumber, @"(?<=_)\d{8}(?=_)", RegexOptions.IgnoreCase); string RNumE = Convert.ToString(RNumberE); string RNumD = Convert.ToString(RNumberD); if (RNumberD.Value == @"") continue; if (RNumberE.Value == @"") continue; if (RNumberDate.Value == @"") continue; if (index2 != -1) continue; DateTime dateTime = DateTime.ParseExact(RNumberDate.Value, "yyyyMMdd", Thread.CurrentThread.CurrentCulture); string cmmDate = dateTime.ToString("dd-MMM-yyyy"); string[] lines = File.ReadAllLines(file); bool parse = false; foreach (string tmpLine in lines) { string line = tmpLine.Trim(); if (!parse && line.StartsWith("Feat. Type,")) { parse = true; continue; } if (!parse || string.IsNullOrEmpty(line)) { continue; } Console.WriteLine(tmpLine); foreach (SqlParameter parameter in insertCommand.Parameters) { parameter.Value = null; } string[] values = line.Split(new[] { ',' }); for (int i = 0; i < values.Length - 1; i++) { if (i = "" || i = null) continue; SqlParameter param = insertCommand.Parameters[i]; if (param.DbType == DbType.Decimal) { decimal value; param.Value = decimal.TryParse(values[i], out value) ? value : 0; } else { param.Value = values[i]; } } insertCommand.Parameters.Add(new SqlParameter("@PartNumber", RNumE)); insertCommand.Parameters.Add(new SqlParameter("@CMMNumber", RNumD)); insertCommand.Parameters.Add(new SqlParameter("@Date", cmmDate)); insertCommand.Parameters.Add(new SqlParameter("@FileName", FileNameExt)); insertCommand.ExecuteNonQuery(); insertCommand.Parameters.RemoveAt("@PartNumber"); insertCommand.Parameters.RemoveAt("@CMMNumber"); insertCommand.Parameters.RemoveAt("@Date"); insertCommand.Parameters.RemoveAt("@FileName"); } } } Console.WriteLine("CMM data successfully imported to SQL database..."); } con.Close(); } } } }
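
    One way to skip the problem rows before they reach SQL, sketched against the same loop (it assumes blank numeric cells are the only culprit; alternatively the existing decimal.TryParse fallback to 0 can simply be kept):

        // After splitting the line, drop it entirely if any cell bound to a decimal
        // parameter is blank, instead of sending an empty value to SQL Server.
        string[] values = line.Split(',');
        bool hasBlankNumeric = false;
        for (int i = 0; i < values.Length - 1; i++)
        {
            if (insertCommand.Parameters[i].DbType == DbType.Decimal &&
                string.IsNullOrWhiteSpace(values[i]))
            {
                hasBlankNumeric = true;
                break;
            }
        }
        if (hasBlankNumeric)
        {
            continue; // skip this line of the file
        }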

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost]
2: public ActionResult About(HttpPostedFileBase file)
3: {
4: var container = _client.GetContainerReference("test");
5: container.CreateIfNotExists();
6: var blob = container.GetBlockBlobReference(file.FileName);
7: var blockDataList = new Dictionary<string, byte[]>();
8: using (var stream = file.InputStream)
9: {
10: var blockSizeInKB = 1024;
11: var offset = 0;
12: var index = 0;
13: while (offset < stream.Length)
14: {
15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
16: var blockData = new byte[readLength];
17: offset += stream.Read(blockData, 0, readLength);
18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
19:
20: index++;
21: }
22: }
23:
24: Parallel.ForEach(blockDataList, (bi) =>
25: {
26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
27: });
28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());
29:
30: return RedirectToAction("About");
31: }

This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation on the request length, and the maximum request length is defined in the web.config file. It's a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it in this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload a large file from the client to the server.

Upload in Chunks through HTML5 and JavaScript

In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as:
- No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
- No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.
It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.

In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunk upload.
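As a side note on the request-length limit mentioned above, the relevant knobs live in web.config. The sketch below uses the standard ASP.NET/IIS setting names, but the values are illustrative assumptions, not the configuration used in the original project:

<configuration>
  <system.web>
    <!-- maxRequestLength is specified in KB; this value is roughly 2 GB -->
    <httpRuntime maxRequestLength="2097151" executionTimeout="3600" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <!-- maxAllowedContentLength is specified in bytes; its ceiling is about 4 GB -->
        <requestLimits maxAllowedContentLength="4294967295" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

Even with these raised, the load balancer timeout described above still applies, which is why the chunked approach that follows is needed.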
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array
14: blocks.forEach(function (block) {
15: putBlocks.push(function (callback) {
16: // load blob based on the start and end index for each chunks
17: var blob = file.slice(block.start, block.end);
18: // put the file name, index and blob into a temporary from
19: var fd = new FormData();
20: fd.append("name", block.name);
21: fd.append("index", block.index);
22: fd.append("file", blob);
23: // post the form to backend service (asp.net mvc controller action)
24: $.ajax({
25: url: "/Home/UploadInFormData",
26: data: fd,
27: processData: false,
28: contentType: "multipart/form-data",
29: type: "POST",
30: success: function (result) {
31: if (!result.success) {
32: alert(result.error);
33: }
34: callback(null, block.index);
35: }
36: });
37: });
38: });
39: }
40: });

Then we will invoke these functions one by one by using async.js. Once all the functions have been executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

1: $("#upload_button_blob").click(function () {
2: // assert the browser support html5
3: ... ...
4: // start to upload each files in chunks
5: var files = $("#upload_files")[0].files;
6: for (var i = 0; i < files.length; i++) {
7: var file = files[i];
8: var fileSize = file.size;
9: var fileName = file.name;
10: // calculate the start and end byte index for each blocks(chunks)
11: // with the index, file name and index list for future using
12: ... ...
13: // define the function array and push all chunk upload operation into this array
14: ... ...
15: // invoke the functions one by one
16: // then invoke the commit ajax call to put blocks into blob in azure storage
17: async.series(putBlocks, function (error, result) {
18: var data = {
19: name: fileName,
20: list: list
21: };
22: $.post("/Home/Commit", data, function (result) {
23: if (!result.success) {
24: alert(result.error);
25: }
26: else {
27: alert("done!");
28: }
29: });
30: });
31: }
32: });

That's all on the client side. The outline of our logic would be:
- Calculate the start and end byte index for each chunk based on the block size.
- Define the functions that read the chunks from the file and upload the content to the backend service through ajax.
- Execute the functions defined in the previous step with "async.js".
- Finally, commit the chunks by invoking the backend service, which assembles them in Windows Azure Storage.

Save Chunks as Blocks into Blob Storage

Above we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Home controller, and it only accepts HTTP POST requests.

1: [HttpPost]
2: public JsonResult UploadInFormData()
3: {
4: var error = string.Empty;
5: try
6: {
7: }
8: catch (Exception e)
9: {
10: error = e.ToString();
11: }
12:
13: return new JsonResult()
14: {
15: Data = new
16: {
17: success = string.IsNullOrWhiteSpace(error),
18: error = error
19: }
20: };
21: }

Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side.
Then I used the Windows Azure SDK to create a blob container (in this case we will use the container named "test") and create a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that finally we can put all blocks together as one blob by specifying their block ID list.

1: [HttpPost]
2: public JsonResult UploadInFormData()
3: {
4: var error = string.Empty;
5: try
6: {
7: var name = Request.Form["name"];
8: var index = int.Parse(Request.Form["index"]);
9: var file = Request.Files[0];
10: var id = Convert.ToBase64String(BitConverter.GetBytes(index));
11:
12: var container = _client.GetContainerReference("test");
13: container.CreateIfNotExists();
14: var blob = container.GetBlockBlobReference(name);
15: blob.PutBlock(id, file.InputStream, null);
16: }
17: catch (Exception e)
18: {
19: error = e.ToString();
20: }
21:
22: return new JsonResult()
23: {
24: Data = new
25: {
26: success = string.IsNullOrWhiteSpace(error),
27: error = error
28: }
29: };
30: }

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list (the block ID list) from the Request.Form in string format, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container, ready to be downloaded.

1: [HttpPost]
2: public JsonResult Commit()
3: {
4: var error = string.Empty;
5: try
6: {
7: var name = Request.Form["name"];
8: var list = Request.Form["list"];
9: var ids = list
10: .Split(',')
11: .Where(id => !string.IsNullOrWhiteSpace(id))
12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
13: .ToArray();
14:
15: var container = _client.GetContainerReference("test");
16: container.CreateIfNotExists();
17: var blob = container.GetBlockBlobReference(name);
18: blob.PutBlockList(ids);
19: }
20: catch (Exception e)
21: {
22: error = e.ToString();
23: }
24:
25: return new JsonResult()
26: {
27: Data = new
28: {
29: success = string.IsNullOrWhiteSpace(error),
30: error = error
31: }
32: };
33: }

Now we have finished all the code we need. Below is the full client-side JavaScript code.
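One detail worth calling out, drawn from the Blob Storage Put Block documentation rather than from the post itself: for a given blob, every block ID must be a valid Base64 string, at most 64 bytes before encoding, and the same length as all the other block IDs of that blob. The Convert.ToBase64String(BitConverter.GetBytes(index)) trick used above satisfies this because an int always serializes to 4 bytes; an equally common alternative is to Base64-encode a zero-padded string such as index.ToString("d6"), which is fixed-length by construction.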
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions in parallel
16: // then invoke the commit ajax call to put blocks into blob in azure storage
17: async.parallel(putBlocks, function (error, result) {
18: var data = {
19: name: fileName,
20: list: list
21: };
22: $.post("/Home/Commit", data, function (result) {
23: if (!result.success) {
24: alert(result.error);
25: }
26: else {
27: alert("done!");
28: }
29: });
30: });
31: }
32: });

In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution should be to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most only 4 ajax calls can be in flight at the same time.

1: $("#upload_button_blob").click(function () {
2: // assert the browser support html5
3: ... ...
4: // start to upload each files in chunks
5: var files = $("#upload_files")[0].files;
6: for (var i = 0; i < files.length; i++) {
7: var file = files[i];
8: var fileSize = file.size;
9: var fileName = file.name;
10: // calculate the start and end byte index for each blocks(chunks)
11: // with the index, file name and index list for future using
12: ... ...
13: // define the function array and push all chunk upload operation into this array
14: ... ...
15: // invoke the functions in parallel with a concurrency limit of 4
16: // then invoke the commit ajax call to put blocks into blob in azure storage
17: async.parallelLimit(putBlocks, 4, function (error, result) {
18: var data = {
19: name: fileName,
20: list: list
21: };
22: $.post("/Home/Commit", data, function (result) {
23: if (!result.success) {
24: alert(result.error);
25: }
26: else {
27: alert("done!");
28: }
29: });
30: });
31: }
32: });

Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5, which are:
- File.slice: read part of the file by specifying the start and end byte index.
- Blob: file-like interface which contains part of the file content.
- FormData: temporary form element through which we can pass the chunk along with some metadata to the backend service.
Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limit value there is no "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server-side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core 1.7GHz CPU, 1.75GB memory, 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- No significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring chunks as binaries provides better performance than base64.
- In all cases, a chunk size of 1MB - 2MB gets better performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

