Search Results

Search found 42625 results on 1705 pages for 'function points'.

  • Issue enabling the InnoDB engine after [ skip-innodb ] - [ERROR] Plugin 'InnoDB' init function returned error

    - by Ahn
    How do I re-enable InnoDB after it was previously disabled with the skip-innodb option? Case 1: InnoDB was disabled with skip-innodb, and show engines reports it as below. Engine | Support ... | InnoDB | NO ...... Case 2: To enable InnoDB again, I commented out the skip-innodb option (prefixed it with #) and restarted, but now show engines does not even list the InnoDB engine. MySQL version: 5.1.57-community-log. OS: CentOS release 5.7 (Final). Log: 120622 13:06:36 InnoDB: Initializing buffer pool, size = 8.0M 120622 13:06:36 InnoDB: Completed initialization of buffer pool InnoDB: No valid checkpoint found. InnoDB: If this error appears when you are creating an InnoDB database, InnoDB: the problem may be that during an earlier attempt you managed InnoDB: to create the InnoDB data files, but log file creation failed. InnoDB: If that is the case, please refer to InnoDB: http://dev.mysql.com/doc/refman/5.1/en/error-creating-innodb.html 120622 13:06:36 [ERROR] Plugin 'InnoDB' init function returned error. 120622 13:06:36 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 120622 13:06:36 [Note] Event Scheduler: Loaded 0 events 120622 13:06:36 [Note] /usr/sbin/mysqld: ready for connections. Version: '5.1.57-community-log' socket: '/data/mysqlsnd/mysql.sock1' port: 3307 MySQL Community Server (GPL)
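    A common cause of the "No valid checkpoint found" error after re-enabling InnoDB is a stale ib_logfile0/ib_logfile1 pair left over from an earlier failed initialization. Below is a minimal recovery sketch, not a verified fix for this exact server: the datadir path and service name are assumptions and must be adjusted to this instance (which uses a non-default socket and port).

    ```sh
    # Stop the server before touching any InnoDB files
    service mysqld stop

    # Move the redo log files aside so InnoDB can recreate them on startup
    # (assumed datadir -- check the datadir setting in my.cnf first)
    mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile0.bak
    mv /var/lib/mysql/ib_logfile1 /var/lib/mysql/ib_logfile1.bak

    # Make sure skip-innodb is commented out in my.cnf, then restart and verify
    service mysqld start
    mysql -e "SHOW ENGINES;"   # InnoDB should now report Support = YES
    ```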

    Read the article

  • Swapping Function (Fn) and Control (Ctrl) Keys on Lenovo ThinkPad W500

    - by Howiecamp
    I'd like to swap the Fn and Ctrl keys on my ThinkPad W500 (like many others! See: How can I switch the function and control keys on my laptop? and Intercepting the Fn key on laptops). Numerous folks indicate that Windows doesn't register the Fn key as a keypress, but using Mihov ASCII Master 2.0, which shows the ASCII value of a keypress, I see the Fn key returning FF (perhaps FF in this case means 'not registered'). I also see that keys like Ctrl register with one ASCII code when pressed alone and another when pressed in combination with another key. Fn only registers when pressed alone, so Windows definitely isn't seeing the combination. This took a solution like AutoHotKey off the table. I ran KeyTweak (which shows the hardware scan code of a keypress; the Fn key registered as 57443). Using this program I remapped Fn to the Ctrl key, and this worked perfectly. However, I suspect that because of the issue described above, a combination such as Fn + C did not execute a copy. Short of retraining my pinky, I'm actually considering removing the keyboard and resoldering the connections to swap those keys. I'd love to get some input on the root technical issue(s) and possible solutions here.
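    The root technical issue is that on most ThinkPads the Fn key is handled by the keyboard's embedded controller and never reaches Windows as a scancode, which is why software remappers can move Ctrl but not Fn. For illustration only, scancode-level remapping of keys Windows does see is done through the Scancode Map registry value; the sketch below (making Caps Lock act as Left Ctrl, a hypothetical example rather than the Fn swap asked about) shows the format and takes effect after a reboot.

    ```reg
    Windows Registry Editor Version 5.00

    ; Scancode Map layout: 8-byte zero header, entry count (2 = one mapping plus
    ; the terminator), then pairs of little-endian words (new scancode 0x001D =
    ; Left Ctrl, physical scancode 0x003A = Caps Lock), then a null terminator.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Keyboard Layout]
    "Scancode Map"=hex:00,00,00,00,00,00,00,00,02,00,00,00,1d,00,3a,00,00,00,00,00
    ```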

    Read the article

  • Problem with JS Selectbox Function

    - by Thomas
    I have a selectbox with three options. When a user selects one of the three options, I want a specific div to appear below it. I am trying to write the code that dictates which specific box is to appear when each of the three options is selected. So far, I have only worked on the code that pertains to the first option. However, whenever the user selects any of the three options from the selectbox, the function for the first option is triggered and the div is displayed. My question is two part: 1) How do I write a conditional function that specifically targets the selected option 2) What is the best way to accomplish what I have described above; How do I efficiently go about defining three different functions for three different options in a select box? Here is the function I was working on for the first option: $(document).ready(function(){ var subTableDiv = $("div.subTableDiv"); var subTableDiv1 = $("div.subTableDiv1"); var subTableDiv2 = $("div.subTableDiv2"); subTableDiv.hide(); subTableDiv1.hide(); subTableDiv2.hide(); var selectmenu=document.getElementById("customfields-s-18-s"); selectmenu.onchange=function(){ //run some code when "onchange" event fires var chosenoption=this.options[this.selectedIndex].value //this refers to "selectmenu" if (chosenoption.value ="Co-Op"){ subTableDiv1.slideDown("medium"); } } }); Html: <tr> <div> <select name="customfields-s-18-s" class="dropdown" id="customfields-s-18-s" > <option value="Condominium"> Condominium</option> <option value="Co-Op"> Co-Op</option> <option value="Condop"> Condop</option> </select> </div> </tr> <tr class="subTable"> <td colspan="2"> <div style="background-color: #EEEEEE; border: 1px solid #CCCCCC; padding: 10px;" id="Condominium" class="subTableDiv">Hi There! This is the first Box</div> </td> </tr> <tr class="subTable"> <td colspan="2"> <div style="background-color: #EEEEEE; border: 1px solid #CCCCCC; padding: 10px;" id="Co-Op" class="subTableDiv1">Hi There! This is the Second Box</div> </td> </tr> <tr class="subTable"> <td colspan="2"> <div style="background-color: #EEEEEE; border: 1px solid #CCCCCC; padding: 10px;" id="Condop" class="subTableDiv2">Hi There! This is the Third Box.</div> </td> </tr>
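    Two details in the handler explain the behaviour: `chosenoption` is already the option's value string (so `chosenoption.value` is undefined), and `=` assigns rather than compares, which always evaluates truthily. A minimal sketch of one way to branch on the selected value with jQuery follows; the element id and div classes come from the question, but the value-to-div mapping object is an assumption.

    ```javascript
    $(document).ready(function () {
        // Map each option value to the div it should reveal
        var divByValue = {
            "Condominium": $("div.subTableDiv"),
            "Co-Op":       $("div.subTableDiv1"),
            "Condop":      $("div.subTableDiv2")
        };

        // Hide all target divs initially
        $.each(divByValue, function (value, div) { div.hide(); });

        $("#customfields-s-18-s").change(function () {
            var chosen = $(this).val();               // the selected option's value
            $.each(divByValue, function (value, div) { div.hide(); });
            if (divByValue[chosen]) {
                divByValue[chosen].slideDown("medium");   // one lookup instead of an if/else chain
            }
        });
    });
    ```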

    Read the article

  • boost::function & boost::lambda again

    - by John Dibling
    Follow-up to post: http://stackoverflow.com/questions/2978096/using-width-precision-specifiers-with-boostformat I'm trying to use boost::function to create a function that uses lambdas to format a string with boost::format. Ultimately what I'm trying to achieve is using width & precision specifiers for strings with format. boost::format does not support the use of the * width & precision specifiers, as indicated in the docs: Width or precision set to asterisk (*) are used by printf to read this field from an argument. e.g. printf("%1$d:%2$.*3$d:%4$.*3$d\n", hour, min, precision, sec); This class does not support this mechanism for now. so such precision or width fields are quietly ignored by the parsing. so I'm trying to find other ways to accomplish the same goal. Here is what I have so far, which isn't working: #include <string> #include <boost\function.hpp> #include <boost\lambda\lambda.hpp> #include <iostream> #include <boost\format.hpp> #include <iomanip> #include <boost\bind.hpp> int main() { using namespace boost::lambda; using namespace std; boost::function<std::string(int, std::string)> f = (boost::format("%s") % boost::io::group(setw(_1*2), setprecision(_2*2), _3)).str(); std::string s = (boost::format("%s") % f(15, "Hello")).str(); return 0; } This generates many compiler errors: 1>------ Build started: Project: hacks, Configuration: Debug x64 ------ 1>Compiling... 1>main.cpp 1>.\main.cpp(15) : error C2872: '_1' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(69) : boost::lambda::placeholder1_type &boost::lambda::`anonymous-namespace'::_1' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(43) : boost::arg<I> `anonymous-namespace'::_1' 1> with 1> [ 1> I=1 1> ] 1>.\main.cpp(15) : error C2664: 'std::setw' : cannot convert parameter 1 from 'boost::lambda::placeholder1_type' to 'std::streamsize' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called 1>.\main.cpp(15) : error C2872: '_2' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(70) : boost::lambda::placeholder2_type &boost::lambda::`anonymous-namespace'::_2' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(44) : boost::arg<I> `anonymous-namespace'::_2' 1> with 1> [ 1> I=2 1> ] 1>.\main.cpp(15) : error C2664: 'std::setprecision' : cannot convert parameter 1 from 'boost::lambda::placeholder2_type' to 'std::streamsize' 1> No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called 1>.\main.cpp(15) : error C2872: '_3' : ambiguous symbol 1> could be 'D:\Program Files (x86)\boost\boost_1_42\boost/lambda/core.hpp(71) : boost::lambda::placeholder3_type &boost::lambda::`anonymous-namespace'::_3' 1> or 'D:\Program Files (x86)\boost\boost_1_42\boost/bind/placeholders.hpp(45) : boost::arg<I> `anonymous-namespace'::_3' 1> with 1> [ 1> I=3 1> ] 1>.\main.cpp(15) : error C2660: 'boost::io::group' : function does not take 3 arguments 1>.\main.cpp(15) : error C2228: left of '.str' must have class/struct/union 1>Build log was saved at "file://c:\Users\john\Documents\Visual Studio 2005\Projects\hacks\x64\Debug\BuildLog.htm" 1>hacks - 7 error(s), 0 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== My fundamental understanding of boost's lambdas and functions is probably lacking. How can I get this to work?
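    Two separate problems produce these errors: the unqualified `_1`/`_2`/`_3` are ambiguous because both the boost::lambda and the boost::bind placeholders are in scope, and even once disambiguated, `std::setw`/`std::setprecision` need real integers at the point of the call, so they cannot accept lambda placeholders. One workaround, sketched below under the assumption that printf-style width/precision behaviour from Boost.Format is the real goal, is to build the format string itself at run time inside an ordinary function and bind that to boost::function:

    ```cpp
    #include <boost/format.hpp>
    #include <boost/function.hpp>
    #include <boost/lexical_cast.hpp>
    #include <iostream>
    #include <string>

    // Format 'value' with a run-time width and precision by generating the
    // format string dynamically, e.g. "%10.4s".
    std::string formatted(int width, int precision, const std::string& value)
    {
        boost::format fmt("%" + boost::lexical_cast<std::string>(width) + "." +
                          boost::lexical_cast<std::string>(precision) + "s");
        return (fmt % value).str();
    }

    int main()
    {
        boost::function<std::string (int, int, const std::string&)> f = &formatted;
        // Expected printf-like behaviour: right-aligned to width 10,
        // truncated to 4 characters.
        std::cout << '[' << f(10, 4, "Hello") << "]\n";
        return 0;
    }
    ```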

    Read the article

  • How to schedule dynamic function with cron job?

    - by iBrazilian2
    I want to know how I can schedule a dynamic(auto populated data) function to auto run everyday at saved time? Let's say I have a form that once the button is clicked it sends the data to the function, which the posts the data. I simply want to automate that so that I don't have to press the button. <ul> <?php foreach($Class->retrieveData as $data) { <form method="post" action=""> <li> <input type="hidden" name="name">'.$data['name'].'<br/> <input type="hidden" name="description">'.$data['description'].'<br/> <input type="submit" name="post_data" value="Post"> </li> </form> } ?> </ul> Now, the form will pass the data to the function. if(isset($_POST['post_data'])) // if post_data button is clicked then it runs myFunction() { myFunction(); } myFunction() { $name = $_POST['name']; $description = $_POST['description']; } I tried doing the following but the problem is that Cron Job can only run the whole .php file, and I am retrieving the saved time to run from MySQL. foreach($Class->getTime() as $timeData) { $timeHour = $timeData['timeHour']; $timeMinute = $timeData['timeMinute']; $hourMin = date('H:i'); $timeData = ''.$timeHour.':'.$timeMinute.''; if($hourMin == $timeData) { run myFunction. } } $hourMin is the current hour:minute which is being matched against a saved time to auto run from Mysql. So if $hourMin == $timeData then the function will run. How can I run Cron Job to auto run myFunction() if the $hourMin equals $timeData? So... List 1 = is to be runned at 10am List 2 = is to be runned at 12pm List 3 = is to be runned at 2pm The 10am, 12pm, 2pm is the $timeHour and $timeMinute that is retrieved from MySQL but based on each list id's. EDIT @randomSeed, 1) I can schedule cron jobs. 2) $name and $description will all be arrays, so the following is what I am trying to accomplish. $name = array( 'Jon', 'Steven', 'Carter' ); $description = array( 'Jon is a great person.', 'Steven has an outgoing character.', 'Carter is a horrible person.' ); I want to parse the first arrays from both $name and $description if the scheduled time is correct. In database I have the following postDataTime table +----+---------+----------+------------+--------+ | iD | timeDay | timeHour | timeMinute | postiD | +--------------------------------------+--------+ | 1 | * | 9 | 0 | 21 | |----|---------|----------|------------|--------| | 2 | * | 10 | 30 | 22 | |----|---------|----------|------------|--------| | 3 | * | 11 | 0 | 23 | +----|---------+----------+------------+--------+ iD = auto incremented on upload. timeDay = * is everyday (cron job style) timeHour = Hour of the day to run the script timeMinute = minute of the hour to run script postiD = this is the id of the post that is located in another table (n+1 relationship) If it's difficult to understand.. if(time() == 10:30(time from MySQL postiD = 22)) { // run myFunction with the data that is retrieved for that time ex: $postiD = '22'; $name = 'Steven'; $description = 'Steven has an outgoing character.'; // the above is what will be in the $_POST from the form and will be // sent to the myFunction() } I simply want to schedule everything according to the time that is saved to MySQL as I showed at the very top(postDataTime table). (I'd show what I have tried, but I have searched for countless hours for an example of what I am trying to accomplish but I cannot find anything and what I tried doesn't work.). I thought I could use the exec() function but from what it seems that does not allow me to run functions, otherwise I would do the following.. 
$time = '10:30'; if($time == time()) { exec(myFunction()); }
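    Cron can only start a script, so the usual pattern is a crontab entry that runs every minute and points at a small runner script; the runner loads the saved times from MySQL, compares them with the current time, and calls myFunction() with the matching row's data directly instead of going through a form and $_POST. A rough sketch under those assumptions (the postDataTime columns follow the question, but the posts table, bootstrap include and $pdo connection are hypothetical):

    ```php
    <?php
    // runner.php -- crontab entry: * * * * * php /path/to/runner.php
    // Fires any post whose saved hour/minute matches the current time.

    require_once 'bootstrap.php';      // hypothetical: defines $pdo and myFunction()

    $hour   = (int) date('G');         // current hour (no leading zero)
    $minute = (int) date('i');         // current minute

    $stmt = $pdo->prepare(
        'SELECT p.name, p.description
           FROM postDataTime t
           JOIN posts p ON p.id = t.postiD
          WHERE t.timeHour = :h AND t.timeMinute = :m'
    );
    $stmt->execute(array(':h' => $hour, ':m' => $minute));

    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
        // Pass the data as arguments rather than reading $_POST inside myFunction()
        myFunction($row['name'], $row['description']);
    }
    ```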

    Read the article

  • How to call virtual function of an object in C++

    - by SoonDead
    I'm struggling with calling a virtual function in C++. I'm not experienced in C++, I mainly use C# and Java so I might have some delusions, but bear with me. I have to write a program where I have to avoid dynamic memory allocation if possible. I have made a class called List: template <class T> class List { public: T items[maxListLength]; int length; List() { length = 0; } T get(int i) const { if (i >= 0 && i < length) { return items[i]; } else { throw "Out of range!"; } }; // set the value of an already existing element void set(int i, T p) { if (i >= 0 && i < length) { items[i] = p; } else { throw "Out of range!"; } } // returns the index of the element int add(T p) { if (length >= maxListLength) { throw "Too many points!"; } items[length] = p; return length++; } // removes and returns the last element; T pop() { if (length > 0) { return items[--length]; } else { throw "There is no element to remove!"; } } }; It just makes an array of the given type, and manages the length of it. There is no need for dynamic memory allocation, I can just write: List<Object> objects; MyObject obj; objects.add(obj); MyObject inherits form Object. Object has a virtual function which is supposed to be overridden in MyObject: struct Object { virtual float method(const Input& input) { return 0.0f; } }; struct MyObject: public Object { virtual float method(const Input& input) { return 1.0f; } }; I get the elements as: objects.get(0).method(asdf); The problem is that even though the first element is a MyObject, the Object's method function is called. I'm guessing there is something wrong with storing the object in an array of Objects without dynamically allocating memory for the MyObject, but I'm not sure. Is there a way to call MyObject's method function? How? It's supposed to be a heterogeneous collection btw, so that's why the inheritance is there in the first place. If there is no way to call the MyObject's method function, then how should I make my list in the first place?
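    What is happening here is object slicing: `items` is an array of Object by value, so adding a MyObject copies only its Object part and virtual dispatch has nothing left to override. Heterogeneous storage needs pointers (or references) to the base class. A minimal sketch that keeps the fixed-size-array idea but stores non-owning Object* (the objects themselves live elsewhere, e.g. on the stack), with bounds checks omitted for brevity:

    ```cpp
    #include <iostream>

    struct Input {};

    struct Object {
        virtual float method(const Input&) { return 0.0f; }
        virtual ~Object() {}                    // virtual destructor for safety
    };

    struct MyObject : public Object {
        virtual float method(const Input&) { return 1.0f; }
    };

    const int maxListLength = 16;

    // Same idea as the question's List, instantiated with a pointer type.
    template <class T>
    class List {
        T items[maxListLength];
        int length;
    public:
        List() : length(0) {}
        int add(T p) { items[length] = p; return length++; }
        T get(int i) const { return items[i]; }
        int size() const { return length; }
    };

    int main() {
        Object   base;
        MyObject derived;

        List<Object*> objects;                  // store pointers, not copies
        objects.add(&base);
        objects.add(&derived);

        Input in;
        std::cout << objects.get(0)->method(in) << "\n";   // 0
        std::cout << objects.get(1)->method(in) << "\n";   // 1 -- virtual dispatch works
    }
    ```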

    Read the article

  • Details of 5GB and 50GB SQL Azure databases have now been released, along with new price points

    - by Eric Nelson
    Like many others signed up to the Windows Azure Platform, I received an email overnight detailing the upcoming database size changes for SQL Azure. I know from our work with early adopters over the last 12 months that the 1 GB and 10 GB limits were sometimes seen as blockers, especially when migrating existing applications to SQL Azure. On June 28th 2010 the size limits will be increased: the SQL Azure Web Edition database goes from 1 GB to 5 GB, and the SQL Azure Business Edition database goes from 10 GB to 50 GB. Along with these changes come new price points, including the option to grow in increments of 10 GB. Web Edition: up to 1 GB relational database = $9.99/month; up to 5 GB = $49.95/month. Business Edition: up to 10 GB relational database = $99.99/month; up to 20 GB = $199.98/month; up to 30 GB = $299.97/month; up to 40 GB = $399.96/month; up to 50 GB = $499.95/month. Check out the full SQL Azure pricing. Related links: http://ukazure.ning.com (UK community site); Getting started with the Windows Azure Platform.

    Read the article

  • Is finding graph minors without single node pinch points possible?

    - by Alturis
    Is it possible to robustly find all the graph minors within an arbitrary node graph where the pinch points are generally not single nodes? I have read some other posts on here about breaking a graph up into a Hamiltonian cycle and finding the graph minors from that, but such an algorithm seems to require that each "room" have "doorways" consisting of single nodes. To explain a bit more, a visual aid is necessary. Let's say the nodes below are an example of a typical node graph. What I am looking for is a way to automatically find the different colored regions of the graph (or graph minors).

    Read the article

  • What points should you consider before going mobile? The ASP.NET CMS DotNetNuke answers in a white paper

    Discover the six points to consider before going mobile, according to DotNetNuke, in a white paper published by the ASP.NET CMS. The mobile sector is booming. By 2014, the number of pages viewed by people using mobile devices will be higher than the number of visits made from desktop and laptop computers, according to an estimate by Morgan Stanley Research. To maintain their presence on the Internet, companies must therefore start thinking now about providing streamlined, easy access to information for customers using smartphones and tablets. How should you prepare your mobile presence...

    Read the article

  • Displair: 3D multi-touch display becomes a commercial reality, with 1500 touch points on a surface of up to 142 inches

    Displair: 3D multi-touch display on the verge of becoming a commercial reality, with 1500 touch points on a surface of up to 142 inches. Is interacting with a Minority Report-style volumetric 3D display becoming a reliable, commercial reality? In any case, a Russian company, Displair, has unveiled a display system that comes very close. Displair uses laser projections onto a layer of cold fog to reproduce 3D objects in mid-air. This kind of holographic display took its first steps nearly a decade ago, and the product would not be as impressive without advanced motion detection....

    Read the article

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following: function do_lots_of_stuff(){ { //subpart 1 ... } ... { //subpart N ... } } a common pattern is to decompose it into subfunctions function do_lots_of_stuff(){ subpart_1(...) subpart_2(...) ... subpart_N(...) } I usually find that decomposition has two main advantages: The decomposed function becomes much smaller. This can help people read it without getting lost in the details. Parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope. This can help readability and modularity in some situations. However, I also find that decomposition has some disadvantages: There are no guarantees that the subfunctions "belong" to do_lots_of_stuff so there is nothing stopping someone from accidentally calling them from a wrong place. A module's complexity grows quadratically with the number of functions we add to it. (There are more possible ways for things to call each other) Therefore: Are there useful convention or coding styles that help me balance the pros and cons of function decomposition or should I just use an editor with code folding and call it a day? EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting we would have the subparts be returning values that are combined in the end and the decomposition problem of having lots of subfunctions being able to use each other is still present. We can't always assume that the problem domain will be able to be modeled on just some small simple types with just a few highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to correctly be able to deal with. function do_lots_of_stuff(){ p1 = subpart_1() p2 = subpart_2() pN = subpart_N() return assembleStuff(p1, p2, ..., pN) }
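    One low-ceremony way to address the first drawback (subfunctions escaping their parent) is to give the helpers the narrowest scope the language allows, for example nested/local functions or a module-private namespace, so they simply cannot be called from the wrong place. A sketch in the same JavaScript-like style as the question (assembleStuff is the question's own placeholder):

    ```javascript
    function do_lots_of_stuff(input) {
        // Local helpers: not callable from outside do_lots_of_stuff and
        // not part of the module's public surface.
        function subpart_1(x) { /* ... */ return x; }
        function subpart_2(x) { /* ... */ return x; }
        function subpart_N(x) { /* ... */ return x; }

        var p1 = subpart_1(input);
        var p2 = subpart_2(p1);
        var pN = subpart_N(p2);
        return assembleStuff(p1, p2, pN);
    }
    ```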

    Read the article

  • IBM Informix Spatial datablade LIneFromText function

    - by swatit
    Hi everybody, I am using IBM Informix for my school project as part of the "Informix on-campus" activity conducted by IBM. I am using the spatial datablade to store spatial data. My spatial data table looks like this: CREATE TABLE xmlTest (row_id integer NOT NULL,pre integer, post integer,parent integer,tagname varchar(40,1),point ST_POINT); Then I inserted the spatial data into the table. Now I am trying to select the points lying inside a given polygon, where the coordinates of the polygon are dynamic, i.e. they are decided in the SELECT query itself. My query is: SELECT v2.* FROM xmlTest v1,xmlTest v2 WHERE ST_Contains(ST_Polygon(ST_LineFromText('linestring (0 0, 22000 0,22000 22000,0 22000,0 0)',5)),ST_Point(v1.pre,v1.post,5)) AND v1.tagname like 'n1' AND ST_Contains(ST_Polygon(ST_LineFromText('linestring (0 0,v1.pre 0,v1.pre v1.post,0 v1.post,0 0 )',5)),ST_Point(v2.pre,v2.post,5)) AND v2.tagname like 'n2' However, it gives the error "(USE31) - Too few points for geometry type in ST_LineFromText." for the second ST_LineFromText call. I checked the number of points passed to the linestring, but could not find the source of the error. I suspect the linestring text must contain literal values rather than column references such as v1.pre, as in this query. Is that right? If so, what is the alternative way to specify the polygon coordinates dynamically? Or is there a mistake in my query? I hope my question is clear. I appreciate your help!

    Read the article

  • What is the name of this geometrical function?

    - by Spike
    In a two dimentional integer space, you have two points, A and B. This function returns an enumeration of the points in the quadrilateral subset bounded by A and B. A = {1,1} B = {2,3} Fn(A,B) = {{1,1},{1,2},{1,3},{2,1},{2,2},{2,3}} I can implement it in a few lines of LINQ. private void UnknownFunction(Point to, Point from, List<Point> list) { var vectorX = Enumerable.Range(Math.Min(to.X, from.X), Math.Abs(to.X - from.Y) + 1); var vectorY = Enumerable.Range(Math.Min(to.Y, from.Y), Math.Abs(to.Y - from.Y) + 1); foreach (var x in vectorX) foreach (var y in vectorY) list.Add(new Point(x, y)); } I'm fairly sure that this is a standard mathematical operation, but I can't think what it is. Feel free to tell me that it's one line of code in your language of choice. Or to give me a cunning implementation with lambdas or some such. But mostly I just want to know what it's called. It's driving me nuts. It feels a little like a convolution, but it's been too long since I was at school for me to be sure.
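    There is no single classical one-word name for this: it is the set of integer lattice points inside the axis-aligned bounding box of A and B, i.e. the Cartesian product of the two inclusive coordinate ranges. For what it's worth, a one-expression LINQ sketch is below; it derives both extents from matching coordinates (the question's snippet mixes to.X with from.Y in the width calculation) and assumes a Point type with an (int, int) constructor.

    ```csharp
    // Cartesian product of the inclusive X range and the inclusive Y range.
    IEnumerable<Point> LatticePoints(Point a, Point b) =>
        Enumerable.Range(Math.Min(a.X, b.X), Math.Abs(b.X - a.X) + 1)
            .SelectMany(
                x => Enumerable.Range(Math.Min(a.Y, b.Y), Math.Abs(b.Y - a.Y) + 1),
                (x, y) => new Point(x, y));
    ```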

    Read the article

  • XCode 5 says I got a duplicate, which I don't

    - by GoodMove
    Every time I try to run C++ code in Xcode 5 (the file is "File.cpp"), Xcode returns this: duplicate symbol _main ld: 1 duplicate symbol for architecture i386 clang: error: linker command failed with exit code 1 (use -v to see invocation) It only returns the error when the file contains the following function, whatever its body is: int main() { } I checked the folder Xcode points to (where it says the duplicates are located), but didn't find anything there. What am I supposed to do? #include "File.h" using namespace std; void func (void){ cout << "Hello World!" << endl; }
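    `duplicate symbol _main` is a linker error: two translation units in the same target each define main(), most often because the project template already contains a main.cpp and File.cpp adds a second main (or a .cpp file is pulled in twice via target membership or an accidental #include of a .cpp). A minimal sketch of the intended layout, assuming the declaration lives in File.h:

    ```cpp
    // File.h
    #ifndef FILE_H
    #define FILE_H
    void func();            // declaration only
    #endif

    // File.cpp -- no main() here
    #include "File.h"
    #include <iostream>
    using namespace std;
    void func(void) { cout << "Hello World!" << endl; }

    // main.cpp -- exactly one definition of main in the whole target
    #include "File.h"
    int main() {
        func();
        return 0;
    }
    ```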

    Read the article

  • method works fine, until it is called in a function, then UnboundLocalError

    - by user1776100
    I define a method called dist, to calculate the distance between two points which I does it correctly when directly using the method. However, when I get a function to call it to calculate the distance between two points, I get UnboundLocalError: local variable 'minkowski_distance' referenced before assignment edit sorry, I just realised, this function does work. However I have another method calling it that doesn't. I put the last method at the bottom This is the method: class MinkowskiDistance(Distance): def __init__(self, dist_funct_name_str = 'Minkowski distance', p=2): self.p = p def dist(self, obj_a, obj_b): distance_to_power_p=0 p=self.p for i in range(len(obj_a)): distance_to_power_p += abs((obj_a[i]-obj_b[i]))**(p) minkowski_distance = (distance_to_power_p)**(1/p) return minkowski_distance and this is the function: (it basically splits the tuples x and y into their number and string components and calculates the distance between the numeric part of x and y and then the distance between the string parts, then adds them. def total_dist(x, y, p=2, q=2): jacard = QGramDistance(q=q) minkowski = MinkowskiDistance(p=p) x_num = [] x_str = [] y_num = [] y_str = [] #I am spliting each vector into its numerical parts and its string parts so that the distances #of each part can be found, then summed together. for i in range(len(x)): if type(x[i]) == float or type(x[i]) == int: x_num.append(x[i]) y_num.append(y[i]) else: x_str.append(x[i]) y_str.append(y[i]) num_dist = minkowski.dist(x_num,y_num) str_dist = I find using some more steps #I am simply adding the two types of distance to get the total distance: return num_dist + str_dist class NearestNeighbourClustering(Clustering): def __init__(self, data_file, clust_algo_name_str='', strip_header = "no", remove = -1): self.data_file= data_file self.header_strip = strip_header self.remove_column = remove def run_clustering(self, max_dist, p=2, q=2): K = {} #dictionary of clusters data_points = self.read_data_file() K[0]=[data_points[0]] k=0 #I added the first point in the data to the 0th cluster #k = number of clusters minus 1 n = len(data_points) for i in range(1,n): data_point_in_a_cluster = "no" for c in range(k+1): distances_from_i = [total_dist(data_points[i],K[c][j], p=p, q=q) for j in range(len(K[c]))] d = min(distances_from_i) if d <= max_dist: K[c].append(data_points[i]) data_point_in_a_cluster = "yes" if data_point_in_a_cluster == "no": k += 1 K[k]=[data_points[i]] return K
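    It is hard to pin down without the exact traceback, but this error usually appears with code like this when an edit leaves the assignment and the return in different branches or scopes (for example, the return indented under an if that never fires while the assignment sits elsewhere). A separate pitfall worth checking: under Python 2, (1/p) is integer division and evaluates to 0 for p=2, silently turning the root into x**0. A compact, self-contained sketch that avoids both issues:

    ```python
    def minkowski_dist(obj_a, obj_b, p=2):
        """Minkowski distance between two equal-length numeric sequences."""
        # sum of |a_i - b_i|**p over all coordinates
        total = sum(abs(a - b) ** p for a, b in zip(obj_a, obj_b))
        # use 1.0/p (not 1/p) so the exponent is not truncated under Python 2
        return total ** (1.0 / p)

    # quick check: Euclidean distance between (1, 1) and (4, 5) is 5.0
    print(minkowski_dist((1, 1), (4, 5)))
    ```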

    Read the article

  • How do I improve this linear regression function?

    - by user558383
    I have the following PHP function that I'm using to draw a trend line. However, it sometimes plots the line below all the points in the scatter graph. Is there an error in my function, or is there a better way to do it? I think it may be because the line it produces treats all the residuals (the distances from the scatter points to the line) as positive, regardless of whether they are above or below the line. function linear_regression($x, $y) { $n = count($x); $x_sum = array_sum($x); $y_sum = array_sum($y); $xx_sum = 0; $xy_sum = 0; for($i = 0; $i < $n; $i++) { $xy_sum+=($x[$i]*$y[$i]); $xx_sum+=($x[$i]*$x[$i]); } $m = (($n * $xy_sum) - ($x_sum * $y_sum)) / (($n * $xx_sum) - ($x_sum * $x_sum)); $b = ($y_sum - ($m * $x_sum)) / $n; return array("m"=>$m, "b"=>$b); }
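    For reference, the function is the standard ordinary-least-squares fit; the slope and intercept it computes are

    ```latex
    m = \frac{n\sum_i x_i y_i - \sum_i x_i \sum_i y_i}
             {n\sum_i x_i^2 - \left(\sum_i x_i\right)^2},
    \qquad
    b = \frac{\sum_i y_i - m\sum_i x_i}{n}.
    ```

    Because least squares minimizes squared vertical residuals, the sign of a residual is irrelevant, but the fitted line always passes through the mean point (x̄, ȳ), so it cannot lie below every point of the scatter. If the plotted trend line does, the problem is more likely in how the line is drawn from m and b (for example, evaluating it with swapped or mismatched x values) than in this function.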

    Read the article

  • How to use a different partition to store Restore Points in Windows?

    - by D Connors
    I'm running Windows 7, and it's installed on a 32 GB partition which is currently running low on space (and expanding it is not an option). Windows System Restore takes up several Gigs of that space. Since I like having many restore points available, I don't want to decrease the amount of space available for system restore. So my option is to try and get Windows to save my Restore Points on a different partition. Is that possible?

    Read the article

  • function to org-sort by three (3) criteria: due date / priority / title

    - by lawlist
    Is anyone aware of an org-sort function / modification that can refile / organize a group of TODO so that it sorts them by three (3) criteria: first sort by due date, second sort by priority, and third sort by by title of the task? EDIT: If anyone can please help me to modify this so that undated TODO are sorted last, that would be greatly appreciated -- at the present time, undated TODO are not being sorted: ;; multiple sort (defun org-sort-multi (&rest sort-types) "Multiple sorts on a certain level of an outline tree, or plain list items. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Example: To sort first by TODO status, then by priority, then by date, then alphabetically (case-sensitive) use the following call: (org-sort-multi '(?d ?p ?t (t . ?a)))" (interactive) (dolist (x (nreverse sort-types)) (when (char-valid-p x) (setq x (cons nil x))) (condition-case nil (org-sort-entries (car x) (cdr x)) (error nil)))) ;; sort current level (defun lawlist-sort (&rest sort-types) "Sort the current org level. SORT-TYPES is a list where each entry is either a character or a cons pair (BOOL . CHAR), where BOOL is whether or not to sort case-sensitively, and CHAR is one of the characters defined in `org-sort-entries-or-items'. Entries are applied in back to front order. Defaults to \"?o ?p\" which is sorted by TODO status, then by priority" (interactive) (when (equal mode-name "Org") (let ((sort-types (or sort-types (if (or (org-entry-get nil "TODO") (org-entry-get nil "PRIORITY")) '(?d ?t ?p) ;; date, time, priority '((nil . ?a)))))) (save-excursion (outline-up-heading 1) (let ((start (point)) end) (while (and (not (bobp)) (not (eobp)) (<= (point) start)) (condition-case nil (outline-forward-same-level 1) (error (outline-up-heading 1)))) (unless (> (point) start) (goto-char (point-max))) (setq end (point)) (goto-char start) (apply 'org-sort-multi sort-types) (goto-char end) (when (eobp) (forward-line -1)) (when (looking-at "^\\s-*$") ;; (delete-line) ) (goto-char start) ;; (dotimes (x ) (org-cycle)) )))))

    Read the article

  • Excel 2007: plot data points not on an axis/ force linear x-incrementation without altering integrity of non-linear data

    - by Ennapode
    In Excel, how does one plot points whose x components are not x-axis labels? For example, in my graph the x components are derived from the cosine function and aren't linear, but Excel displays them as if .0016 to .0062 to .0135 were equal increments. How would I change this so that the x-axis increments evenly without altering the integrity of the points themselves? In other words, how do I plot a point with an x component that is independent of the x-axis label?

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
        // invoke the functions in parallel,
        // then invoke the commit ajax call to put the blocks into a blob in Azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

In this way all chunks are uploaded to the server at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limitation. The code below sets the concurrency limitation to 4, which means at most 4 ajax calls can be in flight at the same time.

$("#upload_button_blob").click(function () {
    // check that the browser supports the HTML5 File API
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // along with the file name and the index list for later use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions in parallel with a concurrency limit of 4,
        // then invoke the commit ajax call to put the blocks into a blob in Azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: Read part of the file by specifying the start and end byte index.
- Blob: File-like interface which contains part of the file content.
- FormData: Temporary form element with which we can pass the chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limitation. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limitation value, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core 1.7GHz CPU, 1.75GB memory, 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.

Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel upload gives better performance than sequential upload.
- There is no significant performance difference between parallel with 4 threads and with 8 threads.
- Transferring chunks as binary gives better performance than as base64 strings.
- In all cases, a chunk size of 1MB - 2MB gives the best performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • C++ invalid reference problem

    - by Karol
Hi all, I'm writing some callback implementation in C++. I have an abstract callback class, let's say:

/** Abstract callback class. */
class callback {
public:
    /** Executes the callback. */
    void call() { do_call(); };

protected:
    /** Callback call implementation specific to derived callback. */
    virtual void do_call() = 0;
};

Each callback I create (accepting single-argument functions, double-argument functions...) is created as a mixin using one of the following:

/** Makes the callback a single-argument callback. */
template <typename T>
class singleArgumentCallback {
protected:
    /** Callback argument. */
    T arg;

public:
    /** Constructor. */
    singleArgumentCallback(T arg): arg(arg) { }
};

/** Makes the callback a double-argument callback. */
template <typename T, typename V>
class doubleArgumentCallback {
protected:
    /** Callback argument 1. */
    T arg1;
    /** Callback argument 2. */
    V arg2;

public:
    /** Constructor. */
    doubleArgumentCallback(T arg1, V arg2): arg1(arg1), arg2(arg2) { }
};

For example, a single-arg function callback would look like this:

/** Single-arg callbacks. */
template <typename T>
class singleArgFunctionCallback: public callback, protected singleArgumentCallback<T> {
    /** Callback. */
    void (*callbackMethod)(T arg);

public:
    /** Constructor. */
    singleArgFunctionCallback(void (*callback)(T), T argument):
        singleArgumentCallback<T>(argument), callbackMethod(callback) { }

protected:
    void do_call() {
        this->callbackMethod(this->arg);
    }
};

For user convenience, I'd like to have a method that creates a callback without having the user think about details, so that one can call (this interface is not subject to change, unfortunately):

void test3(float x) {
    std::cout << x << std::endl;
}

void test5(const std::string& s) {
    std::cout << s << std::endl;
}

make_callback(&test3, 12.0f)->call();
make_callback(&test5, "oh hai!")->call();

My current implementation of make_callback(...) is as follows:

/** Creates a callback object. */
template <typename T, typename U>
callback* make_callback(void (*callbackMethod)(T), U argument) {
    return new singleArgFunctionCallback<T>(callbackMethod, argument);
}

Unfortunately, when I call make_callback(&test5, "oh hai!")->call(); I get an empty string on the standard output. I believe the problem is that the reference gets out of scope after callback initialization. I tried using pointers and references, but it's impossible to have a pointer/reference to reference, so I failed. The only solution I had was to forbid substituting reference type as T (for example, T cannot be std::string&) but that's a sad solution since I have to create another singleArgCallbackAcceptingReference class accepting a function pointer with the following signature:

void (*callbackMethod)(T& arg);

thus, my code gets duplicated 2^n times, where n is the number of arguments of a callback function. Does anybody know any workaround or has any idea how to fix it? Thanks in advance!
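A minimal sketch of one possible workaround (not taken from the original thread, and assuming a C++11 compiler for <type_traits>): the literal "oh hai!" is converted to a temporary std::string that is destroyed at the end of the make_callback call, so a const std::string& member bound to it dangles. Storing the argument by value via std::decay avoids this while still allowing T to be a reference type. The names value_callback and make_value_callback below are hypothetical stand-ins for the question's classes.

#include <iostream>
#include <string>
#include <type_traits>

/** Hypothetical sketch: store the argument by value so the callback never
    keeps a reference to a temporary that is destroyed right after
    make_value_callback() returns. */
template <typename T>
class value_callback {
    /** The wrapped function, e.g. void(const std::string&). */
    void (*fn)(T);
    /** The argument stored by value: std::decay strips reference and const from T. */
    typename std::decay<T>::type arg;

public:
    value_callback(void (*f)(T), typename std::decay<T>::type a)
        : fn(f), arg(a) { }

    /** The stored value binds back to T (possibly a reference) at call time. */
    void call() { fn(arg); }
};

/** Factory: the char-array-to-std::string conversion happens here, while the
    argument is still alive, and the result is copied into the callback. */
template <typename T, typename U>
value_callback<T>* make_value_callback(void (*f)(T), const U& a) {
    return new value_callback<T>(f, a);
}

void test5(const std::string& s) { std::cout << s << std::endl; }

int main() {
    value_callback<const std::string&>* cb = make_value_callback(&test5, "oh hai!");
    cb->call();   // prints "oh hai!" instead of an empty string
    delete cb;
    return 0;
}

The same idea should carry over to the question's mixin design by letting singleArgumentCallback<T> hold a typename std::decay<T>::type member (or an equivalent hand-written remove-reference trait under C++03), which avoids a separate ...AcceptingReference variant for every argument position.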

    Read the article

  • How to execute scalar function using Enterprise Library?

    - by Vadim
I'm having trouble executing a scalar function using Enterprise Library 5.0. The code looks something like this: somedDb.ExecuteScalar(CommandType.Text, "SELECT dbo.MyFunction('param')"); When the code is executed, I get the following error: Cannot find either column "dbo" or the user-defined function or aggregate "dbo.MyFunction", or the name is ambiguous.

    Read the article

< Previous Page | 93 94 95 96 97 98 99 100 101 102 103 104  | Next Page >