Search Results

Search found 365 results on 15 pages for 'slice'.

Page 3 of 15

  • C++, name collision across different namespace

    - by aaa
    hello. I am baffled by the following name collision: namespace mp2 { boost::numeric::ublas::matrix_range<M> slice(M& m, const R1& r1, const R2& r2) { namespace ublas = boost::numeric::ublas; ublas::range r1_(r1.begin(), r1.end()), r2_(r2.begin(), r2.end()); return ublas::matrix_range<M>(m, r1_, r2_); } double energy(const Wavefunction &wf) { const Wavefunction::matrix& C = wf.coefficients(); int No = wf.occupied().size(); foreach (const Basis::MappedShell& P, basis.shells()) { slice(C, range(No), range(P)); The error from g++ 4.4 is: In file included from mp2.cpp:1: /usr/include/boost/numeric/ublas/fwd.hpp: In function 'double mp2::energy(const Wavefunction&)': /usr/include/boost/numeric/ublas/fwd.hpp:32: error: 'boost::numeric::ublas::slice' is not a function, ../../src/mp2/energy.hpp:98: error: conflict with 'template<class M, class R1, class R2> boost::numeric::ublas::matrix_range<M> mp2::slice(M&, const R1&, const R2&)' ../../src/mp2/energy.hpp:123: error: in call to 'slice' /usr/include/boost/numeric/ublas/fwd.hpp:32: error: 'boost::numeric::ublas::slice' is not a function, ../../src/mp2/energy.hpp:98: error: conflict with 'template<class M, class R1, class R2> boost::numeric::ublas::matrix_range<M> mp2::slice(M&, const R1&, const R2&)' ../../src/mp2/energy.hpp:129: error: in call to 'slice' make: *** [mp2.lo] Error 1 The ublas segment is: namespace boost { namespace numeric { namespace ublas { typedef basic_slice<> slice; Why does slice in ublas collide with slice in mp2? I am fairly certain there is no using namespace ublas in the code or in the includes. thank you
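
    The collision is most likely caused by argument-dependent lookup (ADL) rather than a stray using-directive: because the arguments of the unqualified call slice(...) are ublas types, the compiler also searches boost::numeric::ublas for the name, finds the typedef basic_slice<> slice, and reports the conflict. Below is a hedged sketch of two standard ways to keep ADL out of the call, reusing the question's own names:

        // Hedged sketch: either of the last two forms stops argument-dependent lookup
        // from considering boost::numeric::ublas::slice when resolving the call.
        slice(C, range(No), range(P));        // unqualified call: ADL also searches ublas
        mp2::slice(C, range(No), range(P));   // qualified call: ADL is not performed
        (slice)(C, range(No), range(P));      // parenthesized callee: ADL is disabled too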

    Read the article

  • How do I get a slice from an array reference?

    - by Sachin
    Let us say that we have the following array: my @arr=('Jan','Feb','Mar','Apr'); my @arr2=@arr[0..2]; How can we do the same thing if we have an array reference like the one below: my $arr_ref=['Jan','Feb','Mar','Apr']; my $arr_ref2; # How can we do something similar to @arr[0..2]; using $arr_ref ?
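
    A minimal sketch of the usual dereferencing forms (plain Perl 5; the last line is a postfix-dereference slice, which needs Perl 5.20 or later):

        # Hedged sketch: dereference the array ref, then apply the same [0..2] slice.
        my $arr_ref  = ['Jan', 'Feb', 'Mar', 'Apr'];
        my @arr2     = @{$arr_ref}[0..2];        # ('Jan', 'Feb', 'Mar')
        my $arr_ref2 = [ @{$arr_ref}[0..2] ];    # the same slice, kept as a new reference
        my @arr3     = $arr_ref->@[0..2];        # postfix-dereference form on recent perls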

    Read the article

  • Pie Charts Just Don't Work When Comparing Data - Number 10 of Top 10 Reasons to Never Ever Use a Pie

    - by Tony Wolfram
    When comparing data, which is what a pie chart is for, people have a hard time judging the angles and areas of the multiple pie slices in order to calculate how much bigger one slice is than the others. Pie Charts Don't Work A slice of pie is good for serving up a portion of dessert. It's not good for making a judgement about how big the slice is, what percentage of 100 it is, or how it compares to other slices. People have trouble comparing angles and areas to each other. Controlled studies show that people will overestimate the percentage that a pie slice area represents. This is because we have trouble calculating the area based on the space between the two angles that define the slice. This picture shows how a pie chart is useless in determining the largest value when you have to compare pie slices. You can't compare angles and slice areas to each other. Human perception and cognition are poor when viewing angles and areas and trying to make a mental comparison. Pie charts overload the working memory, forcing the person to make complicated calculations, and at the same time make a decision based on those comparisons. What's the point of showing a pie chart when you want to compare data, except to say, "well, the slices are almost the same, but I'm not really sure which one is bigger, or by how much, or what order they are from largest to smallest. But the colors sure are pretty. Plus, I like round things. Oh, was I supposed to make some important business decision? Sorry." Bad Choices and Bad Decisions Interaction Designers, Graphic Artists, Report Builders, Software Developers, and Executives have all made the decision to use pie charts in their reports, software applications, and dashboards. It was a bad decision. It was a poor choice. There are always better options and choices, yet the designer still made the decision to use a pie chart. I'll explore why people make such poor choices in my upcoming blog entries. (Hint: It has more to do with emotions than with analytical thinking.) I've outlined my opinions and arguments about the evils of using pie charts in "Countdown of Top 10 Reasons to Never Ever Use a Pie Chart." Each of my next 10 blog entries will support these arguments with illustrations, examples, and references to studies. But my goal is not to continuously and endlessly rage against the evils of using pie charts. This blog is not about pie charts. This blog is about understanding why designers choose to use a pie chart. Why, when given better alternatives, and acknowledging the shortcomings of pie charts, do designers over and over again still freely choose to place a pie chart in a report? As an extra treat and parting shot, check out the nice pie chart that Wikipedia uses to illustrate the United States population by state. Remember, somebody chose to use this pie chart, with all its glorious colors, and post it on Wikipedia for all the world to see. My next blog will give you a better alternative for displaying comparable data - the sorted bar chart.

    Read the article

  • Flex HttpService POST limited to 543 Byte per Form field?

    - by motto
    Hi, I am getting a FaultEvent when trying to send form fields through HTTPService that contain more than 542 chars. Initializing the HttpService: httpServ = new HTTPService(); httpServ.method = 'POST'; httpServ.url = ENDPOINT_URL; //http://localhost:3001/ReportError.aspx httpServ.resultFormat = HTTPService.RESULT_FORMAT_TEXT; httpServ.contentType = HTTPService.CONTENT_TYPE_FORM; httpServ.addEventListener(ResultEvent.RESULT, OnErrorSent); httpServ.addEventListener(FaultEvent.FAULT, OnFault); Sending the request: var params:Object = {}; //params["stack"] = e.stackTrace.slice(0, 542); //length 542 = works //params["stack2"] = e.stackTrace.slice(1, 543); //length 542 = works (just to show that it's not about the content itself) params["stack3"] = e.stackTrace.slice(0, 543); //length 543 = fails I also seem to be able to create many form fields (each of length 542), which suggests it is not a limit of the request itself but of the individual form field: var params:Object = {}; params["stack"] = e.stackTrace.slice(0, 542); //length 542 params["stack2"] = e.stackTrace.slice(1, 543); //length 542 params["stack3"] = e.stackTrace.slice(2, 544); //length 542 // Length > 1600 chars The receiving party is an ASP.NET 4 site on the same domain and port. I hope someone has already come across a similar restriction or has some general advice on how to trace this problem down further. Thanks in advance.

    Read the article

  • How do I simplify my code?

    - by Mitchell Skurnik
    I just finished creating my first major application in C#/Silverlight. In the end the total line count came out to over 12,000 lines of code. Considering this was a rewrite of a php/javascript application I created 2 years ago that was over 28,000 lines, I am actually quite proud of my accomplishment. After reading many questions and answers here on stackoverflow and other sites online, I followed many posters' advice: I created classes, procedures, and such for things that a year ago I would have copied and pasted; I created logic charts to figure out complex functions; I made sure there are no crazy hidden characters (used tabs instead of spaces); and a few other things; I placed comments where necessary (I have lots of comments). My application consists of 4 tiles laid out horizontally that have user controls loaded into each slice. You can have between one and four slices loaded at any time. If you have one slice loaded, the slice takes up the entire artboard... if you have 2 loaded, each takes up half, 3 a third, 4 a quarter. Each one of these slices represents (for the sake of this example) a light control. Each slice has 3 slider controls in it. Now when I coded the functionality of the sliders, I used a switch/case statement inside of a public function that would run the command on the specified slice/slider. That made for some duplicate code but I saw no way around it as each slice was named differently. So I would do slice1.my.commands(); slice2.my.commands(); etc. My question to you is how do I clean up my code even further? (Sadly I cannot post any of my code). Is there any way to take this repetition out of my code?
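
    One common way to remove that kind of per-name repetition is to keep the four slice controls in a collection and loop over however many are loaded, so the switch/case on the slice name disappears. A hedged sketch follows; SliceControl, the my.commands() call and the class name are assumptions standing in for the real types:

        // Hedged sketch with hypothetical type and member names.
        using System.Collections.Generic;

        public partial class LightBoard
        {
            // slice1..slice4 are the user controls already declared elsewhere (e.g. in XAML).
            private List<SliceControl> AllSlices()
            {
                return new List<SliceControl> { slice1, slice2, slice3, slice4 };
            }

            // Runs the same command on every currently loaded slice, replacing the switch/case.
            public void RunCommands(int loadedCount)
            {
                List<SliceControl> slices = AllSlices();
                for (int i = 0; i < loadedCount && i < slices.Count; i++)
                {
                    slices[i].my.commands();   // the call the switch/case used to repeat per slice name
                }
            }
        }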

    Read the article

  • How can I create proportionally-sized pie charts side-by-side in Excel 2007?

    - by Andrew Doran
    I have a pivot table with two sets of data as follows: 2011 2012 Slice A 45 20 Slice B 33 28 Slice C 22 2 I am trying to present two pie charts side-by-side, one with the 2011 data and one with the 2012 data. I want the relative size of each pie chart to reflect the totals, i.e. the pie chart with the 2011 data (totalling 100) should be twice the size of the pie chart with the 2012 data (totalling 50). The 'pie of pie' chart type seems to be closest to what I am looking for but this breaks out data from one slice and presents it in a second diagram so it isn't appropriate here.

    Read the article

  • Languages like Tcl that have configurable syntax?

    - by boost
    I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl, namely add functionality in a way other than just adding functions. For example in Clipper/(x)Harbour there are commands #command, #translate, #xcommand and #xtranslate that allow things like this: #xcommand REPEAT; => DO WHILE .T. #xcommand UNTIL <cond>; => IF (<cond>); ;EXIT; ;ENDIF; ;ENDDO LOCAL n := 1 REPEAT n := n + 1 UNTIL n > 100 Similarly, in Tcl I'm doing proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} { set fromDate [clock scan $dat1] set toDate [clock scan $dat2] if {$slice eq "day"} then {set incrementor [expr 24 * 60]} if {$slice eq "hour"} then {set incrementor 60} set method DateRange puts "Scanning from [clock format $fromDate -format "%c"] to [clock format $toDate -format "%c"] by $slice" for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} { # ... } } process_range for "client" from "2013-10-18 00:00" to "2013-10-20 23:59" by day Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks.

    Read the article

  • QT- QImage and multi-threading problem.

    - by umanga
    Greetings all, Please refer to the image at: http://i48.tinypic.com/316qb78.jpg We are developing an application to extract cell edges from MRC images from an electron microscope. The MRC file format stores volumetric pixel data (http://en.wikipedia.org/wiki/Voxel) and we simply use a 3D char array (char***) to load and store the data (gray scale values) from an MRC file. As shown in the image, there are 3 viewers to display the XY, YZ and ZX planes respectively. Scrollbars on top of the viewers are used to change the image slice along an axis. Here are the steps we follow when the user changes the scrollbar position. 1) get the new scrollbar value (this is the selected slice) 2) for the relevant plane (YZ, XY or ZX), generate a (char* slice;) array for the selected slice by reading the 3D char array (char***) 3) create a new QImage* (Format_RGB888) and set pixel values by reading 'slice' (using img->setPixel(x,y,c);) 4) this new QImage* is painted in the paintEvent() method. We are going to execute the "edge-detection" process in a separate thread since it is an intensive process. During this process we need to draw the detected curve (a set of pixels) on top of the above QImage* (as a layer). This means we need to call drawPoint() methods outside the Qt GUI thread. Is QImage the best choice for this case? What is the best way to execute Qt drawing methods from another thread? thanks in advance,
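
    One common pattern (a hedged sketch, not verified against this exact application) is to let the worker thread paint into its own QImage copy, since QPainter may be used on a QImage outside the GUI thread, and then hand the finished image back through a queued signal so that only the GUI thread ever touches widgets. The class, slot and signal names below are assumptions:

        // Hedged sketch: heavy edge detection runs in a worker object moved to a QThread;
        // it paints the detected curve onto a QImage copy and emits the result when done.
        #include <QObject>
        #include <QImage>
        #include <QPainter>

        class EdgeWorker : public QObject {
            Q_OBJECT
        public slots:
            void detectEdges(QImage slice) {          // QImage is implicitly shared, cheap to pass
                QPainter painter(&slice);             // painting on a QImage is allowed off the GUI thread
                painter.setPen(Qt::red);
                // ... run edge detection, painter.drawPoint(x, y) for each curve pixel ...
                emit finished(slice);
            }
        signals:
            void finished(const QImage &result);
        };

        // In the viewer (GUI thread), a queued connection delivers the result safely:
        // connect(worker, SIGNAL(finished(QImage)), viewer, SLOT(setOverlay(QImage)),
        //         Qt::QueuedConnection);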

    Read the article

  • Odd Suhosin memory alerts

    - by slice
    I am getting a lot of odd suhosin alerts in my syslog. The following are example entries: Jun 9 08:46:11 suhosin[9764]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '157.55.39.180', file '/var/www/site/index.php') Jun 9 08:46:11 suhosin[9744]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker '109.74.2.136', file '/var/www/site/test.php') Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 0 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown') Jun 9 08:46:13 suhosin[9779]: ALERT - script tried to increase memory_limit to 2145386496 bytes which is above the allowed value (attacker 'REMOTE_ADDR not set', file 'unknown') What is happening here? Why 0 bytes or 2145386496 bytes (about 2 GB)? Why does it sometimes state the attacker and the requested script and sometimes state 'REMOTE_ADDR not set' and file 'unknown'? How do I proceed to figure this out?

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blobs and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock, and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is a very powerful way to upload large files: we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, which means once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal is to find the best way to transfer the file from the client (the end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • codility challenge, test case OK , Evaluation report Wrong Answer

    - by Hussein Fawzy
    The aluminium 2014 challenge gives me a wrong answer: for [3, 9, -6, 7, -3, 9, -6, -10] it got 25, expected 28. But when I repeated the challenge with the same code and ran it as my own test case it gives the correct answer: Your test case [3, 9, -6, 7, -3, 9, -6, -10]: NO RUNTIME ERRORS (returned value: 28). What is wrong with it? The challenge: A non-empty zero-indexed array A consisting of N integers is given. A pair of integers (P, Q), such that 0 ≤ P ≤ Q < N, is called a slice of array A. The sum of a slice (P, Q) is the total of A[P] + A[P+1] + ... + A[Q]. The maximum sum is the maximum sum of any slice of A. For example, consider array A such that: A[0] = 3 A[1] = 2 A[2] = -6 A[3] = 3 A[4] = 1 For example (0, 1) is a slice of A that has sum A[0] + A[1] = 5. This is the maximum sum of A. You can perform a single swap operation in array A. This operation takes two indices I and J, such that 0 ≤ I ≤ J < N, and exchanges the values of A[I] and A[J]. The goal is to find the maximum sum you can achieve after performing a single swap. For example, after swapping elements 2 and 4, you will get the following array A: A[0] = 3 A[1] = 2 A[2] = 1 A[3] = 3 A[4] = -6 After that, (0, 3) is a slice of A that has the sum A[0] + A[1] + A[2] + A[3] = 9. This is the maximum sum of A after a single swap. Write a function: class Solution { public int solution(int[] A); } that, given a non-empty zero-indexed array A of N integers, returns the maximum sum of any slice of A after a single swap operation. For example, given: A[0] = 3 A[1] = 2 A[2] = -6 A[3] = 3 A[4] = 1 the function should return 9, as explained above. And my code is: import java.math.*; class Solution { public int solution(int[] A) { if(A.length == 1) return A[0]; else if (A.length==2) return A[0]+A[1]; else{ int finalMaxSum = A[0]; for (int l=0 ; l<A.length ; l++){ for (int k = l+1 ; k<A.length ; k++ ){ int [] newA = A; int temp = newA[l]; newA [l] = newA[k]; newA[k]=temp; int maxSum = newA[0]; int current_max = newA[0]; for(int i = 1; i < newA.length; i++) { current_max = Math.max(A[i], current_max + newA[i]); maxSum = Math.max(maxSum, current_max); } finalMaxSum = Math.max(finalMaxSum , maxSum); } } return finalMaxSum; } } } I don't know what's wrong with it.
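
    A hedged observation, not a verified diagnosis of the Codility result: `int[] newA = A;` copies the reference rather than the contents, so every trial swap mutates the original array across loop iterations, and the inner loop also mixes A[i] with newA[i]. A sketch of the suspect section with a defensive copy and consistent indexing, assuming the rest of the loops stay the same:

        // Hedged sketch of the inner part of the double loop only.
        int[] newA = java.util.Arrays.copyOf(A, A.length); // real copy: each swap trial stays independent
        int temp = newA[l];
        newA[l] = newA[k];
        newA[k] = temp;

        int maxSum = newA[0];
        int currentMax = newA[0];
        for (int i = 1; i < newA.length; i++) {
            currentMax = Math.max(newA[i], currentMax + newA[i]); // use newA consistently, not A
            maxSum = Math.max(maxSum, currentMax);
        }
        finalMaxSum = Math.max(finalMaxSum, maxSum);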

    Read the article

  • matrix multiplication with MPI [on hold]

    - by user3695701
    I'm working on an assignment on matrix multiplication with MPI. A*B=C. the requirement is that B should be vertically partitioned. Here's what I intend to do: broadcast matrix A to all processes and scatter B into several slices with each slice containing n/p columns. The following code only works when the number of process(p) is 1. when p1(say 2), I got [cluster2:21080] *** Process received signal *** [cluster2:21080] Signal: Segmentation fault (11) [cluster2:21080] Signal code: Address not mapped (1) [cluster2:21080] Failing at address: (nil) [cluster2:21080] [ 0] /lib/libpthread.so.0(+0xf8f0) [0x7f49f38108f0] [cluster2:21080] [ 1] /lib/libc.so.6(memcpy+0xe1) [0x7f49f35024c1] [cluster2:21080] [ 2] /usr/lib/libmpi.so.0(ompi_convertor_unpack+0x121)[0x7f49f47c88e1] [cluster2:21080] [ 3] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x8a26) [0x7f49f0dcea26] [cluster2:21080] [ 4] /usr/lib/openmpi/lib/openmpi/mca_btl_tcp.so(+0x662c) [0x7f49efce462c] [cluster2:21080] [ 5] /usr/lib/libopen-pal.so.0(+0x1ede8) [0x7f49f42e0de8] [cluster2:21080] [ 6] /usr/lib/libopen-pal.so.0(opal_progress+0x99) [0x7f49f42d5369] [cluster2:21080] [ 7] /usr/lib/openmpi/lib/openmpi/mca_pml_ob1.so(+0x5585) [0x7f49f0dcb585] [cluster2:21080] [ 8] /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so(+0xcc01) [0x7f49eeeb1c01] [cluster2:21080] [ 9] /usr/lib/openmpi/lib/openmpi/mca_coll_tuned.so(+0x266c) [0x7f49eeea766c] [cluster2:21080] [10] /usr/lib/openmpi/lib/openmpi/mca_coll_sync.so(+0x1388) [0x7f49ef0c0388] [cluster2:21080] [11] /usr/lib/libmpi.so.0(MPI_Bcast+0x10e) [0x7f49f47d025e] [cluster2:21080] [12] ./out(main+0x259) [0x401571] [cluster2:21080] [13] /lib/libc.so.6(__libc_start_main+0xfd) [0x7f49f3498c8d] [cluster2:21080] [14] ./out() [0x400f29] [cluster2:21080] *** End of error message *** Can someone help me? Thanks. //matrices A and B //double* A =(double *)malloc(n*n*sizeof(double)); //double* B =(double *)malloc(n*n*sizeof(double)); //code initializing A,B... //n is the size of the matrix //p is the number of processes //myrank is the rank of calling process MPI_Init (&argc, &argv); MPI_Comm_rank(MPI_COMM_WORLD, &myrank); MPI_Comm_size(MPI_COMM_WORLD, &p); //broadcast A to all processes MPI_Bcast (A, n*n, MPI_DOUBLE, 0, MPI_COMM_WORLD); MPI_Datatype tmp_type, col_type; // extract a slice from B MPI_Type_vector(n, num_of_col_per_slice, n, MPI_DOUBLE, &tmp_type); // position of the first (0) and each next (stride * sizeof(double) ) slice MPI_Type_create_resized(tmp_type, 0, n * sizeof(double), &col_type); MPI_Type_commit(&col_type); //scatter a slice of B to each process MPI_Scatter(B, 1, col_type, B+myrank*n/p, n * n/p, MPI_DOUBLE, 0, MPI_COMM_WORLD); //use blas function to calculate A*sliceOfB and store the resulting slice to C cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans, n, n/p, n, 1.0, A, n, B+myrank*n/p, n, 0.0, C+myrank*n/p, n); //gather all those resulting slices into C MPI_Gather (C+myrank*n/p, n*n/p, MPI_DOUBLE, C, n*n/p, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    Read the article

  • How to have the RDLC pie chart show ranges instead of individual values

    - by Emad
    I have some data coming from a custom dataset that looks like the following example: Person Age: P1 20, P2 21, P3 30, P4 31, P5 40. I want to develop a pie chart showing the age distribution. The point is that I want the Age to be shown in ranges (20-29, 30-39, etc. for example). So we would have: a slice with total = 2 (P1 + P2) for ages 20-29 (one at 20 and another at 21); a slice with total = 2 (P3 + P4) for ages 30-39 (one at 30 and another at 31); a slice with total = 1 (P5) for age 40 (one at 40). How can I customize the pie chart to aggregate values by ranges that I can define?
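
    One common approach (shown below as a hedged sketch; the field names are assumptions) is to group the chart's category on a bucketing expression instead of on the raw Age field. RDLC expressions use VB syntax; Switch returns the label of the first condition that is true, and the value expression can then stay a simple Count of the person field:

        =Switch(
            Fields!Age.Value <= 29, "20-29",
            Fields!Age.Value <= 39, "30-39",
            True, "40+"
        )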

    Read the article

  • process thread scheduling

    - by arvind
    I have the following questions regarding the scheduling of process threads. a) If my process A has 3 threads, can these threads be scheduled concurrently on different CPUs in an SMP machine, or will they be given time slices on the same CPU? b) Suppose I have process A with 3 threads and process B with 2 threads (all threads have the same priority); is the CPU time allocated to each thread (the time slice) dependent on the number of threads in the process or not? Correct me if I am wrong: is it the case that CPU time is allocated to the process and then shared among its threads, i.e. the time slice given to process A's threads is smaller than that given to process B's threads?

    Read the article

  • PHP and Django: Nginx, FastCGI and Green Unicorn?

    - by littlejim84
    I'm curious... I'm looking to have a really efficient setup for my slice for a client. I'm not an expert with servers and so am looking for good solid resources to help me set this up... It's been recommended to me that using FastCGI for PHP, Green Unicorn (gunicorn) for Django and Nginx for media is a good combination for having PHP and Django running on the same slice/server. This is needed because there is a main Django website and admin, but also a PHP forum on there too. Could anyone point me to some useful resources that would help me set this up on my slice? Or at least, any views or comments on this particular setup?
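
    For orientation, here is a hedged sketch of what such a single server block might look like; all paths, ports and the server name are assumptions, and the PHP location assumes a local FastCGI backend (e.g. php-fpm) listening on port 9000:

        # Hedged sketch: Nginx serves media itself, proxies Django to gunicorn,
        # and hands the PHP forum to a FastCGI backend. Paths and ports are placeholders.
        server {
            listen 80;
            server_name example.com;

            location /media/ {
                alias /srv/site/media/;                 # static and media files served directly
            }

            location /forum/ {
                include fastcgi_params;                 # standard FastCGI variables
                fastcgi_pass 127.0.0.1:9000;            # php-fpm (or spawned php-cgi)
                fastcgi_param SCRIPT_FILENAME /srv/forum$fastcgi_script_name;
            }

            location / {
                proxy_pass http://127.0.0.1:8000;       # gunicorn running the Django app
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }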

    Read the article

  • Click edit button twice in gridview asp.net c# issue

    - by Supriyo Banerjee
    I have a gridview created on a page where I want to provide an edit button for the user to click in. However the issue is the grid view row becomes editable only while clicking the edit button second time. Not sure what is going wrong here, any help would be appreciated. One additional point is my grid view is displayed on the page only on a click of a button and is not there on page_load event hence. Posting the code snippets: //MY Aspx code <Columns> <asp:TemplateField HeaderText="Slice" SortExpression="name"> <ItemTemplate> <asp:Label ID="lblslice" Text='<%# Eval("slice") %>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lblslice" Text='<%# Eval("slice") %>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Metric" SortExpression="Description"> <ItemTemplate> <asp:Label ID="lblmetric" Text='<%# Eval("metric")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lblmetric" Text='<%# Eval("metric")%>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Original" SortExpression="Type"> <ItemTemplate> <asp:Label ID="lbloriginal" Text='<%# Eval("Original")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lbloriginal" Text='<%# Eval("Original")%>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="WOW" SortExpression="Market"> <ItemTemplate> <asp:Label ID="lblwow" Text='<%# Eval("WOW")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:Label ID="lblwow" Text='<%# Eval("WOW")%>' runat="server"></asp:Label> </EditItemTemplate> </asp:TemplateField> <asp:TemplateField HeaderText="Change" SortExpression="Market" > <ItemTemplate> <asp:Label ID="lblChange" Text='<%# Eval("Change")%>' runat="server"></asp:Label> </ItemTemplate> <EditItemTemplate> <asp:TextBox ID="TxtCustomerID" Text='<%# Eval("Change") %> ' runat="server"></asp:TextBox> </EditItemTemplate> </asp:TemplateField> <asp:CommandField HeaderText="Edit" ShowEditButton="True" /> </Columns> </asp:GridView> //My code behind: protected void Page_Load(object sender, EventArgs e) { } public void populagridview1(string slice,string fromdate,string todate,string year) { SqlCommand cmd; SqlDataAdapter da; DataSet ds; cmd = new SqlCommand(); cmd.CommandType = CommandType.StoredProcedure; cmd.CommandText = "usp_geteventchanges"; cmd.Connection = conn; conn.Open(); SqlParameter param1 = new SqlParameter("@slice", slice); cmd.Parameters.Add(param1); SqlParameter param2 = new SqlParameter("@fromdate", fromdate); cmd.Parameters.Add(param2); SqlParameter param3 = new SqlParameter("@todate", todate); cmd.Parameters.Add(param3); SqlParameter param4 = new SqlParameter("@year", year); cmd.Parameters.Add(param4); da = new SqlDataAdapter(cmd); ds = new DataSet(); da.Fill(ds, "Table"); GridView1.DataSource = ds; GridView1.DataBind(); conn.Close(); } protected void ImpactCalc(object sender, EventArgs e) { populagridview1(ddl_slice.SelectedValue, dt_to_integer(Picker1.Text), dt_to_integer(Picker2.Text), Txt_Year.Text); } protected void GridView1_RowEditing(object sender, GridViewEditEventArgs e) { gvEditIndex = e.NewEditIndex; Gridview1.DataBind(); } My page layout This edit screen appears after clicking edit twice.. the grid view gets displayed on hitting the Calculate impact button. The data is from a backend stored procedure which is fired on clicking the Calculate impact button
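
    A hedged sketch, not a verified fix, of the usual shape of a RowEditing handler, reusing the question's own method and control names: the grid generally needs its EditIndex set and then to be re-bound from the same data source before the row will render in edit mode on the first click.

        // Hedged sketch: set EditIndex, then rebind with the same parameters used by ImpactCalc.
        protected void GridView1_RowEditing(object sender, GridViewEditEventArgs e)
        {
            GridView1.EditIndex = e.NewEditIndex;   // put the clicked row into edit mode
            populagridview1(ddl_slice.SelectedValue,
                            dt_to_integer(Picker1.Text),
                            dt_to_integer(Picker2.Text),
                            Txt_Year.Text);          // rebind so the EditItemTemplates render
        }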

    Read the article

  • explanation about prototype.js function binding code

    - by resopollution
    From: http://ejohn.org/apps/learn/#2 Function.prototype.bind = function(){ var fn = this, args = Array.prototype.slice.call(arguments), object = args.shift(); return function(){ return fn.apply(object, args.concat(Array.prototype.slice.call(arguments))); }; }; Can anyone tell me why the second return is necessary (before fn.apply)? Also, can anyone explain why args.concat is necessary? Why wouldn't it be re-written as: fn.apply(object, args) instead of return fn.apply(object, args.concat(Array.prototype.slice.call(arguments)));
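
    A small usage sketch (the names are made up) may help separate the two questions: the outer return hands back the new bound function, the inner return passes the wrapped function's result through to the caller, and args.concat(...) merges the arguments bound up front with the ones supplied at call time, which a plain fn.apply(object, args) would drop.

        // Hedged sketch of how the bind() above behaves.
        function greet(greeting, punctuation) {
            return greeting + ", " + this.name + punctuation;
        }

        var person = { name: "Ada" };
        var hello = greet.bind(person, "Hello");  // context object + one pre-bound argument

        // "Hello" comes from the bound args, "!" from the call-time args thanks to concat;
        // without the inner return, result would be undefined instead of the greeting.
        var result = hello("!");                  // "Hello, Ada!"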

    Read the article

  • Creating a Dynamic calendar in Silverlight

    - by Tom
    I am attempting to create a long range calendar that dynamically loads (and unloads) event data as the user scrolls left or right through time. I'm really struggling to figure out how to lay the basic framework of the UI out and how to dynamically build the interface as the user scrolls by clicking and dragging the mouse in the view area. See the image below for a basic diagram of the intent. Each slice would have potentially multiple rectangles in it for events that occurred on that day (slice). I would like each slice to be a canvas to allow me to position those rectangles appropriately. There are a few problems that I am not yet sure how to tackle but this is the first big one that I've been mulling over for a while and can't quite wrap my head around: I know how to dynamically create controls but how would I go about adding things to one end of the scrollable content while removing things from the other depending on the way the user is scrolling? Any guidance in the right direction would be much appreciated! Thanks.

    Read the article

  • Architecture of interaction modes ("paint tools") for a 3D paint program

    - by Bernhard Kausler
    We are developing a Qt-based application to navigate through and paint on a volume treated as a 3D pixel graphic. The layout of the app consists of three orthogonal slice views on which the user may paint things like dots, circles etc. and also erase already painted pixels. Think of a 3D Gimp or MS Paint. How would you design the architecture for the different interaction modes (i.e. paint tools)? My idea is: use the MVC pattern; have a separate controller for every interaction mode; install an event filter on all three slice views to collect all incoming user interaction events (mouse, keyboard); and redirect the events to the currently active interaction controller. I would appreciate critical comments on that idea.
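
    A hedged sketch of one way that idea could take shape (the class and method names are assumptions, not the application's actual API): each paint tool becomes a small controller behind a common interface, and the event filter simply forwards the filtered events to whichever controller is currently active.

        // Hedged sketch: every interaction mode implements the same interface,
        // so the slice views' event filter only needs a pointer to the active one.
        #include <QPoint>

        class SliceView;  // one of the three orthogonal slice views (forward declaration)

        class InteractionMode {
        public:
            virtual ~InteractionMode() {}
            virtual void mousePress(SliceView &view, const QPoint &voxel) = 0;
            virtual void mouseMove(SliceView &view, const QPoint &voxel) = 0;
            virtual void mouseRelease(SliceView &view, const QPoint &voxel) = 0;
        };

        class BrushMode  : public InteractionMode { /* writes dots/circles into the volume model */ };
        class EraserMode : public InteractionMode { /* clears painted voxels in the model */ };

        // In the event filter: activeMode->mousePress(view, mapToVoxel(event->pos()));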

    Read the article

  • Draw Bug 2D player Camera

    - by RedShft
    I have just implemented a 2D player camera for my game, everything works properly except the player on the screen jitters when it moves between tiles. What I mean by jitter, is that if the player is moving the camera updates the tileset to be drawn and if the player steps to the right, the camera snaps that way. The movement is not smooth. I'm guessing this is occurring because of how I implemented the function to calculate the current viewable area or how my draw function works. I'm not entirely sure how to fix this. This camera system was entirely of my own creation and a first attempt at that, so it's very possible this is not a great way of doing things. My camera class, pulls information from the current tileset and calculates the viewable area. Right now I am targettng a resolution of 800 by 600. So I try to fit the appropriate amount of tiles for that resolution. My camera class, after calculating the current viewable tileset relative to the players location, returns a slice of the original tileset to be drawn. This tileset slice is updated every frame according to the players position. This slice is then passed to the map class, which draws the tile on screen. //Map Draw Function //This draw function currently matches the GID of the tile to it's location on the //PNG file of the tileset and then draws this portion on the screen void Draw(SDL_Surface* background, int[] _tileSet) { enforce( tilesetImage != null, "Tileset is null!"); enforce( background != null, "BackGround is null!"); int i = 0; int j = 0; SDL_Rect DestR, SrcR; SrcR.x = 0; SrcR.y = 0; SrcR.h = 32; SrcR.w = 32; foreach(tile; _tileSet) { //This code is matching the current tiles ID to the tileset image SrcR.x = cast(short)(tileWidth * (tile >= 11 ? (tile - ((tile / 10) * 10) - 1) : tile - 1)); SrcR.y = cast(short)(tileHeight * (tile > 10 ? 
(tile / 10) : 0)); //Applying the tile to the surface SDL_BlitSurface( tilesetImage, &SrcR, background, &DestR ); //this keeps track of what column/row we are on i++; if ( i == mapWidth ) { i = 0; j++; } DestR.x = cast(short)(i * tileWidth); DestR.y = cast(short)(j * tileHeight); } } //Camera Class class Camera { private: //A rectangle representing the view area SDL_Rect viewArea; //In number of tiles int viewAreaWidth; int viewAreaHeight; //This is the x and y coordinate of the camera in MAP SPACE IN PIXELS vect2 cameraCoordinates; //The player location in map space IN PIXELS vect2 playerLocation; //This is the players location in screen space; vect2 playerScreenLoc; int playerTileCol; int playerTileRow; int cameraTileCol; int cameraTileRow; //The map is stored in a single array with the tile ids //this corresponds to the index of the starting and ending tile int cameraStartTile, cameraEndTile; //This is a slice of the current tile set int[] tileSetCopy; int mapWidth; int mapHeight; int tileWidth; int tileHeight; public: this() { this.viewAreaWidth = 25; this.viewAreaHeight = 19; this.cameraCoordinates = vect2(0, 0); this.playerLocation = vect2(0, 0); this.viewArea = SDL_Rect (0, 0, 0, 0); this.tileWidth = 32; this.tileHeight = 32; } void Init(vect2 playerPosition, ref int[] tileSet, int mapWidth, int mapHeight ) { playerLocation = playerPosition; this.mapWidth = mapWidth; this.mapHeight = mapHeight; CalculateCurrentCameraPosition( tileSet, playerPosition ); //writeln( "Tile Set Copy: ", tileSetCopy ); //writeln( "Orginal Tile Set: ", tileSet ); } void CalculateCurrentCameraPosition( ref int[] tileSet, vect2 playerPosition ) { playerLocation = playerPosition; playerTileCol = cast(int)((playerLocation.x / tileWidth) + 1); playerTileRow = cast(int)((playerLocation.y / tileHeight) + 1); //writeln( "Player Tile (Column, Row): ","(", playerTileCol, ", ", playerTileRow, ")"); cameraTileCol = playerTileCol - (viewAreaWidth / 2); cameraTileRow = playerTileRow - (viewAreaHeight / 2); CameraMapBoundsCheck(); //writeln( "Camera Tile Start (Column, Row): ","(", cameraTileCol, ", ", cameraTileRow, ")"); cameraStartTile = ( (cameraTileRow - 1) * mapWidth ) + cameraTileCol - 1; //writeln( "Camera Start Tile: ", cameraStartTile ); cameraEndTile = cameraStartTile + ( viewAreaWidth * viewAreaHeight ) * 2; //writeln( "Camera End Tile: ", cameraEndTile ); tileSetCopy = tileSet[cameraStartTile..cameraEndTile]; } vect2 CalculatePlayerScreenLocation() { cameraCoordinates.x = cast(float)(cameraTileCol * tileWidth); cameraCoordinates.y = cast(float)(cameraTileRow * tileHeight); playerScreenLoc = playerLocation - cameraCoordinates + vect2(32, 32);; //writeln( "Camera Coordinates: ", cameraCoordinates ); //writeln( "Player Location (Map Space): ", playerLocation ); //writeln( "Player Location (Screen Space): ", playerScreenLoc ); return playerScreenLoc; } void CameraMapBoundsCheck() { if( cameraTileCol < 1 ) cameraTileCol = 1; if( cameraTileRow < 1 ) cameraTileRow = 1; if( cameraTileCol + 24 > mapWidth ) cameraTileCol = mapWidth - 24; if( cameraTileRow + 19 > mapHeight ) cameraTileRow = mapHeight - 19; } ref int[] GetTileSet() { return tileSetCopy; } int GetViewWidth() { return viewAreaWidth; } }
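    For reference while thinking about the jitter: the snapping usually comes from positioning everything on whole-tile boundaries, so the view can only ever move a full tile at a time. A minimal sketch of the sub-tile-offset idea (plain Python pseudocode, not the original D; the function names are assumptions, though the 32-pixel tile size matches the code above): keep the camera position in pixels, derive the starting tile from it, and shift every destination rectangle by the leftover remainder so the map scrolls pixel by pixel.

        TILE = 32  # tile size in pixels, as in the code above

        def camera_tiles_and_offset(player_x, player_y, view_w_tiles, view_h_tiles):
            # Camera centred on the player, measured in pixels rather than tiles.
            cam_x = player_x - (view_w_tiles * TILE) // 2
            cam_y = player_y - (view_h_tiles * TILE) // 2
            # The tile the view starts on...
            start_col, start_row = cam_x // TILE, cam_y // TILE
            # ...and how far into that tile the camera sits (the sub-tile remainder).
            offset_x, offset_y = cam_x % TILE, cam_y % TILE
            return start_col, start_row, offset_x, offset_y

        def dest_rect(col_in_view, row_in_view, offset_x, offset_y):
            # Every blit is shifted left/up by the remainder, so movement is smooth
            # instead of snapping a whole tile at a time.
            return (col_in_view * TILE - offset_x,
                    row_in_view * TILE - offset_y,
                    TILE, TILE)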

    Read the article

  • How to automatically generate html table from image in Linux?

    - by alfish
    In Photoshop, you can easily devide the image into zones using point and click and it automatically generates the corresponding html with image slices addressed in tables. Gimp also has a Slice (Filter Web Slice) but it is so rudimentary and, as far as I can see, does not allow point and click selection of slices. I am wondering if the functionality can be added into Gimp, or there are other Linux software to do this. I hate to return to Windows jut to do this simple task which I happen to use frequently. Thanks in advance for your suggestions.

    Read the article

  • Ubuntu server crashes; need help figuring out why

    - by neezer
    I have a 768 Slice at slicehost.com running Ubuntu Server 8.04.2 LTS (hardy) with a LAMP stack on it that periodically crashes, though why I am not sure. From what I can tell, there is a process that basically goes rogue and consumes all the memory on the slice, suffocating all the other programs running until the whole thing comes to a grinding halt, and I have to do a hard reboot of the slice to get it back up and running again. I can't detect any pattern for this (it seems to happen about once a month, more or less). Here's a screenshot of my console during the last crash: I would assume that a possible cause might a PHP script or an apache configuration rule that might cause the crash if triggered? How would I be able to find out which one is the offending one? I've checked and rechecked all my PHP scripts, and running them doesn't seem to trigger the crash. I've also been able to log on to my system during a crash and see what's running (with top), but I can't tell how the offending process was started, so I can't trace the root of the problem! I know my description is overly generic, but unfortunately my expertise in tracking down the source of these glitches is very limited. If you need any additional information about my system in order to help me figure this out, please let me know in the comments, and I will append it to the question. My only other lead as to the culprit here is Wordpress, which we have installed on this server. Here are the details: Wordpress 3.0.3 with the following plugins installed and activated: Addmarx - Bookmark/Share/Email Dropdown, Akismet, All in One SEO Pack, Animated Banners, Automatically publish highlights of any website, directly to your Blog, Broken Link Checker, CMS Dashboard, Collapsing Categories, Status Updater, SubHeading, Ultimate Google Analytics, VastSubCat, WP-CMS Post Control, and WP Super Cache

    Read the article

  • How can I partition a vector?

    - by Karsten W.
    How can I build a function slice(x, n=2) which would return a list of vectors where each vector except maybe the last has size n, i.e. slice(letters, 10) would return list(c("a", "b", "c", "d", "e", "f", "g", "h", "i", "j"), c("k", "l", "m", "n", "o", "p", "q", "r", "s", "t"), c("u", "v", "w", "x", "y", "z")) ?

    Read the article

  • Problems with Merb on Snow Leopard

    - by hamhoagie
    I've recently started looking at Merb, for use with some small projects around the office. I'm trying to set up my first project following the docs, and am encountering an exception such as: foo:beta user$ merb Merb root at: /Users/user/code/merb/beta Loading init file from ./config/init.rb Loading ./config/environments/development.rb ~ Connecting to database... ~ Loaded slice 'MerbAuthSlicePassword' ... ~ Parent pid: 39794 ~ Compiling routes... ~ Activating slice 'MerbAuthSlicePassword' ... ~ ~ FATAL: Mongrel is not installed, but you are trying to use it. You need to either install mongrel or a different Ruby web server, like thin. I have installed Mongrel from gem as well as from MacPorts, and am confused by this exception. Significant stats: ruby 1.8.7 (2010-01-10 patchlevel 249) [i686-darwin10] From my installed gems: merb (1.1.0) merb-action-args (1.1.0) merb-assets (1.1.0) merb-auth (1.1.0) merb-auth-core (1.1.0) merb-auth-more (1.1.0) merb-auth-slice-password (1.1.0) merb-cache (1.1.0) merb-core (1.1.0) merb-exceptions (1.1.0) merb-gen (1.1.0) merb-haml (1.1.0) merb-helpers (1.1.0) merb-mailer (1.1.0) merb-param-protection (1.1.0) merb-slices (1.1.0) merb_datamapper (1.1.0) mongrel (1.1.5) Merb documentation is non-existent, so I find myself stuck. Thanks in advance.

    Read the article

  • Generate syntax tree for simple math operations

    - by M28
    I am trying to generate a syntax tree, for a given string with simple math operators (+, -, *, /, and parenthesis). Given the string "1 + 2 * 3": It should return an array like this: ["+", [1, ["*", [2,3] ] ] ] I made a function to transform "1 + 2 * 3" in [1,"+",2,"*",3]. The problem is: I have no idea to give priority to certain operations. My code is: function isNumber(ch){ switch (ch) { case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': case '.': return true; break; default: return false; break; } } function generateSyntaxTree(text){ if (typeof text != 'string') return []; var code = text.replace(new RegExp("[ \t\r\n\v\f]", "gm"), ""); var codeArray = []; var syntaxTree = []; // Put it in its on scope (function(){ var lastPos = 0; var wasNum = false; for (var i = 0; i < code.length; i++) { var cChar = code[i]; if (isNumber(cChar)) { if (!wasNum) { if (i != 0) { codeArray.push(code.slice(lastPos, i)); } lastPos = i; wasNum = true; } } else { if (wasNum) { var n = Number(code.slice(lastPos, i)); if (isNaN(n)) { throw new Error("Invalid Number"); return []; } else { codeArray.push(n); } wasNum = false; lastPos = i; } } } if (wasNum) { var n = Number(code.slice(lastPos, code.length)); if (isNaN(n)) { throw new Error("Invalid Number"); return []; } else { codeArray.push(n); } } })(); // At this moment, codeArray = [1,"+",2,"*",3] return syntaxTree; } alert('Returned: ' + generateSyntaxTree("1 + 2 * 3"));

    Read the article
