Search Results

Search found 6580 results on 264 pages for 'boost foreach'.


  • (PHP) Validation, Security and Speed - Does my app have these?

    - by Devner
    Hi all, I am currently working on building a community website in PHP. It contains forms that a user can fill in, from registration through to a lot of other functionality. I am not an object-oriented guy, so I am using functions most of the time to handle my application. I know I have to learn OOP, but I currently need to develop this website and get it running soon. Anyway, here's a sample of what I let my app do: Consider a page (register.php) that has a form where a user has 3 fields to fill in, say: First Name, Last Name and Email. Upon submission of this form, I want to validate the form and show the corresponding errors to the users: <form id="form1" name="form1" method="post" action="<?php echo $_SERVER['PHP_SELF']; ?>"> <label for="name">Name:</label> <input type="text" name="name" id="name" /><br /> <label for="lname">Last Name:</label> <input type="text" name="lname" id="lname" /><br /> <label for="email">Email:</label> <input type="text" name="email" id="email" /><br /> <input type="submit" name="submit" id="submit" value="Submit" /> </form> This form will POST the info to the same page. So here's the code that will process the POST'ed info: <?php require("functions.php"); if( isset($_POST['submit']) ) { $errors = fn_register(); if( count($errors) ) { //Show error messages } else { //Send welcome mail to the user or do database stuff... } } ?> <?php //functions.php page: function sql_quote( $value ) { if( get_magic_quotes_gpc() ) { $value = stripslashes( $value ); } else { $value = addslashes( $value ); } if( function_exists( "mysql_real_escape_string" ) ) { $value = mysql_real_escape_string( $value ); } return $value; } function clean($str) { $str = strip_tags($str, '<br>,<br />'); $str = trim($str); $str = sql_quote($str); return $str; } foreach ($_POST as &$value) { if (!is_array($value)) { $value = clean($value); } else { clean($value); } } foreach ($_GET as &$value) { if (!is_array($value)) { $value = clean($value); } else { clean($value); } } function validate_name( $fld, $min, $max, $rule, $label ) { if( $rule == 'required' ) { if ( trim($fld) == '' ) { $str = "$label: Cannot be left blank."; return $str; } } if ( isset($fld) && trim($fld) != '' ) { if ( isset($fld) && $fld != '' && !preg_match("/^[a-zA-Z\ ]+$/", $fld)) { $str = "$label: Invalid characters used! Only Lowercase, Uppercase alphabets and Spaces are allowed"; } else if ( strlen($fld) < $min or strlen($fld) > $max ) { $curr_char = strlen($fld); $str = "$label: Must be atleast $min character &amp; less than $max char. Entered characters: $curr_char"; } else { $str = 0; } } else { $str = 0; } return $str; } function validate_email( $fld, $min, $max, $rule, $label ) { if( $rule == 'required' ) { if ( trim($fld) == '' ) { $str = "$label: Cannot be left blank."; return $str; } } if ( isset($fld) && trim($fld) != '' ) { if ( !eregi('^[a-zA-Z0-9._-]+@[a-zA-Z0-9._-]+\.([a-zA-Z]{2,4})$', $fld) ) { $str = "$label: Invalid format. Please check."; } else if ( strlen($fld) < $min or strlen($fld) > $max ) { $curr_char = strlen($fld); $str = "$label: Must be atleast $min character &amp; less than $max char. Entered characters: $curr_char"; } else { $str = 0; } } else { $str = 0; } return $str; } function val_rules( $str, $val_type, $rule='required' ){ switch ($val_type) { case 'name': $val = validate_name( $str, 3, 20, $rule, 'First Name'); break; case 'lname': $val = validate_name( $str, 10, 20, $rule, 'Last Name'); break; case 'email': $val = validate_email( $str, 10, 60, $rule, 'Email'); break; } return $val; } function fn_register() { $errors = array(); $val_name = val_rules( $_POST['name'], 'name' ); $val_lname = val_rules( $_POST['lname'], 'lname', 'optional' ); $val_email = val_rules( $_POST['email'], 'email' ); if ( $val_name != '0' ) { $errors['name'] = $val_name; } if ( $val_lname != '0' ) { $errors['lname'] = $val_lname; } if ( $val_email != '0' ) { $errors['email'] = $val_email; } return $errors; } //END of functions.php page ?> OK, it might look like a lot, but let me break it down by goal: 1. I wanted the foreach ($_POST as &$value) and foreach ($_GET as &$value) loops to loop through the received info from the user submission and strip/remove all malicious input. I am calling a function called clean on the input first to achieve the objective stated above. This function will process each of the inputs, whether individual field values or even arrays, allow only <br> tags and remove everything else. The rest of it is obvious. Once this happens, the new/cleaned values will be processed by the fn_register() function and, based on the values returned after the validation, we get the corresponding errors or NULL values (as applicable). So here are my questions: 1. This pretty much makes me feel secure, as I am forcing the user to correct malicious data and won't process the final data unless the errors are corrected. Am I correct? 2. Does the method that I follow guarantee good speed (as I am using lots of functions and their corresponding calls)? The fields of a form differ, and the number of fields I may have in any given form may be as low as 3 and can go up to as high as 100 (or even more, I am not sure, as the website is still being developed). Will having hundreds of fields and their validation in the above way reduce the speed of the application (say, with up to half a million users accessing the website at the same time)? What can I do to improve the speed and reduce function calls (if possible)? 3. Can I do something to improve the current way of validation? I am holding off on an object-oriented approach and on using FILTERS in PHP for later. So please, I request you all to suggest ways to improve/tweak the current approach, and tell me whether the script is vulnerable or safe enough to be used in a live production environment. If not, what can I do to be able to use it live? Thank you all in advance.

    Read the article

  • JSTL code issue

    - by theJava
    <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <%@ taglib uri="http://www.springframework.org/tags/form" prefix="form"%> <%@ taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%> <%@ taglib uri="http://java.sun.com/jsp/jstl/functions" prefix="fn"%> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <style type="text/css"> .even { background-color: silver; } </style> <title>Registration Page</title> </head> <body> <form:form action="add.htm" commandName="user"> <table> <tr> <td>User Name :</td> <td><form:input path="name" /></td> </tr> <tr> <td>Password :</td> <td><form:password path="password" /></td> </tr> <tr> <td>Gender :</td> <td><form:radiobutton path="gender" value="M" label="M" /> <form:radiobutton path="gender" value="F" label="F" /></td> </tr> <tr> <td>Country :</td> <td><form:select path="country"> <form:option value="0" label="Select" /> <form:option value="India" label="India" /> <form:option value="USA" label="USA" /> <form:option value="UK" label="UK" /> </form:select></td> </tr> <tr> <td>About you :</td> <td><form:textarea path="aboutYou" /></td> </tr> <tr> <td>Community :</td> <td><form:checkbox path="community" value="Spring" label="Spring" /> <form:checkbox path="community" value="Hibernate" label="Hibernate" /> <form:checkbox path="community" value="Struts" label="Struts" /></td> </tr> <tr> <td></td> <td><form:checkbox path="mailingList" label="Would you like to join our mailinglist?" /></td> </tr> <tr> <td colspan="2"><input type="submit" value="Register"></td> </tr> </table> </form:form> <c:if test="${fn:length(userList) > 0}"> <table cellpadding="5"> <tr class="even"> <th>Name</th> <th>Gender</th> <th>Country</th> <th>About You</th> </tr> <c:forEach items="${userList}" var="user" varStatus="status"> <tr class="<c:if test="${status.count % 2 == 0}">even</c:if>"> <td>${user.name}</td> <td>${user.gender}</td> <td>${user.country}</td> <td>${user.aboutYou}</td> </tr> </c:forEach> </table> </c:if> </body> </html> When I execute my JSP page, the piece of code below does not show up at all (the full page source is above): <c:if test="${fn:length(userList) > 0}"> <table cellpadding="5"> <tr class="even"> <th>Name</th> <th>Gender</th> <th>Country</th> <th>About You</th> </tr> <c:forEach items="${userList}" var="user" varStatus="status"> <tr class="<c:if test="${status.count % 2 == 0}">even</c:if>"> <td>${user.name}</td> <td>${user.gender}</td> <td>${user.country}</td> <td>${user.aboutYou}</td> </tr> </c:forEach> </table> </c:if>

    Read the article

  • Post data to MVC3 controller without page refresh

    - by Smooth
    I have this script that basically has 4 select boxes, what I want is that for the 2 top select boxes, he submits the optionvalue that is selected to an action (which can be found at "ProductKoppeling/ProductKoppelingPartial"), I want to let him submit this data when I click on an option but without page refresh. I tried JSON and I tried Ajax, but I didn't get it working.. How should i do this? <script language="javascript" type="text/javascript"> function delete_1() { var answer = confirm("U staat op het punt dit product te verwijderen, wilt u doorgaan?") if (answer) { document.getElementById('Actie_1').value = '5'; document.getElementById('hpg_submit').submit(); } } function delete_2() { var answer = confirm("U staat op het punt dit product te verwijderen, wilt u doorgaan?") if (answer) { document.getElementById('Actie_2').value = '6'; document.getElementById('pg_submit').submit(); } } function delete_3() { var answer = confirm("U staat op het punt dit product te verwijderen, wilt u doorgaan?") if (answer) { document.getElementById('Actie_3').value = '6'; document.getElementById('p_submit').submit(); } } </script> <div style="width: 500px; float: left;"> @using (Html.BeginForm("ProductKoppelingPartial", "ProductKoppeling", FormMethod.Post, new { id = "onload_submit" })) { @Html.DropDownList("Klant.Id", (ViewBag.Klant as SelectList), new { onchange = "document.getElementById('onload_submit').submit()" }) } <div style="clear: both"></div> <div style="float: left;"> <b>Hoofdgroepen</b><br /> @using (Html.BeginForm("ProductKoppelingPartial", "ProductKoppeling", FormMethod.Post, new { id = "hpg_submit" })) { if (ViewBag.SelectedKlant != null) { <input type="hidden" name="Klant.Id" value="@ViewBag.SelectedKlant.Id" /> } <select style="width: 200px;" size="6" id="HoofdProductGroep" name="HoofdProductGroep.Id" onchange="document.getElementById('hpg_submit').submit();"> @foreach (var hpg in ViewBag.HoofdProductGroep) { if (ViewBag.SelectedHPG != null) { if (hpg.Id == ViewBag.SelectedHPG.Id) { <option value="@hpg.Id" selected="selected">@hpg.Naam</option> } else { <option value="@hpg.Id">@hpg.Naam</option> } } else { <option value="@hpg.Id">@hpg.Naam</option> } } </select> <input type="hidden" name="Actie" id="Actie_1" value="0" /> <br /> <img src="../../Content/toevoegen.png" style="cursor: pointer; width: 30px;" onclick="document.getElementById('Actie_1').value='1';document.getElementById('hpg_submit').submit();" /> <img src="../../Content/bewerken.png" style="cursor: pointer; float: none; width: 30px;" onclick="document.getElementById('Actie_1').value='2';document.getElementById('hpg_submit').submit();" /> <img src="../../Content/verwijderen.png" style="cursor: pointer; float: none; width: 30px;" onclick="delete_1()" /> } </div> <div style="float: right;"> <b>Groepen</b><br /> @using (Html.BeginForm("ProductKoppelingPartial", "ProductKoppeling", FormMethod.Post, new { id = "pg_submit" })) { if (ViewBag.SelectedHPG != null) { <input type="hidden" name="HoofdProductGroep.Id" value="@ViewBag.SelectedHPG.Id" /> } if (ViewBag.SelectedKlant != null) { <input type="hidden" name="Klant.Id" value="@ViewBag.SelectedKlant.Id" /> } <select size="6" style="width: 200px;" id="ProductGroep_Id" name="ProductGroep.Id" onchange="document.getElementById('pg_submit').submit();"> @foreach (var pg in ViewBag.ProductGroep) { if (ViewBag.SelectedPG != null) { if (pg.Id == ViewBag.SelectedPG.Id) { <option value="@pg.Id" selected="selected">@pg.Naam</option> } else { <option value="@pg.Id">@pg.Naam</option> } } else 
{ <option value="@pg.Id">@pg.Naam</option> } } </select> <input type="hidden" name="Actie" id="Actie_2" value="0" /> <br /> <img src="../../Content/toevoegen.png" style="cursor: pointer; width: 30px;" onclick="document.getElementById('Actie_2').value='3';document.getElementById('pg_submit').submit();" /> <img src="../../Content/bewerken.png" style="cursor: pointer; float: none; width: 30px;" onclick="document.getElementById('Actie_2').value='4';document.getElementById('pg_submit').submit();" /> <img src="../../Content/verwijderen.png" style="cursor: pointer; float: none; width: 30px;" onclick="delete_2()" /> } </div> <div style="clear: both; height: 25px;"></div> @using (Html.BeginForm("Save", "ProductKoppeling", FormMethod.Post, new { id = "p_submit" })) { <div style="float: left"> <b>Producten</b><br /> <select size="18" style="width: 200px;" name="Product.Id"> @foreach (var p in ViewBag.Product) { <option value="@p.Id">@p.Naam</option> } </select> @if (ViewBag.SelectedPG != null) { if (ViewBag.SelectedPG.Id != null) { <input type="hidden" name="ProductGroep.Id" value="@ViewBag.SelectedPG.Id" /> } } <input type="hidden" name="Actie" id="Actie_3" value="0" /> <br /> <img src="../../Content/toevoegen.png" style="cursor: pointer; width: 30px;" onclick="document.getElementById('Actie_3').value='1';document.getElementById('p_submit').submit();" /> <img src="../../Content/bewerken.png" style="cursor: pointer; float: none; width: 30px;" onclick="document.getElementById('Actie_3').value='2';document.getElementById('p_submit').submit();" /> <img src="../../Content/verwijderen.png" style="cursor: pointer; float: none; width: 30px;" onclick="delete_3()" /> <br /> </div> <div style="float: left; width: 100px;"> <center> <br /><br /><br /><br /> <a style="cursor: pointer; float: none; color: blue; font-size: 30px;" onclick="document.getElementById('p_submit').submit();">»</a> <br /><br /><br /><br /><br /><br /><br /><br /><br /> <a style="cursor: pointer; float: none; color: blue; font-size: 30px;" onclick="document.getElementById('pgp_submit').submit();">«</a> </center> </div> } <div style="float: right;"> <b>Producten in groepen</b><br /> @using (Html.BeginForm("Delete", "ProductKoppeling", FormMethod.Post, new { id = "pgp_submit" })) { <select size="18" style="width: 200px;" name="ProductGroepProduct.Id"> @foreach (var pgp in ViewBag.ProductGroepProduct) { if (pgp != null) { if (pgp.Product != null) { <option value="@pgp.Id">@pgp.Product.Naam</option> } } } </select> } </div>
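    One way to get this working without a page refresh, sketched on the server side in C# (the action name, parameter and repository below are hypothetical, not from the code above): replace the onchange="...submit()" handlers with an ajax post such as $.post('/ProductKoppeling/SelectHoofdgroep', { hpgId: $(this).val() }, renderGroepen), and let the controller return JSON instead of a full view.

        // Hypothetical MVC3 action: receives the selected option value via ajax POST
        // and returns the matching product groups as JSON, so the browser never reloads.
        [HttpPost]
        public JsonResult SelectHoofdgroep(int hpgId)
        {
            // _repository stands in for however the data is loaded today (assumption).
            var groepen = _repository.GetProductGroepen(hpgId)
                .Select(pg => new { pg.Id, pg.Naam })
                .ToList();
            return Json(groepen);
        }

    The client-side callback (renderGroepen above) would then rebuild the second select box from the returned JSON instead of relying on a full-page round trip.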

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very well suited to uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binaries, how to upload them into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (end user’s machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into the blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. This code has been well covered on community blogs. 
    1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let’s say a 6GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limit on the request length; the maximum request length is defined in the web.config file, and it is a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break through the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits: - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. Uploading a big file in chunks was a big challenge until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte, and return a new object of an interface called “Blob”, which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
    13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one using async.js. Once all functions have been executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all on the client side. The outline of our logic is: - Calculate the start and end byte index for each chunk based on the block size. - Define the functions that read the chunks from the file and upload the content to the backend service through ajax. - Execute the functions defined in the previous step with “async.js”. - Finally, commit the chunks by invoking the backend service.   Save Chunks as Blocks into Blob Storage Above we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, then finally commit them as one blob. Since on the client side we upload chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller which only accepts HTTP POST requests. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
    15: // invoke the functions in parallel 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions in parallel with a concurrency limit of 4 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5: - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which contains part of the file content. - FormData: a temporary form object through which we can pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide maximum upload speed, and unlimited parallel upload might crash the browser and the server if there are too many chunks, so we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the “async.js” JavaScript library to control the asynchronous calls and the parallelism limit.   Regarding the chunk size and the parallelism limit there is no single “best” value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores and the server-side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core ~1.7GHz CPU, 1.75GB memory, 100Mbps bandwidth). 
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.
    The original post includes a chart of the results. Some thoughts, but not guidance or best practice:
    - Parallel gets better performance than sequential.
    - There is no significant performance difference between 4 and 8 parallel threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets the best performance.
    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • WCF Publish/Subscribe and using callbacks to send data to specific users

    - by manemawanna
    Hello, thanks for looking. I'm working on a project at the moment and have become a little stuck. I'm creating a client-server app which allows a client to subscribe to the server to have messages forwarded to it. The issue I'm having is that when the client subscribes I wish for them to only receive updates that relate to them. The system basically passes messages from a SQL Server DB which the server monitors. When a new message is received, the server should only forward the message to the clients that it applies to, based on who is logged on at the client machine. I've had a look and found code samples which sign up for messages to be broadcast across all clients who have subscribed, but none that show how to identify individual clients and whether messages apply to them. If anyone could help or point me in the right direction it would be appreciated. You can now find some of my code below: namespace AnnouncementServiceLibrary { [ServiceContract(CallbackContract = typeof(IMessageCallback))] public interface IMessageCheck { [OperationContract] void MessageCheck(); } } namespace AnnouncementServiceLibrary { public interface IMessageCallback { [OperationContract(IsOneWay = true)] void OnNewMessage(Mess message); } } public bool Subscribe() { try { IMessageCallback callback = OperationContext.Current.GetCallbackChannel<IMessageCallback>(); //If they don't already exist in the subscribers list, adds them to it if (!subscribers.Contains(callback)) subscribers.Add(callback); return true; } catch { //Otherwise if an error occurs returns false return false; } } Subscribe/Unsubscribe: private static readonly List<IMessageCallback> subscribers = new List<IMessageCallback>(); /// <summary> /// Unsubscribes the user from receiving new messages when they become available /// </summary> /// <returns>Returns a bool that indicates whether the operation worked or not</returns> public bool Unsubscribe() { try { IMessageCallback callback = OperationContext.Current.GetCallbackChannel<IMessageCallback>(); //If they exist in the list of subscribers they are then removed if (subscribers.Contains(callback)) subscribers.Remove(callback); return true; } catch { //Otherwise if an error occurs returns false return false; } } Finally, this at the moment isn't working: basically, when a user subscribes and the loop below runs, I want it to filter based on the user's userID: #region IMessageCheck Members /// <summary> /// This method checks for new messages received based on those who have subscribed for the service /// </summary> public void MessageCheck() { //A continuous loop to keep the method going while(true) { //Put the thread to sleep between checks (200,000 ms, about 3.3 minutes) 
Thread.Sleep(200000); //Go through each subscriber based on there callback information subscribers.ForEach(delegate(IMessageCallback callback) { //Checks if the person who wanted the callback can still be communicated with if (((ICommunicationObject)callback).State == CommunicationState.Opened) { //Creates a link to the database and gets the required information List<Mess> mess = new List<Mess>(); List<Message> me; List<MessageLink> messLink; AnnouncementDBDataContext aDb = new AnnouncementDBDataContext(); me = aDb.Messages.ToList(); messLink = aDb.MessageLinks.ToList(); //Query to retrieve any messages which are newer than the time when the last cycle finished var result = (from a in messLink join b in me on a.UniqueID equals b.UniqueID where b.TimeRecieved > _time select new { b.UniqueID, b.Author, b.Title, b.Body, b.Priority, a.Read, b.TimeRecieved }); //Foreach result a new message is created and returned to the PC that subscribed foreach (var a in result) { Mess message = new Mess(a.UniqueID, a.Author, a.Title, a.Body, a.Priority, (bool)a.Read, a.TimeRecieved); callback.OnNewMessage(message); } } //If the requesting PC can't be contacted they are removed from the subscribers list else { subscribers.Remove(callback); } }); //Sets the datetime so the next cycle can measure against to see if new messages have been recieved _time = DateTime.Now; } } #endregion
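    One way to route messages to specific users, sketched under the assumption that the client passes its user ID when subscribing (this contract change is hypothetical, not part of the question's code): key the subscriber registry by user ID instead of keeping a flat list, then look up only the users a message applies to.

        // Assumes the poster's IMessageCallback and Mess types; Subscribe(string)
        // would also need [OperationContract] on the service contract.
        private static readonly Dictionary<string, IMessageCallback> subscribers =
            new Dictionary<string, IMessageCallback>();

        public bool Subscribe(string userId)
        {
            var callback = OperationContext.Current.GetCallbackChannel<IMessageCallback>();
            lock (subscribers)
            {
                subscribers[userId] = callback; // one active callback per user
            }
            return true;
        }

        // Called when new messages arrive: forward each message only to the
        // users it applies to, instead of broadcasting to every subscriber.
        private void Forward(Mess message, IEnumerable<string> targetUserIds)
        {
            lock (subscribers)
            {
                foreach (var userId in targetUserIds)
                {
                    IMessageCallback callback;
                    if (subscribers.TryGetValue(userId, out callback) &&
                        ((ICommunicationObject)callback).State == CommunicationState.Opened)
                    {
                        callback.OnNewMessage(message);
                    }
                }
            }
        }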

    Read the article

  • Silverlight Async Timeout Error

    - by Nath
    Calling through to my Silverlight-enabled WCF service from my Silverlight application, users occasionally get timeouts. What's the easiest way to boost the time the service client allows for a response? The exact exception thrown is: System.TimeoutException: [HttpRequestTimedOutWithoutDetail] Thanks
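    A sketch of the usual fix: raise SendTimeout on the client proxy's binding. MyServiceClient is a placeholder for the generated proxy name; the same value can also be set as sendTimeout on the binding in ServiceReferences.ClientConfig.

        // Placeholder proxy name; the pattern is what matters here.
        var client = new MyServiceClient();
        // Allow up to 5 minutes for a round trip instead of the 1-minute default.
        client.Endpoint.Binding.SendTimeout = TimeSpan.FromMinutes(5);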

    Read the article

  • How can I export a DataTable to MS Word 2007, Excel 2007, CSV from ASP.NET?

    - by bala3569
    Hi, I am using the code below to export a DataTable to MS Word, Excel and CSV format, and it's working fine. The problem is that this code exports to the MS Word 2003/Excel 2003 versions. I need to export my DataTable to Word 2007, Excel 2007 and CSV, because I am supposed to handle more than 100,000 records at a time and, as we know, Excel 2003 supports only 65,536 rows. Please help me out if you know how to export a DataTable or DataSet to MS Word 2007/Excel 2007. public static void Convertword(DataTable dt, HttpResponse Response,string filename) { Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".doc"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.word"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new System.Web.UI.HtmlTextWriter(stringWrite); System.Web.UI.WebControls.GridView dg = new System.Web.UI.WebControls.GridView(); dg.DataSource = dt; dg.DataBind(); dg.RenderControl(htmlWrite); Response.Write(stringWrite.ToString()); Response.End(); //HttpContext.Current.ApplicationInstance.CompleteRequest(); } public static void Convertexcel(DataTable dt, HttpResponse Response, string filename) { Response.Clear(); Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".xls"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "application/vnd.ms-excel"; System.IO.StringWriter stringWrite = new System.IO.StringWriter(); System.Web.UI.HtmlTextWriter htmlWrite = new System.Web.UI.HtmlTextWriter(stringWrite); System.Web.UI.WebControls.DataGrid dg = new System.Web.UI.WebControls.DataGrid(); dg.DataSource = dt; dg.DataBind(); dg.RenderControl(htmlWrite); Response.Write(stringWrite.ToString()); Response.End(); //HttpContext.Current.ApplicationInstance.CompleteRequest(); } public static void ConvertCSV(DataTable dataTable, HttpResponse Response, string filename) { Response.Clear(); Response.Buffer = true; Response.AddHeader("content-disposition", "attachment;filename=" + filename + ".csv"); Response.Charset = ""; Response.Cache.SetCacheability(HttpCacheability.NoCache); Response.ContentType = "Application/x-msexcel"; StringBuilder sb = new StringBuilder(); if (dataTable.Columns.Count != 0) { foreach (DataColumn column in dataTable.Columns) { sb.Append(column.ColumnName + ','); } sb.Append("\r\n"); foreach (DataRow row in dataTable.Rows) { foreach (DataColumn column in dataTable.Columns) { if(row[column].ToString().Contains(',')==true) { row[column] = row[column].ToString().Replace(",", ""); } sb.Append(row[column].ToString() + ','); } sb.Append("\r\n"); } } Response.Write(sb.ToString()); Response.End(); //HttpContext.Current.ApplicationInstance.CompleteRequest(); }
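    One hedged sketch of an answer: the 2003 limit comes from writing HTML with an .xls extension, so a real .xlsx file is needed. Assuming the EPPlus library is available (the method name below is illustrative), an Excel 2007 export could look like this; the .xlsx format allows 1,048,576 rows per sheet.

        using OfficeOpenXml; // EPPlus, assumed to be referenced

        public static void ConvertExcel2007(DataTable dt, HttpResponse response, string filename)
        {
            using (var package = new ExcelPackage())
            {
                ExcelWorksheet sheet = package.Workbook.Worksheets.Add("Sheet1");
                // Write the whole DataTable starting at A1, headers included.
                sheet.Cells["A1"].LoadFromDataTable(dt, true);

                response.Clear();
                response.AddHeader("content-disposition", "attachment;filename=" + filename + ".xlsx");
                response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
                response.BinaryWrite(package.GetAsByteArray());
                response.End();
            }
        }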

    Read the article

  • building median for each string in IList<IDictionary<string, double>>

    - by Oliver
    Actually I have a little brain bender and just can't find the right direction to get it to work: Given is an IList<IDictionary<string, double>> and it is filled as follows:
    Name|Value
    ----+-----
    "x" | 3.8
    "y" | 4.2
    "z" | 1.5
    ----+-----
    "x" | 7.2
    "y" | 2.9
    "z" | 1.3
    ----+-----
    ... | ...
    To fill this up with some random data I used the following methods: var list = CreateRandomPoints(new string[] { "x", "y", "z" }, 20); This works as follows: private IList<IDictionary<string, double>> CreateRandomPoints(string[] variableNames, int count) { var list = new List<IDictionary<string, double>>(count); list.AddRange(CreateRandomPoints(variableNames).Take(count)); return list; } private IEnumerable<IDictionary<string, double>> CreateRandomPoints(string[] variableNames) { while (true) yield return CreateRandomLine(variableNames); } private IDictionary<string, double> CreateRandomLine(string[] variableNames) { var dict = new Dictionary<string, double>(variableNames.Length); foreach (var variable in variableNames) { dict.Add(variable, _Rand.NextDouble() * 10); } return dict; } Also, I can say it is already ensured that every dictionary within the list contains the same keys (but from list to list the names and count of the keys can change). So that's what I've got. Now to the things I need: I'd like to get the median (or any other math aggregate operation) of each key within all the dictionaries, so that the function to call would look something like: IDictionary<string, double> GetMedianOfRows(this IList<IDictionary<string, double>> list) The best would be to pass some kind of aggregate operation as a parameter to the function to make it more generic (I don't know if the func has the correct parameters, but it should give an idea of what I'd like to do): private IDictionary<string, double> Aggregate(this IList<IDictionary<string, double>> list, Func<IEnumerable<double>, double> aggregator) Also, my actual biggest problem is to do the job with a single iteration over the list, because if the list holds 1000 rows of 20 variables I don't want to iterate 20 times over the list. Instead I would go over the list once and compute all twenty variables at once (the biggest advantage of doing it that way would be to use this also as IEnumerable<T> on any part of the list in a later step). So here is the code I've already got: public static IDictionary<string, double> GetMedianOfRows(this IList<IDictionary<string, double>> list) { //Check of parameters is left out! //Get first item for initialization of result dictionary var firstItem = list[0]; //Create result dictionary and fill in variable names var dict = new Dictionary<string, double>(firstItem.Count); //Iterate over the whole list foreach (IDictionary<string, double> row in list) { //Iterate over each key/value pair within the list foreach (var kvp in row) { //How to determine median of all values? } } return dict; } Just to be sure, here is a little explanation about the median.
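    A sketch of one single-pass answer, assuming the data shape described above: walk the rows once while collecting each key's values into a bucket, then apply an arbitrary aggregator per key. Note that a true median needs all values buffered somewhere, so "single iteration" here means one pass over the outer list. The Median helper uses the common mean-of-two-middles rule for even counts and assumes at least one value per key.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class RowAggregates
        {
            // One pass over the rows: bucket the values per key, then aggregate each bucket.
            public static IDictionary<string, double> Aggregate(
                this IList<IDictionary<string, double>> list,
                Func<IEnumerable<double>, double> aggregator)
            {
                var buckets = new Dictionary<string, List<double>>();
                foreach (var row in list)
                {
                    foreach (var kvp in row)
                    {
                        List<double> values;
                        if (!buckets.TryGetValue(kvp.Key, out values))
                            buckets[kvp.Key] = values = new List<double>(list.Count);
                        values.Add(kvp.Value);
                    }
                }
                return buckets.ToDictionary(b => b.Key, b => aggregator(b.Value));
            }

            public static IDictionary<string, double> GetMedianOfRows(
                this IList<IDictionary<string, double>> list)
            {
                return list.Aggregate(Median);
            }

            private static double Median(IEnumerable<double> values)
            {
                var sorted = values.OrderBy(v => v).ToList();
                int mid = sorted.Count / 2;
                return sorted.Count % 2 == 1
                    ? sorted[mid]
                    : (sorted[mid - 1] + sorted[mid]) / 2.0;
            }
        }

        // Usage: var medians = list.GetMedianOfRows();
        // or any other aggregate: var means = list.Aggregate(vals => vals.Average());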

    Read the article

  • Recognize objects in image

    - by DoomStone
    Hello, I am in the process of doing a school project where we have a robot driving on the ground in between Flamingo plates. We need to create an algorithm that can identify the locations of these plates, so we can create paths around them (we are using A* for that). So far we have worked with the AForge library and we have created the following class. The only problem with this is that when it creates the rectangles it does not take into account that the plates are not always parallel with the camera border, and in that case it just creates a rectangle that covers the whole plate. So we need some way to find the rotation of the object, or another way to identify this. I have created an image that might help explain the problem: http://img683.imageshack.us/img683/9835/imagerectangle.png Any help on how I can do this would be greatly appreciated. Any other information or ideas are always welcome. public class PasteMap { private Bitmap image; private Bitmap processedImage; private Rectangle[] rectangels; public void initialize(Bitmap image) { this.image = image; } public void process() { processedImage = image; processedImage = applyFilters(processedImage); processedImage = filterWhite(processedImage); rectangels = extractRectangles(processedImage); //rectangels = filterRectangles(rectangels); processedImage = drawRectangelsToImage(processedImage, rectangels); } public Bitmap getProcessedImage { get { return processedImage; } } public Rectangle[] getRectangles { get { return rectangels; } } private Bitmap applyFilters(Bitmap image) { image = new ContrastCorrection(2).Apply(image); image = new GaussianBlur(10, 10).Apply(image); return image; } private Bitmap filterWhite(Bitmap image) { Bitmap test = new Bitmap(image.Width, image.Height); for (int width = 0; width < image.Width; width++) { for (int height = 0; height < image.Height; height++) { if (image.GetPixel(width, height).R > 200 && image.GetPixel(width, height).G > 200 && image.GetPixel(width, height).B > 200) { test.SetPixel(width, height, Color.White); } else test.SetPixel(width, height, Color.Black); } } return test; } private Rectangle[] extractRectangles(Bitmap image) { BlobCounter bc = new BlobCounter(); bc.FilterBlobs = true; bc.MinWidth = 5; bc.MinHeight = 5; // process binary image bc.ProcessImage( image ); Blob[] blobs = bc.GetObjects(image, false); // process blobs List<Rectangle> rects = new List<Rectangle>(); foreach (Blob blob in blobs) { if (blob.Area > 1000) { rects.Add(blob.Rectangle); } } return rects.ToArray(); } private Rectangle[] filterRectangles(Rectangle[] rects) { List<Rectangle> Rectangles = new List<Rectangle>(); foreach (Rectangle rect in rects) { if (rect.Width > 75 && rect.Height > 75) Rectangles.Add(rect); } return Rectangles.ToArray(); } private Bitmap drawRectangelsToImage(Bitmap image, Rectangle[] rects) { BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb); foreach (Rectangle rect in rects) Drawing.FillRectangle(data, rect, Color.Red); image.UnlockBits(data); return image; } }
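    A sketch of one way to recover the rotation with AForge.NET (assuming version 2.x, where BlobCounter.GetBlobsEdgePoints and PointsCloud.FindQuadrilateralCorners are available): instead of taking blob.Rectangle, fit a quadrilateral to each blob's edge points, so the four corners follow the plate even when it is rotated relative to the camera border.

        using System.Collections.Generic;
        using System.Drawing;
        using AForge;                // IntPoint
        using AForge.Imaging;        // BlobCounter, Blob
        using AForge.Math.Geometry;  // PointsCloud

        private List<List<IntPoint>> extractQuadrilaterals(Bitmap image)
        {
            BlobCounter bc = new BlobCounter();
            bc.FilterBlobs = true;
            bc.MinWidth = 5;
            bc.MinHeight = 5;
            bc.ProcessImage(image);

            var quads = new List<List<IntPoint>>();
            foreach (Blob blob in bc.GetObjectsInformation())
            {
                if (blob.Area <= 1000)
                    continue;
                // Edge points of the blob, then a best-fit set of four corners;
                // the corners give the rotated outline (and hence the angle,
                // e.g. via Math.Atan2 on the longest edge).
                List<IntPoint> edgePoints = bc.GetBlobsEdgePoints(blob);
                List<IntPoint> corners = PointsCloud.FindQuadrilateralCorners(edgePoints);
                quads.Add(corners);
            }
            return quads;
        }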

    Read the article

  • Simple C# CSV Excel export class

    - by Chris
    Thought this might be handy for someone; this is an extremely simple CSV export class that I needed. Features: extremely simple to use; escapes commas and quotes so Excel handles them fine; exports dates and datetimes in a timezone-proof format. Without further ado:

        using System;
        using System.Data.SqlTypes;
        using System.IO;
        using System.Text;
        using System.Collections.Generic;

        /// <summary>
        /// Simple CSV export
        /// Example:
        ///   CsvExport myExport = new CsvExport();
        ///
        ///   myExport.AddRow();
        ///   myExport["Region"] = "New York, USA";
        ///   myExport["Sales"] = 100000;
        ///   myExport["Date Opened"] = new DateTime(2003, 12, 31);
        ///
        ///   myExport.AddRow();
        ///   myExport["Region"] = "Sydney \"in\" Australia";
        ///   myExport["Sales"] = 50000;
        ///   myExport["Date Opened"] = new DateTime(2005, 1, 1, 9, 30, 0);
        ///
        /// Then you can do any of the following three output options:
        ///   string myCsv = myExport.Export();
        ///   myExport.ExportToFile("Somefile.csv");
        ///   byte[] myCsvData = myExport.ExportToBytes();
        /// </summary>
        public class CsvExport
        {
            /// <summary>
            /// To keep the ordered list of column names
            /// </summary>
            List<string> fields = new List<string>();

            /// <summary>
            /// The list of rows
            /// </summary>
            List<Dictionary<string, object>> rows = new List<Dictionary<string, object>>();

            /// <summary>
            /// The current row
            /// </summary>
            Dictionary<string, object> currentRow { get { return rows[rows.Count - 1]; } }

            /// <summary>
            /// Set a value on this column
            /// </summary>
            public object this[string field]
            {
                set
                {
                    // Keep track of the field names, because the dictionary loses the ordering
                    if (!fields.Contains(field)) fields.Add(field);
                    currentRow[field] = value;
                }
            }

            /// <summary>
            /// Call this before setting any fields on a row
            /// </summary>
            public void AddRow()
            {
                rows.Add(new Dictionary<string, object>());
            }

            /// <summary>
            /// Converts a value to how it should output in a csv file
            /// If it has a comma, it needs surrounding with double quotes
            /// Eg Sydney, Australia -> "Sydney, Australia"
            /// Also if it contains any double quotes ("), they need to be doubled up ("")
            /// Eg "Dangerous Dan" McGrew -> """Dangerous Dan"" McGrew"
            /// </summary>
            string MakeValueCsvFriendly(object value)
            {
                if (value == null) return "";
                if (value is INullable && ((INullable)value).IsNull) return "";
                if (value is DateTime)
                {
                    if (((DateTime)value).TimeOfDay.TotalSeconds == 0)
                        return ((DateTime)value).ToString("yyyy-MM-dd");
                    return ((DateTime)value).ToString("yyyy-MM-dd HH:mm:ss");
                }
                string output = value.ToString();
                if (output.Contains(",") || output.Contains("\""))
                    output = '"' + output.Replace("\"", "\"\"") + '"';
                return output;
            }

            /// <summary>
            /// Output all rows as a CSV, returning a string
            /// </summary>
            public string Export()
            {
                StringBuilder sb = new StringBuilder();

                // The header
                foreach (string field in fields)
                    sb.Append(field).Append(",");
                sb.AppendLine();

                // The rows (a row may be missing a column that a later row introduced,
                // so look the field up defensively instead of indexing directly)
                foreach (Dictionary<string, object> row in rows)
                {
                    foreach (string field in fields)
                    {
                        object value;
                        row.TryGetValue(field, out value);
                        sb.Append(MakeValueCsvFriendly(value)).Append(",");
                    }
                    sb.AppendLine();
                }
                return sb.ToString();
            }

            /// <summary>
            /// Exports to a file
            /// </summary>
            public void ExportToFile(string path)
            {
                File.WriteAllText(path, Export());
            }

            /// <summary>
            /// Exports as raw UTF8 bytes
            /// </summary>
            public byte[] ExportToBytes()
            {
                return Encoding.UTF8.GetBytes(Export());
            }
        }

    Read the article

  • JSON parsing in iPhone

    - by hardik
    Hello all, I have a little query: I want to know how to parse an XML string into a JSON string. A little code would be a boost. Thanks in advance.

    Read the article

  • Best Cross Platform Networking Framework For iPhone Dev

    - by Carmen
    I am looking to write a client/server application that will run on iPhone, OS X and Windows. What are the best networking solutions that will work on all three platforms? I have looked into Qt and it doesn't look like it has support for the iPhone. I have also looked at Boost, and that looks like it can be compiled for the iPhone. However, I was hoping for a somewhat higher-level framework like what Qt has to offer.

    Read the article

  • Multithreading or task parallel library

    - by Bruce Adams
    I have an application which performs 30 independent tasks simultaneously using multithreading; each task retrieves data over HTTP, performs a calculation and returns a result to the UI thread. Can I use the TPL to perform the same tasks? Does the TPL create 30 new threads and spread them over all the available cores, or does it just split the tasks over the available cores and use one thread per core? Will there be a performance boost using the TPL over manual multithreading in this case?
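
    For reference, a minimal sketch of the TPL version (the URL list, DoCalculation and the console host are placeholder assumptions, not from the question; Task.Run is .NET 4.5, and on .NET 4 Task.Factory.StartNew plays the same role). The TPL queues tasks to the .NET thread pool rather than creating one dedicated thread per task; the pool decides how many OS threads actually run, and it can grow beyond one per core while tasks block on I/O:

        using System;
        using System.Linq;
        using System.Net;
        using System.Threading.Tasks;

        class Worker
        {
            static double DoCalculation(string data) { return data.Length; } // placeholder

            static void Main()
            {
                var urls = Enumerable.Range(0, 30).Select(i => "http://example.com/item/" + i);

                // 30 independent tasks, scheduled on the thread pool
                Task<double>[] tasks = urls.Select(url => Task.Run(() =>
                {
                    using (var client = new WebClient())
                    {
                        string data = client.DownloadString(url); // blocking I/O on a pool thread
                        return DoCalculation(data);
                    }
                })).ToArray();

                Task.WaitAll(tasks); // on a UI thread you would await Task.WhenAll instead
                foreach (var t in tasks)
                    Console.WriteLine(t.Result);
            }
        }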

    Read the article

  • Debugging in XCode as root

    - by Anton
    In my program I need to create sockets and bind them to listen on the HTTP port (80). The program works fine when I launch it from the command line with sudo, escalating permissions to root. Running under XCode gives a 'permission denied' error on the call to the binding function (asio::ip::tcp::acceptor::bind()). How can I do debugging under XCode? All done in C++ and boost.asio on Mac OS X 10.5 with XCode 3.1.2.

    Read the article

  • Get User SID From Logon ID (Windows XP and Up)

    - by Dave Ruske
    I have a Windows service that needs to access registry hives under HKEY_USERS when users log on, either locally or via Terminal Server. I'm using a WMI query on Win32_LogonSession to receive events when users log on, and one of the properties I get from that query is a LogonId. To figure out which registry hive I need to access, I now need the user's SID, which is used as a registry key name beneath HKEY_USERS. In most cases, I can get this by doing a RelatedObjectQuery like so (in C#):

        RelatedObjectQuery relatedQuery = new RelatedObjectQuery(
            "associators of {Win32_LogonSession.LogonId='" + logonID +
            "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent");

    where "logonID" is the logon session ID from the session query. Running the RelatedObjectQuery will generally give me a SID property that contains exactly what I need. There are two issues I have with this. First, and most importantly, the RelatedObjectQuery will not return any results for a domain user that logs in with cached credentials, disconnected from the domain. Second, I'm not pleased with the performance of this RelatedObjectQuery: it can take up to several seconds to execute. Here's a quick and dirty command-line program I threw together to experiment with the queries. Rather than setting up to receive events, this just enumerates the users on the local machine:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using System.Management;

        namespace EnumUsersTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    ManagementScope scope = new ManagementScope("\\\\.\\root\\cimv2");

                    string queryString = "select * from win32_logonsession"; // for all sessions
                    //string queryString = "select * from win32_logonsession where logontype = 2"; // for local interactive sessions only

                    ManagementObjectSearcher sessionQuery =
                        new ManagementObjectSearcher(scope, new SelectQuery(queryString));
                    ManagementObjectCollection logonSessions = sessionQuery.Get();

                    foreach (ManagementObject logonSession in logonSessions)
                    {
                        string logonID = logonSession["LogonId"].ToString();
                        Console.WriteLine("=== {0}, type {1} ===", logonID,
                            logonSession["LogonType"].ToString());

                        RelatedObjectQuery relatedQuery = new RelatedObjectQuery(
                            "associators of {Win32_LogonSession.LogonId='" + logonID +
                            "'} WHERE AssocClass=Win32_LoggedOnUser Role=Dependent");

                        ManagementObjectSearcher userQuery =
                            new ManagementObjectSearcher(scope, relatedQuery);
                        ManagementObjectCollection users = userQuery.Get();

                        foreach (ManagementObject user in users)
                        {
                            PrintProperties(user.Properties);
                        }
                    }

                    Console.WriteLine("\nDone! Press a key to exit...");
                    Console.ReadKey(true);
                }

                private static void PrintProperty(PropertyData pd)
                {
                    string value = "null";
                    string valueType = "n/a";
                    if (pd.Value != null)
                    {
                        value = pd.Value.ToString();
                        valueType = pd.Value.GetType().ToString();
                    }
                    Console.WriteLine("  \"{0}\" = ({1}) \"{2}\"", pd.Name, valueType, value);
                }

                private static void PrintProperties(PropertyDataCollection properties)
                {
                    foreach (PropertyData pd in properties)
                    {
                        PrintProperty(pd);
                    }
                }
            }
        }

    So... is there a way to quickly and reliably obtain the user SID given the information I retrieve from WMI, or should I be looking at using something like SENS instead?
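
    As a possible alternative to the second WMI round trip: once the session gives you an account name (domain plus user), System.Security.Principal can translate it to a SID locally. A minimal sketch, not from the question; whether this also resolves a cached domain account while disconnected would need testing on the target setup:

        using System;
        using System.Security.Principal;

        class SidLookup
        {
            // Translate "DOMAIN\user" to its SID string without another WMI query
            static string GetSidForAccount(string domain, string userName)
            {
                var account = new NTAccount(domain, userName);
                var sid = (SecurityIdentifier)account.Translate(typeof(SecurityIdentifier));
                return sid.Value; // e.g. "S-1-5-21-...", usable as a key name under HKEY_USERS
            }

            static void Main()
            {
                Console.WriteLine(GetSidForAccount(Environment.UserDomainName, Environment.UserName));
            }
        }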

    Read the article

  • Succinct introduction to C++/CLI for C#/Haskell/F#/JS/C++/... programmer

    - by Henrik
    Hello everybody, I'm trying to write integrations with the operating system and with things like Active Directory and Ocropus. I know a bunch of programming languages, including those listed in the title. I'm trying to learn exactly how C++/CLI works, but can't find succinct, exact and accurate descriptions online from the searching that I have done. So I ask here. Could you tell me the pitfalls and features of C++/CLI? Assume I know all of C# and start from there. I'm not an expert in C++, so some of my questions' answers might be "just like C++", but do say that I am at C#. I would like to know things like:

    - Converting C++ pointers to CLI pointers; any differences in passing by value / doubly indirect pointers / CLI pointers from C#/C++, and what is 'recommended'.
    - How do gcnew, __gc and __nogc work with polymorphism, structs, inner classes and interfaces?
    - The "fixed" keyword; does that exist?
    - Is compiling DLLs loaded into the kernel possible with C++/CLI? Loaded as device drivers? Invoked by the kernel? What does this mean anyway (i.e. what does it mean exactly to load something into the kernel, and how do I know if it is)?
    - L"my string" versus "my string"? wchar_t? How many types of chars are there? Are we safe in treating chars as uint32s, or what should one treat them as to guarantee language indifference in code?
    - Finalizers (~ClassName() {}) are discouraged in C# because there are no guarantees they will run deterministically, but since in C++ I have to use "delete" or use copy constructors to stack-allocate memory, what are the recommendations for C#/C++ interactions?
    - What are the pitfalls when using reflection in C++/CLI?
    - How well does C++/CLI work with the IDisposable pattern and with SafeHandle and SafeHandleZeroOrMinusOneIsInvalid?
    - I've read briefly about asynchronous exceptions when doing DMA operations; what are these?
    - Are there limitations you impose upon yourself when using C++ with CLI integration rather than just doing plain C++?
    - Attributes in C++ similar to attributes in C#?
    - Can I use the full meta-programming patterns available in C++ through templates now and still have it compile like ordinary C++?
    - Have you tried writing C++/CLI with boost? What are the optimal ways of interfacing the boost library with C++/CLI; can you give me an example of passing a lambda expression to an iterator/foldr function?
    - What is the preferred way of exception handling? Can C++/CLI catch managed exceptions now?
    - How well does dynamic IL generation work with C++/CLI? Does it run on Mono?
    - Any other things I ought to know about?

    Read the article

  • PHP parsing XML file with and without namespaces

    - by Mike
    I need to get an XML file into a database. That's not the problem: I can read it, parse it and create some objects to map to the DB. The problem is that sometimes the XML file can contain namespaces and sometimes not. Furthermore, sometimes there is no namespace defined at all. So what I first got was something like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <struct xmlns:b="http://www.w3schools.com/test/">
          <objects>
            <object>
              <node_1>value1</node_1>
              <node_2>value2</node_2>
              <node_3 iso_land="AFG"/>
              <coords lat="12.00" long="13.00"/>
            </object>
          </objects>
        </struct>

    And the parsing:

        $t = $xml->xpath('/objects/object');
        foreach($nodes AS $node) {
            if($t[0]->$node) {
                $obj->$node = (string) $t[0]->$node;
            }
        }

    That's fine as long as there are no namespaces. Here comes the XML file with namespaces:

        <?xml version="1.0" encoding="UTF-8"?>
        <b:struct xmlns:b="http://www.w3schools.com/test/">
          <b:objects>
            <b:object>
              <b:node_1>value1</b:node_1>
              <b:node_2>value2</b:node_2>
              <b:node_3 iso_land="AFG"/>
              <b:coords lat="12.00" long="13.00"/>
            </b:object>
          </b:objects>
        </b:struct>

    I now came up with something like this:

        $xml = simplexml_load_file("test.xml");
        $namespaces = $xml->getNamespaces(TRUE);
        $ns = count($namespaces) ? 'a:' : '';
        $xml->registerXPathNamespace("a", "http://www.w3schools.com/test/");

        $nodes = array('node_1', 'node_2');
        $obj = new stdClass();

        foreach($nodes AS $node) {
            $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.$node);
            if($t[0]) {
                $obj->$node = (string) $t[0];
            }
        }

        $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.'node_3');
        if($t[0]) {
            $obj->iso_land = (string) $t[0]->attributes()->iso_land;
        }

        $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object/'.$ns.'coords');
        if($t[0]) {
            $obj->lat = (string) $t[0]->attributes()->lat;
            $obj->long = (string) $t[0]->attributes()->long;
        }

    That works with namespaces and without. But I feel that there must be a better way. Before that I could do something like this:

        $t = $xml->xpath('/'.$ns.'objects/'.$ns.'object');
        foreach($nodes AS $node) {
            if($t[0]->$node) {
                $obj->$node = (string) $t[0]->$node;
            }
        }

    But that just won't work with namespaces.

    Read the article

  • curl_init undefined?

    - by udaya
    Hi. I am importing the contacts from Gmail to my page. The process does not work due to this error: 'curl_init' is not defined. The suggestions I got were to: 1. uncomment the curl extension (php_curl.dll) in php.ini; 2. copy the following libraries to the windows/system32 dir: ssleay32.dll, libeay32.dll; 3. copy php_curl.dll to windows/system32. After trying all these I restarted my XAMPP. Even then the error occurs. This is the page where I am trying to import the Gmail contacts (the original paste lost the "->" and "=>" operators and parts of the markup; they are restored below as far as the context allows):

        $ch = curl_init();

        // set URL and other appropriate options
        curl_setopt($ch, CURLOPT_URL, "http://www.example.com/");
        curl_setopt($ch, CURLOPT_HEADER, 0);

        // grab URL and pass it to the browser
        curl_exec($ch);

        // close cURL resource, and free up system resources
        curl_close($ch);

        // Google ClientLogin endpoint (definition lost in the original paste)
        $clientlogin_url = "https://www.google.com/accounts/ClientLogin";

        $clientlogin_post = array(
            "accountType" => "HOSTED_OR_GOOGLE",
            "Email"       => $_POST['Email'],
            "Passwd"      => $_POST['Passwd'],
            "service"     => "cp",
            "source"      => "tutsmore/1.2"
        );

        // Now we are going to post this data to the ClientLogin URL.
        // Initialize the curl object with the ClientLogin URL
        $curl = curl_init($clientlogin_url);
        // Make it a POST
        curl_setopt($curl, CURLOPT_POST, true);
        // Pass the above array of parameters.
        curl_setopt($curl, CURLOPT_POSTFIELDS, $clientlogin_post);
        // Set this for authentication and SSL communication.
        curl_setopt($curl, CURLOPT_HTTPAUTH, CURLAUTH_ANY);
        // Provide false to not check the server certificate.
        curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, false);
        // Tell curl to not echo the data but return it to a variable.
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        // The variable containing the response
        $response = curl_exec($curl);

        // Check whether the user logged in successfully using preg_match,
        // and save the auth key if so
        preg_match("/Auth=([a-z0-9_-]+)/i", $response, $matches);
        $auth = $matches[1];

        // Include the Auth string in the headers
        $headers = array("Authorization: GoogleLogin auth=" . $auth);

        // Make the request to the Google contacts feed with the auth key
        $curl = curl_init('http://www.google.com/m8/feeds/contacts/default/full?max-results=10000');
        // Pass the headers with the auth key.
        curl_setopt($curl, CURLOPT_HTTPHEADER, $headers);
        // Return the result in a variable
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
        // The variable with the response.
        $feed = curl_exec($curl);

        // Create an empty array of contacts
        $contacts = array();

        // Initialize the DOMDocument object
        $doc = new DOMDocument();
        // Check whether the feed is empty; if not, load it.
        if (!empty($feed))
            $doc->loadHTML($feed);

        // Initialize the DOMXPath object and provide the loaded feed
        $xpath = new DOMXPath($doc);
        // Get every entry tag from the feed.
        $query = "//entry";
        $data = $xpath->query($query);

        // Process each entry tag
        foreach ($data as $node) {
            // Children of each entry tag.
            $entry_nodes = $node->childNodes;
            // Create a temporary array.
            $tempArray = array();
            // Process the child nodes of the entry tag.
            foreach ($entry_nodes as $child) {
                // Get the tag name of the child node.
                $domNodesName = $child->nodeName;
                switch ($domNodesName) {
                    case 'title':
                        $tempArray['fullName'] = $child->nodeValue;
                        break;
                    case 'email':
                        if (strpos($child->getAttribute('rel'), 'home') !== false)
                            $tempArray['email_1'] = $child->getAttribute('address');
                        elseif (strpos($child->getAttribute('rel'), 'work') !== false)
                            $tempArray['email_2'] = $child->getAttribute('address');
                        elseif (strpos($child->getAttribute('rel'), 'other') !== false)
                            $tempArray['email_3'] = $child->getAttribute('address');
                        break;
                }
            }
            if (!empty($tempArray['email_1'])) $contacts[$tempArray['email_1']] = $tempArray;
            if (!empty($tempArray['email_2'])) $contacts[$tempArray['email_2']] = $tempArray;
            if (!empty($tempArray['email_3'])) $contacts[$tempArray['email_3']] = $tempArray;
        }

        foreach ($contacts as $key => $val) {
            // Echo the email
            echo $key . "";
        }

    The else branch (its HTML was lost in the paste) renders the login form: an Email and Password form posting back to this page, with the note "tutsmore don't save your email and password trust us." The code is provided in full for debugging; if any optimization is needed I will try to optimize the code.

    Read the article

  • Array Problem, need to sort via Keys

    - by sologhost
    Ok, not really sure how to do this. I have values being output from a SQL query like so:

        $row[0] = array('lid' => 1, 'llayout' => 1, 'lposition' => 1, 'mid' => 1, 'mlayout' => 1, 'mposition' => 0);
        $row[1] = array('lid' => 2, 'llayout' => 1, 'lposition' => 0, 'mid' => 2, 'mlayout' => 1, 'mposition' => 0);
        $row[2] = array('lid' => 2, 'llayout' => 1, 'lposition' => 0, 'mid' => 3, 'mlayout' => 1, 'mposition' => 1);
        $row[3] = array('lid' => 3, 'llayout' => 1, 'lposition' => 1, 'mid' => 4, 'mlayout' => 1, 'mposition' => 1);
        $row[4] = array('lid' => 4, 'llayout' => 1, 'lposition' => 2, 'mid' => 5, 'mlayout' => 1, 'mposition' => 0);
        etc. etc.

    Ok, so the best thing I can think of is to use lid and mid as array keys and have them equal mposition, within the while loop of the query, like so:

        $old[$row['lid']][$row['mid']] = $mposition;

    Now if I do this, I need to compare this array's keys with another array that I'll need to build based on a $_POST array[]:

        $new = array();
        foreach($_POST as $id => $data) {
            // $id = column, but we still need to get each row's position...
            $id = str_replace('col_', '', $id);
            // now get the inner array...
            foreach($data as $pos => $idpos)
                $new[$id][$idpos] = $pos;
        }

    Ok, so now I have two arrays of info, one from the database and another from the $_POST positions; I hope I got the array keys correct. Now I need to figure out which ones changed, comparing the old to the new. I also need to update the database with all of the new positions, where the new lid equals the old lid and the new mid equals the old mid from each array. I'm sure I'll have to use array_keys or array_intersect_key somehow, but I'm not sure exactly how... Also, I don't think an UPDATE would be useful in a foreach loop; perhaps there's a way to do a CASE statement in the UPDATE query? Also, am I going about this the right way, or should I do it another way instead of using a multi-dimensional array for this?

    Read the article

  • Binding DataTable To GridView, But No Rows In GridViewRowCollection Despite GridView Population?

    - by KSwift87
    Problem: I've coded a GridView in the page markup. In the code-behind I build a DataTable that takes data from a collection of custom objects, then bind that DataTable to the GridView. (The specific problem is mentioned a couple of code snippets below.)

    GridView markup:

        <asp:GridView ID="gvCart" runat="server" CssClass="pList" AutoGenerateColumns="false" DataKeyNames="ProductID">
            <Columns>
                <asp:BoundField DataField="ProductID" HeaderText="ProductID" />
                <asp:BoundField DataField="Name" HeaderText="ProductName" />
                <asp:ImageField DataImageUrlField="Thumbnail" HeaderText="Thumbnail"></asp:ImageField>
                <asp:BoundField DataField="Unit Price" HeaderText="Unit Price" />
                <asp:TemplateField HeaderText="Quantity">
                    <ItemTemplate>
                        <asp:TextBox ID="Quantity" runat="server" Text="<%# Bind('Quantity') %>" Width="25px"></asp:TextBox>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:BoundField DataField="Total Price" HeaderText="Total Price" />
            </Columns>
        </asp:GridView>

    DataTable code-behind:

        private void View(List<OrderItem> cart)
        {
            DataSet ds = new DataSet();
            DataTable dt = ds.Tables.Add("Cart");
            if (cart != null)
            {
                dt.Columns.Add("ProductID");
                dt.Columns.Add("Name");
                dt.Columns.Add("Thumbnail");
                dt.Columns.Add("Unit Price");
                dt.Columns.Add("Quantity");
                dt.Columns.Add("Total Price");

                foreach (OrderItem item in cart)
                {
                    DataRow dr = dt.NewRow();
                    dr["ProductID"] = item.productId.ToString();
                    dr["Name"] = item.productName;
                    dr["Thumbnail"] = ResolveUrl(item.productThumbnail);
                    dr["Unit Price"] = "$" + item.productPrice.ToString();
                    dr["Quantity"] = item.productQuantity.ToString();
                    dr["Total Price"] = "$" + (item.productPrice * item.productQuantity).ToString();
                    dt.Rows.Add(dr);
                }

                gvCart.DataSource = dt;
                gvCart.DataBind();
                gvCart.Width = 500;

                for (int counter = 0; counter < gvCart.Rows.Count; counter++)
                {
                    gvCart.Rows[counter].Cells.Add(Common.createCell(
                        "<a href='cart.aspx?action=update&prodId=" + gvCart.Rows[counter].Cells[0].Text +
                        "'>Update</a><br /><a href='cart.aspx?action=remove&prodId=" +
                        gvCart.Rows[counter].Cells[0].Text + "'>Remove</a>"));
                }
            }
        }

    The error occurs below, in the foreach: the GridViewRowCollection is empty!

        private void Update(string prodId)
        {
            List<OrderItem> cart = (List<OrderItem>)Session["cart"];
            int uQty = 0;
            foreach (GridViewRow gvr in gvCart.Rows)
            {
                if (gvr.RowType == DataControlRowType.DataRow)
                {
                    if (gvr.Cells[0].Text == prodId)
                    {
                        uQty = int.Parse(((TextBox)gvr.Cells[4].FindControl("Quantity")).Text);
                    }
                }
            }
        }

    Goal: I'm basically trying to find a way to update the data in my GridView (and more importantly my cart Session object) without having to do everything else I've seen online, such as utilizing OnRowUpdating, etc. Could someone please tell me why gvCart.Rows is empty, and/or how I could accomplish my goal without utilizing OnRowUpdating, etc.? When I execute this code, the GridView gets populated, but for some reason I can't access any of its rows in the code-behind.
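
    For reference, the usual first thing to check in this situation is when the grid gets bound relative to the postback that calls Update(). A minimal sketch of that pattern (the Page_Load wiring is an assumption, not from the question):

        // If View() only runs on the first request, the grid is rebuilt empty on
        // the postback that calls Update(); binding it before Update() runs is
        // the usual fix.
        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
            {
                View((List<OrderItem>)Session["cart"]); // initial bind
            }
            // With ViewState enabled the bound rows are normally restored on
            // postback, but the cells added manually after DataBind() in the
            // for-loop are not; moving that work into RowDataBound (or simply
            // rebinding here) avoids losing them.
        }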

    Read the article

  • Getting the constructor of an Interface Type through reflection, is there a better approach than looping through all assembly types?

    - by Will Marcouiller
    I have written a generic type, IDirectorySource<T> where T : IDirectoryEntry, which I'm using to manage Active Directory entries through my interface objects: IGroup, IOrganizationalUnit, IUser. So that I can write the following:

        IDirectorySource<IGroup> groups = new DirectorySource<IGroup>(); // Where IGroup implements IDirectoryEntry, of course.

        foreach (IGroup g in groups.ToList())
        {
            listView1.Items.Add(g.Name).SubItems.Add(g.Description);
        }

    From the IDirectorySource<T>.ToList() method, I use reflection to find out the appropriate constructor for the type parameter T. However, since T is given an interface type, it cannot find any constructor at all! Of course, I have an internal class Group : IGroup which implements the IGroup interface. No matter how hard I have tried, I can't figure out how to get the constructor out of my interface through my implementing class.

        [DirectorySchemaAttribute("group")]
        public interface IGroup
        {
        }

        internal class Group : IGroup
        {
            internal Group(DirectoryEntry entry)
            {
                NativeEntry = entry;
                Domain = NativeEntry.Path;
            }
            // Implementing IGroup interface...
        }

    Within the ToList() method of my IDirectorySource<T> interface implementation, I look for the constructor of T as follows:

        internal class DirectorySource<T> : IDirectorySource<T>
        {
            // Implementing properties...
            // Methods implementations...

            public IList<T> ToList()
            {
                Type t = typeof(T);
                // Let's assume we're always working with the IGroup interface as T here to keep it simple.
                // So my DirectorySchema property is already set to "group",
                // and my DirectorySearcher is already instantiated in the DirectorySource<T> constructor.
                Searcher.Filter = string.Format("(&(objectClass={0}))", DirectorySchema);

                ConstructorInfo ctor = null;
                ParameterInfo[] ctorParams = null;

                // This is where I get stuck for now... Please see the helper method.
                GetConstructor(out ctor, out ctorParams, new Type[] { typeof(DirectoryEntry) });

                SearchResultCollection results = null;
                try
                {
                    results = Searcher.FindAll();
                }
                catch (DirectoryServicesCOMException ex)
                {
                    // Handling exception here...
                }

                foreach (SearchResult entry in results)
                    entities.Add((T)ctor.Invoke(new object[] { entry.GetDirectoryEntry() }));

                return entities;
            }

            private void GetConstructor(out ConstructorInfo constructor, out ParameterInfo[] parameters, Type[] paramsTypes)
            {
                Type t = typeof(T);
                ConstructorInfo[] ctors = t.GetConstructors(
                    BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public);

                constructor = null;
                parameters = null;

                foreach (ConstructorInfo c in ctors)
                {
                    parameters = c.GetParameters();
                    if (parameters.GetLength(0) == paramsTypes.GetLength(0))
                    {
                        bool found = true;
                        for (int index = 0; index < parameters.GetLength(0); ++index)
                        {
                            if (parameters[index].ParameterType != paramsTypes[index])
                                found = false;
                        }
                        if (found)
                        {
                            constructor = c;
                            return;
                        }
                    }
                }
                // Processing constructor-not-found message here...
            }
        }

    My problem is that T will always be an interface, so it never finds a constructor. Is there a better way than looping through all of my assembly's types for implementations of my interface? I don't care about rewriting a piece of my code; I want to do it right in the first place so that I won't need to come back again and again.

    EDIT #1: Following Sam's advice, I will for now go with the IName and Name convention. However, is there some way to improve my code? Thanks! =)
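
    A minimal sketch of that convention-based lookup (the ResolveImplementation name is mine, and it assumes the concrete class lives in the same assembly and namespace as the interface), so the constructor search runs against Group instead of IGroup:

        // Map an interface type like IGroup to its concrete class Group by naming convention.
        private static Type ResolveImplementation(Type interfaceType)
        {
            if (!interfaceType.IsInterface)
                return interfaceType;

            // "IGroup" -> "Group", looked up in the interface's own namespace
            string concreteName = interfaceType.Namespace + "." + interfaceType.Name.Substring(1);
            Type concrete = interfaceType.Assembly.GetType(concreteName, false);

            if (concrete == null || !interfaceType.IsAssignableFrom(concrete))
                throw new InvalidOperationException(
                    "No implementation found for " + interfaceType.FullName);
            return concrete;
        }

        // Usage inside ToList(), before searching for the constructor:
        //   Type t = ResolveImplementation(typeof(T));
        //   ConstructorInfo ctor = t.GetConstructor(
        //       BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.Public,
        //       null, new Type[] { typeof(DirectoryEntry) }, null);

    This keeps the lookup to a single GetType call instead of a scan over every type in the assembly.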

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >