Search Results

Search found 46894 results on 1876 pages for 'java native interface'.


  • Writing OpenGL enabled GUI

    - by Jaen
    I am exploring the possibility of writing a kind of notebook analogue that would reproduce the look and feel of using a traditional notebook, but with the added benefit of customizing the page in ways you can't on paper: ask the program to lay ruled paper here, grid paper there, paste an image, insert a recording from the built-in camera, try handwriting recognition on the tablet input, insert some LaTeX for neat formulas, and so on. I'm pretty interested in developing it just to see if writing notes on a computer can come anywhere close to the comfort that plain paper and pencil offer (hard to do, IMO), and I can always turn it in as a university C++ project, so double gain there. This type of project imposes certain requirements on the user interface: the user will be able to zoom, move and rotate the notebook as they wish, and I think it's sensible to delegate that to OpenGL, so the prospective GUI needs to work well with OpenGL (preferably being rendered in it); the interface should be navigable with as little keyboard input as the user wishes (perhaps incorporating some sort of gestures), up to limiting the keyboard keys to modifiers for pen movements and taps, which includes tablet and possibly multitouch support; the interface should keep out of the way where not needed, come up where needed, and be easily layerable; and the notebook sheet itself will be a container for objects representing the notebook blurbs, so it would be nice if the GUI could overlay frames over the exact parts of the OpenGL-drawn sheet to signify what can be done with a given part (moving, rotating, deleting, copying, editing, etc.) and its extents. In terms of interface it's probably going to end up similar to Alias SketchBook Pro: http://1.bp.blogspot.com/_GGxlzvZW-CY/SeKYA_oBdSI/AAAAAAAAErE/J6A0kyXiuqA/s400/Autodesk_Alias_SketchBook_Pro_2.jpg As far as toolkits go I'm considering Qt and nui, but I'm not really aware how well they would match these requirements or how well they would handle such an application. As far as I know you can coerce Qt into doing widget drawing with OpenGL, but on the other hand I've heard that its signal-slot framework isn't exactly optimal and requires its own preprocessor, and I don't know how hard it would be to build all the custom widgets I would need (say a color wheel, ruler, blurb frames, blurb selection, tablet-targeted pop-up menu, etc.) within the constraints of Qt. Also, quite a few Qt programs I've had on my machine seemed really sluggish, but that may be attributable to my old PC or to programmers using Qt suboptimally rather than to the framework itself. As for nui (http://www.libnui.net/), I know it's also cross-platform and covers all the basic things you would require of a GUI toolkit, and its biggest plus is that it is OpenGL-enabled from the start, but I don't know how it fares with custom widgets and other facets, and it certainly has a smaller user base and less elaborate documentation than Qt. The question goes like this: does either of these toolkits fulfill (preferably all of) the requirements, is there a well-fitting toolkit I haven't come across, or should I just roll up my sleeves, get SFML (or maybe Clutter would be more suited to this?) and something like FastDelegates or libsigc++, and program the GUI framework from the ground up myself? I would be very glad if anyone has had experience with a similar GUI project and can offer comments on how well these toolkits hold up, or whether it is worthwhile to pursue my own GUI toolkit in this case.
Sorry for longwindedness, duh.

    Read the article

  • Why are abstract classes necessary?

    - by bala3569
    1. What is the point of creating a class that can't be instantiated? Most commonly, to serve as a base class or interface (some languages have a separate interface construct, some don't); it doesn't know the implementation, which is to be provided by the subclasses / implementing classes. 2. Why would anybody want such a class? For abstraction and re-use. 3. What is the situation in which abstract classes become NECESSARY? Can anyone illustrate it with an example?
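
    To make point 3 concrete, here is a minimal, hypothetical C# sketch (my own illustration, not part of the original question): an abstract base class owns a workflow but cannot supply every step itself, which is the classic situation where an abstract class is effectively necessary.

        // Hypothetical illustration: the template-method pattern is a classic case
        // where an abstract class is genuinely needed.
        using System;

        abstract class ReportExporter
        {
            // Shared, concrete workflow lives in the base class...
            public void Export(string data)
            {
                string formatted = Format(data); // ...but the individual steps must come
                Write(formatted);                // from a concrete subclass.
                Console.WriteLine("Export finished.");
            }

            // No sensible default exists for these, so the base class cannot be instantiated.
            protected abstract string Format(string data);
            protected abstract void Write(string formatted);
        }

        class CsvExporter : ReportExporter
        {
            protected override string Format(string data) => data.Replace('|', ',');
            protected override void Write(string formatted) => Console.WriteLine(formatted);
        }

        class Program
        {
            static void Main()
            {
                // new ReportExporter() would not compile; only concrete subclasses can be created.
                ReportExporter exporter = new CsvExporter();
                exporter.Export("a|b|c");
            }
        }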

    Read the article

  • Setting up layout/events on iPhone

    - by Mohit Deshpande
    I am using an open-source toolchain to compile my iPhone apps, so I have no Interface Builder or Xcode. How would I set up the layout of widgets like UIButton, UITextView, etc.? Also, how would I add an event handler to those UI widgets? Please remember that I don't have Interface Builder or Xcode.

    Read the article

  • Hadoop on Azure C# Isotope

    - by Sreesankar
    During the initial release of HadoopOnAzure, Microsoft provided the C# Isotope SDK as a programmatic interface to the Hadoop cluster on Azure. After the HDInsight release this was removed from the downloads. Moreover, when trying the previous version of the SDK we get a 500 - Internal Server Error. Any idea if this service has been disabled? If so, what's the alternative way to programmatically interface with the HDInsight cluster on Azure?

    Read the article

  • Repository Pattern : Add Item

    - by No Body
    Just need to clarify this one. If I have the interface below: public interface IRepository<T> { T Add(T entity); } then, when implementing it, is checking for duplication (whether the entity already exists) before persisting still the job of the repository, or should it be handled somewhere else?
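
    Not from the original post, but one commonly suggested split of responsibilities, sketched in C# with hypothetical types: keep the repository focused on persistence (plus a simple existence check) and let an application or domain service decide what counts as a duplicate.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface IRepository<T>
        {
            T Add(T entity);
            bool Exists(Func<T, bool> predicate); // assumed extension of the original interface
        }

        // In-memory implementation, used only to illustrate the division of responsibilities.
        public class InMemoryRepository<T> : IRepository<T>
        {
            private readonly List<T> _items = new List<T>();
            public T Add(T entity) { _items.Add(entity); return entity; }
            public bool Exists(Func<T, bool> predicate) => _items.Any(predicate);
        }

        public class Customer { public string Email { get; set; } }

        // The "is this a duplicate?" business rule lives in a service, not in the repository.
        public class CustomerRegistration
        {
            private readonly IRepository<Customer> _repository;
            public CustomerRegistration(IRepository<Customer> repository) { _repository = repository; }

            public Customer Register(Customer customer)
            {
                if (_repository.Exists(c => c.Email == customer.Email))
                    throw new InvalidOperationException("A customer with this email already exists.");
                return _repository.Add(customer);
            }
        }

    The opposite choice (letting Add itself reject duplicates) is also defensible; the sketch simply shows one way to keep the rule out of the persistence layer.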

    Read the article

  • How to learn a new programming language? And how to choose the appropriate language?

    - by Sebi
    Unfortunately my last question was closed, so I reformulated it: I know this question has been asked many times and can't be answered definitively, but I'm not looking for a single name, rather for advice in my situation. I learned programming with Java and have now been developing in Java for roughly 5 years (at university), and I think my programming skills there are really OK/average. I also have a little experience in C/C++ and C#. Now I have some spare time and I'd like to learn a new language or deepen my knowledge of Java/C/C++. But how do I choose the right language to learn? I'd like to learn a language that will be useful in the future for working in the software development business. I know there is no single answer, but I'm sure you could mention some languages that are more useful than others. And what's the best way to learn such a language efficiently (bearing in mind that one has already learned some other languages)? Just doing tutorials? Trying to implement a project? Porting a Java project to the new language?

    Read the article

  • Why can a List(Of SomeObject) not be converted to an IEnumerable(Of ISomeInterface) while SomeObject implements ISomeInterface?

    - by ropstah
    Please see the last comment in the GetListChildren() function: "Why can a List(Of Page) not be converted to an IEnumerable(Of IListHasChildren)?"

        Interface IListHasChildren
            Function GetListChildren() As IEnumerable(Of IListHasChildren)
        End Interface

        Public Class Page : Implements IListHasChildren

            Public Function GetChildren() As List(Of Page)
                ' return List(Of Page) with children
            End Function

            Public Function GetListChildren() Implements IListHasChildren.GetListChildren
                Return Me.GetChildren() ' This cannot be converted to IEnumerable(Of IListHasChildren)??
            End Function

        End Class
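
    For reference, and not part of the original question, here is the same situation expressed in C#: on .NET 4.0 and later IEnumerable(Of T) / IEnumerable<T> is covariant, so the conversion is allowed, while on earlier versions an explicit cast of each element is a common workaround. A minimal sketch, assuming Page implements IListHasChildren:

        using System.Collections.Generic;
        using System.Linq;

        interface IListHasChildren
        {
            IEnumerable<IListHasChildren> GetListChildren();
        }

        class Page : IListHasChildren
        {
            List<Page> GetChildren() => new List<Page>();

            public IEnumerable<IListHasChildren> GetListChildren()
            {
                // .NET 4.0+: IEnumerable<out T> is covariant, so a List<Page> can be returned
                // directly where IEnumerable<IListHasChildren> is expected.
                return GetChildren();

                // Pre-4.0 (no variance), convert the elements explicitly instead:
                // return GetChildren().Cast<IListHasChildren>();
            }
        }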

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How to Upload a file from client to server using OFBIZ?

    - by SIVAKUMAR.J
    I'm new to ofbiz so try to keep your answer as simple as possibly. If you can give examples that would be kind. My problem is I created a project inside the ofbiz/hot-deploy folder namely productionmgntSystem. Inside the folder ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem I created a file app_details_1.ftl. The following are the code of this file <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <title>Insert title here</title> <script TYPE="TEXT/JAVASCRIPT" language=""JAVASCRIPT"> function uploadFile() { //alert("Before calling upload.jsp"); window.location='<@ofbizUrl>testing_service1</@ofbizUrl>' } </script> </head> <!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> --> <form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> <center style="height: 299px; "> <table border="0" style="height: 177px; width: 788px"> <tr style="height: 115px; "> <td style="width: 103px; "> <td style="width: 413px; "><h1>APPLICATION DETAILS</h1> <td style="width: 55px; "> </tr> <tr> <td style="width: 125px; ">Application name : </td> <td> <input name="app_name_txt" id="txt_1" value=" " /> </td> </tr> <tr> <td style="width: 125px; ">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: </td> <td> <input type="file" name="filename"/> </td> </tr> <tr> <td> <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> --> <input type="submit" name="logout_cmd" value="logout"/> </td> <td> <!-- <input type="submit" name="upload_cmd" value="Submit" /> --> <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/> </td> </tr> </table> </center> </form> </html> the following coding is present in the file ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml ...... ....... ........ <request-map uri="testing_service1"> <security https="true" auth="true"/> <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/> <response name="ok" type="view" value="ok_view"/> <response name="exception" type="view" value="exception_view"/> </request-map> .......... ............ .......... <view-map name="ok_view" type="ftl" page="ok_view.ftl"/> <view-map name="exception_view" type="ftl" page="exception_view.ftl"/> ................ ............. ............. 
The following are the coding present in the file ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java package org.ofbiz.productionmgntSystem.web_app_req; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.DataInputStream; import java.io.FileOutputStream; import java.io.IOException; public class WebServices1 { public static String testingService(HttpServletRequest request, HttpServletResponse response) { //int i=0; String result="ok"; System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- Start"); String contentType=request.getContentType(); System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- contentType : "+contentType); String str=new String(); // response.setContentType("text/html"); //PrintWriter writer; if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) { System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) after if (contentType != null)"); try { // writer=response.getWriter(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try Start"); DataInputStream in = new DataInputStream(request.getInputStream()); int formDataLength = request.getContentLength(); byte dataBytes[] = new byte[formDataLength]; int byteRead = 0; int totalBytesRead = 0; //this loop converting the uploaded file into byte code while (totalBytesRead < formDataLength) { byteRead = in.read(dataBytes, totalBytesRead,formDataLength); totalBytesRead += byteRead; } String file = new String(dataBytes); //for saving the file name String saveFile = file.substring(file.indexOf("filename=\"") + 10); saveFile = saveFile.substring(0, saveFile.indexOf("\n")); saveFile = saveFile.substring(saveFile.lastIndexOf("\\")+ 1,saveFile.indexOf("\"")); int lastIndex = contentType.lastIndexOf("="); String boundary = contentType.substring(lastIndex + 1,contentType.length()); int pos; //extracting the index of file pos = file.indexOf("filename=\""); pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; int boundaryLocation = file.indexOf(boundary, pos) - 4; int startPos = ((file.substring(0, pos)).getBytes()).length; int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length; //creating a new file with the same name and writing the content in new file FileOutputStream fileOut = new FileOutputStream("/"+saveFile); fileOut.write(dataBytes, startPos, (endPos - startPos)); fileOut.flush(); fileOut.close(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try End"); } catch(IOException ioe) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch IOException"); //ioe.printStackTrace(); return("exception"); } catch(Exception ex) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch Exception"); 
return("exception"); } } else { System.out.println("\n\n\t********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) else part"); result="exception"; } System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- End"); return(result); } } I want to upload a file to the server. The file is get from user " tag in the "app_details_1.ftl" file & it is updated into the server by using the method "testingService(HttpServletRequest request, HttpServletResponse response)" in the class "WebServices1". But the file is not uploaded. Give me a good solution for uploading a file to the server.

    Read the article

  • Formatting Dates, Times and Numbers in ASP.NET

    Formatting is the process of converting a variable from its native type into a string representation. Any time you display a DateTime or numeric variable in an ASP.NET page, you are formatting that variable from its native type into some sort of string representation. How a DateTime or numeric variable is formatted depends on the culture settings and the format string. Because dates and numeric values are formatted differently across cultures, the .NET Framework bases its formatting on the specified culture settings. By default, the formatting routines use the culture settings defined on the web server, but you can indicate that a particular culture be used any time you format. In addition to the culture settings, formatting is also affected by a format string, which spells out the formatting details to apply. The .NET Framework contains a bounty of format strings. There are standard format strings, which are typically a single letter that applies detailed formatting logic. For example, the "C" format specifier will format a numeric type as a currency value; the "Y" format specifier displays the month name and four-digit year of the specified DateTime value. There are also custom format strings, which apply a very specific formatting rule. These custom format strings can be put together to build more intricate formats. For instance, the format string "dddd, MMMM d" displays the full day of the week name followed by a comma, the full name of the month, and the day of the month. For more involved formatting scenarios, where neither the standard nor the custom format strings cut the mustard, you can always create your own formatting extension methods. This article explores the standard format strings for dates, times and numbers and includes a number of custom formatting methods I've created and use in my own projects. There's also a demo application you can download that lets you specify a culture and then shows you the output of the standard format strings for the selected culture. Read on to learn more! Read More >
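
    As a quick illustration (my own sketch, not part of the article), the standard and custom format strings mentioned above can be exercised in C# like this; the exact output depends on the culture in effect:

        using System;
        using System.Globalization;

        class FormattingDemo
        {
            static void Main()
            {
                var price = 1234.5m;
                var date = new DateTime(2013, 7, 1);
                var enUS = CultureInfo.GetCultureInfo("en-US");
                var deDE = CultureInfo.GetCultureInfo("de-DE");

                // Standard format strings: a single letter that applies detailed logic.
                Console.WriteLine(price.ToString("C", enUS)); // $1,234.50
                Console.WriteLine(price.ToString("C", deDE)); // 1.234,50 € (same value, different culture)
                Console.WriteLine(date.ToString("Y", enUS));  // July 2013 (month name and four-digit year)

                // Custom format strings: composed from individual specifiers.
                Console.WriteLine(date.ToString("dddd, MMMM d", enUS)); // Monday, July 1
            }
        }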

    Read the article

  • Mobile Connections in Las Vegas April 17-21

    - by Wallym
    I'll be speaking at Mobile Connections in Las Vegas. The event is April 17-21 and is a cross-platform mobile event. There will be sessions on iOS, Android, WP7, Blackberry, and cross-platform tools. The sessions I am speaking on are: Introduction to Android via MonoDroid: This session will introduce writing native applications geared for the Android platform based on .NET/C#/Mono. We'll examine the overall architecture of MonoDroid, discuss how it integrates with Visual Studio, debug with MonoDroid, and look at a couple of example apps written with MonoDroid. This session is targeted at the .NET developer who wants to move to the Android mobile platform. While the session will be introductory for the Android platform, it will be intermediate/expert for those on the .NET platform. Web Development with HTML5 to target Android, iOS, iPad: This session will examine the features of the mobile browser and how developers can leverage it to build applications that target mobile devices. It is for developers and development managers looking to target Android, iPhone, WebKit-based devices, and other devices through the mobile web with the same application code, and for those looking to build mobile web apps that look like native apps. Attendees will be able to immediately begin building web applications that target the Android and iPhone platforms. The benefits of this approach are: easy cross-platform development; no requirement to learn Objective-C/Xcode or Java/Eclipse; applications are immediately upgradeable, with no requirement to go through the Marketplace or App Store of either platform; and web developers are easier to find than Objective-C, Blackberry, WebOS, or Java programmers. You can register for the event and get $100 off via this link.

    Read the article

  • How to Install a Wireless Card in Linux Using Windows Drivers

    - by Justin Garrison
    Linux has come a long way with hardware support, but if you have a wireless card that still does not have a native Linux driver, you might be able to get the card working with a Windows driver and ndiswrapper. Using a Windows driver inside of Linux may also give you faster transfer rates or better encryption support, depending on your wireless card. If your wireless card is already working, installing the Windows driver just for fun is not recommended, because it could cause a conflict with the native Linux driver.

    Read the article

  • What's Microsoft's strategy on Windows CE development?

    - by Heinzi
    Lots of specialized mobile devices use Windows CE or Windows Mobile. I'm not talking about smartphones here -- I know that Windows Phone 7 is Microsoft's current technology of choice there. I'm talking about barcode readers, embedded devices, industrial PDAs with specialized hardware, etc... the kind of devices (Example 1, Example 2) where Windows Phone Silverlight development is not an option (no P/Invoke to access the hardware, etc.). Since direct Compact Framework support has been dropped in Visual Studio 2010, the only option for developing for these devices currently is to use outdated development tools (VS 2008), which already start to cause trouble on modern machines (e.g. there's no supported way to make the Windows Mobile Device Emulator's network stack work on Windows 7). Thus, my question is: what are Microsoft's plans regarding these mobile devices? Will they allow native applications on Windows Phone, so that, for example, barcode reader drivers can be developed that can be accessed from Silverlight applications? Will they re-add "native" Compact Framework support to Visual Studio and just haven't found the time yet? Or will they leave this niche market?

    Read the article

  • Fun with Declarative Components

    - by [email protected]
    Use case background I have been asked on a number of occasions if our selectOneChoice component could allow random text to be entered, as well as having a list of selections available. Unfortunately, the selectOneChoice component only allows entry via the dropdown selection list and doesn't allow text entry. I was thinking of possible solutions and thought that this might make a good example for using a declarative component.My initial idea My first thought was to use an af:inputText to allow the text entry, and an af:selectOneChoice with mode="compact" for the selections. To get it to layout horizontally, we would want to use an af:panelGroupLayout with layout="horizontal". To get the label for this to line up correctly, we'll need to wrap the af:panelGroupLayout with an af:panelLabelAndMessage. This is the basic structure: <af:panelLabelAndMessage> <af:panelGroupLayout layout="horizontal"> <af:inputText/> <af:selectOneChoice mode="compact"/> </af:panelgroupLayout></af:panelLabelAndMessage> Make it into a declarative component One of the steps to making a declarative component is deciding what attributes we want to be able to specify. To keep this example simple, let's just have: 'label' (the label of our declarative component)'value' (what we want to bind to the value of the input text)'items' (the select items in our dropdown) Here is the initial declarative component code (saved as file "inputTextWithChoice.jsff"): <?xml version='1.0' encoding='UTF-8'?><!-- Copyright (c) 2008, Oracle and/or its affiliates. All rights reserved. --><jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1" xmlns:f="http://java.sun.com/jsf/core" xmlns:af="http://xmlns.oracle.com/adf/faces/rich"> <jsp:directive.page contentType="text/html;charset=utf-8"/> <af:componentDef var="attrs" componentVar="comp"> <af:xmlContent> <component xmlns="http://xmlns.oracle.com/adf/faces/rich/component"> <description>Input text with choice component.</description> <attribute> <description>Label</description> <attribute-name>label</attribute-name> <attribute-class>java.lang.String</attribute-class> </attribute> <attribute> <description>Value</description> <attribute-name>value</attribute-name> <attribute-class>java.lang.Object</attribute-class> </attribute> <attribute> <description>Choice Select Items Value</description> <attribute-name>items</attribute-name> <attribute-class>[[Ljavax.faces.model.SelectItem;</attribute-class> </attribute> </component> </af:xmlContent> <af:panelLabelAndMessage id="myPlm" label="#{attrs.label}" for="myIt"> <af:panelGroupLayout id="myPgl" layout="horizontal"> <af:inputText id="myIt" value="#{attrs.value}" partialTriggers="mySoc" label="myIt" simple="true" /> <af:selectOneChoice id="mySoc" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <f:selectItems id="mySIs" value="#{attrs.items}" /> </af:selectOneChoice> </af:panelGroupLayout> </af:panelLabelAndMessage> </af:componentDef></jsp:root> By having af:inputText and af:selectOneChoice both have the same value, then (assuming that this passed in as an EL expression) selecting something in the selectOneChoice will update the value in the af:inputText. To use this declarative component in a jspx page: <af:declarativeComponent id="myItwc" viewId="inputTextWithChoice.jsff" label="InputText with Choice" value="#{demoInput.choiceValue}" items="#{demoInput.selectItems}" /> Some problems arise At first glace, this seems to be functioning like we want it to. 
However, there is a side effect to having the af:inputText and af:selectOneChoice share a value, if one changes, so does the other. The problem here is that when we update the af:inputText to something that doesn't match one of the selections in the af:selectOneChoice, the af:selectOneChoice will set itself to null (since the value doesn't match one of the selections) and the next time the page is submitted, it will submit the null value and the af:inputText will be empty. Oops, we don't want that. Hmm, what to do. Okay, how about if we make sure that the current value is always available in the selection list. But, lets not render it if the value is empty. We also need to add a partialTriggers attribute so that this gets updated when the af:inputText is changed. Plus, we really don't want to select this item so let's disable it. <af:selectOneChoice id="mySoc" partialTriggers="myIt" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <af:selectItem id="mySI" label="Selected:#{attrs.value}" value="#{attrs.value}" disabled="true" rendered="#{!empty attrs.value}"/> <af:separator id="mySp" /> <f:selectItems id="mySIs" value="#{attrs.items}" /></af:selectOneChoice> That seems to be working pretty good. One minor issue that we probably can't do anything about is that when you enter something in the inputText and then click on the selectOneChoice, the popup is displayed, but then goes away because it has been replaced via PPR because we told it to with the partialTriggers="myIt". This is not that big a deal, since if you are entering something manually, you probably don't want to select something from the list right afterwards. Making it look like a single component. Now, let's play around a bit with the contentStyle of the af:inputText and the af:selectOneChoice so that the compact icon will layout inside the af:inputText, making it look more like an af:selectManyChoice. We need to add some padding-right to the af;inputText so there is space for the icon. These adjustments were for the Fusion FX skin. <af:inputText id="myIt" partialTriggers="mySoc" autoSubmit="true" contentStyle="padding-right: 15px;" value="#{attrs.value}" label="myIt" simple="true" /><af:selectOneChoice id="mySoc" partialTriggers="myIt" contentStyle="position: relative; top: -2px; left: -19px;" label="mySoc" simple="true" mode="compact" value="#{attrs.value}" autoSubmit="true"> <af:selectItem id="mySI" label="Selected:#{attrs.value}" value="#{attrs.value}" disabled="true" rendered="#{!empty attrs.value}"/> <af:separator id="mySp" /> <f:selectItems id="mySIs" value="#{attrs.items}" /></af:selectOneChoice> There you have it, a declarative component that allows for suggested selections, but also allows arbitrary text to be entered. This could be used for search field, where the 'items' attribute could be populated with popular searches. Lines of java code written: 0

    Read the article

  • Future of BizTalk

    - by Vamsi Krishna
    The future of BizTalk- The last TechEd Conference was a very important one from BizTalk perspective. Microsoft will continue innovating BizTalk and releasing BizTalk versions for “for next few years to come”. So, all the investments that clients have made so far will continue giving returns. Three flavors of BizTalk: BizTalk On-Premise, BizTalk as IaaS and BizTalk as PaaS (Azure Service Bus).Windows Azure IaaS and How It WorksDate: June 12, 2012Speakers: Corey SandersThis session covers the significant investments that Microsoft is making in our Windows Azure Infrastructure-as-a-Service (IaaS) solution and how it http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR201 TechEd provided two sessions around BizTalk futures:http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012AZR 207: Application Integration Futures - The Road Map and What's Next on Windows Azure http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR207AZR211: Building Integration Solutions Using Microsoft BizTalk On-Premises and on Windows Azurehttp://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/AZR211Here is are some highlights from the two sessions at TechEd. Bala Sriram, Director of Development for BizTalk provided the introduction at the road map session and covered some key points around BizTalk:More then 12,000 Customers 79% of BizTalk users have already adopted BizTalk Server 2010 BizTalk Server will be released for “years to come” after this release. I.e. There will be more releases of BizTalk after BizTalk 2010 R2. BizTalk 2010 R2 will be releasing 6 months after Windows 8 New Azure (Cloud-based) BizTalk scenarios will be available in IaaS and PaaS BizTalk Server on-premises, IaaS, and PaaS are complimentary and designed to work together BizTalk Investments will be taken forward The second session was mainly around drilling in and demonstrating the up and coming capabilities in BizTalk Server on-premise, IaaS, and PaaS:BizTalk IaaS: Users will be able to bring their own or choose from a Azure IaaS template to quickly provision BizTalk Server virtual machines (VHDs) BizTalk Server 2010 R2: Native adapter to connect to Azure Service Bus Queues, Topics and Relay. Native send port adapter for REST. Demonstrated this in connecting to Salesforce to get and update Salesforce information. ESB Toolkit will be incorporate are part of product and setup BizTalk PaaS: HTTP, FTP, Service Bus, LOB Application connectivity XML and Flat File processing Message properties Transformation, Archiving, Tracking Content based routing

    Read the article

  • What are the factors affecting a new programming language?

    - by Saurav Sengupta
    I am developing a new general-purpose programming language of my own design. It's currently my own personal project. I have read of some experts saying that new languages do not usually survive (unfortunately I can't find a reference to that right now). What are the most substantial problems that a new language faces? The language syntax is similar to C/Python families, it does not use S-expressions, and it is an imperative language, but I'm doing first-class functions in it to provide the facilities of currying. In particular, I am concentrating on translating the source language to an intermediate language for execution by an interpreter, but I'm not in a position to translate to native code yet. What would be the issues with that? I've not personally used many non-native code languages, so I'm not well aware of the performance issues on today's machines. I also can't decide upon a lexer and parser generator. What would be the pros and cons of Flex and Yacc vs. hand-made? And what benefits will LLVM provide? I need to get the interpreter ready as quickly as possible. Finally, what factors will affect the language's use post release? I am planning a small library of essentials and full documentation for the first phase.
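    As a rough illustration of the hand-written option weighed in the question above (versus Flex/Yacc), here is a minimal, hypothetical lexer sketch in Java; the token categories and the sample input are invented for illustration and are not tied to the language being designed.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal hand-rolled lexer sketch: one pass over the source, emitting
// NUMBER, IDENT and single-character OP tokens. A real lexer would add
// keywords, string literals, comments and error reporting.
public class TinyLexer {

    static final class Token {
        final String type;
        final String text;
        Token(String type, String text) { this.type = type; this.text = text; }
        @Override public String toString() { return type + "(" + text + ")"; }
    }

    static List<Token> lex(String src) {
        List<Token> tokens = new ArrayList<>();
        int i = 0;
        while (i < src.length()) {
            char c = src.charAt(i);
            if (Character.isWhitespace(c)) {
                i++;                                              // skip whitespace
            } else if (Character.isDigit(c)) {
                int start = i;
                while (i < src.length() && Character.isDigit(src.charAt(i))) i++;
                tokens.add(new Token("NUMBER", src.substring(start, i)));
            } else if (Character.isLetter(c) || c == '_') {
                int start = i;
                while (i < src.length()
                        && (Character.isLetterOrDigit(src.charAt(i)) || src.charAt(i) == '_')) i++;
                tokens.add(new Token("IDENT", src.substring(start, i)));
            } else {
                tokens.add(new Token("OP", String.valueOf(c)));   // any other char is an operator
                i++;
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(lex("x = y + 42"));  // [IDENT(x), OP(=), IDENT(y), OP(+), NUMBER(42)]
    }
}
```

    A hand-written lexer like this is easy to debug and customise; generators such as Flex tend to earn their keep once the token set grows large or needs heavy lookahead.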

    Read the article

  • Android device - C++ OpenGL 2: eglCreateWindowSurface invalid

    - by ThreaderSlash
    I am trying to debug and run OGLES on Native C++ in my Android device in order to implement a native 3D game for mobile smart phones. The point is that I got an error and see no reason for that. Here is the line from the code that the debugger complains: mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); And this is the error message: Invalid arguments ' Candidates are: void * eglCreateWindowSurface(void *, void *, unsigned long int, const int *) ' --x-- Here is the declaration: android_app* mApplication; EGLDisplay mDisplay; EGLint lFormat, lNumConfigs, lErrorResult; EGLConfig lConfig; // Defines display requirements. 16bits mode here. const EGLint lAttributes[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_BLUE_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_RED_SIZE, 5, EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_RENDER_BUFFER, EGL_BACK_BUFFER, EGL_NONE }; // Retrieves a display connection and initializes it. packt_Log_debug("Connecting to the display."); mDisplay = eglGetDisplay(EGL_DEFAULT_DISPLAY); if (mDisplay == EGL_NO_DISPLAY) goto ERROR; if (!eglInitialize(mDisplay, NULL, NULL)) goto ERROR; // Selects the first OpenGL configuration found. packt_Log_debug("Selecting a display config."); if(!eglChooseConfig(mDisplay, lAttributes, &lConfig, 1, &lNumConfigs) || (lNumConfigs <= 0)) goto ERROR; // Reconfigures the Android window with the EGL format. packt_Log_debug("Configuring window format."); if (!eglGetConfigAttrib(mDisplay, lConfig, EGL_NATIVE_VISUAL_ID, &lFormat)) goto ERROR; ANativeWindow_setBuffersGeometry(mApplication->window, 0, 0, lFormat); // Creates the display surface. packt_Log_debug("Initializing the display."); mSurface = eglCreateWindowSurface(mDisplay, lConfig, mApplication->window, NULL); --x-- Hope someone here can shed some light on it.

    Read the article

  • Microsoft Terminology: .NET C++ vs. traditional C++

    - by Mike Clark
    I've recently been working with a team that's using both .NET C++ and pre-.NET C++. I fully understand the technical differences between the two technologies. However, I sometimes feel like I'm floundering when it comes to the terminology used to differentiate the two. Example: Say we have two projects: ProjectA contains "C++" code that builds a .NET assembly DLL. ProjectB contains Visual C++ code that builds a traditional native Windows DLL. What is the best way to succinctly and terminologically draw a distinction between the two projects? Again, I'm not asking for an in-depth technical description of the differences between the two technologies. I'm just looking for names and labels. This is how, today, I might try to make the distinction when talking to someone: "ProjectA is a managed .NET C++ project" and "ProjectB is an unmanaged native C++ DLL project." However I am not at all certain that this terminology is ideal, or even correct. Please describe what you feel the ideal language to use in this situation (or similar situations) might be. Feel free to motivate your answer.

    Read the article

  • Self-signed certificates for a known community

    - by costlow
    Recently announced changes scheduled for Java 7 update 51 (January 2014) have established that the default security slider will require code signatures and the Permissions Manifest attribute. Code signatures are a common practice recommended in the industry because they help determine that the code your computer will run is the same code that the publisher created. This post is written to help users that need to use self-signed certificates without involving a public Certificate Authority. The role of self-signed certificates within a known community: you may still use self-signed certificates within a known community. The difference between self-signed and purchased-from-CA is that your users must import your self-signed certificate to indicate that it is valid, whereas Certificate Authorities are already trusted by default. This works for known communities where people will trust that my certificate is mine, but does not scale widely where I cannot actually contact or know the systems that will need to trust my certificate. Public Certificate Authorities are widely trusted already because they abide by many different requirements and frequent checks. An example would be students in a university class sharing their public certificates on a mailing list or web page, employees publishing on the intranet, or a system administrator rolling certificates out to end-users. Managed machines help this because you can automate the rollout, but they are not required -- the major point is simply that people will trust and import your certificate. How to distribute self-signed certificates for a known community: there are several steps required to distribute a self-signed certificate to users so that they will properly trust it. These steps are: creating a public/private key pair for signing; exporting your public certificate for others; importing your certificate onto machines that should trust you; and verifying the work on a different machine. Creating a public/private key pair for signing: having a public/private key pair will give you the ability both to sign items yourself and to issue a Certificate Signing Request (CSR) to a certificate authority. Create your public/private key pair by following the instructions for creating key pairs. Every Certificate Authority that I looked at provided similar instructions, but for the sake of cohesiveness I will include the commands that I used here. Generate the key pair: keytool -genkeypair -alias erikcostlow -keyalg EC -keysize 571 -validity 730 -keystore javakeystore_keepsecret.jks Provide a good password for this file. The alias "erikcostlow" is my name and therefore easy to remember. Substitute your name or something like "mykey." The keyalg of EC (Elliptic Curve) and keysize of 571 will give your key good strength. All keys are set to expire. Two years or 730 days is a reasonable compromise between not-long-enough and too-long. Most public Certificate Authorities will sign something for one to five years. You will be placing your keys in javakeystore_keepsecret.jks -- this file will contain private keys and therefore should not be shared. If someone else gets these private keys, they can impersonate your signature. Please be cautious about automated cloud backup systems and private key stores. Answer all the questions. It is important to provide good answers because you will stick with them for the "-validity" days that you specified above.
    What is your first and last name?  [Unknown]:  First Last
    What is the name of your organizational unit?  [Unknown]:  Line of Business
    What is the name of your organization?  [Unknown]:  MyCompany
    What is the name of your City or Locality?  [Unknown]:  City Name
    What is the name of your State or Province?  [Unknown]:  CA
    What is the two-letter country code for this unit?  [Unknown]:  US
    Is CN=First Last, OU=Line of Business, O=MyCompany, L=City, ST=CA, C=US correct?  [no]:  yes
    Enter key password for <erikcostlow>  (RETURN if same as keystore password):
    Verify your work: keytool -list -keystore javakeystore_keepsecret.jks. You should see your new key pair. Exporting your public certificate for others: Public Key Infrastructure relies on two simple concepts: the public key may be made public and the private key must be private. By exporting your public certificate, you are able to share it with others who can then import the certificate to trust you. keytool -exportcert -keystore javakeystore_keepsecret.jks -alias erikcostlow -file erikcostlow.cer To verify this, you can open the .cer file by double-clicking it on most operating systems. It should show the information that you entered during the creation prompts. This is the file that you will share with others. They will use this certificate to prove that artifacts signed by this certificate came from you. If you do not manage machines directly, place the certificate file on an area that people within the known community should trust, such as an intranet page. Import the certificate onto machines that should trust you: in order to trust the certificate, people within your known network must import your certificate into their keystores. The first step is to verify that the certificate is actually yours, which can be done through any out-of-band channel: email, phone, in person, etc. Known networks can usually do this. Determine the right keystore: for an individual user looking to trust another, the correct file is within that user’s directory, e.g. USER_HOME\AppData\LocalLow\Sun\Java\Deployment\security\trusted.certs. For system-wide installations, Java’s Certificate Authorities are in JAVA_HOME, e.g. C:\Program Files\Java\jre8\lib\security\cacerts. File paths for Mac and Linux are included in the link above. Follow the instructions to import the certificate into the keystore: keytool -importcert -keystore THEKEYSTOREFROMABOVE -alias erikcostlow -file erikcostlow.cer In this case, I am still using my name for the alias because it’s easy for me to remember. You may also use an alias of your company name. Scaling distribution of the import: the easiest way to apply your certificate across many machines is to just push the .certs or cacerts file onto them. When doing this, watch out for any changes that people would have made to this file on their machines. Trusted.certs: when publishing into user directories, your file will overwrite any keys that the user has added since the last update. CACerts: it is best to re-run the import command with each installation rather than just overwriting the file. If you just keep the same cacerts file between upgrades, you will overwrite any CAs that have been added or removed. By re-importing, you stay up to date with changes. Verify work on a different machine: verification is a way of checking on the client machine to ensure that it properly trusts signed artifacts after you have added your signing certificate. Many people have started using deployment rule sets. You can validate the deployment rule set as follows: create and sign the deployment rule set on the computer that holds the private key.
    Copy the deployment rule set onto the different machine where you have imported the signing certificate. Verify that the Java Control Panel’s security tab shows your deployment rule set. Verifying an individual JAR file or multiple JAR files: you can test a certificate chain by using the jarsigner command: jarsigner -verify filename.jar. If the output does not say "jar verified" then run the following command to see why: jarsigner -verify -verbose -certs filename.jar. Check the output for the term “CertPath not validated.”
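    As a small supplement to the keytool steps above, the following hedged Java sketch checks whether the imported certificate is actually present in a truststore; the file path, password and alias are placeholders taken from the example above, so substitute your own values.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.Certificate;

// Sketch only: loads a JKS truststore and looks up an alias imported with
// "keytool -importcert". Path, password and alias are placeholder values.
public class TrustStoreCheck {
    public static void main(String[] args) throws Exception {
        String trustStorePath = "trusted.certs";      // or the cacerts file discussed above
        char[] password = "changeit".toCharArray();   // "changeit" is the JDK cacerts default

        KeyStore store = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(trustStorePath)) {
            store.load(in, password);
        }

        Certificate cert = store.getCertificate("erikcostlow");
        System.out.println(cert != null
                ? "Certificate found, type " + cert.getType()
                : "Alias not present; the import may not have run on this machine");
    }
}
```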

    Read the article

  • .NET access to the GPU for compute purposes

    - by Daniel Moth
    In the distant past I talked about GPGPU and Microsoft's then approach of DirectCompute. Since then, of course, C++ AMP has come out with Visual Studio 11, so there is a mainstream, easier way for developers to access the GPU for compute purposes, using C++. The question occasionally arises of how a .NET developer can access the GPU for compute purposes from their C# (or VB) code. The answer is by interoping from the managed code to a native DLL and using C++ AMP in the native DLL. As a long-term .NET developer myself, I can tell you this is straightforward. Sure, there could have been a managed wrapper for C++ AMP, but honestly that is the reason we have interop – it doesn't make much sense to invest resources to solve a problem that is already solved (most developer customers would prefer investments in other areas of Visual Studio!). Besides, interoping from C# to C++ is much easier than interoping to some of the other older approaches of GPGPU programming ;-) To help you get started with the interop approach, Igor Ostrovsky has previously shared the "Hello World" version of interoping from C# to C++ AMP in his blog post: How to use C++ AMP from C# …we were then asked specifically about how to interop from C# to C++ AMP in a Metro style application on Windows 8, so Igor delivered again with this post: How to use C++ AMP from C# using WinRT Have fun! Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • 'Binary XML' for game data?

    - by bluescrn
    I'm working on a level editing tool that saves its data as XML. This is ideal during development, as it's painless to make small changes to the data format, and it works nicely with tree-like data. The downside, though, is that the XML files are rather bloated, mostly due to duplication of tag and attribute names, and also due to numeric data taking significantly more space than native datatypes would. A small level could easily end up as 1MB+. I want to get these sizes down significantly, especially if the system is to be used for a game on the iPhone or other devices with relatively limited memory. The optimal solution, for memory and performance, would be to convert the XML to a binary level format. But I don't want to do this. I want to keep the format fairly flexible. XML makes it very easy to add new attributes to objects, and give them a default value if an old version of the data is loaded. So I want to keep the hierarchy of nodes, with attributes as name-value pairs. But I need to store this in a more compact format - to remove the massive duplication of tag/attribute names. Maybe also give attributes native types so that, for example, floating-point data is stored as 4 bytes per float, not as a text string. Google/Wikipedia reveal that 'binary XML' is hardly a new problem - it's been solved a number of times already. Has anyone here got experience with any of the existing systems/standards? Are any ideal for game use, with a free, lightweight and cross-platform parser/loader library (C/C++) available? Or should I reinvent this wheel myself? Or am I better off forgetting the ideal, and just compressing my raw .xml data (it should pack well with zip-like compression), and just taking the memory/performance hit on-load?
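    To make the string-table idea concrete, here is a small hedged sketch (in Java rather than the C/C++ the question asks about): each tag or attribute name is written once and referenced by index, and float values are written as 4 raw bytes instead of text. It illustrates the technique only and is not one of the existing binary-XML standards.

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: compact node serialisation with a string table for names and
// native 4-byte floats for attribute values. The table itself would be
// written once per file, separately from the node records.
public class CompactNodeWriter {
    private final Map<String, Integer> stringTable = new LinkedHashMap<>();

    private int intern(String name) {
        return stringTable.computeIfAbsent(name, k -> stringTable.size());
    }

    public byte[] writeNode(String tag, Map<String, Float> attributes) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeShort(intern(tag));                 // tag name as a table index, 2 bytes
        out.writeShort(attributes.size());           // attribute count
        for (Map.Entry<String, Float> e : attributes.entrySet()) {
            out.writeShort(intern(e.getKey()));      // attribute name as a table index
            out.writeFloat(e.getValue());            // 4-byte float, not a text string
        }
        return bytes.toByteArray();
    }

    public Map<String, Integer> getStringTable() {
        return stringTable;                          // serialise this once, per file
    }
}
```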

    Read the article

  • Securing WebSocket applications on Glassfish

    - by Pavel Bucek
    Today we are going to cover deploying secured WebSocket applications on Glassfish and accessing these services using the WebSocket Client API. WebSocket server application setup Our server endpoint might look as simple as this: @ServerEndpoint("/echo") public class EchoEndpoint { @OnMessage   public String echo(String message) {     return message + " (from your server)";   } } Everything else must be configured at the container level. We can start with enabling SSL, which will require web.xml to be added to your project. For starters, it might look as follows: <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee">   <security-constraint>     <web-resource-collection>       <web-resource-name>Protected resource</web-resource-name>       <url-pattern>/*</url-pattern>       <http-method>GET</http-method>     </web-resource-collection>     <!-- https -->     <user-data-constraint>       <transport-guarantee>CONFIDENTIAL</transport-guarantee>     </user-data-constraint>   </security-constraint> </web-app> This is a minimal web.xml for this task - web-resource-collection just defines the URL pattern and HTTP method(s) we want to put a constraint on, and user-data-constraint defines that constraint, which in our case is transport-guarantee. More information about these properties and security settings for web applications can be found in the Oracle Java EE 7 Tutorial. I have a simple webpage attached as well, so I can test my endpoint right away. You can find it (along with the complete project) in the Tyrus workspace: [webpage] [whole project]. After deploying this application to the Glassfish Application Server, you should be able to hit it using your favorite browser. The URL where my application resides is https://localhost:8181/sample-echo-https/ (yours may be different, depending on other configuration). My browser warns me about an untrusted certificate (I use what a freshly built Glassfish provides - self-signed certificates) and after adding an exception for this site, I can see my webpage and I am able to securely connect to wss://localhost:8181/sample-echo-https/echo. WebSocket client The already mentioned demo application also contains a test client, but its execution is skipped for a normal build. The reason for this is that Glassfish uses these self-signed "random" untrusted certificates and you are (in most cases) not able to connect to these services without any additional settings. Creating a test WebSocket client is actually quite similar to the server side; the only difference is that you have to create a client container somewhere and invoke connect with some additional info. The Java API for WebSocket allows you to use annotated and programmatic ways to construct endpoints. The server side shows the annotated case, so let's see how the programmatic approach will look. final WebSocketContainer client = ContainerProvider.getWebSocketContainer(); client.connectToServer(new Endpoint() {   @Override   public void onOpen(Session session, EndpointConfig EndpointConfig) {     try {       // register message handler - will just print out the       // received message on standard output.       
session.addMessageHandler(new MessageHandler.Whole<String>() {       @Override         public void onMessage(String message) {          System.out.println("### Received: " + message);         }       });       // send a message       session.getBasicRemote().sendText("Do or do not, there is no try.");     } catch (IOException e) {       // do nothing     }   } }, ClientEndpointConfig.Builder.create().build(),    URI.create("wss://localhost:8181/sample-echo-https/echo")); This client should work with a secured endpoint whose certificate is signed by some trusted certificate authority (you can try that with wss://echo.websocket.org). Accessing our Glassfish instance will require some additional settings. You can tell Java which certificates you trust by adding the -Djavax.net.ssl.trustStore property (and a few others in case you are using the linked sample). The complete command line when you are testing your service might need to look somewhat like: mvn clean test -Djavax.net.ssl.trustStore=$AS_MAIN/domains/domain1/config/cacerts.jks\ -Djavax.net.ssl.trustStorePassword=changeit -Dtyrus.test.host=localhost\ -DskipTests=false where AS_MAIN points to your Glassfish instance. Note: you might need to set up the keyStore and trustStore per client instead of per JVM; there is a way to do it, but it is a Tyrus proprietary feature: http://tyrus.java.net/documentation/1.2.1/user-guide.html#d0e1128. And that's it! Now nobody is able to "hear" what you are sending to or receiving from your WebSocket endpoint. There is always room for improvement, so the next step you might want to take is to introduce some authentication mechanism (like HTTP Basic or Digest). This topic is more about container configuration so I'm not going to go into details, but there is one thing worth mentioning: to access services which require authorization, you might need to put this additional information into the HTTP headers of the first (Upgrade) request (there is not (yet) any direct support even for these fundamental mechanisms; the user needs to register a Configurator and add headers in the beforeRequest method invocation). I filed a related feature request as TYRUS-228; feel free to comment/vote if you need this functionality.
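    For completeness, the same trust-store settings shown on the mvn command line above can also be applied programmatically before the client connects; this is only a hedged sketch, and the path below is a placeholder for your own Glassfish domain's cacerts.jks.

```java
// Sketch: programmatic equivalent of -Djavax.net.ssl.trustStore=... .
// Must run before the first TLS handshake; the path is a placeholder.
public class TrustStoreSetup {
    public static void configure() {
        System.setProperty("javax.net.ssl.trustStore",
                "/path/to/glassfish/domains/domain1/config/cacerts.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        // ...then obtain the WebSocketContainer and call connectToServer as shown above.
    }
}
```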

    Read the article

  • Message Driven Bean JMS integration

    - by Anthony Shorten
    In Oracle Utilities Application Framework V4.1 and above, the product introduced the concept of real-time JMS integration within the Framework for interfacing. Customers familiar with older versions of the Framework will recall that we used a component called the Multi-purpose Listener (MPL), which was a very light service bus for calling interface channels (including JMS). The MPL is not supplied with all products, and customers prefer to use Oracle SOA Suite and native methods rather than the MPL. In Oracle Utilities Application Framework V4.1 (and for Oracle Utilities Application Framework V2.2 via Patches 9454971, 9256359, 9672027 and 9838219) we introduced real-time JMS integration natively for outbound JMS integration and Message Driven Beans (MDBs) for incoming integration. The outbound integration has not changed a lot between releases: you create an Outbound Message Type to indicate the record types to send out, create a JMS sender (though now you use the Real Time Sender) and then create an External System definition to complete the configuration. When an outbound message appears in the table of the type and external system configured (via a business event such as an algorithm or plug-in script), the Oracle Utilities Application Framework will place the message on the configured queue linked to the JMS sender. The inbound integration has changed. In the past you created XAI Receivers and specified configuration about what types of transactions to process. This is now all configuration file driven. The configuration files for the Business Application Server (ejb-jar.xml and weblogic-ejb-jar.xml) define Message Driven Beans and the queues to monitor. When a message appears on the queue, the MDB processes it through our web services interface. Configuration of the MDB can be native (via editing the configuration files) or through the new user exit capabilities (which are aimed at maintaining custom configuration across upgrades). The latter is better, as you build fragments of configuration to make it easier to maintain. In the next few weeks a number of new whitepapers will be released to illustrate the features of the Oracle WebLogic JMS and Oracle SOA Suite integration capabilities.
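    The post describes the MDB-based inbound path in configuration terms only; as a generic, hedged illustration of what a Message Driven Bean looks like in Java EE (not the actual class shipped with the Oracle Utilities Application Framework, and with a hypothetical queue name), a minimal MDB is sketched below.

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Generic Java EE MDB sketch: the container invokes onMessage for every
// message arriving on the configured queue. Class and destination names
// here are hypothetical; a product would define its own via ejb-jar.xml.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "jms/InboundMessageQueue")
})
public class InboundMessageBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            if (message instanceof TextMessage) {
                String payload = ((TextMessage) message).getText();
                // Hand the payload to the application's service layer here.
                System.out.println("Received inbound message: " + payload);
            }
        } catch (JMSException e) {
            // Rethrow so the container can roll back / redeliver as configured.
            throw new RuntimeException(e);
        }
    }
}
```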

    Read the article

  • Which toolkit to use for 3D MMO game development?

    - by Ahmet Yildirim
    Lately I've been thinking about which path to follow for developing a 3D online game. I have googled a lot, but I couldn't find a good article that covers both game development and online server & client development in the same context. This question has been on my mind for about 2 weeks now. So... yesterday I started developing a game from scratch by using the Irrlicht .NET wrapper, so I could use the .NET Socket library, which I'm already familiar with. But I found out the .NET wrapper of Irrlicht is not totally finished yet and still lacks features from the original, so I lost all my motivation :/. So I thought, why not ask the experts before I run into another dead end... What game engine and networking library are the best way to go for 3D MMO development? Here are some of my early conclusions; please let me know the ones I'm wrong about. C++: best performance for 3D graphics; most game engines have native C++ libraries; lacks a solid socket library; .NET C++ lacks IntelliSense support. C#: IntelliSense support; the .NET Socket library; lacks 3D graphics performance; lacks a solid native 3D game engine.

    Read the article
