Search Results



  • Adding SSD as boot drive to existing system

    - by thegrinner
    I recently bought two 128GB SSDs that I'm planning on adding (RAID 0) to a system I currently have on a 1TB HDD. I'm hoping to redo the disk space such that the SSDs act as the boot drive (only other items would be things I install there explicitly) while the majority of my system is on the HDD - documents, media, program files. Something like this: SSD = [ OS | Explicitly placed programs] HDD = [ Program Files | Media | Documents | etc] I have an external drive capable of holding all the data I want to save, so the backup isn't too much of a concern. What I'm worried about is how I should go about doing this - do I need to do a clean install on the SSDs, reformat the HDD, move things like Program Files/Users to the HDD, and then restore data (not full programs but things like saves)? Should I be using one of the regedit hacks I've seen around to change the default install directories instead of moving program files and users? Should I have the actual folders on the HDD and symlinks on the SSD? Or is there a better solution? Do I need to disconnect my HDD while doing the clean Windows install?


  • Fedora 13 becomes unresponsive when too many applications are running

    - by user61766
    I am using Fedora 13 64-bit on a Dell Vostro with a full 4GB of RAM and the default Gnome GUI. This is a very annoying problem that I don't know how to fix except by rebooting the whole PC. When I have too many applications running (like browser windows), the system starts acting sluggish. The first symptoms appear in the Eclipse IDE, which becomes so slow it just freezes for a whole minute after I try to edit something in the editor. Then Firefox seems like it has crashed. Google Chrome becomes very unresponsive as well. All GUI applications, including the file manager, become unresponsive. When I check System Monitor, the CPU is still around 20% and memory is at 80%, but the system seems to be getting fried. This progressively becomes worse until I soft reboot it, or, if I don't, eventually the whole system locks up with no response to any keyboard key or mouse, and I have to hit the hardware power button. I regularly update the system with yum, but this makes no difference. Please don't tell me not to run too many applications, because I need those for my work. I thought Linux was a well designed operating system, but I am very disappointed so far. Can someone here help?


  • Mac OS X: Applications not responding

    - by Robot55
    This happens to me several times a day. I'll quit an app, the window will close, but the app won't finish shutting down. It stays open in the Dock, and right-clicking and choosing Force Quit shows it as not responding. Force Quit won't work; I can't shut the app down. Starting any other app when this happens causes that app to behave in exactly the same way: the icon opens in the Dock, but the app is non-responsive and can't be force quit. I can't access Terminal when this happens, because it locks up just like all the other apps. While I haven't tried to open every app, I think any app I try to open when this is happening will lock up. If I relaunch Finder, it too locks up, and then the only thing left is to hold down the power button for a hard reboot. Any app that is running while this is happening will continue to run normally unless I try to shut it down. Repairing disk permissions has no effect. I also did a Time Machine backup and a full restore onto a brand new MBP - and sure enough, after the restore, the new MBP suffers from the same problem. Creating a new user has no effect. Any thoughts? Please help. MacBook Pro 15" AG, 2.53GHz i5 CPU, 8GB RAM, 500GB HD (over 200GB free)


  • How to ensure the local file is up-to-date or ahead (Dropbox sync) before TrueCrypt auto-mounts it?

    - by user620965
    There are a lot of tutorials out there stating that Dropbox's built-in encryption is not secure enough. Those tutorials recommend syncing a TrueCrypt container file so that all files in it are securely encrypted. This setup is known to be limited: you can NOT have the TrueCrypt container file mounted in more than one location at the same time - if you make changes to the contents of the container in more than one location at a time, this setup produces a conflict on the container file in the Dropbox system, resulting in one container file for each location. In my case that issue is not relevant - I do not use my data in more than one location at a time. I want to use the auto-mount feature of TrueCrypt on startup of Windows 7 to have a zero-configuration environment - and start working right away. But I want to ensure that the local TrueCrypt container file is up-to-date before TrueCrypt mounts it automatically - imagine you updated the contents of the container at your primary location while your secondary location was off for a long time. In that case it can take "a long time" until the Dropbox sync is complete (e.g. depending on your internet connection and the size of the container file). There is an option in TrueCrypt that prevents TrueCrypt from updating the timestamp of the container file, which speeds up the sync, because the Dropbox client then performs a differential sync instead of a time-consuming full sync. That is an improvement to the setup, but it does not fix my issue. The question is: how do I make the auto-mount function wait for the container file to be up-to-date (updated by Dropbox)? In contrast: if the file was changed locally, but the remote file (in the Dropbox cloud system) is still old (not yet updated by the sync process, or the sync is in progress), TrueCrypt should not wait for the sync. Suggestions?


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob storage and mount one as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which has more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is a very powerful way to upload large files: we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binary data, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer, helping them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitations

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. This code has been well covered by blogs in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the page errors out and the upload is terminated. In ASP.NET there is a limit on the request length; the maximum request length is defined in the web.config file and must be a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break through the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    Uploading a big file in chunks from the browser was a big challenge until HTML5 arrived. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution. In HTML5, the File interface was improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from byte 512 up to (but not including) byte 768, and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information, see the HTML5 File API documentation. File and Blob are very useful for implementing the chunked upload.
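    To make the slice boundaries concrete, here is a minimal sketch in plain browser JavaScript; the Blob constructor call is just a stand-in for a real File picked by the user:

        // Build a 1024-byte blob so the numbers match the example above.
        var data = new Uint8Array(1024);
        var file = new Blob([data]);

        // slice(start, end) returns bytes start .. end - 1 as a new Blob.
        var chunk = file.slice(512, 768);
        console.log(chunk.size);                  // 256 -- bytes 512 through 767
        console.log(file.slice(768, 1024).size);  // 256 -- the next chunk picks up at 768

    Since File inherits from Blob, the same call works on the objects taken from a file input's files list.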
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10MB file with 512KB chunks, we can read it in 512KB blobs by using File.slice in a loop. Assume we have a web page as below: the user can select a file, an input box specifies the block size in KB, and a button starts the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means your browser doesn't support these premium features and it's time for you to get your browser updated. FormData is another new feature we are going to use later. It can generate a temporary form for us. We will use this interface to create a form with the chunk and associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces with its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected, one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; when the server receives a chunk it will upload it as a block into Blob Storage, and finally commit them all with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which means they are asynchronous. So we need to introduce a JavaScript library to help us coordinate the asynchronous operations, named "async.js"; you can download this library and find its documentation on its project page. I will not explain this library in much depth in this post. We put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load the blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js. Once all functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as one blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions one by one
                // then invoke the commit ajax call to put blocks into the blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read each chunk from the file and upload the content to the backend service through ajax.
    - Execute the functions defined in the previous step with async.js.
    - Finally, commit the chunks by invoking the backend service so they are assembled in Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above we finished the client-side JavaScript code, which uploads the file in chunks to the backend service that we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieved the file name, index and chunk content from the Request.Form object, which was passed from our client side.
    Then I used the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with its index. In Blob Storage each block must have an index (ID) associated with it, so that at the end we can put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form, and I also retrieved the chunk ID list - the block ID list - from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that our blob appears in the container, ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have all the code we need. Below is the full client-side JavaScript code.
        <script type="text/javascript" src="~/Scripts/async.js"></script>
        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                    // assert the browser supports html5
                    if (window.File && window.Blob && window.FormData) {
                        alert("Your browser is awesome, let's rock!");
                    }
                    else {
                        alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                        return;
                    }

                    // start to upload each file in chunks
                    var files = $("#upload_files")[0].files;
                    for (var i = 0; i < files.length; i++) {
                        var file = files[i];
                        var fileSize = file.size;
                        var fileName = file.name;

                        // calculate the start and end byte index for each block (chunk)
                        // with the index, file name and index list for future use
                        var blockSizeInKB = $("#block_size").val();
                        var blockSize = blockSizeInKB * 1024;
                        var blocks = [];
                        var offset = 0;
                        var index = 0;
                        var list = "";
                        while (offset < fileSize) {
                            var start = offset;
                            var end = Math.min(offset + blockSize, fileSize);

                            blocks.push({
                                name: fileName,
                                index: index,
                                start: start,
                                end: end
                            });
                            list += index + ",";

                            offset = end;
                            index++;
                        }

                        // define the function array and push all chunk upload operations into it
                        var putBlocks = [];
                        blocks.forEach(function (block) {
                            putBlocks.push(function (callback) {
                                // load the blob based on the start and end index of each chunk
                                var blob = file.slice(block.start, block.end);
                                // put the file name, index and blob into a temporary form
                                var fd = new FormData();
                                fd.append("name", block.name);
                                fd.append("index", block.index);
                                fd.append("file", blob);
                                // post the form to the backend service (asp.net mvc controller action)
                                $.ajax({
                                    url: "/Home/UploadInFormData",
                                    data: fd,
                                    processData: false,
                                    contentType: "multipart/form-data",
                                    type: "POST",
                                    success: function (result) {
                                        if (!result.success) {
                                            alert(result.error);
                                        }
                                        callback(null, block.index);
                                    }
                                });
                            });
                        });

                        // invoke the functions one by one
                        // then invoke the commit ajax call to put blocks into the blob in azure storage
                        async.series(putBlocks, function (error, result) {
                            var data = {
                                name: fileName,
                                list: list
                            };
                            $.post("/Home/Commit", data, function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                else {
                                    alert("done!");
                                }
                            });
                        });
                    }
                });
            });
        </script>

    And below is the full ASP.NET MVC controller code.
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    And if we select a file in the browser, we can see our application upload chunks of the size we specified to the server through background ajax calls, and then commit all the chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout. But it might introduce a performance problem, since we uploaded the chunks in sequence. To improve upload performance we can modify our client-side code a bit to invoke the upload operations in parallel. The good news is that the async.js library provides a parallel execution function. If you remember the code that invokes the service to upload chunks, it used async.series, which means all functions are executed in sequence. Now we change this code to async.parallel, which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel
                // then invoke the commit ajax call to put blocks into the blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks will be uploaded to the server side at the same time to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel with a concurrency limit of 4
                // then invoke the commit ajax call to put blocks into the blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element that lets us pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and server if there are too many chunks. So we finally arrived at the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the async.js JavaScript library to control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limit there is no single "best" value; you need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core ~1.7GHz CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100MB.
    - Each case was tested 3 times.

    The test result chart is in the original post. Some thoughts from it, but not guidance or best practice:
    - Parallel gets better performance than sequential.
    - There is no significant performance difference between parallel with 4 threads and with 8 threads.
    - Transferring binary chunks provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets the best performance.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • What's New in ASP.NET 4.0 Part Two: WebForms and Visual Studio Enhancements

    - by Rick Strahl
    In the last installment I talked about the core changes in the ASP.NET runtime that I've been taking advantage of. In this column, I'll cover the changes to the Web Forms engine and some of the cool improvements in Visual Studio that make Web and general development easier.

    WebForms

    The WebForms engine is the area that has received the most significant changes in ASP.NET 4.0. Probably the most widely anticipated features are related to managing page client ids and ViewState on WebForm pages.

    Take Control of Your ClientIDs

    Unique ClientID generation in ASP.NET has been one of the most complained about "features" in ASP.NET. Although there's a very good technical reason for these unique generated ids - they guarantee unique ids for each and every server control on a page - these unique and generated ids often get in the way of client-side JavaScript development and CSS styling, as it's often inconvenient and fragile to work with the long, generated ClientIDs. In ASP.NET 4.0 you can now specify an explicit client id mode on each control or each naming container parent control to control how client ids are generated. By default, ASP.NET generates mangled client ids for any control contained in a naming container (like a Master Page, or a User Control for example). The key to ClientID management in ASP.NET 4.0 are the new ClientIDMode and ClientIDRowSuffix properties. ClientIDMode supports four different ClientID generation settings shown below. For the following examples, imagine that you have a Textbox control named txtName inside of a master page control container on a WebForms page.

        <%@Page Language="C#"
            MasterPageFile="~/Site.Master"
            CodeBehind="WebForm2.aspx.cs"
            Inherits="WebApplication1.WebForm2" %>

        <asp:Content ID="content"
                     ContentPlaceHolderID="content"
                     runat="server"
                     ClientIDMode="Static" >
            <asp:TextBox runat="server" ID="txtName" />
        </asp:Content>

    The four available ClientIDMode values are:

    AutoID

    This is the existing behavior in ASP.NET 1.x-3.x where full naming container munging takes place.

        <input name="ctl00$content$txtName" type="text"
               id="ctl00_content_txtName" />

    This should be familiar to any ASP.NET developer and results in fairly unpredictable client ids that can easily change if the containership hierarchy changes. For example, removing the master page changes the name in this case, so if you were to move a block of script code that works against the control to a non-Master page, the script code immediately breaks.

    Static

    This option is the most deterministic setting; it forces the control's ClientID to use its ID value directly. No naming container naming at all is applied and you end up with clean client ids:

        <input name="ctl00$content$txtName"
               type="text" id="txtName" />

    Note that the name property, which is used for postback variables to the server, is still munged, but the ClientID property is displayed simply as the ID value that you have assigned to the control. This option is what most of us want to use, but you have to be clear on that because it can potentially cause conflicts with other controls on the page. If there are several instances of the same naming container (several instances of the same user control, for example) there can easily be a client id naming conflict.
    Note that if you assign Static to a data-bound control, like a list child control in templates, you do not get unique ids either, so for list controls where you rely on unique ids for child controls, you'll probably want to use Predictable rather than Static. I'll write more on this a little later when I discuss ClientIDRowSuffix.

    Predictable

    The previous two values are pretty self-explanatory. Predictable, however, requires some explanation. To me at least it's not in the least bit predictable. MSDN defines this value as follows: "This algorithm is used for controls that are in data-bound controls. The ClientID value is generated by concatenating the ClientID value of the parent naming container with the ID value of the control. If the control is a data-bound control that generates multiple rows, the value of the data field specified in the ClientIDRowSuffix property is added at the end. For the GridView control, multiple data fields can be specified. If the ClientIDRowSuffix property is blank, a sequential number is added at the end instead of a data-field value. Each segment is separated by an underscore character (_)."

    The key that makes this value a bit confusing is that it relies on the parent NamingContainer's ClientID to build its own ClientID value. This effectively means that the value is not predictable at all, but rather very tightly coupled to the parent naming container's ClientIDMode setting. For my simple textbox example, if the ClientIDMode property of the parent naming container (Page in this case) is set to "Predictable" you'll get this:

        <input name="ctl00$content$txtName" type="text"
               id="content_txtName" />

    which gives an id that is based on walking up to the currently active naming container (the MasterPage content container) and starting the id formatting from there downward. Think of this as a semi-unique name that's guaranteed unique only within the naming container. If, on the other hand, the Page is set to "AutoID" you get the following with Predictable on txtName:

        <input name="ctl00$content$txtName" type="text"
               id="ctl00_content_txtName" />

    The latter is effectively the same as if you specified AutoID, because it inherits the AutoID naming from the Page and the Content Master Page control of the page. But again - predictable behavior always depends on the parent naming container and how it generates its id, so the id may not always be exactly the same as the AutoID-generated value, because somewhere in the NamingContainer chain the ClientIDMode setting may be set to a different value. For example, if you had another naming container in the middle that was set to Static, you'd end up effectively with an id that starts with that NamingContainer's id rather than the whole ctl00_content munging.

    The most common use for Predictable is likely to be for data-bound controls, which results in each data-bound item getting a unique ClientID. Unfortunately, even here the behavior can be very unpredictable depending on which data-bound control you use - I found significant differences in how template controls in a GridView behave from those that are used in a ListView control. For example, GridView creates clean child ClientIDs, while ListView still has a naming container in the ClientID, presumably because of the template container on which you can't set ClientIDMode. Predictable is useful, but only if all naming containers down the chain use this setting. Otherwise you're right back to the munged ids that are pretty unpredictable.
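    The ClientIDMode property isn't limited to markup; it also hangs off the Control API, so the mode can be chosen at runtime. A minimal code-behind sketch (txtName being the control from the example above):

        protected void Page_Load(object sender, EventArgs e)
        {
            // Equivalent to ClientIDMode="Static" in the markup
            txtName.ClientIDMode = ClientIDMode.Static;

            // With Static in effect, ClientID returns the raw ID,
            // regardless of any naming containers above the control.
            string clientId = txtName.ClientID;   // "txtName"
        }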
    Another property, ClientIDRowSuffix, can be used in combination with a ClientIDMode of Predictable to force a suffix onto list client controls. For example:

        <asp:GridView runat="server" ID="gvItems"
                      AutoGenerateColumns="false"
                      ClientIDMode="Static"
                      ClientIDRowSuffix="Id">
            <Columns>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label runat="server" id="txtName"
                                   Text='<%# Eval("Name") %>'
                                   ClientIDMode="Predictable"/>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField>
                    <ItemTemplate>
                        <asp:Label runat="server" id="txtId"
                                   Text='<%# Eval("Id") %>'
                                   ClientIDMode="Predictable" />
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    generates client ids inside of a column in the master page described earlier:

        <td>
            <span id="txtName_0">Rick</span>
        </td>

    where the value after the underscore is the ClientIDRowSuffix field - in this case the "Id" of the item data-bound to the control. Note that all of the child controls require ClientIDMode="Predictable" in order for the ClientIDRowSuffix to be applied, and the parent GridView controls need to be set to Static, either explicitly or via naming container inheritance, to give these simple names. It's a bummer that ClientIDRowSuffix doesn't work with Static to produce this automatically. Another real problem is that other controls process the ClientIDMode differently. For example, a ListView control processes the Predictable ClientIDMode differently and produces the following with a Static ListView and Predictable child controls:

        <span id="ctrl0_txtName_0">Rick</span>

    I couldn't even figure out a way using ClientIDMode to get a simple ID that also uses a suffix, short of falling back to manually generated ids using <%= %> expressions instead. Given the inconsistencies inside of list controls, using <%= %> ids for the ListView might not be a bad idea anyway.

    Inherit

    The final setting is Inherit, which is the default for all controls except Page. This means that controls by default inherit the parent naming container's ClientIDMode setting. For more detailed information on ClientID behavior and different scenarios you can check out a blog post of mine on this subject: http://www.west-wind.com/weblog/posts/54760.aspx.

    ClientID Enhancements Summary

    The ClientIDMode property is a welcome addition to ASP.NET 4.0. To me this is probably the most useful WebForms feature, as it allows me to generate clean IDs simply by setting ClientIDMode="Static" on either the page or inside of web.config (in the Pages section), which applies the setting down to the entire page - my 95% scenario. For the few cases when it matters - for list controls and inside of multi-use user controls or custom server controls - I can use Predictable or even AutoID to force controls to unique names. For application-level page development, this is easy to accomplish and provides maximum usability for working with client script code against page controls.

    ViewStateMode

    Another area of large criticism for WebForms is ViewState. ViewState is used internally by ASP.NET to persist page-level changes to non-postback properties on controls as pages post back to the server.
    It's a useful mechanism that works great for the overall mechanics of WebForms, but it can also cause all sorts of overhead for page operation, as ViewState can very quickly get out of control and consume huge amounts of bandwidth in your page content. ViewState can also wreak havoc with client-side scripting applications that modify control properties that are tracked by ViewState, which can produce very unpredictable results on a postback after client-side updates. Over the years in my own development, I've often turned off ViewState on pages to reduce overhead. Yes, you lose some functionality, but you can easily implement most of the common functionality with non-ViewState workarounds. Relying less on heavy ViewState controls and sticking with simpler controls or raw HTML constructs avoids most ViewState problems in the first place.

    In ASP.NET 3.x and prior, it wasn't easy to control ViewState - you could turn it on or off, and if you turned it off at the page or web.config level, you couldn't turn it back on for specific controls. In short, it was an all-or-nothing approach. With ASP.NET 4.0, the new ViewStateMode property gives you more control. It allows you to disable ViewState globally, either at the page or web.config level, and then turn it back on for specific controls that might need it. ViewStateMode only works when EnableViewState="true" at the page or web.config level (which is the default). You can then use a ViewStateMode of Disabled, Enabled or Inherit to control the ViewState settings on the page. If you're shooting for minimal ViewState usage, the ideal situation is to set ViewStateMode to Disabled at the Page or web.config level and only turn it back on for particular controls:

        <%@Page Language="C#"
            CodeBehind="WebForm2.aspx.cs"
            Inherits="Westwind.WebStore.WebForm2"
            ClientIDMode="Static"
            ViewStateMode="Disabled"
            EnableViewState="true" %>

        <!-- this control has viewstate -->
        <asp:TextBox runat="server" ID="txtName" ViewStateMode="Enabled" />

        <!-- this control has no viewstate - it inherits from its parent container -->
        <asp:TextBox runat="server" ID="txtAddress" />

    Note that the EnableViewState="true" at the Page level isn't required since it's the default, but it's important that the value is true. ViewStateMode has no effect if EnableViewState="false" at the page level. The main benefit of ViewStateMode is that it allows you to more easily turn off ViewState for most of the page and enable only a few key controls that might need it. For me personally, this is a perfect combination, as most of my WebForm apps can get away without any ViewState at all. But some controls - especially third-party controls - often don't work well without ViewState enabled, and now it's much easier to selectively enable controls, rather than the old way, which required you to pretty much turn off ViewState for all controls that you didn't want ViewState on.

    Inline HTML Encoding

    HTML encoding is an important feature to prevent cross-site scripting attacks in data entered by users on your site. In order to make it easier to create HTML-encoded content, ASP.NET 4.0 introduces a new expression syntax using <%: %> to encode string values.
    The encoding expression syntax looks like this:

        <%: "<script type='text/javascript'>" +
            "alert('Really?');</script>" %>

    which produces properly encoded HTML:

        &lt;script type=&#39;text/javascript&#39; &gt;alert(&#39;Really?&#39;);&lt;/script&gt;

    Effectively this is a shortcut to:

        <%= HttpUtility.HtmlEncode(
            "<script type='text/javascript'>" +
            "alert('Really?');</script>") %>

    Of course the <%: %> syntax can also evaluate expressions just like <%= %>, so the more common scenario applies this expression syntax against data your application is displaying. Here's an example displaying some data model values:

        <%: Model.Address.Street %>

    This snippet shows displaying data from your application's data store or, more importantly, from data entered by users. Anything that makes it easier and less verbose to HtmlEncode text is a welcome addition to avoid potential cross-site scripting attacks. Although I listed inline HTML encoding here under WebForms, anything that uses the WebForms rendering engine, including ASP.NET MVC, benefits from this feature.

    ScriptManager Enhancements

    The ASP.NET ScriptManager control in the past has introduced some nice ways to take programmatic and markup control over script loading, but there were a number of shortcomings in this control. The ASP.NET 4.0 ScriptManager has a number of improvements that make it easier to control script loading, and it addresses a few of the shortcomings that have often kept me from using the control in favor of manual script loading. The first is the AjaxFrameworkMode property, which finally lets you suppress loading the ASP.NET AJAX runtime. Disabled doesn't load any ASP.NET AJAX libraries, but there's also an Explicit mode that lets you pick and choose the library pieces individually and reduce the footprint of the included ASP.NET AJAX script if you are using the library. There's also a new EnableCdn property: any script whose WebResource attribute has the new CdnPath property set to a CDN-supplied URL will be served from that CdnPath when EnableCdn is enabled on the ScriptManager.

        [assembly: WebResource(
            "Westwind.Web.Resources.ww.jquery.js",
            "application/x-javascript",
            CdnPath = "http://mysite.com/scripts/ww.jquery.min.js")]

    Cool, but a little too static for my taste, since this value can't be changed at runtime to point at a debug script as needed, for example. Assembly names for loading scripts from resources can now be simple names rather than fully qualified assembly names, which makes it less verbose to reference scripts from assemblies loaded from your bin folder or the assembly reference area in web.config:

        <asp:ScriptManager runat="server" id="Id"
                EnableCdn="true"
                AjaxFrameworkMode="disabled">
            <Scripts>
                <asp:ScriptReference
                    Name="Westwind.Web.Resources.ww.jquery.js"
                    Assembly="Westwind.Web" />
            </Scripts>
        </asp:ScriptManager>

    The ScriptManager in 4.0 also supports script combining via the CompositeScript tag, which allows you to very easily combine scripts into a single script resource served via ASP.NET. Even nicer: you can specify the URL that the combined script is served with.
    Check out the following ScriptManager markup that combines several static file scripts and a script resource into a single ASP.NET-served resource from a static URL (allscripts.js):

        <asp:ScriptManager runat="server" id="Id"
                EnableCdn="true"
                AjaxFrameworkMode="disabled">
            <CompositeScript
                Path="~/scripts/allscripts.js">
                <Scripts>
                    <asp:ScriptReference
                        Path="~/scripts/jquery.js" />
                    <asp:ScriptReference
                        Path="~/scripts/ww.jquery.js" />
                    <asp:ScriptReference
                        Name="Westwind.Web.Resources.editors.js"
                        Assembly="Westwind.Web" />
                </Scripts>
            </CompositeScript>
        </asp:ScriptManager>

    When you render this into HTML, you'll see a single script reference in the page:

        <script src="scripts/allscripts.debug.js"
                type="text/javascript"></script>

    All you need to do to make this work is ensure that allscripts.js and allscripts.debug.js exist in the scripts folder of your application - they can be empty, but the files have to be there. This is pretty cool, but you want to be real careful that you use unique URLs for each combination of scripts you combine, or else browser and server caching will easily screw you up royally. The ScriptManager also allows you to override native ASP.NET AJAX scripts now, as any script references defined in the Scripts section of the ScriptManager trump internal references. So if you want custom behavior, or you want to fix a possible bug in the core libraries that normally are loaded from resources, you can now do this simply by referencing the script resource name in the Name property and pointing at System.Web for the assembly. Not a common scenario, but when you need it, it can come in real handy.

    Still, there are a number of shortcomings in this control. For one, the ScriptManager and ClientScript APIs still have no common entry point, so control developers are still faced with having to check and support both APIs to load scripts so that controls can work on pages that do or don't have a ScriptManager on the page. The CdnUrl is static and compiled in, which is very restrictive. And finally, there's still no control over where scripts get loaded on the page - ScriptManager still injects scripts into the middle of the HTML markup rather than into the header or, optionally, the footer. This, in turn, means there is little control over script loading order, which can be problematic for control developers.

    MetaDescription, MetaKeywords Page Properties

    There are also a number of additional Page properties that correspond to some of the other features discussed in this column: ClientIDMode, ClientTarget and ViewStateMode. Another minor but useful feature is that you can now directly access the MetaDescription and MetaKeywords properties on the Page object to set the corresponding meta tags programmatically. Updating these values programmatically previously required either <%= %> expressions in the page markup or dynamic insertion of literal controls into the page.
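    For contrast, a sketch of one common variant of that older dynamic-insertion pattern, using the HtmlMeta control available since ASP.NET 2.0 (it assumes the page has a <head runat="server">):

        using System.Web.UI.HtmlControls;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Pre-4.0 style: build the meta tags by hand and add them to the header
            var description = new HtmlMeta
            {
                Name = "description",
                Content = "This article discusses the new features in ASP.NET 4.0"
            };
            var keywords = new HtmlMeta
            {
                Name = "keywords",
                Content = "ASP.NET,4.0,New Features"
            };
            Page.Header.Controls.Add(description);
            Page.Header.Controls.Add(keywords);
        }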
    You can now just set these properties programmatically on the Page object in any Control-derived class on the page, or on the Page itself:

        Page.MetaKeywords = "ASP.NET,4.0,New Features";
        Page.MetaDescription = "This article discusses the new features in ASP.NET 4.0";

    Note that there's no corresponding ASP.NET tag for the HTML meta element, so the only way to specify these values in markup and access them is via the @Page tag:

        <%@Page Language="C#"
            CodeBehind="WebForm2.aspx.cs"
            Inherits="Westwind.WebStore.WebForm2"
            ClientIDMode="Static"
            MetaDescription="Article that discusses what's new in ASP.NET 4.0"
            MetaKeywords="ASP.NET,4.0,New Features" %>

    Nothing earth shattering, but quite convenient.

    Visual Studio 2010 Enhancements for Web Development

    For Web development there are also a host of editor enhancements in Visual Studio 2010. Some of these are not Web specific, but they are useful for Web developers in general.

    Text Editors

    Throughout Visual Studio 2010, the text editors have all been updated to a new core engine based on WPF, which provides some interesting new features for the various code editors, including the nice ability to zoom in and out with Ctrl-MouseWheel to quickly change the size of text. There are many more API options to control the editor, and although Visual Studio 2010 doesn't yet use many of these features, we can look forward to enhancements in add-ins and future editor updates from the various language teams that take advantage of the visual richness that WPF provides to editing. On the negative side, I've noticed that occasionally the code editor, and especially the HTML and JavaScript editors, will lose the ability to use various navigation keys like the arrow, backspace and delete keys, which requires closing and reopening the documents at times. This issue seems to be well documented, so I suspect it will be addressed soon with a hotfix or within the first service pack. Overall though, the code editors work very well, especially given that they were re-written completely using WPF, which was one of my big worries when I first heard about the complete redesign of the editors.

    Multi-Targeting

    Visual Studio now targets all versions of the .NET framework from 2.0 forward. You can use Visual Studio 2010 to work on your ASP.NET 2.0, 3.0 and 3.5 applications, which is a nice way to get your feet wet with the new development environment without having to make changes to existing applications. It's nice to have one tool to work in for all the different versions.

    Multi-Monitor Support

    One cool feature of Visual Studio 2010 is the ability to easily drag windows out of the Visual Studio environment and onto the desktop, including onto another monitor. Since Web development often involves working with a host of designers at the same time - the visual designer, HTML markup window, code-behind and JavaScript editor - it's really nice to have a little more screen real estate to work on each of these editors. Microsoft made a welcome change in the environment.

    IntelliSense Snippets for HTML and JavaScript Editors

    The HTML and JavaScript editors now finally support IntelliSense code snippets to create macro-based template expansions, which have been in the core C# and Visual Basic code editors since Visual Studio 2005. Snippets allow you to create short XML-based template definitions that can act as static macros, or as real templates that can have replaceable values embedded into the expanded text.
As the sketch above shows, the XML syntax for these snippets is straightforward, and it's easy to create custom snippets manually and store them in your custom snippets folder (C:\Users\rstrahl\Documents\Visual Studio 2010\Code Snippets\Visual Web Developer\My HTML Snippets and My JScript Snippets), but it helps to use one of the third-party tools that exist to simplify the process for you. I use SnippetEditor, by Bill McCarthy, which makes short work of creating snippets interactively (http://snippeteditor.codeplex.com/). Note: you may have to manually add the Visual Studio 2010 user-specific snippet folders to this tool to see existing ones you've created. Code snippets are some of the biggest time savers, and HTML editing more than anything deals with lots of repetitive tasks that lend themselves to text expansion. Visual Studio 2010 includes a slew of built-in snippets (that you can also customize!) and you can create your own very easily. If you haven't done so already, I encourage you to spend a little time examining your coding patterns, find the repetitive code that you write, and convert it into snippets. I've been using CodeRush for this for years, but now you can do much of the basic expansion natively for HTML and JavaScript snippets.

jQuery Integration Is Now Native

jQuery is a popular JavaScript library, and Microsoft has recently stated that it will become the primary client-side scripting technology to drive higher-level script functionality in the various ASP.NET Web projects that Microsoft provides. In Visual Studio 2010, the default full project template includes jQuery as part of a new project, including the support files that provide IntelliSense (-vsdoc files). IntelliSense support for jQuery is now also baked into Visual Studio 2010, so unlike Visual Studio 2008, which required a separate download, no further installs are required for a rich IntelliSense experience with jQuery.

Summary

ASP.NET 4.0 brings many useful improvements to the platform, but thankfully most of the changes are incremental changes that don't compromise backwards compatibility, and they allow developers to ease into the new features one feature at a time. None of the changes in ASP.NET 4.0 or Visual Studio 2010 are monumental or game changers. The bigger features are language and .NET Framework changes that are also optional. This ASP.NET and tools release feels more like fine-tuning and getting some long-standing kinks worked out of the platform. It shows that the ASP.NET team is dedicated to paying attention to community feedback and responding with changes to the platform and development environment based on this feedback. If you haven't gotten your feet wet with ASP.NET 4.0 and Visual Studio 2010, there's no reason not to give it a shot now - the ASP.NET 4.0 platform is solid and Visual Studio 2010 works very well for a brand new release. Check it out.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET

    Read the article

  • Creating a thematic map

    - by jsharma
    This post describes how to create a simple thematic map, just a state population layer, with no underlying map tile layer. The map shows states color-coded by total population. The map is interactive with info-windows and can be panned and zoomed. The sample code demonstrates the following: Displaying an interactive vector layer with no background map tile layer (i.e. purpose and use of the Universe object) Using a dynamic (i.e. defined via the javascript client API) color bucket style Dynamically changing a layer's rendering style Specifying which attribute value to use in determining the bucket, and hence style, for a feature (FoI) The result is shown in the screenshot below. The states layer was defined, and stored in the user_sdo_themes view of the mvdemo schema, using MapBuilder. The underlying table is defined as SQL> desc states_32775  Name                                      Null?    Type ----------------------------------------- -------- ----------------------------  STATE                                              VARCHAR2(26)  STATE_ABRV                                         VARCHAR2(2) FIPSST                                             VARCHAR2(2) TOTPOP                                             NUMBER PCTSMPLD                                           NUMBER LANDSQMI                                           NUMBER POPPSQMI                                           NUMBER ... MEDHHINC NUMBER AVGHHINC NUMBER GEOM32775 MDSYS.SDO_GEOMETRY We'll use the TOTPOP column value in the advanced (color bucket) style for rendering the states layers. The predefined theme (US_STATES_BI) is defined as follows. SQL> select styling_rules from user_sdo_themes where name='US_STATES_BI'; STYLING_RULES -------------------------------------------------------------------------------- <?xml version="1.0" standalone="yes"?> <styling_rules highlight_style="C.CB_QUAL_8_CLASS_DARK2_1"> <hidden_info> <field column="STATE" name="Name"/> <field column="POPPSQMI" name="POPPSQMI"/> <field column="TOTPOP" name="TOTPOP"/> </hidden_info> <rule column="TOTPOP"> <features style="states_totpop"> </features> <label column="STATE_ABRV" style="T.BLUE_SERIF_10"> 1 </label> </rule> </styling_rules> SQL> The theme definition specifies that the state, poppsqmi, totpop, state_abrv, and geom columns will be queried from the states_32775 table. The state_abrv value will be used to label the state while the totpop value will be used to determine the color-fill from those defined in the states_totpop advanced style. The states_totpop style, which we will not use in our demo, is defined as shown below. SQL> select definition from user_sdo_styles where name='STATES_TOTPOP'; DEFINITION -------------------------------------------------------------------------------- <?xml version="1.0" ?> <AdvancedStyle> <BucketStyle> <Buckets default_style="C.S02_COUNTRY_AREA"> <RangedBucket seq="0" label="10K - 5M" low="10000" high="5000000" style="C.SEQ6_01" /> <RangedBucket seq="1" label="5M - 12M" low="5000001" high="1.2E7" style="C.SEQ6_02" /> <RangedBucket seq="2" label="12M - 20M" low="1.2000001E7" high="2.0E7" style="C.SEQ6_04" /> <RangedBucket seq="3" label="&gt; 20M" low="2.0000001E7" high="5.0E7" style="C.SEQ6_05" /> </Buckets> </BucketStyle> </AdvancedStyle> SQL> The demo defines additional advanced styles via the OM.style object and methods and uses those instead when rendering the states layer.   Now let's look at relevant snippets of code that defines the map extent and zoom levels (i.e. 
the OM.universe),  loads the states predefined vector layer (OM.layer), and sets up the advanced (color bucket) style. Defining the map extent and zoom levels. function initMap() {   //alert("Initialize map view");     // define the map extent and number of zoom levels.   // The Universe object is similar to the map tile layer configuration   // It defines the map extent, number of zoom levels, and spatial reference system   // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined   // The Universe must be defined when there is no underlying map tile layer.   // When there is a map tile layer then that defines the map extent, srid, and zoom levels.      var uni= new OM.universe.Universe(     {         srid : 32775,         bounds : new OM.geometry.Rectangle(                         -3280000, 170000, 2300000, 3200000, 32775),         numberOfZoomLevels: 8     }); The srid specifies the spatial reference system which is Equal-Area Projection (United States). SQL> select cs_name from cs_srs where srid=32775 ; CS_NAME --------------------------------------------------- Equal-Area Projection (United States) The bounds defines the map extent. It is a Rectangle defined using the lower-left and upper-right coordinates and srid. Loading and displaying the states layer This is done in the states() function. The full code is at the end of this post, however here's the snippet which defines the states VectorLayer.     // States is a predefined layer in user_sdo_themes     var  layer2 = new OM.layer.VectorLayer("vLayer2",     {         def:         {             type:OM.layer.VectorLayer.TYPE_PREDEFINED,             dataSource:"mvdemo",             theme:"us_states_bi",             url: baseURL,             loadOnDemand: false         },         boundingTheme:true      }); The first parameter is a layer name, the second is an object literal for a layer config. The config object has two attributes: the first is the layer definition, the second specifies whether the layer is a bounding one (i.e. used to determine the current map zoom and center such that the whole layer is displayed within the map window) or not. The layer config has the following attributes: type - specifies whether is a predefined one, a defined via a SQL query (JDBC), or in a json-format file (DATAPACK) theme - is the predefined theme's name url - is the location of the mapviewer server loadOnDemand - specifies whether to load all the features or just those that lie within the current map window and load additional ones as needed on a pan or zoom The code snippet below dynamically defines an advanced style and then uses it, instead of the 'states_totpop' style, when rendering the states layer. // override predefined rendering style with programmatic one    var theRenderingStyle =      createBucketColorStyle('YlBr5', colorSeries, 'States5', true);   // specify which attribute is used in determining the bucket (i.e. color) to use for the state   // It can be an array because the style could be a chart type (pie/bar)   // which requires multiple attribute columns     // Use the STATE.TOTPOP column (aka attribute) value here    layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); The style itself is defined in the createBucketColorStyle() function. Dynamically defining an advanced style The advanced style used here is a bucket color style, i.e. a color style is associated with each bucket. So first we define the colors and then the buckets.     
numClasses = colorSeries[colorName].classes;    // create Color Styles    for (var i=0; i < numClasses; i++)    {         theStyles[i] = new OM.style.Color(                      {fill: colorSeries[colorName].fill[i],                        stroke:colorSeries[colorName].stroke[i],                       strokeOpacity: useGradient? 0.25 : 1                      });    }; numClasses is the number of buckets. The colorSeries array contains the color fill and stroke definitions and is: var colorSeries = { //multi-hue color scheme #10 YlBl. "YlBl3": {   classes:3,                  fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8],                  stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6]   }, "YlBl5": {   classes:5,                  fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494],                  stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85]   }, //multi-hue color scheme #11 YlBr.  "YlBr3": {classes:3,                  fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E],                  stroke:[0xE6DEA9, 0xE5B047, 0xC5360D]   }, "YlBr5": {classes:5,                  fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404],                  stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04]     }, etc. Next we create the bucket style.    bucketStyleDef = {       numClasses : colorSeries[colorName].classes, //      classification: 'custom',  //since we are supplying all the buckets //      buckets: theBuckets,       classification: 'logarithmic',  // use a logarithmic scale       styles: theStyles,       gradient:  useGradient? 'linear' : 'off' //      gradient:  useGradient? 'radial' : 'off'     };    theBucketStyle = new OM.style.BucketStyle(bucketStyleDef);    return theBucketStyle; A BucketStyle constructor takes a style definition as input. The style definition specifies the number of buckets (numClasses), a classification scheme (which can be equal-ranged, logarithmic scale, or custom), the styles for each bucket, whether to use a gradient effect, and optionally the buckets (required when using a custom classification scheme). The full source for the demo <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Oracle Maps V2 Thematic Map Demo</title> <script src="http://localhost:8080/mapviewer/jslib/v2/oraclemapsv2.js" type="text/javascript"> </script> <script type="text/javascript"> //var $j = jQuery.noConflict(); var baseURL="http://localhost:8080/mapviewer"; // location of mapviewer OM.gv.proxyEnabled =false; // no mvproxy needed OM.gv.setResourcePath(baseURL+"/jslib/v2/images/"); // location of resources for UI elements like nav panel buttons var map = null; // the client mapviewer object var statesLayer = null, stateCountyLayer = null; // The vector layers for states and counties in a state var layerName="States"; // initial map center and zoom var mapCenterLon = -20000; var mapCenterLat = 1750000; var mapZoom = 2; var mpoint = new OM.geometry.Point(mapCenterLon,mapCenterLat,32775); var currentPalette = null, currentStyle=null; // set an onchange listener for the color palette select list // initialize the map // load and display the states layer $(document).ready( function() { $("#demo-htmlselect").change(function() { var theColorScheme = $(this).val(); useSelectedColorScheme(theColorScheme); }); initMap(); states(); } ); /** * color series from ColorBrewer site (http://colorbrewer2.org/). */ var colorSeries = { //multi-hue color scheme #10 YlBl. 
"YlBl3": { classes:3, fill: [0xEDF8B1, 0x7FCDBB, 0x2C7FB8], stroke:[0xB5DF9F, 0x72B8A8, 0x2872A6] }, "YlBl5": { classes:5, fill:[0xFFFFCC, 0xA1DAB4, 0x41B6C4, 0x2C7FB8, 0x253494], stroke:[0xE6E6B8, 0x91BCA2, 0x3AA4B0, 0x2872A6, 0x212F85] }, //multi-hue color scheme #11 YlBr. "YlBr3": {classes:3, fill:[0xFFF7BC, 0xFEC44F, 0xD95F0E], stroke:[0xE6DEA9, 0xE5B047, 0xC5360D] }, "YlBr5": {classes:5, fill:[0xFFFFD4, 0xFED98E, 0xFE9929, 0xD95F0E, 0x993404], stroke:[0xE6E6BF, 0xE5C380, 0xE58A25, 0xC35663, 0x8A2F04] }, // single-hue color schemes (blues, greens, greys, oranges, reds, purples) "Purples5": {classes:5, fill:[0xf2f0f7, 0xcbc9e2, 0x9e9ac8, 0x756bb1, 0x54278f], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Blues5": {classes:5, fill:[0xEFF3FF, 0xbdd7e7, 0x68aed6, 0x3182bd, 0x18519C], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greens5": {classes:5, fill:[0xedf8e9, 0xbae4b3, 0x74c476, 0x31a354, 0x116d2c], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Greys5": {classes:5, fill:[0xf7f7f7, 0xcccccc, 0x969696, 0x636363, 0x454545], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Oranges5": {classes:5, fill:[0xfeedde, 0xfdb385, 0xfd8d3c, 0xe6550d, 0xa63603], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] }, "Reds5": {classes:5, fill:[0xfee5d9, 0xfcae91, 0xfb6a4a, 0xde2d26, 0xa50f15], stroke:[0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3, 0xd3d3d3] } }; function createBucketColorStyle( colorName, colorSeries, rangeName, useGradient) { var theBucketStyle; var bucketStyleDef; var theStyles = []; var theColors = []; var aBucket, aStyle, aColor, aRange; var numClasses ; numClasses = colorSeries[colorName].classes; // create Color Styles for (var i=0; i < numClasses; i++) { theStyles[i] = new OM.style.Color( {fill: colorSeries[colorName].fill[i], stroke:colorSeries[colorName].stroke[i], strokeOpacity: useGradient? 0.25 : 1 }); }; bucketStyleDef = { numClasses : colorSeries[colorName].classes, // classification: 'custom', //since we are supplying all the buckets // buckets: theBuckets, classification: 'logarithmic', // use a logarithmic scale styles: theStyles, gradient: useGradient? 'linear' : 'off' // gradient: useGradient? 'radial' : 'off' }; theBucketStyle = new OM.style.BucketStyle(bucketStyleDef); return theBucketStyle; } function initMap() { //alert("Initialize map view"); // define the map extent and number of zoom levels. // The Universe object is similar to the map tile layer configuration // It defines the map extent, number of zoom levels, and spatial reference system // well-known ones (like web mercator/google/bing or maps.oracle/elocation are predefined // The Universe must be defined when there is no underlying map tile layer. // When there is a map tile layer then that defines the map extent, srid, and zoom levels. 
var uni= new OM.universe.Universe( { srid : 32775, bounds : new OM.geometry.Rectangle( -3280000, 170000, 2300000, 3200000, 32775), numberOfZoomLevels: 8 }); map = new OM.Map( document.getElementById('map'), { mapviewerURL: baseURL, universe:uni }) ; var navigationPanelBar = new OM.control.NavigationPanelBar(); map.addMapDecoration(navigationPanelBar); } // end initMap function states() { //alert("Load and display states"); layerName = "States"; if(statesLayer) { // states were already visible but the style may have changed // so set the style to the currently selected one var theData = $('#demo-htmlselect').val(); setStyle(theData); } else { // States is a predefined layer in user_sdo_themes var layer2 = new OM.layer.VectorLayer("vLayer2", { def: { type:OM.layer.VectorLayer.TYPE_PREDEFINED, dataSource:"mvdemo", theme:"us_states_bi", url: baseURL, loadOnDemand: false }, boundingTheme:true }); // add drop shadow effect and hover style var shadowFilter = new OM.visualfilter.DropShadow({opacity:0.5, color:"#000000", offset:6, radius:10}); var hoverStyle = new OM.style.Color( {stroke:"#838383", strokeThickness:2}); layer2.setHoverStyle(hoverStyle); layer2.setHoverVisualFilter(shadowFilter); layer2.enableFeatureHover(true); layer2.enableFeatureSelection(false); layer2.setLabelsVisible(true); // override predefined rendering style with programmatic one var theRenderingStyle = createBucketColorStyle('YlBr5', colorSeries, 'States5', true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state // It can be an array because the style could be a chart type (pie/bar) // which requires multiple attribute columns // Use the STATE.TOTPOP column (aka attribute) value here layer2.setRenderingStyle(theRenderingStyle, ["TOTPOP"]); currentPalette = "YlBr5"; var stLayerIdx = map.addLayer(layer2); //alert('State Layer Idx = ' + stLayerIdx); map.setMapCenter(mpoint); map.setMapZoomLevel(mapZoom) ; // display the map map.init() ; statesLayer=layer2; // add rt-click event listener to show counties for the state layer2.addListener(OM.event.MouseEvent.MOUSE_RIGHT_CLICK,stateRtClick); } // end if } // end states function setStyle(styleName) { // alert("Selected Style = " + styleName); // there may be a counties layer also displayed. // that wll have different bucket ranges so create // one style for states and one for counties var newRenderingStyle = null; if (layerName === "States") { if(/3/.test(styleName)) { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States3', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties3', false); } else { newRenderingStyle = createBucketColorStyle(styleName, colorSeries, 'States5', false); currentStyle = createBucketColorStyle(styleName, colorSeries, 'Counties5', false); } statesLayer.setRenderingStyle(newRenderingStyle, ["TOTPOP"]); if (stateCountyLayer) stateCountyLayer.setRenderingStyle(currentStyle, ["TOTPOP"]); } } // end setStyle function stateRtClick(evt){ var foi = evt.feature; //alert('Rt-Click on State: ' + foi.attributes['_label_'] + // ' with pop ' + foi.attributes['TOTPOP']); // display another layer with counties info // layer may change on each rt-click so create and add each time. 
var countyByState = null ; // the _label_ attribute of a feature in this case is the state abbreviation // we will use that to query and get the counties for a state var sqlText = "select totpop,geom32775 from counties_32775_moved where state_abrv="+ "'"+foi.getAttributeValue('_label_')+"'"; // alert(sqlText); if (currentStyle === null) currentStyle = createBucketColorStyle('YlBr5', colorSeries, 'Counties5', false); /* try a simple style instead new OM.style.ColorStyle( { stroke: "#B8F4FF", fill: "#18E5F4", fillOpacity:0 } ); */ // remove existing layer if any if(stateCountyLayer) map.removeLayer(stateCountyLayer); countyByState = new OM.layer.VectorLayer("stCountyLayer", {def:{type:OM.layer.VectorLayer.TYPE_JDBC, dataSource:"mvdemo", sql:sqlText, url:baseURL}}); // url:baseURL}, // renderingStyle:currentStyle}); countyByState.setVisible(true); // specify which attribute is used in determining the bucket (i.e. color) to use for the state countyByState.setRenderingStyle(currentStyle, ["TOTPOP"]); var ctLayerIdx = map.addLayer(countyByState); // alert('County Layer Idx = ' + ctLayerIdx); //map.addLayer(countyByState); stateCountyLayer = countyByState; } // end stateRtClick function useSelectedColorScheme(theColorScheme) { if(map) { // code to update renderStyle goes here //alert('will try to change render style'); setStyle(theColorScheme); } else { // do nothing } } </script> </head> <body bgcolor="#b4c5cc" style="height:100%;font-family:Arial,Helvetica,Verdana"> <h3 align="center">State population thematic map </h3> <div id="demo" style="position:absolute; left:68%; top:44px; width:28%; height:100%"> <HR/> <p/> Choose Color Scheme: <select id="demo-htmlselect"> <option value="YlBl3"> YellowBlue3</option> <option value="YlBr3"> YellowBrown3</option> <option value="YlBl5"> YellowBlue5</option> <option value="YlBr5" selected="selected"> YellowBrown5</option> <option value="Blues5"> Blues</option> <option value="Greens5"> Greens</option> <option value="Greys5"> Greys</option> <option value="Oranges5"> Oranges</option> <option value="Purples5"> Purples</option> <option value="Reds5"> Reds</option> </select> <p/> </div> <div id="map" style="position:absolute; left:10px; top:50px; width:65%; height:75%; background-color:#778f99"></div> <div style="position:absolute;top:85%; left:10px;width:98%" class="noprint"> <HR/> <p> Note: This demo uses HTML5 Canvas and requires IE9+, Firefox 10+, or Chrome. No map will show up in IE8 or earlier. </p> </div> </body> </html>

    Read the article

  • RESTful services architecture question

    - by abovesun
This is a question more about service architecture strategy: we are building a big web system based on REST services on the back end, and we are currently trying to establish some internal standards to follow while developing those services. Some queries return lists of entities; for example, consider an image-gallery retrieval service, /gell_all_galeries, returning the following response:

<galleries> <gallery> <id>some_gallery_id</id> <name>my photos</name> <photos> <photo> <id>123</id> <name>my photo</name> <location>http://mysite/photo/show/123</location> ...... <author> <id>some_id</id> <name>some name</name> ....... </author> </photo> <photo> ..... </photo> <photo> ..... </photo> <photo> ..... </photo> <photo> ..... </photo> </photos> </gallery> <gallery> .... </gallery> <gallery> .... </gallery> <gallery> .... </gallery> <gallery> .... </gallery> </galleries>

As you can see, this response is quite big and heavy, and we don't always need such a deep level of detail. The usual solution is to use <link> elements (in the style of Atom, http://ru.wikipedia.org/wiki/Atom) for each gallery instead of the full gallery data:

<galleries> <gallery> <id>some_gallery_id</id> <link href="http://mysite/gallery/some_gallery_id"/> </gallery> <gallery> <id>second_gallery_id</id> <link href="http://mysite/gallery/second_gallery_id"/> </gallery> <gallery> .... </gallery> <gallery> .... </gallery> <gallery> .... </gallery> <gallery> .... </gallery> </galleries>

The first question: maybe we shouldn't even use the <galleries> and <gallery> types at all, and should instead use a generic <list> and <item> for all resources that return lists of objects:

<list> <item><link href="http://mysite/gallery/some_gallery_id"/></item> <item><link href="http://mysite/gallery/other_gallery_id"/></item> <item>....</item> </list>

The second question: after a user tries to retrieve info about one concrete gallery, he'll use, for example, the http://mysite/gallery/some_gallery_id link - what should he see as the result? Should it be:

<gallery> <id>some_gallery_id</id> <name>my photos</name> <photos> <photo> <id>123</id> <name>my photo</name> <location>http://mysite/photo/show/123</location> ...... <author> <id>some_id</id> <name>some name</name> ....... </author> </photo> <photo> ..... </photo> <photo> ..... </photo> <photo> ..... </photo> <photo> ..... </photo> </photos> </gallery>

or:

<gallery> <id>some_gallery_id</id> <name>my photos</name> <photos> <photo><link href="http://mysite/photo/11111"/></photo> <photo><link href="http://mysite/photo/22222"/></photo> <photo><link href="http://mysite/photo/33333"/> </photo> <photo> ..... </photo> </photos> </gallery>

or:

<gallery> <id>some_gallery_id</id> <name>my photos</name> <photos> <photo> <link href="http://mysite/photo/11111"/> <author> <link href="http://mysite/author/11111"/> </author> </photo> <photo> <link href="http://mysite/photo/22222"/> <author> <link href="http://mysite/author/11111"/> </author> </photo> <photo> <link href="http://mysite/photo/33333"/> <author> <link href="http://mysite/author/11111"/> </author> </photo> <photo> ..... </photo> </photos> </gallery>

I mean, if we use links instead of full object info, how deep should we go? Should the author inside each photo also be a link, and so on? My question is probably ambiguous, but what I'm trying to do is create a general strategy for such cases that all team members can follow in the future.
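For what it's worth, a common middle ground between those two extremes is a summary representation: each list item carries its link plus one or two display fields, so clients can render a list without issuing a follow-up request per item. This is only an illustrative sketch (the rel attribute and choice of fields are assumptions, not part of the original design):

<galleries>
  <gallery>
    <id>some_gallery_id</id>
    <name>my photos</name>
    <link rel="self" href="http://mysite/gallery/some_gallery_id"/>
  </gallery>
  <gallery>
    <id>second_gallery_id</id>
    <name>more photos</name>
    <link rel="self" href="http://mysite/gallery/second_gallery_id"/>
  </gallery>
</galleries>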

    Read the article

  • Jquery append input for 2 different input-sections

    - by Email
    Hi i use the following simple jquery script to append input. Source http://muiomuio.com/web-design/add-remove-items-with-jquery <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>Add and Remove - jQuery tutorial</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> $(function() { var i = $('input').size() + 1; $('a.add').click(function() { $('<p><input type="text" value="input ' + i + '" name="input_field'+ i +'" /></p>').animate({ opacity: "show" }, "slow").appendTo('#inputs'); i++; }); $('a.remove').click(function() { if(i > 2) { $('input:last').animate({opacity:"hide"}, "slow").remove(); i--; } }); $('a.reset').click(function() { while(i > 2) { $('input:last').remove(); i--; } }); }); </script> </head> <body> <h1>Add / remove text fields from webform</h1> <a href="#" class="add"><img src="add.png" width="24" height="24" alt="add" title="add input" /></a> <a href="#" class="remove"><img src="remove.png" width="24" height="24" alt="remove input" /></a> <a href="#" class="reset"><img src="reset.png" width="24" height="24" alt="reset" /></a> <div id="inputs"> <p> <input type="text" value="input 1" name="input_field1"> </p> </div> </div> </body> </html> I know want to add more input fields so i add this html <div id="outputs"> <p> <input type="text" value="output 1" name="output_field1"> </p> how can i achieve that the var i = $('input').size() + 1; will be individually for every input section? EDITED: the full script is the following. copy and paste will get you a full clone of mine. Problem still exists <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" /> <title>Add and Remove - jQuery tutorial</title> <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js" type="text/javascript"></script> <script type="text/javascript"> $(function() { var i = $('input').size() + 1; $('a.add').click(function() { $('<p><input type="text" value="input ' + i + '" name="input_field'+ i +'" /></p>').animate({ opacity: "show" }, "slow").appendTo('#inputs'); i++; }); $('a.remove').click(function() { if(i > 2) { $('input:last').animate({opacity:"hide"}, "slow").remove(); i--; } }); $('a.reset').click(function() { while(i > 2) { $('input:last').remove(); i--; } }); $('a.add_o').click(function() { $('<p><input type="text" value="output ' + i + '" name="input_field'+ i +'" /></p>').animate({ opacity: "show" }, "slow").appendTo('#outputs'); i++; }); $('a.remove_o').click(function() { if(i > 2) { $('input:last').animate({opacity:"hide"}, "slow").remove(); i--; } }); $('a.reset_o').click(function() { while(i > 2) { $('input:last').remove(); i--; } }); }); </script> <style rel="stylesheet" type="text/css"> h1 { font-family:"Trebuchet MS", Arial, Helvetica, sans-serif;} .hide {visibility:hidden;} img {border:none;} input { width:500px; height:20px; padding:10px; background:#f7f7f7; border:1px solid #f0f0f0; color:#333; font-size:20px; text-align:center; line-height:120px; margin:0; -moz-border-radius:5px; -webkit-border-radius:5px; } #inputs { width:520px; padding:0px 20px; border:1px solid #f0f0f0; -moz-border-radius:20px; -webkit-border-radius:20px; } </style> </head> <body> <h1>Add / remove text fields from webform</h1> <a 
href="#" class="add"><img src="http://muiomuio.com/tutorials/jquery/add-remove/add.png" width="24" height="24" alt="add" title="add input" /></a> <a href="#" class="remove"><img src="http://muiomuio.com/tutorials/jquery/add-remove/remove.png" width="24" height="24" alt="remove input" /></a> <a href="#" class="reset"><img src="http://muiomuio.com/tutorials/jquery/add-remove/reset.png" width="24" height="24" alt="reset" /></a> <div id="inputs"> <p> <input type="text" value="input 1" name="input_field1"> </p> </div> <a href="#" class="add_o"><img src="http://muiomuio.com/tutorials/jquery/add-remove/add.png" width="24" height="24" alt="add" title="add input" /></a> <a href="#" class="remove_o"><img src="http://muiomuio.com/tutorials/jquery/add-remove/remove.png" width="24" height="24" alt="remove input" /></a> <a href="#" class="reset_o"><img src="http://muiomuio.com/tutorials/jquery/add-remove/reset.png" width="24" height="24" alt="reset" /></a> <div id="outputs"> <p> <input type="text" value="output 1" name="output_field1"> </p> </div> </body> </html>

    Read the article

  • Optimizing a lot of Scanner.findWithinHorizon(pattern, 0) calls

    - by darvids0n
    I'm building a process which extracts data from 6 csv-style files and two poorly laid out .txt reports and builds output CSVs, and I'm fully aware that there's going to be some overhead searching through all that whitespace thousands of times, but I never anticipated converting about about 50,000 records would take 12 hours. Excerpt of my manual matching code (I know it's horrible that I use lists of tokens like that, but it was the best thing I could think of): public static String lookup(List<String> tokensBefore, List<String> tokensAfter) { String result = null; while(_match(tokensBefore)) { // block until all input is read if(id.hasNext()) { result = id.next(); // capture the next token that matches if(_matchImmediate(tokensAfter)) // try to match tokensAfter to this result return result; } else return null; // end of file; no match } return null; // no matches } private static boolean _match(List<String> tokens) { return _match(tokens, true); } private static boolean _match(List<String> tokens, boolean block) { if(tokens != null && !tokens.isEmpty()) { if(id.findWithinHorizon(tokens.get(0), 0) == null) return false; for(int i = 1; i <= tokens.size(); i++) { if (i == tokens.size()) { // matches all tokens return true; } else if(id.hasNext() && !id.next().matches(tokens.get(i))) { break; // break to blocking behaviour } } } else { return true; // empty list always matches } if(block) return _match(tokens); // loop until we find something or nothing else return false; // return after just one attempted match } private static boolean _matchImmediate(List<String> tokens) { if(tokens != null) { for(int i = 0; i <= tokens.size(); i++) { if (i == tokens.size()) { // matches all tokens return true; } else if(!id.hasNext() || !id.next().matches(tokens.get(i))) { return false; // doesn't match, or end of file } } return false; // we have some serious problems if this ever gets called } else { return true; // empty list always matches } } Basically wondering how I would work in an efficient string search (Boyer-Moore or similar). My Scanner id is scanning a java.util.String, figured buffering it to memory would reduce I/O since the search here is being performed thousands of times on a relatively small file. The performance increase compared to scanning a BufferedReader(FileReader(File)) was probably less than 1%, the process still looks to be taking a LONG time. I've also traced execution and the slowness of my overall conversion process is definitely between the first and last like of the lookup method. In fact, so much so that I ran a shortcut process to count the number of occurrences of various identifiers in the .csv-style files (I use 2 lookup methods, this is just one of them) and the process completed indexing approx 4 different identifiers for 50,000 records in less than a minute. Compared to 12 hours, that's instant. Some notes (updated): I don't necessarily need the pattern-matching behaviour, I only get the first field of a line of text so I need to match line breaks or use Scanner.nextLine(). All ID numbers I need start at position 0 of a line and run through til the first block of whitespace, after which is the name of the corresponding object. I would ideally want to return a String, not an int locating the line number or start position of the result, but if it's faster then it will still work just fine. 
If an int is being returned, however, then I would now have to seek to that line again just to get the ID; storing the ID of every line that is searched sounds like a way around that. Anything to help me out, even if it saves 1ms per search, will help, so all input is appreciated. Thankyou! Usage scenario 1: I have a list of objects in file A, who in the old-style system have an id number which is not in file A. It is, however, POSSIBLY in another csv-style file (file B) or possibly still in a .txt report (file C) which each also contain a bunch of other information which is not useful here, and so file B needs to be searched through for the object's full name (1 token since it would reside within the second column of any given line), and then the first column should be the ID number. If that doesn't work, we then have to split the search token by whitespace into separate tokens before doing a search of file C for those tokens as well. Generalised code: String field; for (/* each record in file A */) { /* construct the rest of this object from file A info */ // now to find the ID, if we can List<String> objectName = new ArrayList<String>(1); objectName.add(Pattern.quote(thisObject.fullName)); field = lookup(objectSearchToken, objectName); // search file B if(field == null) // not found in file B { lookupReset(false); // initialise scanner to check file C objectName.clear(); // not using the full name String[] tokens = thisObject.fullName.split(id.delimiter().pattern()); for(String s : tokens) objectName.add(Pattern.quote(s)); field = lookup(objectSearchToken, objectName); // search file C lookupReset(true); // back to file B } else { /* found it, file B specific processing here */ } if(field != null) // found it in B or C thisObject.ID = field; } The objectName tokens are all uppercase words with possible hyphens or apostrophes in them, separated by spaces. Much like a person's name. As per a comment, I will pre-compile the regex for my objectSearchToken, which is just [\r\n]+. What's ending up happening in file C is, every single line is being checked, even the 95% of lines which don't contain an ID number and object name at the start. Would it be quicker to use ^[\r\n]+.*(objectname) instead of two separate regexes? It may reduce the number of _match executions. The more general case of that would be, concatenate all tokensBefore with all tokensAfter, and put a .* in the middle. It would need to be matching backwards through the file though, otherwise it would match the correct line but with a huge .* block in the middle with lots of lines. The above situation could be resolved if I could get java.util.Scanner to return the token previous to the current one after a call to findWithinHorizon. I have another usage scenario. Will put it up asap.
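For what it's worth, the one-pass indexing idea mentioned at the end can replace almost all of the repeated findWithinHorizon() scans. Below is a hedged sketch (the class and helper names are made up, and it assumes the file C layout described above: ID at position 0, then whitespace, then the object name):

import java.util.HashMap;
import java.util.Map;

class IdIndex {
    // maps object name -> ID, built in a single pass over the report text
    private final Map<String, String> nameToId = new HashMap<String, String>();

    IdIndex(String fileContents) {
        for (String line : fileContents.split("[\r\n]+")) {
            int ws = line.indexOf(' ');              // end of the ID field
            if (ws <= 0) continue;                   // line has no ID/name pair
            String id = line.substring(0, ws);
            String name = line.substring(ws).trim(); // rest of line holds the name
            nameToId.put(name, id);
        }
    }

    String lookup(String fullName) {
        return nameToId.get(fullName);               // null when absent, as before
    }
}

Building the map costs one scan of the file; every subsequent lookup is a constant-time probe, which is consistent with the counting shortcut above finishing in under a minute. The same idea works for the csv-style file B with a comma split instead of the whitespace split.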

    Read the article

  • PHP session not working with JQuery Ajax?

    - by Bolt_Head
Update, Solved: After all this I found out that I was calling an old version of my code in the update ajax: 'boardControl.php' instead of 'boardUpdate.php'. These are the kinds of mistakes that make programming fun. I'm writing a browser gomoku game. I have the ajax statement that allows the player to play a piece. $(document).ready(function() { $("td").live('click',function(){ var value = $(this).attr('id'); $.get('includes/boardControl.php',{play: value, bid: bid}); }); }); value = board square location bid = board ID Before creating a user login for player identification, the server-side PHP had a temporary solution: it would rotate the piece state for the squares when clicked, instead of knowing which player to create them for. After creating the login I set a session variable for the player's ID. I was hoping to read the session ID from the PHP during the ajax request and figure out which player they are from there. session_start(); ... $playerId = $_SESSION['char']; $Query=("SELECT p1, p2 FROM board WHERE bid=$bid"); $Result=mysql_query($Query); $p1 = mysql_result($Result,0,"p1"); $p2 = mysql_result($Result,0,"p2"); $newPiece = 0; //*default no player if($playerId == $p1) $newPiece = 1; if($playerId == $p2) $newPiece = 2; For some reason when I run the full web app, the pieces still cycle through, even after I deleted the code that made them cycle. Furthermore, after logging in, if I manually load the PHP page in the browser, it modifies the database correctly (it only plays pieces belonging to that player) and outputs the correct results. It seems to me that the session is not being carried over when used with Ajax, yet Google searches tell me that sessions do work with Ajax. Update: I'm trying to provide more information. Logging in works correctly. My ID is recognized, and I printed it out next to the board to ensure that I was retrieving it correctly. The ajax request does update the board. The values passed are correct and confirmed with Firebug's console. However, instead of placing pieces only for the player they belong to, it cycles through the piece states (0,1,2). When manually browsing to boardUpdate.php and putting in the same values sent from the Ajax, the echoed response indicates that the corresponding piece is played each time, as intended. Same results on my laptop after a fresh load of Firefox. Manually browsing to boardUpdate.php without logging in beforehand leaves the board unchanged (as intended when no user is found in the session). I've double-checked that session_start() is in the PHP files and double-checked the session ID variables. Hope this extra information helps; I'm running out of ideas about what to tell you. Should I load up the full code? Update 2: After checking the Ajax response in Firebug I realized that the 'play' request does not get a result, and the board is not updated till the next 'update'. I'm still looking into this but I'll post it here for you guys too. boardUpdate.php Notable places are: Refresh Board (line 6), Place Piece (line 20), function boardUpdate($turnCount) (line 63) <?php session_start(); require '../../omok/dbConnect.php'; //*** Refresh Board *** if(isset($_GET['update'])) { $bid = $_GET['bid']; $Query=("SELECT turn FROM board WHERE bid=$bid"); $Result=mysql_query($Query); $turnCount=mysql_result($Result,0,"turn"); if($_GET['turnCount'] < $turnCount) //** Turn increased { boardUpdate($turnCount); } } //*** Place Piece *** if(isset($_GET['play'])) // turn order? player detect?
{ $squareID = $_GET['play']; $bid = $_GET['bid']; $Query=("SELECT turn, boardstate FROM board WHERE bid=$bid"); $Result=mysql_query($Query); $turnCount=mysql_result($Result,0,"turn"); $boardState=mysql_result($Result,0,"boardstate"); $turnCount++; $playerId = $_SESSION['char']; $Query=("SELECT p1, p2 FROM board WHERE bid=$bid"); $Result=mysql_query($Query); $p1 = mysql_result($Result,0,"p1"); $p2 = mysql_result($Result,0,"p2"); $newPiece = 0; //*default no player if($playerId == $p1) $newPiece = 1; if($playerId == $p2) $newPiece = 2; // if($newPiece != 0) // { $oldPiece = getBoardSpot($squareID, $bid); $oldLetter = $boardState{floor($squareID/3)}; $slot = $squareID%3; //***function updateCode($old, $new, $current, $slot)*** $newLetter = updateCode($oldPiece, $newPiece, $oldLetter, $slot); $newLetter = value2Letter($newLetter); $newBoard = substr_replace($boardState, $newLetter, floor($squareID/3), 1); //** Update Query for boardstate & turn $Query=("UPDATE board SET boardState = '$newBoard', turn = '$turnCount' WHERE bid = '$bid'"); mysql_query($Query); // } boardUpdate($turnCount); } function boardUpdate($turnCount) { $json = '{"turnCount":"'.$turnCount.'",'; //** turnCount ** $bid = $_GET['bid']; $Query=("SELECT boardstate FROM board WHERE bid='$bid'"); $Result=mysql_query($Query); $Board=mysql_result($Result,0,"boardstate"); $json.= '"boardState":"'.$Board.'"'; //** boardState ** $json.= '}'; echo $json; } function letter2Value($input) { if(ord($input) >= 48 && ord($input) <= 57) return ord($input) - 48; else return ord($input) - 87; } function value2Letter($input) { if($input >= 10) return chr($input += 87); else return chr($input += 48); } //*** UPDATE CODE *** updates an letter with a new peice change and returns result letter. //***** $old : peice value before update //***** $new : peice value after update //***** $current : letterValue of code before update. //***** $slot : which of the 3 sqaures the change needs to take place in. function updateCode($old, $new, $current, $slot) { if($slot == 0) {// echo $current,"+((",$new,"-",$old,")*9)"; return letter2Value($current)+(($new-$old)*9); } else if($slot == 1) {// echo $current,"+((",$new,"-",$old,")*3)"; return letter2Value($current)+(($new-$old)*3); } else //slot == 2 {// echo $current,"+((",$new,"-",$old,")"; return letter2Value($current)+($new-$old); } }//updateCode() //**** GETBOARDSPOT *** Returns the peice value at defined location on the board. //****** 0 is first sqaure increment +1 in reading order (0-254). function getBoardSpot($squareID, $bid) { $Query=("SELECT boardstate FROM board WHERE bid='$bid'"); $Result=mysql_query($Query); $Board=mysql_result($Result,0,"boardstate"); if($squareID %3 == 2) //**3rd spot** { if( letter2Value($Board{floor($squareID/3)} ) % 3 == 0) return 0; else if( letter2Value($Board{floor($squareID/3)} ) % 3 == 1) return 1; else return 2; } else if($squareID %3 == 0) //**1st spot** { if(letter2Value($Board{floor($squareID/3)} ) <= 8) return 0; else if(letter2Value($Board{floor($squareID/3)} ) >= 18) return 2; else return 1; } else //**2nd spot** { return floor(letter2Value($Board{floor($squareID/3)}))/3%3; } }//end getBoardSpot() ?> Please help, I'd be glad to provide more information if needed. Thanks in advance =)
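Given the fix described in the update at the top, the only change needed was in the client-side call, which was still pointing at the stale copy of the server script:

$(document).ready(function() {
    $("td").live('click', function() {
        var value = $(this).attr('id');
        // was 'includes/boardControl.php' - an old copy without the session logic
        $.get('includes/boardUpdate.php', {play: value, bid: bid});
    });
});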

    Read the article

  • DirectShow: Video-Preview and Image (with working code)

    - by xsl
    Questions / Issues If someone can recommend me a good free hosting site I can provide the whole project file. As mentioned in the text below the TakePicture() method is not working properly on the HTC HD 2 device. It would be nice if someone could look at the code below and tell me if it is right or wrong what I'm doing. Introduction I recently asked a question about displaying a video preview, taking camera image and rotating a video stream with DirectShow. The tricky thing about the topic is, that it's very hard to find good examples and the documentation and the framework itself is very hard to understand for someone who is new to windows programming and C++ in general. Nevertheless I managed to create a class that implements most of this features and probably works with most mobile devices. Probably because the DirectShow implementation depends a lot on the device itself. I could only test it with the HTC HD and HTC HD2, which are known as quite incompatible. HTC HD Working: Video preview, writing photo to file Not working: Set video resolution (CRASH), set photo resolution (LOW quality) HTC HD 2 Working: Set video resolution, set photo resolution Problematic: Video Preview rotated Not working: Writing photo to file To make it easier for others by providing a working example, I decided to share everything I have got so far below. I removed all of the error handling for the sake of simplicity. As far as documentation goes, I can recommend you to read the MSDN documentation, after that the code below is pretty straight forward. void Camera::Init() { CreateComObjects(); _captureGraphBuilder->SetFiltergraph(_filterGraph); InitializeVideoFilter(); InitializeStillImageFilter(); } Dipslay a video preview (working with any tested handheld): void Camera::DisplayVideoPreview(HWND windowHandle) { IVideoWindow *_vidWin; _filterGraph->QueryInterface(IID_IMediaControl,(void **) &_mediaControl); _filterGraph->QueryInterface(IID_IVideoWindow, (void **) &_vidWin); _videoCaptureFilter->QueryInterface(IID_IAMVideoControl, (void**) &_videoControl); _captureGraphBuilder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, _videoCaptureFilter, NULL, NULL); CRect rect; long width, height; GetClientRect(windowHandle, &rect); _vidWin->put_Owner((OAHWND)windowHandle); _vidWin->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS); _vidWin->get_Width(&width); _vidWin->get_Height(&height); height = rect.Height(); _vidWin->put_Height(height); _vidWin->put_Width(rect.Width()); _vidWin->SetWindowPosition(0,0, rect.Width(), height); _mediaControl->Run(); } HTC HD2: If set SetPhotoResolution() is called FindPin will return E_FAIL. If not, it will create a file full of null bytes. HTC HD: Works void Camera::TakePicture(WCHAR *fileName) { CComPtr<IFileSinkFilter> fileSink; CComPtr<IPin> stillPin; CComPtr<IUnknown> unknownCaptureFilter; CComPtr<IAMVideoControl> videoControl; _imageSinkFilter.QueryInterface(&fileSink); fileSink->SetFileName(fileName, NULL); _videoCaptureFilter.QueryInterface(&unknownCaptureFilter); _captureGraphBuilder->FindPin(unknownCaptureFilter, PINDIR_OUTPUT, &PIN_CATEGORY_STILL, &MEDIATYPE_Video, FALSE, 0, &stillPin); _videoCaptureFilter.QueryInterface(&videoControl); videoControl->SetMode(stillPin, VideoControlFlag_Trigger); } Set resolution: Works great on HTC HD2. 
HTC HD won't allow SetVideoResolution() and only offers one low resolution photo resolution: void Camera::SetVideoResolution(int width, int height) { SetResolution(true, width, height); } void Camera::SetPhotoResolution(int width, int height) { SetResolution(false, width, height); } void Camera::SetResolution(bool video, int width, int height) { IAMStreamConfig *config; config = NULL; if (video) { _captureGraphBuilder->FindInterface(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config); } else { _captureGraphBuilder->FindInterface(&PIN_CATEGORY_STILL, &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config); } int resolutions, size; VIDEO_STREAM_CONFIG_CAPS caps; config->GetNumberOfCapabilities(&resolutions, &size); for (int i = 0; i < resolutions; i++) { AM_MEDIA_TYPE *mediaType; if (config->GetStreamCaps(i, &mediaType, reinterpret_cast<BYTE*>(&caps)) == S_OK ) { int maxWidth = caps.MaxOutputSize.cx; int maxHeigth = caps.MaxOutputSize.cy; if(maxWidth == width && maxHeigth == height) { VIDEOINFOHEADER *info = reinterpret_cast<VIDEOINFOHEADER*>(mediaType->pbFormat); info->bmiHeader.biWidth = maxWidth; info->bmiHeader.biHeight = maxHeigth; info->bmiHeader.biSizeImage = DIBSIZE(info->bmiHeader); config->SetFormat(mediaType); DeleteMediaType(mediaType); break; } DeleteMediaType(mediaType); } } } Other methods used to build the filter graph and create the COM objects: void Camera::CreateComObjects() { CoInitialize(NULL); CoCreateInstance(CLSID_CaptureGraphBuilder, NULL, CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2, (void **) &_captureGraphBuilder); CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void **) &_filterGraph); CoCreateInstance(CLSID_VideoCapture, NULL, CLSCTX_INPROC, IID_IBaseFilter, (void**) &_videoCaptureFilter); CoCreateInstance(CLSID_IMGSinkFilter, NULL, CLSCTX_INPROC, IID_IBaseFilter, (void**) &_imageSinkFilter); } void Camera::InitializeVideoFilter() { _videoCaptureFilter->QueryInterface(&_propertyBag); wchar_t deviceName[MAX_PATH] = L"\0"; GetDeviceName(deviceName); CComVariant comName = deviceName; CPropertyBag propertyBag; propertyBag.Write(L"VCapName", &comName); _propertyBag->Load(&propertyBag, NULL); _filterGraph->AddFilter(_videoCaptureFilter, L"Video Capture Filter Source"); } void Camera::InitializeStillImageFilter() { _filterGraph->AddFilter(_imageSinkFilter, L"Still image filter"); _captureGraphBuilder->RenderStream(&PIN_CATEGORY_STILL, &MEDIATYPE_Video, _videoCaptureFilter, NULL, _imageSinkFilter); } void Camera::GetDeviceName(WCHAR *deviceName) { HRESULT hr = S_OK; HANDLE handle = NULL; DEVMGR_DEVICE_INFORMATION di; GUID guidCamera = { 0xCB998A05, 0x122C, 0x4166, 0x84, 0x6A, 0x93, 0x3E, 0x4D, 0x7E, 0x3C, 0x86 }; di.dwSize = sizeof(di); handle = FindFirstDevice(DeviceSearchByGuid, &guidCamera, &di); StringCchCopy(deviceName, MAX_PATH, di.szLegacyName); } Full header file: #ifndef __CAMERA_H__ #define __CAMERA_H__ class Camera { public: void Init(); void DisplayVideoPreview(HWND windowHandle); void TakePicture(WCHAR *fileName); void SetVideoResolution(int width, int height); void SetPhotoResolution(int width, int height); private: CComPtr<ICaptureGraphBuilder2> _captureGraphBuilder; CComPtr<IGraphBuilder> _filterGraph; CComPtr<IBaseFilter> _videoCaptureFilter; CComPtr<IPersistPropertyBag> _propertyBag; CComPtr<IMediaControl> _mediaControl; CComPtr<IAMVideoControl> _videoControl; CComPtr<IBaseFilter> _imageSinkFilter; void GetDeviceName(WCHAR *deviceName); void 
InitializeVideoFilter(); void InitializeStillImageFilter(); void CreateComObjects(); void SetResolution(bool video, int width, int height); }; #endif
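For completeness, typical usage of the class would look roughly like the sketch below (the window handle and file path are placeholders, and the resolutions should match modes the driver actually reports via GetStreamCaps):

Camera camera;
camera.Init();
camera.SetVideoResolution(640, 480);    // fine on the HD2; crashes on the HD
camera.SetPhotoResolution(2048, 1536);  // the HD only offers one low resolution
camera.DisplayVideoPreview(previewWindowHandle);
// ... later, e.g. from a button handler:
camera.TakePicture(L"\\My Documents\\photo.jpg");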

    Read the article

  • Returning new object, overwrite the existing one in Java

    - by lupin
    Note: This is an assignment. Hi, Ok I have this method that will create a supposedly union of 2 sets. i mport java.io.*; class Set { public int numberOfElements; public String[] setElements; public int maxNumberOfElements; // constructor for our Set class public Set(int numberOfE, int setE, int maxNumberOfE) { this.numberOfElements = numberOfE; this.setElements = new String[setE]; this.maxNumberOfElements = maxNumberOfE; } // Helper method to shorten/remove element of array since we're using basic array instead of ArrayList or HashSet from collection interface :( static String[] removeAt(int k, String[] arr) { final int L = arr.length; String[] ret = new String[L - 1]; System.arraycopy(arr, 0, ret, 0, k); System.arraycopy(arr, k + 1, ret, k, L - k - 1); return ret; } int findElement(String element) { int retval = 0; for ( int i = 0; i < setElements.length; i++) { if ( setElements[i] != null && setElements[i].equals(element) ) { return retval = i; } retval = -1; } return retval; } void add(String newValue) { int elem = findElement(newValue); if( numberOfElements < maxNumberOfElements && elem == -1 ) { setElements[numberOfElements] = newValue; numberOfElements++; } } int getLength() { if ( setElements != null ) { return setElements.length; } else { return 0; } } String[] emptySet() { setElements = new String[0]; return setElements; } Boolean isFull() { Boolean True = new Boolean(true); Boolean False = new Boolean(false); if ( setElements.length == maxNumberOfElements ){ return True; } else { return False; } } Boolean isEmpty() { Boolean True = new Boolean(true); Boolean False = new Boolean(false); if ( setElements.length == 0 ) { return True; } else { return False; } } void remove(String newValue) { for ( int i = 0; i < setElements.length; i++) { if ( setElements[i] != null && setElements[i].equals(newValue) ) { setElements = removeAt(i,setElements); } } } int isAMember(String element) { int retval = -1; for ( int i = 0; i < setElements.length; i++ ) { if (setElements[i] != null && setElements[i].equals(element)) { return retval = i; } } return retval; } void printSet() { for ( int i = 0; i < setElements.length; i++) { if (setElements[i] != null) { System.out.println("Member elements on index: "+ i +" " + setElements[i]); } } } String[] getMember() { String[] tempArray = new String[setElements.length]; for ( int i = 0; i < setElements.length; i++) { if(setElements[i] != null) { tempArray[i] = setElements[i]; } } return tempArray; } Set union(Set x, Set y) { String[] newXtemparray = new String[x.getLength()]; String[] newYtemparray = new String[y.getLength()]; int len = newYtemparray.length + newXtemparray.length; Set temp = new Set(0,len,len); newXtemparray = x.getMember(); newYtemparray = x.getMember(); for(int i = 0; i < newYtemparray.length; i++) { temp.add(newYtemparray[i]); } for(int j = 0; j < newXtemparray.length; j++) { temp.add(newXtemparray[j]); } return temp; } Set difference(Set x, Set y) { String[] newXtemparray = new String[x.getLength()]; String[] newYtemparray = new String[y.getLength()]; int len = newYtemparray.length + newXtemparray.length; Set temp = new Set(0,len,len); newXtemparray = x.getMember(); newYtemparray = x.getMember(); for(int i = 0; i < newXtemparray.length; i++) { temp.add(newYtemparray[i]); } for(int j = 0; j < newYtemparray.length; j++) { int retval = temp.findElement(newYtemparray[j]); if( retval != -1 ) { temp.remove(newYtemparray[j]); } } return temp; } } // This is the SetDemo class that will make use of our Set class class SetDemo { public static 
void main(String[] args) { //get input from keyboard BufferedReader keyboard; InputStreamReader reader; String temp = ""; reader = new InputStreamReader(System.in); keyboard = new BufferedReader(reader); try { System.out.println("Enter string element to be added" ); temp = keyboard.readLine( ); System.out.println("You entered " + temp ); } catch (IOException IOerr) { System.out.println("There was an error during input"); } /* ************************************************************************** * Test cases for our new created Set class. * ************************************************************************** */ Set setA = new Set(0,10,10); setA.add(temp); setA.add("b"); setA.add("b"); setA.add("hello"); setA.add("world"); setA.add("six"); setA.add("seven"); setA.add("b"); int size = setA.getLength(); System.out.println("Set size is: " + size ); Boolean isempty = setA.isEmpty(); System.out.println("Set is empty? " + isempty ); int ismember = setA.isAMember("sixb"); System.out.println("Element sixb is member of setA? " + ismember ); Boolean output = setA.isFull(); System.out.println("Set is full? " + output ); //setA.printSet(); int index = setA.findElement("world"); System.out.println("Element b located on index: " + index ); setA.remove("b"); //setA.emptySet(); int resize = setA.getLength(); System.out.println("Set size is: " + resize ); //setA.printSet(); Set setB = new Set(0,10,10); setB.add("b"); setB.add("z"); setB.add("x"); setB.add("y"); Set setC = setA.union(setB,setA); System.out.println("Elements of setA"); setA.printSet(); System.out.println("Union of setA and setB"); setC.printSet(); } } The union method works a sense that somehow I can call another method on it but it doesn't do the job, i supposedly would create and union of all elements of setA and setB but it only return element of setB. Sample output follows: java SetDemo Enter string element to be added hello You entered hello Set size is: 10 Set is empty? false Element sixb is member of setA? -1 Set is full? true Element b located on index: 2 Set size is: 9 Elements of setA Member elements on index: 0 hello Member elements on index: 1 world Member elements on index: 2 six Member elements on index: 3 seven Union of setA and setB Member elements on index: 0 b Member elements on index: 1 z Member elements on index: 2 x Member elements on index: 3 y thanks, lupin
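For anyone else hitting the same symptom: the output above (a union containing only setB's elements) is exactly what the copy/paste slip in union() produces - both temp arrays are filled from x, so y's members are never copied. difference() has the same slip. A sketch of the fix:

// inside union(Set x, Set y) - and likewise in difference(Set x, Set y):
newXtemparray = x.getMember();
newYtemparray = y.getMember();   // was x.getMember(), so y's elements were lost

With setA.union(setB, setA), x is setB and y is setA, which is why only b, z, x and y showed up in setC.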

    Read the article

  • Is it Possible to show a previously hidden JFrame using a keylistener

    - by JIM
    Here is my code. It is basically a tester for the most common listeners, which I might reuse in future projects. The main problem is in the KeyListener at the bottom: I am trying to re-show the frame, but I don't think it can be done that way. Please help. (P.S. No idea why the imports don't show up right.)

        package newpackage;

        import java.awt.Color;
        import javax.swing.JButton;
        import javax.swing.JFrame;
        import javax.swing.JLabel;
        import javax.swing.JTextField;
        import javax.swing.JSeparator;
        import java.awt.event.ActionListener;
        import java.awt.event.ActionEvent;
        import java.awt.event.KeyEvent;
        import java.awt.event.KeyListener;
        import java.awt.event.MouseEvent;
        import java.awt.event.MouseListener;
        import javax.swing.JTextArea;
        import javax.swing.SwingUtilities;
        import javax.swing.UIManager;

        public class NewClass1 extends JFrame {

            private JLabel item1, infomouse, infoclicks, infoKeys, writehere;
            private JButton button1, button2, button3;
            private JTextArea text1, status, KeyStatus;
            private JTextField text2, text3, mouse, clicks, test;
            private JSeparator sep1;
            private int clicknumber;

            public NewClass1() {
                super("Listener Tests");
                setLayout(null);
                sep1 = new JSeparator();
                button1 = new JButton("Button1");
                button2 = new JButton("Button2");
                button3 = new JButton("Button3");
                item1 = new JLabel("Button Status :");
                infomouse = new JLabel("Mouse Status :");
                infoclicks = new JLabel("Nº of clicks :");
                infoKeys = new JLabel("Keyboard status:");
                writehere = new JLabel("Write here: ");
                text1 = new JTextArea();
                text2 = new JTextField(20);
                text3 = new JTextField(20);
                status = new JTextArea();
                mouse = new JTextField(20);
                clicks = new JTextField(4);
                KeyStatus = new JTextArea();
                test = new JTextField(3);
                clicks.setText(String.valueOf(clicknumber));
                text1.setEditable(true);
                text2.setEditable(false);
                text3.setEditable(false);
                status.setEditable(false);
                mouse.setEditable(false);
                clicks.setEditable(false);
                KeyStatus.setEditable(false);
                text1.setBounds(135, 310, 150, 20);
                text2.setBounds(135, 330, 150, 20);
                text3.setBounds(135, 350, 150, 20);
                status.setBounds(15, 20, 240, 20);
                infomouse.setBounds(5, 45, 120, 20);
                infoKeys.setBounds(5, 90, 120, 20);
                KeyStatus.setBounds(15, 115, 240, 85);
                test.setBounds(15, 225, 240, 20);
                mouse.setBounds(15, 70, 100, 20);
                infoclicks.setBounds(195, 45, 140, 20);
                clicks.setBounds(195, 70, 60, 20);
                item1.setBounds(5, 0, 120, 20);
                button1.setBounds(10, 310, 115, 20);
                button2.setBounds(10, 330, 115, 20);
                button3.setBounds(10, 350, 115, 20);
                sep1.setBounds(5, 305, 285, 10);
                sep1.setBackground(Color.BLACK);
                status.setBackground(Color.LIGHT_GRAY);
                button1.addActionListener(new button1list());
                button2.addActionListener(new button1list());
                button3.addActionListener(new button1list());
                button1.addMouseListener(new MouseList());
                button2.addMouseListener(new MouseList());
                button3.addMouseListener(new MouseList());
                getContentPane().addMouseListener(new MouseList());
                test.addKeyListener(new KeyList());
                this.addKeyListener(new KeyList());
                test.requestFocus();
                add(item1); add(button1); add(button2); add(button3);
                add(text1); add(text2); add(text3); add(status);
                add(infomouse); add(mouse); add(infoclicks); add(clicks);
                add(infoKeys); add(KeyStatus); add(test); add(sep1);
                setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                try {
                    UIManager.setLookAndFeel(UIManager.getSystemLookAndFeelClassName());
                } catch (Exception e) {
                    System.out.println("Error");
                }
                SwingUtilities.updateComponentTreeUI(this);
                setSize(300, 400);
                setResizable(false);
                setVisible(true);
                test.setFocusable(true);
                test.setFocusTraversalKeysEnabled(false);
                setLocationRelativeTo(null);
            }

            public class button1list implements ActionListener {
                public void actionPerformed(ActionEvent e) {
                    String buttonpressed = e.getActionCommand();
                    if (buttonpressed.equals("Button1")) {
                        text1.setText("just");
                    } else if (buttonpressed.equals("Button2")) {
                        text2.setText(text2.getText() + "testing ");
                    } else if (buttonpressed.equals("Button3")) {
                        text3.setText("this");
                    }
                }
            }

            public class MouseList implements MouseListener {
                public void mouseEntered(MouseEvent e) {
                    if (e.getSource() == button1) {
                        status.setText("button 1 hovered");
                    } else if (e.getSource() == button2) {
                        status.setText("button 2 hovered");
                    } else if (e.getSource() == button3) {
                        status.setText("button 3 hovered");
                    }
                }
                public void mouseExited(MouseEvent e) {
                    status.setText("");
                }
                public void mouseReleased(MouseEvent e) {
                    if (!status.getText().equals("")) {
                        status.replaceRange("", 0, 22);
                    }
                }
                public void mousePressed(MouseEvent e) {
                    if (e.getSource() == button1) {
                        status.setText("button 1 being pressed");
                    } else if (e.getSource() == button2) {
                        status.setText("button 2 being pressed");
                    } else if (e.getSource() == button3) {
                        status.setText("button 3 being pressed");
                    }
                }
                public void mouseClicked(MouseEvent e) {
                    clicknumber++;
                    mouse.setText("mouse working");
                    clicks.setText(String.valueOf(clicknumber));
                }
            }

            public class KeyList implements KeyListener {
                public void keyReleased(KeyEvent e) {}
                public void keyPressed(KeyEvent e) {
                    KeyStatus.setText("");
                    test.setText("");
                    String full = e.paramString();
                    String[] temp = null;
                    temp = full.split(",");
                    for (int i = 0; i < 7; i++) {
                        KeyStatus.append(temp[i] + "\n");
                    }
                    if (e.getKeyChar() == 'h') { setVisible(false); test.requestFocus(); }
                    if (e.getKeyChar() == 's') { setVisible(true); }
                }
                public void keyTyped(KeyEvent e) {}
            }
        }
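    A plausible reading of the KeyListener problem, offered as a guess rather than a verified diagnosis: once setVisible(false) runs, the hidden frame and the test field stop receiving key events, so the keyPressed branch that would call setVisible(true) on 's' can never fire. A minimal sketch of one workaround is to re-show the frame from a javax.swing.Timer armed before hiding it (the 2-second delay is an arbitrary illustrative value):

        // Inside KeyList.keyPressed: hide on 'h', then let a Swing Timer
        // re-show the frame, since a hidden frame hears no key events.
        if (e.getKeyChar() == 'h') {
            setVisible(false);
            javax.swing.Timer timer = new javax.swing.Timer(2000, new ActionListener() {
                public void actionPerformed(ActionEvent ev) {
                    setVisible(true);   // runs on the EDT after the delay
                }
            });
            timer.setRepeats(false);
            timer.start();
        }

    A KeyEventDispatcher registered with the KeyboardFocusManager can see keystrokes while any window of the application is focused, but nothing in pure Swing receives keys once the whole application is hidden, so a timer (or a second visible window) is needed to bring the frame back.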

    Read the article

  • parallelizing code using openmp

    - by anubhav
    Hi. The function below contains three nested for loops; I have included the whole function for easy understanding. I want to parallelize the innermost for loop, since it takes the most CPU time; then I can think about the outer two loops. I can see dependencies and inlined function calls in the innermost loop. Can it be rewritten to enable parallelization using OpenMP pragmas? Please tell me how. The loop I am interested in comes first, followed by the full function it lives in, for reference.

        /* LOOP WHICH I WANT TO PARALLELIZE */
        for (y = 0; y < 4; y++)
        {
          refptr = PelYline_11 (ref_pic, abs_y++, abs_x, img_height, img_width);
          LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
          LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
        }

    The full function where this loop exists is given below for reference.

        /*!
         ***********************************************************************
         * \brief
         *    Setup the fast search for an macroblock
         ***********************************************************************
         */
        void SetupFastFullPelSearch (short ref, int list)  // <-- reference frame parameter, list0 or 1
        {
          short pmv[2];
          pel_t orig_blocks[256], *orgptr=orig_blocks, *refptr, *tem;  // created pointer tem
          int offset_x, offset_y, x, y, range_partly_outside, ref_x, ref_y, pos, abs_x, abs_y, bindex, blky;
          int LineSadBlk0, LineSadBlk1, LineSadBlk2, LineSadBlk3;
          int max_width, max_height;
          int img_width, img_height;
          StorablePicture *ref_picture;
          pel_t *ref_pic;
          int** block_sad = BlockSAD[list][ref][7];
          int search_range = max_search_range[list][ref];
          int max_pos = (2*search_range+1) * (2*search_range+1);
          int list_offset = ((img->MbaffFrameFlag)&&(img->mb_data[img->current_mb_nr].mb_field)) ?
                              (img->current_mb_nr%2 ? 4 : 2) : 0;
          int apply_weights = ( (active_pps->weighted_pred_flag && (img->type == P_SLICE || img->type == SP_SLICE)) ||
                                (active_pps->weighted_bipred_idc && (img->type == B_SLICE)));

          ref_picture = listX[list+list_offset][ref];

          //===== Use weighted Reference for ME ====
          if (apply_weights && input->UseWeightedReferenceME)
            ref_pic = ref_picture->imgY_11_w;
          else
            ref_pic = ref_picture->imgY_11;

          max_width  = ref_picture->size_x - 17;
          max_height = ref_picture->size_y - 17;
          img_width  = ref_picture->size_x;
          img_height = ref_picture->size_y;

          //===== get search center: predictor of 16x16 block =====
          SetMotionVectorPredictor (pmv, enc_picture->ref_idx, enc_picture->mv, ref, list, 0, 0, 16, 16);
          search_center_x[list][ref] = pmv[0] / 4;
          search_center_y[list][ref] = pmv[1] / 4;
          if (!input->rdopt)
          {
            //--- correct center so that (0,0) vector is inside ---
            search_center_x[list][ref] = max(-search_range, min(search_range, search_center_x[list][ref]));
            search_center_y[list][ref] = max(-search_range, min(search_range, search_center_y[list][ref]));
          }
          search_center_x[list][ref] += img->opix_x;
          search_center_y[list][ref] += img->opix_y;
          offset_x = search_center_x[list][ref];
          offset_y = search_center_y[list][ref];

          //===== copy original block for fast access =====
          for (y = img->opix_y; y < img->opix_y+16; y++)
            for (x = img->opix_x; x < img->opix_x+16; x++)
              *orgptr++ = imgY_org [y][x];

          //===== check if whole search range is inside image =====
          if (offset_x >= search_range && offset_x <= max_width  - search_range &&
              offset_y >= search_range && offset_y <= max_height - search_range )
          {
            range_partly_outside = 0;
            PelYline_11 = FastLine16Y_11;
          }
          else
          {
            range_partly_outside = 1;
          }

          //===== determine position of (0,0)-vector =====
          if (!input->rdopt)
          {
            ref_x = img->opix_x - offset_x;
            ref_y = img->opix_y - offset_y;
            for (pos = 0; pos < max_pos; pos++)
            {
              if (ref_x == spiral_search_x[pos] && ref_y == spiral_search_y[pos])
              {
                pos_00[list][ref] = pos;
                break;
              }
            }
          }

          //===== loop over search range (spiral search): get blockwise SAD =====
          // ===== THIS IS THE PART WHERE THE NESTED FOR LOOPS START =====
          for (pos = 0; pos < max_pos; pos++)   // OUTERMOST FOR LOOP
          {
            abs_y = offset_y + spiral_search_y[pos];
            abs_x = offset_x + spiral_search_x[pos];
            if (range_partly_outside)
            {
              if (abs_y >= 0 && abs_y <= max_height && abs_x >= 0 && abs_x <= max_width )
              {
                PelYline_11 = FastLine16Y_11;
              }
              else
              {
                PelYline_11 = UMVLine16Y_11;
              }
            }
            orgptr = orig_blocks;
            bindex = 0;
            for (blky = 0; blky < 4; blky++)    // SECOND FOR LOOP
            {
              LineSadBlk0 = LineSadBlk1 = LineSadBlk2 = LineSadBlk3 = 0;
              for (y = 0; y < 4; y++)           // INNERMOST FOR LOOP WHICH I WANT TO PARALLELIZE
              {
                refptr = PelYline_11 (ref_pic, abs_y++, abs_x, img_height, img_width);
                LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk0 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk1 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk2 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
                LineSadBlk3 += byte_abs [*refptr++ - *orgptr++];
              }
              block_sad[bindex++][pos] = LineSadBlk0;
              block_sad[bindex++][pos] = LineSadBlk1;
              block_sad[bindex++][pos] = LineSadBlk2;
              block_sad[bindex++][pos] = LineSadBlk3;
            }
          }

          //===== combine SAD's for larger block types =====
          SetupLargerBlocks (list, ref, max_pos);

          //===== set flag marking that search setup have been done =====
          search_setup_done[list][ref] = 1;
        }
        #endif // _FAST_FULL_ME_
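    A sketch of one possible rewrite, under two stated assumptions: that PelYline_11 (a function pointer that may resolve to UMVLine16Y_11) is safe to call from multiple threads, and that byte_abs is read-only during the search; both must be verified against the codec source. The loop-carried dependencies are only the post-incremented refptr/orgptr pointers and abs_y, so replacing them with index arithmetic makes the four y iterations independent, and the partial sums combine through a reduction clause:

        /* Sketch: pointer increments replaced by indexing so iterations are
           independent; partial SADs merged with an OpenMP reduction.
           Thread-safety of PelYline_11 is an assumption, not a given. */
        #pragma omp parallel for reduction(+:LineSadBlk0,LineSadBlk1,LineSadBlk2,LineSadBlk3)
        for (y = 0; y < 4; y++)
        {
          pel_t *ref = PelYline_11 (ref_pic, abs_y + y, abs_x, img_height, img_width);
          pel_t *org = orgptr + 16*y;   /* 16 pels consumed per iteration */
          int x;
          for (x = 0;  x < 4;  x++) LineSadBlk0 += byte_abs [ref[x] - org[x]];
          for (x = 4;  x < 8;  x++) LineSadBlk1 += byte_abs [ref[x] - org[x]];
          for (x = 8;  x < 12; x++) LineSadBlk2 += byte_abs [ref[x] - org[x]];
          for (x = 12; x < 16; x++) LineSadBlk3 += byte_abs [ref[x] - org[x]];
        }
        abs_y  += 4;    /* replaces the four abs_y++ side effects */
        orgptr += 64;   /* replaces the 64 orgptr++ side effects  */

    That said, four short iterations rarely amortize the fork/join and reduction overhead; the outer pos loop is usually the more profitable target, provided mutable shared state (orgptr, bindex, and the PelYline_11 function pointer) is privatized per thread first.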

    Read the article

  • Exchange Web Service (EWS) call fails under ASP.NET but not a console application

    - by Vince Panuccio
    I'm getting an error when I attempt to connect to Exchange Web Services via ASP.NET. The following code works when called from a console application, but the very same code fails when executed in an ASP.NET Web Forms page. Just as a side note, I am using my own credentials throughout this entire code sample.

        "When making a request as an account that does not have a mailbox, you must specify the mailbox primary SMTP address for any distinguished folder Ids."

    I thought I might be able to fix the issue by specifying an impersonated user:

        exchangeservice.ImpersonatedUserId =
            new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "[email protected]");

    But then I get a different error:

        "The account does not have permission to impersonate the requested user."

    The app pool that the web application runs under is also my own account (the same as the console application), so I have no idea what might be causing this issue. I am using .NET Framework 3.5. Here is the code in full (a missing semicolon and an inconsistent variable name, service vs. exchangeservice, have been normalized):

        var exchangeservice = new ExchangeService(ExchangeVersion.Exchange2010_SP1) { Timeout = 10000 };
        var credentials = new System.Net.NetworkCredential("username", "pass", "domain");
        exchangeservice.AutodiscoverUrl("[email protected]");
        FolderId rootFolderId = new FolderId(WellKnownFolderName.Inbox);
        var folderView = new FolderView(100) { Traversal = FolderTraversal.Shallow };
        FindFoldersResults findFoldersResults = exchangeservice.FindFolders(rootFolderId, folderView);
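    Two hedged avenues, given that the console app works under the same account. First, the "does not have a mailbox" error suggests EWS sees a different caller identity inside ASP.NET (for example an impersonated or anonymous IIS token) even though the app pool runs as your account; reverting to the process identity around the EWS call would test that theory. Second, the impersonation error typically means the calling account lacks the ApplicationImpersonation RBAC role in Exchange 2010, which an Exchange admin can grant with New-ManagementRoleAssignment -Role ApplicationImpersonation. A sketch of the first idea, reusing the names from the question; the revert trick is an assumption to verify, not a confirmed fix:

        // Sketch: temporarily drop any ASP.NET impersonation so the EWS
        // call runs under the worker-process (app pool) identity.
        // Impersonate(IntPtr.Zero) reverts to the process token.
        using (System.Security.Principal.WindowsIdentity.Impersonate(IntPtr.Zero))
        {
            exchangeservice.AutodiscoverUrl("[email protected]");
            FolderId rootFolderId = new FolderId(WellKnownFolderName.Inbox);
            var folderView = new FolderView(100) { Traversal = FolderTraversal.Shallow };
            FindFoldersResults findFoldersResults =
                exchangeservice.FindFolders(rootFolderId, folderView);
        }

    If this works, the web.config <identity impersonate="..."/> setting or the site's authentication mode is the place to look next.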

    Read the article

  • Running MSBuild fails to read SDKToolsPath

    - by Scott Mayfield
    Howdy, I'm having a bit of an issue running a NAnt script that used to properly build my .NET 2.0 based website when compiled with VS2008 and its associated tools. I've recently upgraded all the project/solution files to VS2010, and now my build fails with the following error:

        [exec] C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Microsoft.Common.targets(2249,9): error MSB3086:
        Task could not find "sgen.exe" using the SdkToolsPath "" or the registry key
        "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SDKs\Windows\v7.0A". Make sure the SdkToolsPath is set
        and the tool exists in the correct processor specific location under the SdkToolsPath and that the
        Microsoft Windows SDK is installed

    Now, I DO have a prior version (.NET 3.5) of the Windows SDK installed on the build server, and the full .NET 4.0 framework is installed, but I've not run across a .NET 4.0-specific version of the Windows SDK. After a bit of experimentation and research, I finally just set up a new environment variable, "SDKToolsPath", and pointed it at the copy of sgen.exe in my Windows 6.0 SDK folder. This generated the same error, but it got me to notice that even though the SDKToolsPath environment variable IS set (confirmed: I can "echo" it at the command line and it has the expected value), the error message seems to indicate it's not being read (note the empty quotes). Most of the information I've found is .NET 3.5 (or earlier) specific; not much 4.0-related is out there yet. Searching for error code MSB3086 turned up nothing useful either. Any idea what this might be? Scott
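    A hedged observation: MSB3086 is driven by the MSBuild property SdkToolsPath (with a fallback to the v7.0A registry key named in the error), not by an environment variable of that name, which would explain the empty quotes despite the variable being set. Assuming the NAnt script shells out to MSBuild via an <exec> task, one workaround sketch is to pass the property explicitly, pointing at a bin folder that actually contains sgen.exe; all paths below are placeholders to adjust for the build server:

        <!-- Sketch: hypothetical NAnt fragment; solution name and SDK path are placeholders -->
        <exec program="C:\Windows\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe">
          <arg value="MySolution.sln" />
          <arg value="/p:SdkToolsPath=C:\Program Files\Microsoft SDKs\Windows\v6.0A\Bin" />
        </exec>

    The cleaner fix is likely installing an SDK that populates the v7.0A location: Windows SDK 7.0A ships with VS2010, and the standalone Windows SDK 7.1 carries the .NET 4 tool set.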

    Read the article

  • Why am I getting "ArgumentError: wrong number of arguments (1 for 0)" when running my rails function

    - by Hisham
    I'm stumped on what's causing this. I get this error and stack trace in all my functional tests where I call 'post'. Here is the full stack trace:

        7) Error:
        test_should_validate(UsersControllerTest):
        ArgumentError: wrong number of arguments (1 for 0)
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:48:in `to_query'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:48:in `build_query_string'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:46:in `each'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:46:in `build_query_string'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:233:in `append_query_string'
            generated code (/Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:154):3:in `generate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:365:in `__send__'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:365:in `generate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:364:in `each'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:364:in `generate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:208:in `rewrite_path'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:187:in `rewrite_url'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:165:in `rewrite'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:450:in `build_request_uri'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:406:in `process'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:376:in `post'
            functional/users_controller_test.rb:57:in `test_should_validate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/activesupport/lib/active_support/testing/setup_and_teardown.rb:60:in `__send__'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/activesupport/lib/active_support/testing/setup_and_teardown.rb:60:in `run'

    This is the test I'm running:

        def test_should_validate
          post :validate, :user => { :email => '[email protected]',
                                     :password => 'quire',
                                     :password_confirmation => 'quire',
                                     :agreed_to_terms => "true" }
          assert assigns(:user).errors.empty?
          assert_response :success
        end
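    One hedged line of investigation: the trace dies inside Route#build_query_string, which calls to_query on each parameter value and passes the key as an argument. "wrong number of arguments (1 for 0)" therefore suggests that something in the app or an installed gem has redefined to_query with no parameters, shadowing the ActiveSupport core extension. A tiny reproduction of that theory; the monkey-patch below is purely illustrative, not from the question:

        # Illustration only: a zero-argument to_query reproduces the exact
        # error, because the Rails router calls value.to_query(key).
        class Hash
          def to_query                    # shadows ActiveSupport's to_query(namespace)
            map { |k, v| "#{k}=#{v}" }.join("&")
          end
        end

        { :email => "q@q.com" }.to_query(:user)
        # => ArgumentError: wrong number of arguments (1 for 0)

    Grepping the application, plugins, and installed gems for "def to_query" would confirm or eliminate this quickly.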

    Read the article

  • java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.HashMap

    - by kongkea
    I've got this error when I click a ListView item to show the full-size image. How can I solve it?

    Error

        11-20 10:27:47.039: D/AndroidRuntime(5078): Shutting down VM
        11-20 10:27:47.039: W/dalvikvm(5078): threadid=1: thread exiting with uncaught exception (group=0x40c061f8)
        11-20 10:27:47.047: E/AndroidRuntime(5078): FATAL EXCEPTION: main
        11-20 10:27:47.047: E/AndroidRuntime(5078): java.lang.ClassCastException: java.lang.Integer cannot be cast to java.util.HashMap
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.example.mylistview.MainActivity$1.onItemClick(MainActivity.java:103)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AdapterView.performItemClick(AdapterView.java:292)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView.performItemClick(AbsListView.java:1173)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView$PerformClick.run(AbsListView.java:2701)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.widget.AbsListView$1.run(AbsListView.java:3453)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Handler.handleCallback(Handler.java:605)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Handler.dispatchMessage(Handler.java:92)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.os.Looper.loop(Looper.java:137)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at android.app.ActivityThread.main(ActivityThread.java:4514)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at java.lang.reflect.Method.invokeNative(Native Method)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at java.lang.reflect.Method.invoke(Method.java:511)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:790)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:557)
        11-20 10:27:47.047: E/AndroidRuntime(5078): at dalvik.system.NativeStart.main(Native Method)

    MainActivity

        public class MainActivity extends Activity {

            public static final int DIALOG_DOWNLOAD_JSON_PROGRESS = 0;
            private ProgressDialog mProgressDialog;
            ArrayList<HashMap<String, Object>> MyArrList;

            @SuppressLint("NewApi")
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);

                // Permission StrictMode
                if (android.os.Build.VERSION.SDK_INT > 9) {
                    StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
                    StrictMode.setThreadPolicy(policy);
                }

                // Download JSON File
                new DownloadJSONFileAsync().execute();
            }

            @Override
            protected Dialog onCreateDialog(int id) {
                switch (id) {
                case DIALOG_DOWNLOAD_JSON_PROGRESS:
                    mProgressDialog = new ProgressDialog(this);
                    mProgressDialog.setMessage("Downloading.....");
                    mProgressDialog.setProgressStyle(ProgressDialog.STYLE_SPINNER);
                    mProgressDialog.setCancelable(true);
                    mProgressDialog.show();
                    return mProgressDialog;
                default:
                    return null;
                }
            }

            // Show All Content
            public void ShowAllContent() {
                // listView1
                final ListView lstView1 = (ListView) findViewById(R.id.listView1);
                lstView1.setAdapter(new ImageAdapter(MainActivity.this, MyArrList));
                lstView1.setOnItemClickListener(new OnItemClickListener() {
                    @Override
                    public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
                        HashMap<String, Object> hm = (HashMap<String, Object>) lstView1.getAdapter().getItem(position);
                        String imagePath = (String) hm.get("photo");
                        Intent i = new Intent(MainActivity.this, FullImageActivity.class);
                        i.putExtra("fullImage", imagePath);
                        startActivity(i);
                    }
                });
            }

            public class ImageAdapter extends BaseAdapter {
                private Context context;
                private ArrayList<HashMap<String, Object>> MyArr = new ArrayList<HashMap<String, Object>>();

                public ImageAdapter(Context c, ArrayList<HashMap<String, Object>> myArrList) {
                    context = c;
                    MyArr = myArrList;
                }

                public int getCount() {
                    return MyArr.size();
                }

                public Object getItem(int position) {
                    return position;
                }

                public long getItemId(int position) {
                    return position;
                }

                public View getView(int position, View convertView, ViewGroup parent) {
                    LayoutInflater inflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
                    if (convertView == null) {
                        convertView = inflater.inflate(R.layout.activity_column, null);
                    }
                    // ColImage
                    ImageView imageView = (ImageView) convertView.findViewById(R.id.ColImgPath);
                    imageView.getLayoutParams().height = 80;
                    imageView.getLayoutParams().width = 80;
                    imageView.setPadding(5, 5, 5, 5);
                    imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
                    try {
                        imageView.setImageBitmap((Bitmap) MyArr.get(position).get("ImageThumBitmap"));
                    } catch (Exception e) {
                        // When Error
                        imageView.setImageResource(android.R.drawable.ic_menu_report_image);
                    }
                    // ColImgID
                    TextView txtImgID = (TextView) convertView.findViewById(R.id.ColImgID);
                    txtImgID.setPadding(10, 0, 0, 0);
                    txtImgID.setText("ID : " + MyArr.get(position).get("id").toString());
                    // ColImgName
                    TextView txtPicName = (TextView) convertView.findViewById(R.id.ColImgName);
                    txtPicName.setPadding(50, 0, 0, 0);
                    txtPicName.setText("Name : " + MyArr.get(position).get("first_name").toString());
                    return convertView;
                }
            }

            // Download JSON in Background
            public class DownloadJSONFileAsync extends AsyncTask<String, Void, Void> {

                protected void onPreExecute() {
                    super.onPreExecute();
                    showDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                }

                @Override
                protected Void doInBackground(String... params) {
                    String url = "http://192.168.10.104/adchara1/";
                    JSONArray data;
                    try {
                        data = new JSONArray(getJSONUrl(url));
                        MyArrList = new ArrayList<HashMap<String, Object>>();
                        HashMap<String, Object> map;
                        for (int i = 0; i < data.length(); i++) {
                            JSONObject c = data.getJSONObject(i);
                            map = new HashMap<String, Object>();
                            map.put("id", (String) c.getString("id"));
                            map.put("first_name", (String) c.getString("first_name"));
                            // Thumbnail Get ImageBitmap To Object
                            map.put("photo", (String) c.getString("photo"));
                            map.put("ImageThumBitmap", (Bitmap) loadBitmap(c.getString("photo")));
                            // Full (for View Popup)
                            map.put("frame", (String) c.getString("frame"));
                            MyArrList.add(map);
                        }
                    } catch (JSONException e) {
                        e.printStackTrace();
                    }
                    return null;
                }

                protected void onPostExecute(Void unused) {
                    ShowAllContent(); // When Finish Show Content
                    dismissDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                    removeDialog(DIALOG_DOWNLOAD_JSON_PROGRESS);
                }
            }

            /*** Get JSON Code from URL ***/
            public String getJSONUrl(String url) {
                StringBuilder str = new StringBuilder();
                HttpClient client = new DefaultHttpClient();
                HttpGet httpGet = new HttpGet(url);
                try {
                    HttpResponse response = client.execute(httpGet);
                    StatusLine statusLine = response.getStatusLine();
                    int statusCode = statusLine.getStatusCode();
                    if (statusCode == 200) { // Download OK
                        HttpEntity entity = response.getEntity();
                        InputStream content = entity.getContent();
                        BufferedReader reader = new BufferedReader(new InputStreamReader(content));
                        String line;
                        while ((line = reader.readLine()) != null) {
                            str.append(line);
                        }
                    } else {
                        Log.e("Log", "Failed to download file..");
                    }
                } catch (ClientProtocolException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                return str.toString();
            }

            /***** Get Image Resource from URL (Start) *****/
            private static final String TAG = "Image";
            private static final int IO_BUFFER_SIZE = 4 * 1024;

            public static Bitmap loadBitmap(String url) {
                Bitmap bitmap = null;
                InputStream in = null;
                BufferedOutputStream out = null;
                try {
                    in = new BufferedInputStream(new URL(url).openStream(), IO_BUFFER_SIZE);
                    final ByteArrayOutputStream dataStream = new ByteArrayOutputStream();
                    out = new BufferedOutputStream(dataStream, IO_BUFFER_SIZE);
                    copy(in, out);
                    out.flush();
                    final byte[] data = dataStream.toByteArray();
                    BitmapFactory.Options options = new BitmapFactory.Options();
                    // options.inSampleSize = 1;
                    bitmap = BitmapFactory.decodeByteArray(data, 0, data.length, options);
                } catch (IOException e) {
                    Log.e(TAG, "Could not load Bitmap from: " + url);
                } finally {
                    closeStream(in);
                    closeStream(out);
                }
                return bitmap;
            }

            private static void closeStream(Closeable stream) {
                if (stream != null) {
                    try {
                        stream.close();
                    } catch (IOException e) {
                        android.util.Log.e(TAG, "Could not close stream", e);
                    }
                }
            }

            private static void copy(InputStream in, OutputStream out) throws IOException {
                byte[] b = new byte[IO_BUFFER_SIZE];
                int read;
                while ((read = in.read(b)) != -1) {
                    out.write(b, 0, read);
                }
            }
            /***** Get Image Resource from URL (End) *****/

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                getMenuInflater().inflate(R.menu.activity_main, menu);
                return true;
            }
        }

    FullImageActivity

        String imagePath = getIntent().getStringExtra("fullImage");
        if (imagePath != null && !imagePath.isEmpty()) {
            File imageFile = new File(imagePath);
            if (imageFile.exists()) {
                Bitmap myBitmap = BitmapFactory.decodeFile(imageFile.getAbsolutePath());
                ImageView iv = (ImageView) findViewById(R.id.fullimage);
                iv.setImageBitmap(myBitmap);
            }
        }
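    The trace points at the cast inside onItemClick (MainActivity.java:103). A likely reading: ListView.getAdapter().getItem(position) delegates to ImageAdapter.getItem(), which returns position (auto-boxed to Integer) instead of the row's data map; hence "Integer cannot be cast to HashMap". A minimal fix sketch:

        // In ImageAdapter: hand back the row's data map, not its index,
        // so the HashMap cast in onItemClick() succeeds.
        public Object getItem(int position) {
            return MyArr.get(position);   // was: return position;
        }

    A separate point worth checking: FullImageActivity treats the "photo" value as a local file path, but doInBackground fills it with a URL, so new File(imagePath).exists() will be false; downloading the full-size bitmap (as loadBitmap already does for thumbnails) would be needed for the detail view.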

    Read the article

  • Exception "The operation is not valid for the state of the transaction" using TransactionScope

    - by Lanfear
    We have a web service on server #1 and a database on server #2. The web service uses TransactionScope to produce distributed transactions, and everything works correctly. We also have another database on server #3. We had some problems with that server, so we reinstalled the operating system and software. We configured MSDTC and tried to use the web service on server #1 to communicate with the database on this server. Now, after the first SELECT statement within a transaction scope, we get: "The operation is not valid for the state of the transaction". The exception occurs on every web service request that uses a transaction scope. Servers #2 and #3 are almost identical; the difference can only be in settings. .NET Framework 3.5 SP1 and SQL Server SP3 are installed on all servers. Full stack trace:

        System.Transactions.TransactionState.EnlistPromotableSinglePhase(InternalTransaction tx, IPromotableSinglePhaseNotification promotableSinglePhaseNotification, Transaction atomicTransaction)
        System.Transactions.Transaction.EnlistPromotableSinglePhase(IPromotableSinglePhaseNotification promotableSinglePhaseNotification)
        System.Data.SqlClient.SqlInternalConnection.EnlistNonNull(Transaction tx)
        System.Data.SqlClient.SqlInternalConnection.Enlist(Transaction tx)
        System.Data.SqlClient.SqlInternalConnectionTds.Activate(Transaction transaction)
        System.Data.ProviderBase.DbConnectionInternal.ActivateConnection(Transaction transaction)
        System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
        System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
        System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
        System.Data.SqlClient.SqlConnection.Open()
        NHibernate.Connection.DriverConnectionProvider.GetConnection()
        NHibernate.Impl.SessionFactoryImpl.OpenConnection()

    I searched for this message but didn't find an appropriate solution. So what settings should I check, and what exactly should I do to fix it?
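    Since the failure appears at enlistment time rather than in any particular query, a hedged first step is to rule the application code out with a minimal probe; the connection-string names below are placeholders:

        // Diagnostic sketch: if this minimal distributed transaction throws
        // the same exception, the fault is MSDTC configuration, not the
        // web service. Requires references to System.Transactions and
        // System.Data.
        using System.Data.SqlClient;
        using System.Transactions;

        using (var scope = new TransactionScope())
        {
            using (var cn2 = new SqlConnection(connectionStringServer2))
            using (var cn3 = new SqlConnection(connectionStringServer3))
            {
                cn2.Open();   // first enlistment stays a local transaction
                cn3.Open();   // second enlistment forces promotion to MSDTC
                new SqlCommand("SELECT 1", cn2).ExecuteScalar();
                new SqlCommand("SELECT 1", cn3).ExecuteScalar();
            }
            scope.Complete();
        }

    If the probe fails too, the usual suspects on a rebuilt machine are worth checking, hedged but commonly reported: "Network DTC Access" with inbound/outbound allowed under Component Services on server #3, firewall rules for the DTC ports, and a duplicated DTC identity from imaging, which reinstalling the coordinator (msdtc -uninstall followed by msdtc -install, then restarting the service) regenerates. Microsoft's DTCPing utility can verify two-way DTC connectivity between the servers.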

    Read the article

  • OpenCV Mat creation memory leak

    - by Royi Freifeld
    My memory fills up fairly quickly once I use the following piece of code. Valgrind shows a memory leak, but everything is allocated on the stack and is (supposed to be) freed once the function ends.

        void mult_run_time(int rows, int cols)
        {
            Mat matrix(rows, cols, CV_32SC1);
            Mat row_vec(cols, 1, CV_32SC1);

            /* initialize vector and matrix */
            for (int col = 0; col < cols; ++col)
            {
                for (int row = 0; row < rows; ++row)
                {
                    matrix.at<unsigned long>(row, col) = rand() % ULONG_MAX;
                }
                row_vec.at<unsigned long>(1, col) = rand() % ULONG_MAX;
            }
            /* end initialization of vector and matrix */

            matrix * row_vec;
        }

        int main()
        {
            for (int row = 0; row < 20; ++row)
            {
                for (int col = 0; col < 20; ++col)
                {
                    mult_run_time(row, col);
                }
            }
            return 0;
        }

    Valgrind shows that there is a memory leak at the line Mat row_vec(cols,1,CV_32SC1):

        ==9201== 24,320 bytes in 380 blocks are definitely lost in loss record 50 of 50
        ==9201==    at 0x4026864: malloc (vg_replace_malloc.c:236)
        ==9201==    by 0x40C0A8B: cv::fastMalloc(unsigned int) (in /usr/local/lib/libopencv_core.so.2.3.1)
        ==9201==    by 0x41914E3: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.3.1)
        ==9201==    by 0x8048BE4: cv::Mat::create(int, int, int) (mat.hpp:368)
        ==9201==    by 0x8048B2A: cv::Mat::Mat(int, int, int) (mat.hpp:68)
        ==9201==    by 0x80488B0: mult_run_time(int, int) (mat_by_vec_mult.cpp:26)
        ==9201==    by 0x80489F5: main (mat_by_vec_mult.cpp:59)

    Is it a known bug in OpenCV or am I missing something?
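    A hedged diagnosis rather than an OpenCV bug report: CV_32SC1 elements are 4-byte ints, but at<unsigned long>() writes 8 bytes on LP64 Linux, and row_vec is a cols x 1 matrix yet is indexed as (1, col). Both are out-of-bounds heap writes that can corrupt the allocator's bookkeeping, which would explain both the growing memory and the implausible "definitely lost" report. A corrected sketch of the initialization:

        // Sketch: element type matches CV_32SC1 and indexing matches the
        // (rows, cols) shape of each Mat.
        Mat matrix(rows, cols, CV_32SC1);
        Mat row_vec(cols, 1, CV_32SC1);
        for (int row = 0; row < rows; ++row)
            for (int col = 0; col < cols; ++col)
                matrix.at<int>(row, col) = rand();     // was at<unsigned long>
        for (int col = 0; col < cols; ++col)
            row_vec.at<int>(col, 0) = rand();          // was at<unsigned long>(1, col)

    One further caveat, worth verifying against the 2.3.1 build: evaluating matrix * row_vec generally requires a floating-point type in OpenCV, so a convertTo(CV_32F) step may be needed if the product is actually used.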

    Read the article

  • SharePoint 2010 - Client Object Model - Add attachment to ListItem

    - by Thorben
    Hi, I have a SharePoint list to which I'm adding new ListItems using the Client Object Model. Adding ListItems is not a problem and works great. Now I want to add attachments. I'm using SaveBinaryDirect in the following manner:

        File.SaveBinaryDirect(clientCtx, url.AbsolutePath + "/Attachments/31/" + fileName, inputStream, true);

    It works without any problem as long as the item I'm trying to attach to already has an attachment that was added through the SharePoint site rather than through the Client Object Model. When I try to add an attachment to an item that doesn't have any attachments yet, I get the following errors (both happen, though not with the same files; these two messages appear consistently):

        The remote server returned an error: (409) Conflict
        The remote server returned an error: (404) Not Found

    I figured that maybe I need to create the attachment folder for the item first. When I try the following code:

        clientCtx.Load(ticketList.RootFolder.Folders);
        clientCtx.ExecuteQuery();
        clientCtx.Load(ticketList.RootFolder.Folders[1]);           // 1 -> attachment folder
        clientCtx.Load(ticketList.RootFolder.Folders[1].Folders);
        clientCtx.ExecuteQuery();
        Folder folder = ticketList.RootFolder.Folders[1].Folders.Add("33");
        clientCtx.ExecuteQuery();

    I receive an error message saying:

        Cannot create folder "Lists/Ticket System/Attachment/33"

    I have full administrator rights for the SharePoint site/list. Any ideas what I could be doing wrong? Thanks, Thorben
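    One workaround that is often suggested for this situation, hedged because it falls back to the legacy SOAP API: the Lists.asmx web service creates the item's attachment folder itself, so it can add the first attachment, after which SaveBinaryDirect should work for subsequent ones. A sketch; "ListsService" is a hypothetical web-reference namespace and the URL is a placeholder:

        // Sketch: add the FIRST attachment through the legacy Lists.asmx
        // web service, which provisions the attachment folder on demand.
        var lists = new ListsService.Lists();
        lists.Url = "http://server/site/_vti_bin/Lists.asmx";      // placeholder
        lists.Credentials = System.Net.CredentialCache.DefaultCredentials;
        byte[] fileBytes = System.IO.File.ReadAllBytes(localPath); // placeholder path
        lists.AddAttachment("Ticket System", "31", fileName, fileBytes);

    The "Cannot create folder" failure is consistent with the attachments folder hierarchy being managed by SharePoint itself and not creatable through the client-side Folders collection, though that limitation should be verified against the SP2010 documentation.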

    Read the article

  • Tutorials and libraries for OpenGL-ES games on Android

    - by user197141
    What tutorials and libraries are available to help beginners develop 2D and 3D games on Android using OpenGL-ES? I'm looking for tutorials that can help me learn OpenGL-ES, and for OpenGL-ES libraries that make life easier for beginners. Since Android is still small, I guess it may be helpful to read iPhone OpenGL-ES tutorials as well, as I suppose the OpenGL-ES functionality is much the same. I have found the following useful information, which I would like to share:

    Android tutorials:
    - Basic tutorial covering polygons, no textures
    - anddev forum with some tutorials

    Other Android OpenGL-ES information:
    - Google IO lecture regarding games, not much OpenGL-ES
    - The Khronos Reference Manual is also relevant to have, but it's not exactly the best place to start.

    iPhone OpenGL-ES tutorials (where the OpenGL-ES information is probably useful):
    - http://web.me.com/smaurice/AppleCoder/iPhone_OpenGL/Archive.html
    - http://iphonedevelopment.blogspot.com/2009/05/opengl-es-from-ground-up-table-of.html

    As for libraries a beginner might use to get a simpler hands-on experience with OpenGL-ES, I have only found Rokon, which was started recently and thus has many holes and bugs. And it's GNU GPL licensed (at the moment), which means it cannot be used if we wish to sell our games. What else is out there?

    Read the article

  • View all ntext column text in SQL Server Management Studio for SQL CE database

    - by Dave
    I often want to do a "quick check" of the value of a large text column in SQL Server Management Studio (SSMS). The maximum number of characters that SSMS will let you view in grid results mode is 65535 (it is even less in text results mode). Sometimes I need to see something beyond that range. With SQL Server 2005 databases, I often used the trick of converting the column to XML, because SSMS lets you view much larger amounts of text that way:

        SELECT CONVERT(xml, MyCol) FROM MyTable WHERE ...

    But now I am using SQL CE, and there is no xml data type. There is still a "Maximum Characters Retrieved > XML" value under Options; I suppose this is useful when connecting to other data sources. I know I can get the full value by running a little console app or something, but is there a way within SSMS to see the entire ntext column value?

    [Edit] OK, this didn't get much attention the first time around (18 views?!). It's not a huge concern, but maybe I'm just obsessed with it. There has to be some good way around this, doesn't there? So a modest bounty is active. What I am willing to accept as answers, in order from best to worst:

    1. A solution that works just as easily as the XML trick in SQL CE; that is, a single function (CONVERT, CAST, etc.) that does the job.
    2. A not-too-invasive way to hack SSMS into displaying more text in the results.
    3. An equivalent SQL query (perhaps something that creatively uses SUBSTRING and generates multiple ad hoc columns??) to see the results; see the sketch below.

    The solution should work with nvarchar and ntext columns of any length in SQL CE from SSMS. Any ideas?
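    In the spirit of option 3, a hedged sketch; whether SQL CE's SUBSTRING accepts an ntext argument varies by version and should be verified (nvarchar columns are unproblematic). MyTable and MyCol are the placeholders from the question:

        -- Sketch: slice the value into grid-sized ad hoc columns, adding
        -- partN columns until DATALENGTH(MyCol) is covered.
        SELECT SUBSTRING(MyCol,     1, 4000) AS part1,
               SUBSTRING(MyCol,  4001, 4000) AS part2,
               SUBSTRING(MyCol,  8001, 4000) AS part3,
               SUBSTRING(MyCol, 12001, 4000) AS part4
        FROM MyTable
        WHERE ...;

    Each column stays comfortably under the 65535-character grid limit, at the cost of reassembling the text by eye.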

    Read the article

  • ASP.NET PowerShell Impersonation

    - by Ben
    I have developed an ASP.NET MVC web application to execute PowerShell scripts. I am using the VS web server and can execute scripts fine. However, a requirement is that users are able to execute scripts against AD that perform actions their own user accounts are not allowed to do. Therefore I am using impersonation to switch the identity before creating the PowerShell runspace:

        Runspace runspace = RunspaceFactory.CreateRunspace(config);
        var currentuser = WindowsIdentity.GetCurrent().Name;
        if (runspace.RunspaceStateInfo.State == RunspaceState.BeforeOpen)
        {
            runspace.Open();
        }

    I have tested this with a domain admin account, and I get the following exception when calling runspace.Open():

        Security Exception
        Description: The application attempted to perform an operation not allowed by the security policy.
        To grant this application the required permission please contact your system administrator or change
        the application's trust level in the configuration file.
        Exception Details: System.Security.SecurityException: Requested registry access is not allowed.

    The web application is running in full trust, and I have explicitly added the account I am using for impersonation to the local Administrators group of the machine (even though the Domain Admins group was already there). I'm using the advapi32.dll LogonUser call to perform the impersonation, in a similar way to this post (http://blogs.msdn.com/webdav_101/archive/2008/09/25/howto-calling-exchange-powershell-from-an-impersonated-thead.aspx). Any help appreciated as this is a bit of a show stopper at the moment. Thanks, Ben
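    A hedged suggestion, since "Requested registry access is not allowed" during Open() usually means the impersonated token cannot read the PowerShell configuration keys: request an INTERACTIVE rather than NETWORK logon from LogonUser, so the token carries the local rights an interactive session would have. A sketch of the pattern; the credentials are placeholders and the diagnosis is an assumption to test:

        // Sketch: LogonUser with an interactive logon type, then impersonate
        // before opening the runspace. Namespaces assumed:
        // System.Runtime.InteropServices, System.Security.Principal,
        // System.Management.Automation.Runspaces.
        [DllImport("advapi32.dll", SetLastError = true)]
        static extern bool LogonUser(string user, string domain, string password,
                                     int logonType, int logonProvider, out IntPtr token);

        const int LOGON32_LOGON_INTERACTIVE = 2;   // rather than LOGON32_LOGON_NETWORK (3)
        const int LOGON32_PROVIDER_DEFAULT  = 0;

        IntPtr token;
        if (!LogonUser("svc-user", "DOMAIN", "password",   // placeholder credentials
                       LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
            throw new System.ComponentModel.Win32Exception();

        using (WindowsImpersonationContext ctx = new WindowsIdentity(token).Impersonate())
        {
            Runspace runspace = RunspaceFactory.CreateRunspace(config);
            runspace.Open();   // now opened under the interactive token
            // ... create and invoke the pipeline here ...
            runspace.Close();
        }

    If the exception persists, another avenue worth testing is granting the impersonated account read access to HKLM\SOFTWARE\Microsoft\PowerShell\1, where the snap-in registrations that Open() consults live.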

    Read the article

< Previous Page | 413 414 415 416 417 418 419 420 421 422 423 424  | Next Page >