Search Results

Search found 30146 results on 1206 pages for 'back to the future'.


  • SQL SERVER – Backing Up and Recovering the Tail End of a Transaction Log – Notes from the Field #042

    - by Pinal Dave
    [Notes from Pinal]: The biggest challenge people face is not taking a backup; it is restoring that backup successfully. I have seen many cases where users failed to restore their database because they made a mistake while taking the backup and were not aware of it. The tail-log backup was one such source of confusion in earlier versions of SQL Server, but in the latest version the Microsoft team has addressed it by adding information to the backup and restore screens themselves. Now that this extra information is there, a few more people are confused because they have no idea what it means: previously they did not see this as an issue, and now the tail log is something new to learn. Linchpin People are database coaches and wellness experts for a data-driven world. In this 42nd episode of the Notes from the Field series, database expert Tim Radney (partner at Linchpin People) explains, in very simple words, Backing Up and Recovering the Tail End of a Transaction Log.
    Many times, when restoring a database over an existing database, SQL Server will warn you that you need to back up the tail end of the log. This might simply be your reminder that you have to choose to overwrite the database, or it could be your reminder that you are about to write over, and lose, any transactions since the last transaction log backup. You might be asking yourself, "What is the tail end of the transaction log?" The tail end of the transaction log is simply any committed transactions that have occurred since the last transaction log backup. It is a crucial part of a recovery strategy, if you are lucky enough to be able to capture this part of the log. Most organizations have chosen to accept some amount of data loss. You might be shaking your head at this statement; however, if your organization is taking transaction log backups every 15 minutes, then your potential data loss is up to 15 minutes. Depending on the extent of the issue causing you to have to perform a restore, you may or may not have access to the transaction log (LDF) to be able to back up those vital transactions. For example, if the storage array or disk that holds your transaction log file becomes corrupt or damaged, then you wouldn't be able to recover the tail end of the log. If you do have access to the physical log file, then you can still back up the tail end of the log.
    In 2013 I presented a session at the PASS Summit called "The Ultimate Tail Log Backup and Restore" and have been invited back this year to present it again. During this session I demonstrate how you can back up the tail end of the log even after the data file becomes corrupt. In my demonstration I set my database offline and then delete the data file (MDF). The database can't become more corrupt than that. I attempt to bring the database back online, which changes its state to RECOVERY PENDING, and then back up the tail end of the log. I can do this by specifying WITH NO_TRUNCATE. Using NO_TRUNCATE is equivalent to specifying both COPY_ONLY and CONTINUE_AFTER_ERROR; as its name says, it does not try to truncate the log. This is a great demo, but how could I back up the tail end of the log if the failure destroyed my entire SQL instance and all I had left was the LDF file? During the session I also demonstrate that I can attach the log file to a database on another instance and then back up the tail end of the log.
    If I am performing proper backups, then my most recent full, differential and log backup files should be on a server other than the one that crashed. I can achieve this by creating a new database with the same name as the failed database. I then set the database offline, delete its data file and overwrite its log with my good log file. I attempt to bring the database back online and then back up the log with NO_TRUNCATE, just like in the first example. I encourage each of you to view my blog post and watch the video demonstration of how to perform these tasks. I really hope that none of you ever have to do this in production; however, it is a really good idea to know how, just in case. It really isn't a matter of "if" you will have to perform a restore of a production system, but more a matter of "when". Being able to recover the tail end of the log in these severe cases could be the difference between having to notify all your business customers of data loss and not. If you want me to take a look at your server and its settings, or if your server is facing any issue, we can Fix Your SQL Server. Note: Tim has also written an excellent book on SQL backup and recovery, a must-have for everyone. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
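    For reference, the tail-log backup described above boils down to a single BACKUP LOG statement. A minimal sketch only; the database name and backup path below are placeholders, not taken from the post:

        -- Back up the tail of the log even though the data file is gone or damaged.
        -- NO_TRUNCATE lets the backup proceed against a damaged or offline database.
        BACKUP LOG [MyDatabase]
        TO DISK = N'D:\Backups\MyDatabase_tail.trn'
        WITH NO_TRUNCATE, INIT;

        -- A restore sequence would then finish with this tail-log backup:
        -- RESTORE LOG [MyDatabase] FROM DISK = N'D:\Backups\MyDatabase_tail.trn' WITH RECOVERY;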

    Read the article

  • navigation controller question

    - by saimun
    I want multiple back buttons for the views I have clicked through. I saw a tutorial on this before, but now I can't find it anymore; can someone help me please? For example, I have view1 with a button; after I tap it, the navigation controller pushes view2, and the navigation bar shows a view1 (back) button that goes back to view1. Inside view2 I did the same as in view1, but this time it goes to view3. On view3's navigation bar, instead of a single view2 (back) button, I want both a view1 and a view2 button so I can choose which view to go back to. The left side of view3's navigation bar would look like this: < view1 < view2
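    One possible approach, sketched here under the assumption of a standard UINavigationController stack with view1 at index 0 and view2 at index 1: add your own bar button items to view3 and pop back to the chosen controller. The method and button names below are made up for illustration.

        // Hypothetical actions in view3's view controller, wired to custom
        // "view1" / "view2" buttons placed on its navigation bar.
        - (void)goBackToView1 {
            UIViewController *view1 = [self.navigationController.viewControllers objectAtIndex:0];
            [self.navigationController popToViewController:view1 animated:YES];
        }

        - (void)goBackToView2 {
            UIViewController *view2 = [self.navigationController.viewControllers objectAtIndex:1];
            [self.navigationController popToViewController:view2 animated:YES];
        }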

    Read the article

  • Javascript to detect where newly opened window came from

    - by teggy
    Let's say I have a page, http://mydomain.com/mypage.html, with a "Back to page" link. This link should take the user back to the page where they came from only if the previous page URL matches one of the following: http://mydomain.com/one.html, http://mydomain.com/two.html, or http://mydomain.com/three.html. Otherwise, it should take the user back to the homepage, http://mydomain.com. I would also like the "Back to page" link to take the user back to the homepage when http://mydomain.com/mypage.html is pasted directly into the browser. How can I accomplish this with JavaScript? Thanks!
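    A rough sketch of one way to do this with document.referrer; the referrer is an empty string when the URL is typed or pasted directly, so the fallback to the homepage covers that case too. The function and variable names below are hypothetical.

        // Hypothetical handler for the "Back to page" link.
        var allowedReferrers = [
            "http://mydomain.com/one.html",
            "http://mydomain.com/two.html",
            "http://mydomain.com/three.html"
        ];

        function goBack() {
            var ref = document.referrer; // "" when mypage.html was opened directly
            for (var i = 0; i < allowedReferrers.length; i++) {
                if (ref === allowedReferrers[i]) {
                    window.location.href = ref; // came from an allowed page
                    return;
                }
            }
            window.location.href = "http://mydomain.com"; // everything else goes home
        }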

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blobs and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, this is a very powerful way to upload large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, that is, once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitations The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. This code has been well blogged about in the community. 
    1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limit on the request length, and the maximum request length is defined in the web.config file. It is a value of less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as: - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. It was a big challenge to upload a big file in chunks until we had HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
    15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5: - File.slice: read part of the file by specifying the start and end byte index. - Blob: a file-like interface which contains part of the file content. - FormData: a temporary form element that lets us pass the chunk along with some metadata to the backend service. Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallelism limit.   Regarding the chunk size and the parallelism limit there is no single "best" value. You need to test various combinations and find out the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Xsigo and Oracle's Storage

    - by Philippe Deverchère
    Xsigo, a virtual network infrastructure provider, has recently been acquired by Oracle. Following this acquisition, one might ask ourselves why it is important to Oracle and how Oracle's storage is going to benefit on the long term from this virtualized infrastructure layer. Well, the first thing to understand is that Virtual Networking addresses both network and storage connectivity. Oracle Virtual Networking, as the Xsigo technology is now called, connects any server to any network and storage, so this is not just about connecting servers to the Internet or Intranet. It is also for a large part connecting servers to NAS and SAN storage. Connecting servers to storage has become increasingly complex in the past few years because of the strong emergence of virtualization at the Operating System level. 50% of enterprise workloads are now virtualized, up from 18% in 2009, resulting in a strong consolidation of various applications in a high density server footprint. At the same time, server I/O capability increased 8x in the last 8 years. All this has pushed IT administrators to multiply the number of I/O connections in the back-end of their physical servers, resulting in a messy and very hard to manage networking infrastructure. Here is a typical view of a rack back-end when no virtual networking is used. We consider that today: - 75% of users have ten or more Ethernet ports per server - 85% of users have two or more SAN ports per server - 58% have had to add connectivity to a server specifically for VMs - 65% consider cable reduction a priority The average is 12 or more ports per server, resulting in an extremely complex infrastructure to manage. What Oracle wants to achieve with its Oracle Virtual Networking offering is pretty simple. The objective is to eliminate the complexity through a dramatic reduction of cabling between servers and storage/networks. It is also to provide a software based management system so that any server can be connected to any network or any storage, on demand, and without physical intervention on the infrastructure. At the end of the day, the picture on the left shows what one wants to get for the back-end of customer's racks: just a couple of connections on each physical server to provide a simple, agile and fast network infrastructure for both storage and networking access. This is exactly what the Oracle Virtual Networking solution does. It transforms a complex, error-prone, difficult to manage and expensive networking infrastructure into a simple, high performance and agile solution for the data center. Practically speaking, and for the sake of simplicity, imagine that each server just hosts a minimal number of physical InfiniBand HCAs (Host Channel Adapter) with two links (for redundancy) onto the Oracle Fabric Interconnect director. Using the Oracle Fabric Manager software, you'll then be able to create virtual NICs and HBAs (called vNIC and vHBA) that will be seen by the servers as standard NICs and HBAs and associate them to networks and storage systems which are physically connected to the back-end of the director through standard Fibre Channel and Ethernet GbE/10GbE ports. In addition to this incredibly simple "at-a-click" connectivity capability, the Oracle Virtual Networking solution offers powerful features such as network isolation, Quality of Service, advanced performance monitoring and non-disruptive reconfiguration, migration and scalability of networking infrastructure. 
    So let's go back now to our initial question: why is Oracle Virtual Networking especially important to Oracle's storage solutions? After all, one could connect any storage to the back-end of the Oracle Fabric Interconnect directors, right? The answer is pretty simple: since Oracle owns both the virtualized networking infrastructure and the storage (ZFS-SA, Pillar Axiom and tape), it is possible to imagine several ways in the future to add value when it comes to connecting storage to a virtualized storage network: enhanced storage capabilities, converged management between storage and network, improved diagnostic capabilities, and optimized integration resulting in higher performance and unique features and functions. Of course, all this is not going to be done overnight, and the future will tell us which evolutions come first. But there is little doubt that the integration of Xsigo within Oracle is going to create opportunities for Oracle's storage!

    Read the article

  • MySQL InnoDB Corruption after power outage, possible to recover?

    - by Tim Hackett
    Hey Guys, I recently started trying to get Redmine up and running after a power outage that seems to have corrupted our InnoDB database in MySQL. Redmine had an extensive set of documentation that I would like to get even if redmine isn't able to run. The service fails on startup. I have tried inserting innodb_force_recovery = 4 per the documentation from the url in the error log. (also tried 1 thru 6 as I have backed up all directories after the corruption) I have verified through "mysqld-nt --print-defaults" that it is starting with the recovery option in the params. The machine is running Windows Server 2003 SP2, Xeon E5335 with 2GB RAM, MySQL is not mirrored to another machine, nor is the machine a mirror. I do not have any backups because the previous person did not set them up. Here is the error log: InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 100308 14:50:01 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 100308 14:50:02 InnoDB: Error: page 7 log sequence number 0 935521175 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 2 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 11 log sequence number 0 935517607 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 5 log sequence number 0 972973045 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 6 log sequence number 0 972984051 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. 100308 14:50:02 InnoDB: Error: page 1577 log sequence number 0 972737368 InnoDB: is in the future! Current system log sequence number 0 933419020. InnoDB: Your database may be corrupt or you may have copied the InnoDB InnoDB: tablespace but not the InnoDB log files. See InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: for more information. InnoDB: Error: trying to access page number 4294965119 in space 0, InnoDB: space name .\ibdata1, InnoDB: which is outside the tablespace bounds. 
InnoDB: Byte offset 0, len 16384, i/o type 10. InnoDB: If you get this error at mysqld startup, please check that InnoDB: your my.cnf matches the ibdata files that you have in the InnoDB: MySQL server. 100308 14:50:02InnoDB: Assertion failure in thread 960 in file .\fil\fil0fil.c line 3959 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.0/en/forcing-recovery.html InnoDB: about forcing recovery. 100308 14:50:02 [ERROR] mysqld-nt: Got signal 11. Aborting! 100308 14:50:02 [ERROR] Aborting 100308 14:50:02 [Note] mysqld-nt: Shutdown complete
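    For reference, the forced-recovery option mentioned above lives in the [mysqld] section of my.ini / my.cnf. A minimal sketch, with the usual caveat to start at 1, raise the value only as far as needed, dump what you can with mysqldump, then remove the setting and rebuild the tablespace from the dump:

        [mysqld]
        # Start InnoDB in forced-recovery mode (1 = least invasive, 6 = most aggressive).
        # While this is set, treat the server as read-only: dump the data, then
        # remove this line, recreate the tablespace and re-import.
        innodb_force_recovery = 4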

    Read the article

  • I can't get router and switches configured properly for my home office network

    - by BernicusMaximus
    Networking Gurus, I recently built a new detached garage with an office above. As such, I had it tied into my existing home Ethernet wiring. The Ethernet signal is coming into the garage just fine, but I cannot get my network configured the way I want because of problems trying to link the various router/switch devices. Please see the following link for the network diagrams: Home Network. So basically, I can't get my future state to work. I'm not sure if I'm using incompatible switches or what, but I tried the future state with some 4-port switches from Best Buy and had no luck. I resorted to setting up the Current State so I could operate. What I am looking for is help on how best to get my future state to work. Is this possible with my current configuration, and if not, what should I do? Any help is appreciated. Thanks, Bernie

    Read the article

  • DNA and Quantum computing

    - by Jacques
    I recently (a couple of weeks ago) read an article about the future of processing and how quantum processors and DNA processors (DNA computing) are the future competitors in computing, since both will completely outperform the computers of this era. In terms of processing speed, what do we expect from these two different processing techniques? Personally I believe that DNA processing will be a major step towards AI. For labs and office work I think quantum processing will be more logical. I'm quite excited that I'm still so young and will get to see what the future of technology holds! Then again, my parents will soon find out what the afterlife holds... just as bloody exciting, if not more.

    Read the article

  • Delphi 6 OleServer.pas Invoke memory leak

    - by Mike Davis
    There's a bug in Delphi 6, which you can find some references to online, where when you import a TLB the order of the parameters in an event invocation is reversed. It is reversed once in the imported header and once in TServerEventDispatch.Invoke. You can find more information about it here: http://cc.embarcadero.com/Item/16496 Somewhat related to this issue, there appears to be a memory leak in TServerEventDispatch.Invoke with a parameter of a Variant of type Var_Array (maybe others, but this is the most obvious one I could see). The Invoke code copies the args into a VarArray to be passed to the event handler and then copies the VarArray back to the args after the call; the relevant code is pasted below:
        // Set our array to appropriate length
        SetLength(VarArray, ParamCount);
        // Copy over data
        for I := Low(VarArray) to High(VarArray) do
          VarArray[I] := OleVariant(TDispParams(Params).rgvarg^[I]);
        // Invoke Server proxy class
        if FServer <> nil then
          FServer.InvokeEvent(DispID, VarArray);
        // Copy data back
        for I := Low(VarArray) to High(VarArray) do
          OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];
        // Clean array
        SetLength(VarArray, 0);
    There are some obvious work-arounds in my case: if I skip the copying back in the case of a VarArray parameter, it fixes the leak. To avoid changing the functionality, I thought I should copy the data in the array, rather than the Variant, back to the params, but that can get complicated since it can hold other Variants, and it seems to me that would need to be done recursively. Since a change in OleServer will have a ripple effect, I want to make sure my change here is strictly correct. Can anyone shed some light on exactly why memory is being leaked here? I can't seem to look up the call stack any lower than TServerEventDispatch.Invoke (why is that?). I imagine that in the process of copying the Variant holding the VarArray back to the param list, a reference to the array is added, thus not allowing it to be released as normal, but that's just a rough guess and I can't track down the code to back it up. Maybe someone with a better understanding of all this could shed some light?
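    For what it's worth, the "skip the copy-back for array Variants" workaround described above would look roughly like the sketch below, using VarIsArray from the Variants unit. This is only an illustration of the poster's own idea; whether it is strictly correct is exactly what is being asked.

        // Copy data back, but leave VarArray-typed parameters untouched to avoid the leak
        for I := Low(VarArray) to High(VarArray) do
          if not VarIsArray(VarArray[I]) then
            OleVariant(TDispParams(Params).rgvarg^[I]) := VarArray[I];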

    Read the article

  • Multiple UIWebViewNavigationTypeBackForward not only false but make inferring the actual URL impossible

    - by SG1
    I have a UIWebView and a UITextField for the url. Naturally, I want the textField to always show the current document url. This works fine for urls directly input in the field, but I also have some buttons attached to the view for reload, back, and forward. So I've added all the UIWebViewDelegate methods to my controller, so it can listen to whenever the webView navigates and change the url in the textField as needed. Here's how I'm using the shouldStartLoadWithRequest: method: - (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType { NSLog(@"navigated via %d", navigationType); //loads the user cares about if ( navigationType == UIWebViewNavigationTypeLinkClicked || navigationType == UIWebViewNavigationTypeBackForward ) { //URL setting [self setUrlQuietly:request.URL]; } return YES; } Now, my problem here is that an actual click will generate a single navigation of type "LinkClicked" followed by a dozen type "Other" (redirects and ad loads I assume), which gets handled correctly by the code, but a back/forward action will generate all its requests as back/forward requests. In other words, a click calls setUrlQuietly: once, but a back/forward calls it multiple times. I am trying to use this method to determine if the user actually initiated the action (and I'd like to catch page redirects too). But if the method has no way of distinguishing between an actual "back" and a "load initiated as a result of a back", how can I make this assessment? Without this, I am completely stumped as to how I can only show the actual url and not intermediate urls. Thank you!

    Read the article

  • Suggestions for lightweight, thread-safe scheduler

    - by nirvanai
    I am trying to write a round-robin scheduler for lightweight threads (fibers). It must scale to handle as many concurrently-scheduled fibers as possible. I also need to be able to schedule fibers from threads other than the one the run loop is on, and preferably unschedule them from arbitrary threads as well (though I could live with only being able to unschedule them from the run loop). My current idea is to have a circular doubly-linked list, where each fiber is a node and the scheduler holds a reference to the current node. This is what I have so far: using Interlocked = System.Threading.Interlocked; public class Thread { internal Future current_fiber; public void RunLoop () { while (true) { var fiber = current_fiber; if (fiber == null) { // block the thread until a fiber is scheduled continue; } if (fiber.Fulfilled) fiber.Unschedule (); else fiber.Resume (); //if (current_fiber == fiber) current_fiber = fiber.next; Interlocked.CompareExchange<Future> (ref current_fiber, fiber.next, fiber); } } } public abstract class Future { public bool Fulfilled { get; protected set; } internal Future previous, next; // this must be thread-safe // it inserts this node before thread.current_fiber // (getting the exact position doesn't matter, as long as the // chosen nodes haven't been unscheduled) public void Schedule (Thread thread) { next = this; // maintain circularity, even if this is the only node previous = this; try_again: var current = Interlocked.CompareExchange<Future> (ref thread.current_fiber, this, null); if (current == null) return; var target = current.previous; while (target == null) { // current was unscheduled; negotiate for new current_fiber var potential = current.next; var actual = Interlocked.CompareExchange<Future> (ref thread.current_fiber, potential, current); current = (actual == current? potential : actual); if (current == null) goto try_again; target = current.previous; } // I would lock "current" and "target" at this point. // How can I do this w/o risk of deadlock? next = current; previous = target; target.next = this; current.previous = this; } // this would ideally be thread-safe public void Unschedule () { var prev = previous; if (prev == null) { // already unscheduled return; } previous = null; if (next == this) { next = null; return; } // Again, I would lock "prev" and "next" here // How can I do this w/o risk of deadlock? prev.next = next; next.previous = prev; } public abstract void Resume (); } As you can see, my sticking point is that I cannot ensure the order of locking, so I can't lock more than one node without risking deadlock. Or can I? I don't want to have a global lock on the Thread object, since the amount of lock contention would be extreme. Plus, I don't especially care about insertion position, so if I lock each node separately then Schedule() could use something like Monitor.TryEnter and just keep walking the list until it finds an unlocked node. Overall, I'm not invested in any particular implementation, as long as it meets the requirements I've mentioned. Any ideas would be greatly appreciated. Thanks! P.S- For the curious, this is for an open source project I'm starting at http://github.com/nirvanai/Cirrus
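    On the lock-ordering question itself, a common trick is to give every node an immutable, unique ID and always take the two locks in ID order, so that any two threads locking the same pair agree on the order and cannot deadlock. A C# sketch only, not tied to the poster's Future class, which would need such an ID added:

        // Sketch: deadlock-free locking of two nodes by always locking the smaller Id first.
        public class Node
        {
            private static long _nextId;
            public readonly long Id = System.Threading.Interlocked.Increment(ref _nextId);
        }

        public static class OrderedLock
        {
            public static void LockPair (Node a, Node b, System.Action body)
            {
                Node first  = a.Id <= b.Id ? a : b;
                Node second = a.Id <= b.Id ? b : a;
                lock (first)
                {
                    lock (second)   // Monitor is re-entrant, so a == b is also safe
                    {
                        body ();
                    }
                }
            }
        }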

    Read the article

  • Optimal partition setup for Windows 7 on SSD

    - by Mike C.
    Hello, I'm setting up my system with Windows 7 right now, knowing that I am going to be getting an SSD in the future. What optimizations/setup should I do now to make a smoother transition in the future? Should I create two partitions, one for the OS and one for the data? Assuming this is the case, I would be able to easily ghost my OS partition onto the SSD in the future. If so, what should go on the OS drive besides the OS? Program files? If I install games or Visual Studio, should they go on the OS drive or the data drive? I can see the SSD filling up fast if I install all my program files on there. I've seen a few posts where people talk about leaving a portion of the SSD unformatted - is this something I should do? Thanks!

    Read the article

  • Initializing two pointers to the same value in a "for" loop

    - by MCP
    I'm working with a linked list and am trying to initialize two pointers equal to the "first"/"head" pointer. I'm trying to do this cleanly in a "for" loop. The point of all this is so that I can run two pointers through the linked list, one right behind the other (so that I can modify as needed)... Something like: //listHead = main pointer to the linked list for (blockT *front, *back = listHead; front != NULL; front = front->next) //...// back = back->next; The idea being that I can increment front early so that it's one ahead, doing the work, and not increment "back" until the bottom of the code block, in case I need to back up in order to modify the linked list... Regardless of the "why" of this, in addition to the above I've tried: for (blockT *front = *back = listHead; /.../ for (blockT *front = listHead, blockT *back = listHead; /.../ I would like to avoid a pointer to a pointer. Do I just need to initialize these before the loop? As always, thanks!
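    For what it's worth, the for-loop header can declare both pointers in a single declaration; the '*' just has to be repeated for each name. A small C sketch with a stand-in node type (blockT and listHead as named in the question):

        /* Minimal stand-in for the question's node type. */
        typedef struct blockT { struct blockT *next; } blockT;

        void walk(blockT *listHead) {
            /* One declaration, two pointers, both starting at listHead.
               (A declaration inside the for header needs C99 or later, or C++.) */
            for (blockT *front = listHead, *back = listHead; front != NULL; front = front->next) {
                /* ... inspect or splice nodes via front ... */
                /* ... advance back here only when the trailing pointer should catch up ... */
                (void)back; /* keep the compiler quiet in this skeleton */
            }
        }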

    Read the article

  • Why doesn't AJAX History work in IE7 as it does in IE8 and every other browser?

    - by Nick
    I have two ASP.NET pages, say page1 and page2. Page1 contains an update panel and I use AJAX History to allow browser back/forward button support. Users can navigate to page2 via page1 - I do a response.redirect server-side so that I can store in a session what options were on page1 when they left. On page2, a user can click the back button and return to page1 with it displaying what was there prior to navigating to page2. You can click the back button again on page1 to go back to other selectively chosen history states. This all works great in IE8, Firefox, Safari, and Chrome. However, in IE7 it doesn't. The user can get back to page1 from page2, but when they click the back button again it doesn't show the previous states of page1 - it displays the original page load content. The history data looks to still be encoded in the url when using ie7 but is not working for some reason. Does anyone know why this is happening in IE7 and how I can get around this? It really needs to work in IE7 as most of our users are on IE7. Please help.

    Read the article

  • ODBC in SSIS 2012

    - by jamiet
    In August 2011 the SQL Server client team published a blog post entitled Microsoft is Aligning with ODBC for Native Relational Data Access in which they basically said "OLE DB is the past, ODBC is the future. Deal with it.". From that blog post:We encourage you to adopt ODBC in the development of your new and future versions of your application. You don’t need to change your existing applications using OLE DB, as they will continue to be supported on Denali throughout its lifecycle. While this gives you a large window of opportunity for changing your applications before the deprecation goes into effect, you may want to consider migrating those applications to ODBC as a part of your future roadmap.I recently undertook a project using SSIS2012 and heeded that advice by opting to use ODBC Connection Managers rather than OLE DB Connection Managers. Unfortunately my finding was that the ODBC Connection Manager is not yet ready for primetime use in SSIS 2012. The main issue I found was that you can't populate an Object variable with a recordset when using an Execute SQL Task connecting to an ODBC data source; any attempt to do so will result in an error:"Disconnected recordsets are not available from ODBC connections." I have filed a bug on Connect at ODBC Connection Manager does not have same funcitonality as OLE DB. For this reason I strongly recommend that you don't make the move to ODBC Connection Managers in SSIS just yet - best to wait for the next version of SSIS before doing that.I found another couple of issues with the ODBC Connection Manager that are worth keeping in mind:It doesn't recognise System Data Source Names (DSNs), only User DSNs (bug filed at ODBC System DSNs are not available in the ODBC Connection Manager)  UPDATE: According to a comment on that Connect item this may only be a problem on 64bit.In the OLE DB Connection Manager parameter ordinals are 0-based, in the ODBC Connection Manager they are 1-based (oh I just can't wait for the upgrade mess that ensues from this one!!!)You have been warned!@jamiet

    Read the article

  • ASP.NET MVC: MVC Time Planner is available at CodePlex

    - by DigiMortal
    Almost every week I get e-mails in which my dear readers ask for the source code of my ASP.NET MVC and FullCalendar example. I have some great news, guys! I ported my sample application to Visual Studio 2010 and made it available at CodePlex. Feel free to visit the page of MVC Time Planner. NB! The current release of MVC Time Planner is an initial one; it is basically a conversion from the VS2008 example solution to VS2010. The current source code is not study material, but it gives you an idea of how to make the calendars work together. Future releases will introduce many design and architectural improvements, and I have also planned some new features. How does MVC Time Planner look? The image on the right shows how the time planner looks. It uses the default design elements of ASP.NET MVC applications and jQueryUI. If I find some artistic skills in myself I will improve the design too, of course. :) Currently only the day view of the calendar is available; other views are coming in the near future (within a week or two, I hope). Important links - here are some links you may find useful: MVC Time Planner page @ CodePlex, Documentation, Release plan, Help and support (questions, ideas, other communication), and Bugs and feature requests. If you have any questions or you are interested in new features then please feel free to contact me through the MVC Time Planner discussion forums.

    Read the article

  • How can I compare between web development technologies?

    - by Steve
    I would like experts to explain to me how I can compare web development tools and technologies so that I am able to choose the right one. I'm tired of always searching the usual way: X Technology vs Y Technology. I'm tired of people's biased opinions, and I usually don't find a fair comparison. I have decided to ask here how I can compare them myself, so please identify the main criteria for comparison; that way I can evaluate the options and choose the technology that is appropriate for the project I will develop. Note: by web development technologies I mean server-side languages (e.g. PHP). One important requirement for me, which could be considered the major one, is cost efficiency. I don't mean the current cost or the cost in the near future; what matters more to me is the cost further down the road, if, for example, the site becomes one of the 100 most visited sites. So, how can I compare the cost of different technologies for a future state of the site (such as becoming a very famous site), so that I can scale easily without missing a good technology, as happened to some sites that did not choose the most effective tool?

    Read the article

  • 50 Years After The Jetsons

    - by Jason Fitzpatrick
    The Jetsons, the future-oriented animated cartoon series from the 1960s, turned 50 this week. The Smithsonian takes a look at what the show meant, then and now. At the Smithsonian blog Paleofuture, Matt Novak looks back at the last 50 years and the impact that The Jetsons had. He writes: It's important to remember that today's political, social and business leaders were pretty much watching "The Jetsons" on repeat during their most impressionable years. People are often shocked to learn that "The Jetsons" lasted just one season during its original run in 1962-63 and wasn't revived until 1985. Essentially every kid in America (and many internationally) saw the series on constant repeat during Saturday morning cartoons throughout the 1960s, '70s and '80s. Everyone (including my own mom) seems to ask me, "How could it have been around for only 24 episodes? Did I really just watch those same episodes over and over again?" Yes, yes you did. But it's just a cartoon, right? So what if today's political and social elite saw "The Jetsons" a lot? Thanks in large part to the Jetsons, there's a sense of betrayal that is pervasive in American culture today about the future that never arrived. We're all familiar with the rallying cries of the angry retrofuturist: Where's my jetpack!?! Where's my flying car!?! Where's my robot maid?!? "The Jetsons" and everything they represented were seen by so many not as a possible future, but a promise of one. Hit up the link below for the full article; prepare to be surprised at just how few episodes of the show were ever animated and aired.

    Read the article

  • Ubuntu 12.04 slow boot on ASUS, attached with dmesg and bootchart

    - by stanleyhunk
    I have heard that Ubuntu can boot in around 30 seconds, but my Ubuntu takes more than 60 seconds every time it boots. I have also read in some forums that I need to post my dmesg output and bootchart to identify which processes are slowing down the boot. As I'm not an expert in Ubuntu and wish to learn more about it, I humbly ask any pro here to teach me how. My laptop specs:

    Model: ASUS K45VS
    RAM: 8GB
    CPU: Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz x 8
    Graphic Card: nVidia GeForce GT 645M
    HDD: 750GB
    OS: Single boot Ubuntu 12.04 LTS
    System.uname: Linux 3.8.0-39-generic #58~precise1-Ubuntu SMP Fri May 2 21:33:40 UTC 2014 x86_64
    System.release: Ubuntu 12.04.4 LTS
    System.kernel.options: BOOT_IMAGE=/boot/vmlinuz-3.8.0-39-generic root=UUID=c8a71503-bce8-406c-9a5f-5aa8284f5c7c ro quiet splash

    My dmesg (note the large time gap between the highlighted entries):

    [ 30.772656] cgroup: libvirtd (1961) created nested cgroup for controller "memory" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772659] cgroup: "memory" requires setting use_hierarchy to 1 on the root.
    [ 30.772683] cgroup: libvirtd (1961) created nested cgroup for controller "devices" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 30.772710] cgroup: libvirtd (1961) created nested cgroup for controller "blkio" which has incomplete hierarchy support. Nested cgroups may change behavior in the future.
    [ 32.140335] nvidia 0000:01:00.0: irq 46 for MSI/MSI-X
    [ 32.505619] ACPI Error: Field [TMPB] at 1081344 exceeds Buffer [ROM1] size 262144 (bits) (20121018/dsopcode-236)
    [ 32.505624] ACPI Error: Method parse/execution failed [\_SB_.PCI0.PEG0.PEGP._ROM] (Node ffff880224e91f00), AE_AML_BUFFER_LIMIT (20121018/psparse-537)
    [ 802.034422] audit_printk_skb: 69 callbacks suppressed
    [ 802.034428] type=1400 audit(1400914804.392:35): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"
    [ 1581.300901] type=1400 audit(1400915584.816:36): apparmor="DENIED" operation="capable" parent=1 profile="/usr/sbin/cupsd" pid=1683 comm="cupsd" pid=1683 comm="cupsd" capability=36 capname="block_suspend"

    My Bootchart.png:

    Looking forward to learning how to improve both my boot time and my knowledge. Thanks in advance :)

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, by default SQL Server places the database files in the master database file directory. In this example, that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by the issuance of sp_helpfile. Hence, the restored files for the database CHTL_L2_DB are in the same directory.

    Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows:

    - Record the location of the database files and the transaction log files
    - Note the future destination of the transaction log file
    - Get exclusive access to the database
    - Detach from the database
    - Move the log file to the new location
    - Attach to the database
    - Verify the new location of the transaction log

    Record the location of the database files. To view the current location of the database files, use the system stored procedure sp_helpfile:

        use chtl_l2_db
        go

        sp_helpfile
        go

    Note the future destination of the transaction log file. The future destination of the transaction log file is K:\MSSQLLog.

    Get exclusive access to the database. To get exclusive access to the database, alter the database access to single_user. If users are still connected to the database, remove them by using the with rollback immediate option. Note: if you had a pane connected to the database when it is placed into single_user mode, then you will be presented with a reconnection dialog box.

        alter database chtl_l2_db
        set single_user with rollback immediate
        go

    Detach from the database. Now detach from the database so that we can use Windows Explorer to move the transaction log file:

        use master
        go

        sp_detach_db 'chtl_l2_db'
        go

    Attach to the database. After copying the transaction log file, re-attach to the database:

        use master
        go

        sp_attach_db 'chtl_l2_db',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
        'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
        GO

    Read the article

  • Silverlight, JavaScript and HTML 5 - Who wins?

    - by Sahil Malik
    Disclaimer: these are just opinions. In the past I have expressed opinions about the future of technology, and have been ridiculously accurate. I have no idea whether this one will be accurate or not, but that is what it's all about: opinions, predicting the future. This topic has been boiling inside me for a while, and I have discussed it in private get-togethers with like-minded techies, but I thought it would be a good idea to put it together as a blog post. There is some debate about the future of Silverlight, especially in light of technologies such as newer, faster browsers and HTML 5. As a .NET developer, where do I invest my time and skills? Remember, you have limited time and skills, and not everything that comes out of Microsoft is a smashing success. So it is very wise for you to consider the facts and the macro trends, and allocate what you have only limited amounts of: "time".

    Read the article

  • Advice on designing web application with a 40+ year lifetime

    - by user2708395
    Scenario

    Currently, I am part of a health care project whose main requirement is to capture data with unknown attributes using user-generated forms built by health care providers. The second requirement is that data integrity is key and that the application will be used for 40+ years. We are currently migrating the client's data from the past 40 years from various sources (paper, Excel, Access, etc.) to the database. Future requirements are:

    - Workflow management of forms
    - Schedule management of forms
    - Security/role-based management
    - Reporting engine
    - Mobile/tablet support

    Situation

    Only 6 months in, the current (contracted) architect/senior programmer has taken the "fast" approach and has designed a poor system. The database is not normalized, the code is coupled, the tiers have no dedicated purpose, and data is starting to go missing since he has designed some beans to perform "deletes" on the database. The code base is extremely bloated and there are jobs just to synchronize data since the database is not normalized. His approach has been to rely on backup jobs to restore missing data, and he doesn't seem to believe in refactoring. Having presented my findings to the PM, the architect will be removed when his contract ends. I have been given the task of re-architecting this application. My team consists of me and one junior programmer; we have no other resources. We have been granted a 6-month requirement freeze in which we can focus on rebuilding this system. I suggested using a CMS such as Drupal, but for policy reasons at the client's organization the system must be built from scratch. This is the first time that I will be designing a system with a 40+ year lifespan. I have only worked on projects with 3-5 year lifespans, so this situation is very new, yet exciting.

    Questions

    - What design considerations will make the system more "future proof"?
    - What experiences have you had in designing such systems - both failures and successes?
    - What questions should be asked of the client/PM to make the system more "future proof"?

    Read the article

  • JavaOne 2012 in Review

    - by Janice J. Heiss
    Noted freelance writer Steve Meloan has a new article up on otn/java, titled "JavaOne 2012 Review: Make the Future Java," in which he summarizes the happenings at JavaOne 2012. Along the way, he reminds us that if the future turns out to be anything like the past, Java will do fine:

    "The repeated theme for this year's conference was 'Make the Future Java,' and according to recent stats, the groundwork is already firmly in place:

    - There are 9 million Java developers worldwide.
    - Three billion devices run Java.
    - Five billion Java Cards are in use.
    - One hundred percent of Blu-ray Disc players ship with Java.
    - Ninety-seven percent of enterprise desktops run Java.
    - Eighty-nine percent of PC desktops run Java.

    This year's content curriculum program was organized under seven technical tracks:

    - Core Java Platform
    - Development Tools and Techniques
    - Emerging Languages on the JVM
    - Enterprise Service Architectures and the Cloud
    - Java EE Web Profile and Platform Technologies
    - Java ME, Java Card, Embedded, and Devices
    - JavaFX and Rich User Experiences"

    Meloan artfully reminds us of how JavaOne makes learning fun. Have a look at the article here.

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >