Search Results

Search found 10550 results on 422 pages for 'apache commons dbcp'.


  • unable to connect site to different port

    - by JohnMerlino
    I have a domain registered at GoDaddy, http://mysite.com/. I logged into GoDaddy, went to All Products > Domains > Domain Management, clicked the appropriate domain, and it took me to the Domain Details page. I clicked Launch under DNS Manager, which took me to the Zone File Editor. I noticed that notify.mysite.com was pointing to the IP address of a dead server, so I switched it to a working server. Then I pinged the domain to see where it was pointing, and it correctly resolved to the working server. So I copied the default configuration under sites-available: sudo cp default notify.mysite.com. Then I edited it to point to a different document root and serve files on a different port: Listen 1740 Listen 64.135.xx.xxx:1740 #I also tried this as well: NameVirtualHost 64.135.xx.xxx:1740 <VirtualHost 64.135.xx.xxx:1740> ServerAdmin [email protected] ServerName notify.mysite.com DocumentRoot /var/www/test/public <Directory /var/www/test/public> Order allow,deny allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined </VirtualHost> Then I enabled the virtual host, went to the document root, and added an index.html file with some text in it. Then I restarted Apache; the restart gave no errors. When I typed the correct URL, http://notify.mysite.com:1740/, I got: Oops! Google Chrome could not connect to notify.mysite.com:1740 Somehow this took out all my other sites. Even the sites that were responding on port 80 are no longer responding, although I did not touch their virtual hosts. I now get this message: Oops! Google Chrome could not connect to mysite.com However, ping responds: ping mysite.com PING mysite.com (64.135.12.134): 56 data bytes 64 bytes from 64.135.12.134: icmp_seq=0 ttl=49 time=20.839 ms 64 bytes from 64.135.12.134: icmp_seq=1 ttl=49 time=20.489 ms The result of telnet: $ telnet guarddoggps.com 80 Trying 64.135.12.134... telnet: connect to address 64.135.12.134: Connection refused telnet: Unable to connect to remote host

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
- Chunk format: base64 string, binary.
- Target file: 100MB.
- Each case was tested 3 times.
Below is the test result chart. Some thoughts, but not guidance or best practice:
- Parallel upload performs better than sequential upload.
- There is no significant difference between parallel uploads with 4 threads and with 8 threads.
- Transferring binary chunks performs better than base64 strings.
- In all cases, chunk sizes of 1MB - 2MB give the best performance.
Hope this helps, Shaun. All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
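For quick reference, the server-side pattern this post describes (upload each chunk with PutBlock under a base64-encoded block ID, then commit the ID list with PutBlockList) can be condensed into one helper. The sketch below is hedged: it reuses the same SDK calls and block-ID scheme as the article, but the helper name, parameters and buffering are illustrative, and real code would add retries and validation.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

public static class BlockUploadHelper
{
    // Uploads a stream as a block blob in fixed-size blocks, then commits them,
    // mirroring the PutBlock / PutBlockList pattern shown in the post.
    public static void UploadInBlocks(CloudBlobContainer container, string blobName,
                                      Stream source, int blockSizeInBytes)
    {
        var blob = container.GetBlockBlobReference(blobName);
        var blockIds = new List<string>();
        var buffer = new byte[blockSizeInBytes];
        int index = 0;
        int read;

        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Same ID scheme as the article: base64 of the 4-byte index,
            // which keeps every block ID the same length.
            var blockId = Convert.ToBase64String(BitConverter.GetBytes(index));
            using (var blockStream = new MemoryStream(buffer, 0, read))
            {
                blob.PutBlock(blockId, blockStream, null);
            }
            blockIds.Add(blockId);
            index++;
        }

        // Committing the ID list assembles the uploaded blocks into one blob.
        blob.PutBlockList(blockIds);
    }
}
```

Note this is the single-request shape, like the first About action near the start of the post; the chunked version keeps PutBlock and PutBlockList in separate controller actions, as shown above.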

    Read the article

  • Service Discovery in WCF 4.0 – Part 1

    - by Shaun
    When designing a service oriented architecture (SOA) system, there will be a lot of services with many service contracts, endpoints and behaviors. Besides the client calling the service, in a large distributed system a service may invoke other services. In this case, one service might need to know the endpoints it invokes. This might not be a problem in a small system. But when you have more than 10 services this might be a problem. For example in my current product, there are around 10 services, such as the user authentication service, UI integration service, location service, license service, device monitor service, event monitor service, schedule job service, accounting service, player management service, etc..   Benefit of Discovery Service Since almost all my services need to invoke at least one other service. This would be a difficult task to make sure all services endpoints are configured correctly in every service. And furthermore, it would be a nightmare when a service changed its endpoint at runtime. Hence, we need a discovery service to remove the dependency (configuration dependency). A discovery service plays as a service dictionary which stores the relationship between the contracts and the endpoints for every service. By using the discovery service, when service X wants to invoke service Y, it just need to ask the discovery service where is service Y, then the discovery service will return all proper endpoints of service Y, then service X can use the endpoint to send the request to service Y. And when some services changed their endpoint address, all need to do is to update its records in the discovery service then all others will know its new endpoint. In WCF 4.0 Discovery it supports both managed proxy discovery mode and ad-hoc discovery mode. In ad-hoc mode there is no standalone discovery service. When a client wanted to invoke a service, it will broadcast an message (normally in UDP protocol) to the entire network with the service match criteria. All services which enabled the discovery behavior will receive this message and only those matched services will send their endpoint back to the client. The managed proxy discovery service works as I described above. In this post I will only cover the managed proxy mode, where there’s a discovery service. For more information about the ad-hoc mode please refer to the MSDN.   Service Announcement and Probe The main functionality of discovery service should be return the proper endpoint addresses back to the service who is looking for. In most cases the consume service (as a client) will send the contract which it wanted to request to the discovery service. And then the discovery service will find the endpoint and respond. Sometimes the contract and endpoint are not enough. It also contains versioning, extensions attributes. This post I will only cover the case includes contract and endpoint. When a client (or sometimes a service who need to invoke another service) need to connect to a target service, it will firstly request the discovery service through the “Probe” method with the criteria. Basically the criteria contains the contract type name of the target service. Then the discovery service will search its endpoint repository by the criteria. The repository might be a database, a distributed cache or a flat XML file. If it matches, the discovery service will grab the endpoint information (it’s called discovery endpoint metadata in WCF) and send back. And this is called “Probe”. 
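To make the probe concrete: the criteria a client sends is essentially the contract type name of the target service plus a few optional constraints. Below is a minimal, hedged C# sketch of building such a criteria for the IStringService contract used later in this post; the MaxResults and Duration values are illustrative assumptions, not values from the article.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Discovery;

[ServiceContract]
public interface IStringService
{
    [OperationContract]
    string ToUpper(string content);
}

public static class ProbeExample
{
    // Builds the criteria carried by a probe message. The discovery service
    // matches it against each registered EndpointDiscoveryMetadata
    // (for example via criteria.IsMatch) and returns the matching endpoints.
    public static FindCriteria BuildCriteria()
    {
        return new FindCriteria(typeof(IStringService))
        {
            MaxResults = 5,                     // return at most five matching endpoints
            Duration = TimeSpan.FromSeconds(5)  // give the probe up to five seconds
        };
    }
}
```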
Finally the client received the discovery endpoint metadata and will use the endpoint to connect to the target service. Besides the probe, discovery service should take the responsible to know there is a new service available when it goes online, as well as stopped when it goes offline. This feature is named “Announcement”. When a service started and stopped, it will announce to the discovery service. So the basic functionality of a discovery service should includes: 1, An endpoint which receive the service online message, and add the service endpoint information in the discovery repository. 2, An endpoint which receive the service offline message, and remove the service endpoint information from the discovery repository. 3, An endpoint which receive the client probe message, and return the matches service endpoints, and return the discovery endpoint metadata. WCF 4.0 discovery service just covers all these features in it's infrastructure classes.   Discovery Service in WCF 4.0 WCF 4.0 introduced a new assembly named System.ServiceModel.Discovery which has all necessary classes and interfaces to build a WS-Discovery compliant discovery service. It supports ad-hoc and managed proxy modes. For the case mentioned in this post, what we need to build is a standalone discovery service, which is the managed proxy discovery service mode. To build a managed discovery service in WCF 4.0 just create a new class inherits from the abstract class System.ServiceModel.Discovery.DiscoveryProxy. This class implemented and abstracted the procedures of service announcement and probe. And it exposes 8 abstract methods where we can implement our own endpoint register, unregister and find logic. These 8 methods are asynchronized, which means all invokes to the discovery service are asynchronously, for better service capability and performance. 1, OnBeginOnlineAnnouncement, OnEndOnlineAnnouncement: Invoked when a service sent the online announcement message. We need to add the endpoint information to the repository in this method. 2, OnBeginOfflineAnnouncement, OnEndOfflineAnnouncement: Invoked when a service sent the offline announcement message. We need to remove the endpoint information from the repository in this method. 3, OnBeginFind, OnEndFind: Invoked when a client sent the probe message that want to find the service endpoint information. We need to look for the proper endpoints by matching the client’s criteria through the repository in this method. 4, OnBeginResolve, OnEndResolve: Invoked then a client sent the resolve message. Different from the find method, when using resolve method the discovery service will return the exactly one service endpoint metadata to the client. In our example we will NOT implement this method.   Let’s create our own discovery service, inherit the base System.ServiceModel.Discovery.DiscoveryProxy. We also need to specify the service behavior in this class. Since the build-in discovery service host class only support the singleton mode, we must set its instance context mode to single. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using System.ServiceModel; 7:  8: namespace Phare.Service 9: { 10: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 11: public class ManagedProxyDiscoveryService : DiscoveryProxy 12: { 13: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 14: { 15: throw new NotImplementedException(); 16: } 17:  18: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 19: { 20: throw new NotImplementedException(); 21: } 22:  23: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 24: { 25: throw new NotImplementedException(); 26: } 27:  28: protected override IAsyncResult OnBeginResolve(ResolveCriteria resolveCriteria, AsyncCallback callback, object state) 29: { 30: throw new NotImplementedException(); 31: } 32:  33: protected override void OnEndFind(IAsyncResult result) 34: { 35: throw new NotImplementedException(); 36: } 37:  38: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 39: { 40: throw new NotImplementedException(); 41: } 42:  43: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 44: { 45: throw new NotImplementedException(); 46: } 47:  48: protected override EndpointDiscoveryMetadata OnEndResolve(IAsyncResult result) 49: { 50: throw new NotImplementedException(); 51: } 52: } 53: } Then let’s implement the online, offline and find methods one by one. WCF discovery service gives us full flexibility to implement the endpoint add, remove and find logic. For the demo purpose we will use an internal dictionary to store the services’ endpoint metadata. In the next post we will see how to serialize and store these information in database. Define a concurrent dictionary inside the service class since our it will be used in the multiple threads scenario. 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)] 2: public class ManagedProxyDiscoveryService : DiscoveryProxy 3: { 4: private ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata> _services; 5:  6: public ManagedProxyDiscoveryService() 7: { 8: _services = new ConcurrentDictionary<EndpointAddress, EndpointDiscoveryMetadata>(); 9: } 10: } Then we can simply implement the logic of service online and offline. 
1: protected override IAsyncResult OnBeginOnlineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 2: { 3: _services.AddOrUpdate(endpointDiscoveryMetadata.Address, endpointDiscoveryMetadata, (key, value) => endpointDiscoveryMetadata); 4: return new OnOnlineAnnouncementAsyncResult(callback, state); 5: } 6:  7: protected override void OnEndOnlineAnnouncement(IAsyncResult result) 8: { 9: OnOnlineAnnouncementAsyncResult.End(result); 10: } 11:  12: protected override IAsyncResult OnBeginOfflineAnnouncement(DiscoveryMessageSequence messageSequence, EndpointDiscoveryMetadata endpointDiscoveryMetadata, AsyncCallback callback, object state) 13: { 14: EndpointDiscoveryMetadata endpoint = null; 15: _services.TryRemove(endpointDiscoveryMetadata.Address, out endpoint); 16: return new OnOfflineAnnouncementAsyncResult(callback, state); 17: } 18:  19: protected override void OnEndOfflineAnnouncement(IAsyncResult result) 20: { 21: OnOfflineAnnouncementAsyncResult.End(result); 22: } Regards the find method, the parameter FindRequestContext.Criteria has a method named IsMatch, which can be use for us to evaluate which service metadata is satisfied with the criteria. So the implementation of find method would be like this. 1: protected override IAsyncResult OnBeginFind(FindRequestContext findRequestContext, AsyncCallback callback, object state) 2: { 3: _services.Where(s => findRequestContext.Criteria.IsMatch(s.Value)) 4: .Select(s => s.Value) 5: .All(meta => 6: { 7: findRequestContext.AddMatchingEndpoint(meta); 8: return true; 9: }); 10: return new OnFindAsyncResult(callback, state); 11: } 12:  13: protected override void OnEndFind(IAsyncResult result) 14: { 15: OnFindAsyncResult.End(result); 16: } As you can see, we checked all endpoints metadata in repository by invoking the IsMatch method. Then add all proper endpoints metadata into the parameter. Finally since all these methods are asynchronized we need some AsyncResult classes as well. Below are the base class and the inherited classes used in previous methods. 
1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.Threading; 6:  7: namespace Phare.Service 8: { 9: abstract internal class AsyncResult : IAsyncResult 10: { 11: AsyncCallback callback; 12: bool completedSynchronously; 13: bool endCalled; 14: Exception exception; 15: bool isCompleted; 16: ManualResetEvent manualResetEvent; 17: object state; 18: object thisLock; 19:  20: protected AsyncResult(AsyncCallback callback, object state) 21: { 22: this.callback = callback; 23: this.state = state; 24: this.thisLock = new object(); 25: } 26:  27: public object AsyncState 28: { 29: get 30: { 31: return state; 32: } 33: } 34:  35: public WaitHandle AsyncWaitHandle 36: { 37: get 38: { 39: if (manualResetEvent != null) 40: { 41: return manualResetEvent; 42: } 43: lock (ThisLock) 44: { 45: if (manualResetEvent == null) 46: { 47: manualResetEvent = new ManualResetEvent(isCompleted); 48: } 49: } 50: return manualResetEvent; 51: } 52: } 53:  54: public bool CompletedSynchronously 55: { 56: get 57: { 58: return completedSynchronously; 59: } 60: } 61:  62: public bool IsCompleted 63: { 64: get 65: { 66: return isCompleted; 67: } 68: } 69:  70: object ThisLock 71: { 72: get 73: { 74: return this.thisLock; 75: } 76: } 77:  78: protected static TAsyncResult End<TAsyncResult>(IAsyncResult result) 79: where TAsyncResult : AsyncResult 80: { 81: if (result == null) 82: { 83: throw new ArgumentNullException("result"); 84: } 85:  86: TAsyncResult asyncResult = result as TAsyncResult; 87:  88: if (asyncResult == null) 89: { 90: throw new ArgumentException("Invalid async result.", "result"); 91: } 92:  93: if (asyncResult.endCalled) 94: { 95: throw new InvalidOperationException("Async object already ended."); 96: } 97:  98: asyncResult.endCalled = true; 99:  100: if (!asyncResult.isCompleted) 101: { 102: asyncResult.AsyncWaitHandle.WaitOne(); 103: } 104:  105: if (asyncResult.manualResetEvent != null) 106: { 107: asyncResult.manualResetEvent.Close(); 108: } 109:  110: if (asyncResult.exception != null) 111: { 112: throw asyncResult.exception; 113: } 114:  115: return asyncResult; 116: } 117:  118: protected void Complete(bool completedSynchronously) 119: { 120: if (isCompleted) 121: { 122: throw new InvalidOperationException("This async result is already completed."); 123: } 124:  125: this.completedSynchronously = completedSynchronously; 126:  127: if (completedSynchronously) 128: { 129: this.isCompleted = true; 130: } 131: else 132: { 133: lock (ThisLock) 134: { 135: this.isCompleted = true; 136: if (this.manualResetEvent != null) 137: { 138: this.manualResetEvent.Set(); 139: } 140: } 141: } 142:  143: if (callback != null) 144: { 145: callback(this); 146: } 147: } 148:  149: protected void Complete(bool completedSynchronously, Exception exception) 150: { 151: this.exception = exception; 152: Complete(completedSynchronously); 153: } 154: } 155: } 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; 4: using System.Text; 5: using System.ServiceModel.Discovery; 6: using Phare.Service; 7:  8: namespace Phare.Service 9: { 10: internal sealed class OnOnlineAnnouncementAsyncResult : AsyncResult 11: { 12: public OnOnlineAnnouncementAsyncResult(AsyncCallback callback, object state) 13: : base(callback, state) 14: { 15: this.Complete(true); 16: } 17:  18: public static void End(IAsyncResult result) 19: { 20: AsyncResult.End<OnOnlineAnnouncementAsyncResult>(result); 21: } 22:  23: } 24:  25: sealed class OnOfflineAnnouncementAsyncResult : 
AsyncResult 26: { 27: public OnOfflineAnnouncementAsyncResult(AsyncCallback callback, object state) 28: : base(callback, state) 29: { 30: this.Complete(true); 31: } 32:  33: public static void End(IAsyncResult result) 34: { 35: AsyncResult.End<OnOfflineAnnouncementAsyncResult>(result); 36: } 37: } 38:  39: sealed class OnFindAsyncResult : AsyncResult 40: { 41: public OnFindAsyncResult(AsyncCallback callback, object state) 42: : base(callback, state) 43: { 44: this.Complete(true); 45: } 46:  47: public static void End(IAsyncResult result) 48: { 49: AsyncResult.End<OnFindAsyncResult>(result); 50: } 51: } 52:  53: sealed class OnResolveAsyncResult : AsyncResult 54: { 55: EndpointDiscoveryMetadata matchingEndpoint; 56:  57: public OnResolveAsyncResult(EndpointDiscoveryMetadata matchingEndpoint, AsyncCallback callback, object state) 58: : base(callback, state) 59: { 60: this.matchingEndpoint = matchingEndpoint; 61: this.Complete(true); 62: } 63:  64: public static EndpointDiscoveryMetadata End(IAsyncResult result) 65: { 66: OnResolveAsyncResult thisPtr = AsyncResult.End<OnResolveAsyncResult>(result); 67: return thisPtr.matchingEndpoint; 68: } 69: } 70: } Now we have finished the discovery service. The next step is to host it. The discovery service is a standard WCF service. So we can use ServiceHost on a console application, windows service, or in IIS as usual. The following code is how to host the discovery service we had just created in a console application. 1: static void Main(string[] args) 2: { 3: using (var host = new ServiceHost(new ManagedProxyDiscoveryService())) 4: { 5: host.Opened += (sender, e) => 6: { 7: host.Description.Endpoints.All((ep) => 8: { 9: Console.WriteLine(ep.ListenUri); 10: return true; 11: }); 12: }; 13:  14: try 15: { 16: // retrieve the announcement, probe endpoint and binding from configuration 17: var announcementEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 18: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 19: var binding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 20: var announcementEndpoint = new AnnouncementEndpoint(binding, announcementEndpointAddress); 21: var probeEndpoint = new DiscoveryEndpoint(binding, probeEndpointAddress); 22: probeEndpoint.IsSystemEndpoint = false; 23: // append the service endpoint for announcement and probe 24: host.AddServiceEndpoint(announcementEndpoint); 25: host.AddServiceEndpoint(probeEndpoint); 26:  27: host.Open(); 28:  29: Console.WriteLine("Press any key to exit."); 30: Console.ReadKey(); 31: } 32: catch (Exception ex) 33: { 34: Console.WriteLine(ex.ToString()); 35: } 36: } 37:  38: Console.WriteLine("Done."); 39: Console.ReadKey(); 40: } What we need to notice is that, the discovery service needs two endpoints for announcement and probe. In this example I just retrieve them from the configuration file. I also specified the binding of these two endpoints in configuration file as well. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> And this is the console screen when I ran my discovery service. As you can see there are two endpoints listening for announcement message and probe message.   Discoverable Service and Client Next, let’s create a WCF service that is discoverable, which means it can be found by the discovery service. To do so, we need to let the service send the online announcement message to the discovery service, as well as offline message before it shutdown. Just create a simple service which can make the incoming string to upper. The service contract and implementation would be like this. 1: [ServiceContract] 2: public interface IStringService 3: { 4: [OperationContract] 5: string ToUpper(string content); 6: } 1: public class StringService : IStringService 2: { 3: public string ToUpper(string content) 4: { 5: return content.ToUpper(); 6: } 7: } Then host this service in the console application. In order to make the discovery service easy to be tested the service address will be changed each time it’s started. 1: static void Main(string[] args) 2: { 3: var baseAddress = new Uri(string.Format("net.tcp://localhost:11001/stringservice/{0}/", Guid.NewGuid().ToString())); 4:  5: using (var host = new ServiceHost(typeof(StringService), baseAddress)) 6: { 7: host.Opened += (sender, e) => 8: { 9: Console.WriteLine("Service opened at {0}", host.Description.Endpoints.First().ListenUri); 10: }; 11:  12: host.AddServiceEndpoint(typeof(IStringService), new NetTcpBinding(), string.Empty); 13:  14: host.Open(); 15:  16: Console.WriteLine("Press any key to exit."); 17: Console.ReadKey(); 18: } 19: } Currently this service is NOT discoverable. We need to add a special service behavior so that it could send the online and offline message to the discovery service announcement endpoint when the host is opened and closed. WCF 4.0 introduced a service behavior named ServiceDiscoveryBehavior. When we specified the announcement endpoint address and appended it to the service behaviors this service will be discoverable. 1: var announcementAddress = new EndpointAddress(ConfigurationManager.AppSettings["announcementEndpointAddress"]); 2: var announcementBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 3: var announcementEndpoint = new AnnouncementEndpoint(announcementBinding, announcementAddress); 4: var discoveryBehavior = new ServiceDiscoveryBehavior(); 5: discoveryBehavior.AnnouncementEndpoints.Add(announcementEndpoint); 6: host.Description.Behaviors.Add(discoveryBehavior); The ServiceDiscoveryBehavior utilizes the service extension and channel dispatcher to implement the online and offline announcement logic. In short, it injected the channel open and close procedure and send the online and offline message to the announcement endpoint.   On client side, when we have the discovery service, a client can invoke a service without knowing its endpoint. 
WCF discovery assembly provides a class named DiscoveryClient, which can be used to find the proper service endpoint by passing the criteria. In the code below I initialized the DiscoveryClient, specified the discovery service probe endpoint address. Then I created the find criteria by specifying the service contract I wanted to use and invoke the Find method. This will send the probe message to the discovery service and it will find the endpoints back to me. The discovery service will return all endpoints that matches the find criteria, which means in the result of the find method there might be more than one endpoints. In this example I just returned the first matched one back. In the next post I will show how to extend our discovery service to make it work like a service load balancer. 1: static EndpointAddress FindServiceEndpoint() 2: { 3: var probeEndpointAddress = new EndpointAddress(ConfigurationManager.AppSettings["probeEndpointAddress"]); 4: var probeBinding = Activator.CreateInstance(Type.GetType(ConfigurationManager.AppSettings["bindingType"], true, true)) as Binding; 5: var discoveryEndpoint = new DiscoveryEndpoint(probeBinding, probeEndpointAddress); 6:  7: EndpointAddress address = null; 8: FindResponse result = null; 9: using (var discoveryClient = new DiscoveryClient(discoveryEndpoint)) 10: { 11: result = discoveryClient.Find(new FindCriteria(typeof(IStringService))); 12: } 13:  14: if (result != null && result.Endpoints.Any()) 15: { 16: var endpointMetadata = result.Endpoints.First(); 17: address = endpointMetadata.Address; 18: } 19: return address; 20: } Once we probed the discovery service we will receive the endpoint. So in the client code we can created the channel factory from the endpoint and binding, and invoke to the service. When creating the client side channel factory we need to make sure that the client side binding should be the same as the service side. WCF discovery service can be used to find the endpoint for a service contract, but the binding is NOT included. This is because the binding was not in the WS-Discovery specification. In the next post I will demonstrate how to add the binding information into the discovery service. At that moment the client don’t need to create the binding by itself. Instead it will use the binding received from the discovery service. 1: static void Main(string[] args) 2: { 3: Console.WriteLine("Say something..."); 4: var content = Console.ReadLine(); 5: while (!string.IsNullOrWhiteSpace(content)) 6: { 7: Console.WriteLine("Finding the service endpoint..."); 8: var address = FindServiceEndpoint(); 9: if (address == null) 10: { 11: Console.WriteLine("There is no endpoint matches the criteria."); 12: } 13: else 14: { 15: Console.WriteLine("Found the endpoint {0}", address.Uri); 16:  17: var factory = new ChannelFactory<IStringService>(new NetTcpBinding(), address); 18: factory.Opened += (sender, e) => 19: { 20: Console.WriteLine("Connecting to {0}.", factory.Endpoint.ListenUri); 21: }; 22: var proxy = factory.CreateChannel(); 23: using (proxy as IDisposable) 24: { 25: Console.WriteLine("ToUpper: {0} => {1}", content, proxy.ToUpper(content)); 26: } 27: } 28:  29: Console.WriteLine("Say something..."); 30: content = Console.ReadLine(); 31: } 32: } Similarly, the discovery service probe endpoint and binding were defined in the configuration file. 
1: <?xml version="1.0"?> 2: <configuration> 3: <startup> 4: <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> 5: </startup> 6: <appSettings> 7: <add key="announcementEndpointAddress" value="net.tcp://localhost:10010/announcement"/> 8: <add key="probeEndpointAddress" value="net.tcp://localhost:10011/probe"/> 9: <add key="bindingType" value="System.ServiceModel.NetTcpBinding, System.ServiceModel, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"/> 10: </appSettings> 11: </configuration> OK, now let’s have a test. Firstly start the discovery service, and then start our discoverable service. When it started it will announced to the discovery service and registered its endpoint into the repository, which is the local dictionary. And then start the client and type something. As you can see the client asked the discovery service for the endpoint and then establish the connection to the discoverable service. And more interesting, do NOT close the client console but terminate the discoverable service but press the enter key. This will make the service send the offline message to the discovery service. Then start the discoverable service again. Since we made it use a different address each time it started, currently it should be hosted on another address. If we enter something in the client we could see that it asked the discovery service and retrieve the new endpoint, and connect the the service.   Summary In this post I discussed the benefit of using the discovery service and the procedures of service announcement and probe. I also demonstrated how to leverage the WCF Discovery feature in WCF 4.0 to build a simple managed discovery service. For test purpose, in this example I used the in memory dictionary as the discovery endpoint metadata repository. And when finding I also just return the first matched endpoint back. I also hard coded the bindings between the discoverable service and the client. In next post I will show you how to solve the problem mentioned above, as well as some additional feature for production usage. You can download the code here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
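As a usage note, the probe-then-connect steps above can be folded into one small generic helper. The sketch below is distilled from the post's client code and is only illustrative: the helper name and shape are assumptions, and, as the post points out, the binding is not part of the discovery metadata, so the caller still has to supply one that matches the service side.

```csharp
using System.Linq;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Discovery;

public static class DiscoveryHelper
{
    // Probes the discovery service for TContract and returns a channel to the
    // first matching endpoint, or null when nothing matches.
    public static TContract CreateChannel<TContract>(DiscoveryEndpoint probeEndpoint,
                                                     Binding serviceBinding)
        where TContract : class
    {
        FindResponse response;
        using (var discoveryClient = new DiscoveryClient(probeEndpoint))
        {
            response = discoveryClient.Find(new FindCriteria(typeof(TContract)));
        }

        var metadata = response.Endpoints.FirstOrDefault();
        if (metadata == null)
        {
            return null;
        }

        // The caller owns the returned channel and should close or dispose it.
        var factory = new ChannelFactory<TContract>(serviceBinding, metadata.Address);
        return factory.CreateChannel();
    }
}
```

With such a helper, the client loop above would reduce to something like var proxy = DiscoveryHelper.CreateChannel<IStringService>(discoveryEndpoint, new NetTcpBinding());, assuming the same net.tcp configuration as in the sample.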

    Read the article

  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
    In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc.. I also listed a module named “Wind (zh-CN)”, which is created by one of my friend, Jeff Zhao (zh-CN). Now I would like to use a separated post to introduce this module since I feel it brings a new async programming style in not only Node.js but JavaScript world. If you know or heard about the new feature in C# 5.0 called “async and await”, or you learnt F#, you will find the “Wind” brings the similar async programming experience in JavaScript. By using “Wind”, we can write async code that looks like the sync code. The callbacks, async stats and exceptions will be handled by “Wind” automatically and transparently.   What’s the Problem: Dense “Callback” Phobia Let’s firstly back to my second post in this series. As I mentioned in that post, when we wanted to read some records from SQL Server we need to open the database connection, and then execute the query. In Node.js all IO operation are designed as async callback pattern which means when the operation was done, it will invoke a function which was taken from the last parameter. For example the database connection opening code would be like this. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: } 8: }); And then if we need to query the database the code would be like this. It nested in the previous function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: } 14: }; 15: } 16: }); Assuming if we need to copy some data from this database to another then we need to open another connection and execute the command within the function under the query function. 1: sql.open(connectionString, function(error, conn) { 2: if(error) { 3: // some error handling code 4: } 5: else { 6: // connection opened successfully 7: conn.queryRaw(command, function(error, results) { 8: if(error) { 9: // failed to execute this command 10: } 11: else { 12: // records retrieved successfully 13: target.open(targetConnectionString, function(error, t_conn) { 14: if(error) { 15: // connect failed 16: } 17: else { 18: t_conn.queryRaw(copy_command, function(error, results) { 19: if(error) { 20: // copy failed 21: } 22: else { 23: // and then, what do you want to do now... 24: } 25: }; 26: } 27: }; 28: } 29: }; 30: } 31: }); This is just an example. In the real project the logic would be more complicated. This means our application might be messed up and the business process will be fragged by many callback functions. I would like call this “Dense Callback Phobia”. This might be a challenge how to make code straightforward and easy to read, something like below. 
1: try 2: { 3: // open source connection 4: var s_conn = sqlConnect(s_connectionString); 5: // retrieve data 6: var results = sqlExecuteCommand(s_conn, s_command); 7: 8: // open target connection 9: var t_conn = sqlConnect(t_connectionString); 10: // prepare the copy command 11: var t_command = getCopyCommand(results); 12: // execute the copy command 13: sqlExecuteCommand(s_conn, t_command); 14: } 15: catch (ex) 16: { 17: // error handling 18: }   What’s the Problem: Sync-styled Async Programming Similar as the previous problem, the callback-styled async programming model makes the upcoming operation as a part of the current operation, and mixed with the error handling code. So it’s very hard to understand what on earth this code will do. And since Node.js utilizes non-blocking IO mode, we cannot invoke those operations one by one, as they will be executed concurrently. For example, in this post when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just insert the data into table storage one by one and then print the “Finished” message, I will see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and print operation executed synchronously I introduced a module named “async” and the code was changed as below. 1: async.forEach(results.rows, 2: function (row, callback) { 3: var resource = { 4: "PartitionKey": row[1], 5: "RowKey": row[0], 6: "Value": row[2] 7: }; 8: client.insertEntity(tableName, resource, function (error) { 9: if (error) { 10: callback(error); 11: } 12: else { 13: console.log("entity inserted."); 14: callback(null); 15: } 16: }); 17: }, 18: function (error) { 19: if (error) { 20: error["target"] = "insertEntity"; 21: res.send(500, error); 22: } 23: else { 24: console.log("all done."); 25: res.send(200, "Done!"); 26: } 27: }); It ensured that the “Finished” message will be printed when all table entities had been inserted. But it cannot promise that the records will be inserted in sequence. It might be another challenge to make the code looks like in sync-style? 1: try 2: { 3: forEach(row in rows) { 4: var entity = { /* ... */ }; 5: tableClient.insert(tableName, entity); 6: } 7:  8: console.log("Finished"); 9: } 10: catch (ex) { 11: console.log(ex); 12: }   How “Wind” Helps “Wind” is a JavaScript library which provides the control flow with plain JavaScript for asynchronous programming (and more) without additional pre-compiling steps. It’s available in NPM so that we can install it through “npm install wind”. Now let’s create a very simple Node.js application as the example. This application will take some website URLs from the command arguments and tried to retrieve the body length and print them in console. Then at the end print “Finish”. I’m going to use “request” module to make the HTTP call simple so I also need to install by the command “npm install request”. The code would be like this. 
1: var request = require("request"); 2:  3: // get the urls from arguments, the first two arguments are `node.exe` and `fetch.js` 4: var args = process.argv.splice(2); 5:  6: // main function 7: var main = function() { 8: for(var i = 0; i < args.length; i++) { 9: // get the url 10: var url = args[i]; 11: // send the http request and try to get the response and body 12: request(url, function(error, response, body) { 13: if(!error && response.statusCode == 200) { 14: // log the url and the body length 15: console.log( 16: "%s: %d.", 17: response.request.uri.href, 18: body.length); 19: } 20: else { 21: // log error 22: console.log(error); 23: } 24: }); 25: } 26: 27: // finished 28: console.log("Finished"); 29: }; 30:  31: // execute the main function 32: main(); Let’s execute this application. (I made them in multi-lines for better reading.) 1: node fetch.js 2: "http://www.igt.com/us-en.aspx" 3: "http://www.igt.com/us-en/games.aspx" 4: "http://www.igt.com/us-en/cabinets.aspx" 5: "http://www.igt.com/us-en/systems.aspx" 6: "http://www.igt.com/us-en/interactive.aspx" 7: "http://www.igt.com/us-en/social-gaming.aspx" 8: "http://www.igt.com/support.aspx" Below is the output. As you can see the finish message was printed at the beginning, and the pages’ length retrieved in a different order than we specified. This is because in this code the request command, console logging command are executed asynchronously and concurrently. Now let’s introduce “Wind” to make them executed in order, which means it will request the websites one by one, and print the message at the end.   First of all we need to import the “Wind” package and make sure the there’s only one global variant named “Wind”, and ensure it’s “Wind” instead of “wind”. 1: var Wind = require("wind");   Next, we need to tell “Wind” which code will be executed asynchronously so that “Wind” can control the execution process. In this case the “request” operation executed asynchronously so we will create a “Task” by using a build-in helps function in “Wind” named Wind.Async.Task.create. 1: var requestBodyLengthAsync = function(url) { 2: return Wind.Async.Task.create(function(t) { 3: request(url, function(error, response, body) { 4: if(error || response.statusCode != 200) { 5: t.complete("failure", error); 6: } 7: else { 8: var data = 9: { 10: uri: response.request.uri.href, 11: length: body.length 12: }; 13: t.complete("success", data); 14: } 15: }); 16: }); 17: }; The code above created a “Task” from the original request calling code. In “Wind” a “Task” means an operation will be finished in some time in the future. A “Task” can be started by invoke its start() method, but no one knows when it actually will be finished. The Wind.Async.Task.create helped us to create a task. The only parameter is a function where we can put the actual operation in, and then notify the task object it’s finished successfully or failed by using the complete() method. In the code above I invoked the request method. If it retrieved the response successfully I set the status of this task as “success” with the URL and body length. If it failed I set this task as “failure” and pass the error out.   Next, we will change the main() function. In “Wind” if we want a function can be controlled by Wind we need to mark it as “async”. This should be done by using the code below. 
1: var main = eval(Wind.compile("async", function() { 2: })); When the application is running, Wind will detect “eval(Wind.compile(“async”, function” and generate an anonymous code from the body of this original function. Then the application will run the anonymous code instead of the original one. In our example the main function will be like this. 1: var main = eval(Wind.compile("async", function() { 2: for(var i = 0; i < args.length; i++) { 3: try 4: { 5: var result = $await(requestBodyLengthAsync(args[i])); 6: console.log( 7: "%s: %d.", 8: result.uri, 9: result.length); 10: } 11: catch (ex) { 12: console.log(ex); 13: } 14: } 15: 16: console.log("Finished"); 17: })); As you can see, when I tried to request the URL I use a new command named “$await”. It tells Wind, the operation next to $await will be executed asynchronously, and the main thread should be paused until it finished (or failed). So in this case, my application will be pause when the first response was received, and then print its body length, then try the next one. At the end, print the finish message.   Finally, execute the main function. The full code would be like this. 1: var request = require("request"); 2: var Wind = require("wind"); 3:  4: var args = process.argv.splice(2); 5:  6: var requestBodyLengthAsync = function(url) { 7: return Wind.Async.Task.create(function(t) { 8: request(url, function(error, response, body) { 9: if(error || response.statusCode != 200) { 10: t.complete("failure", error); 11: } 12: else { 13: var data = 14: { 15: uri: response.request.uri.href, 16: length: body.length 17: }; 18: t.complete("success", data); 19: } 20: }); 21: }); 22: }; 23:  24: var main = eval(Wind.compile("async", function() { 25: for(var i = 0; i < args.length; i++) { 26: try 27: { 28: var result = $await(requestBodyLengthAsync(args[i])); 29: console.log( 30: "%s: %d.", 31: result.uri, 32: result.length); 33: } 34: catch (ex) { 35: console.log(ex); 36: } 37: } 38: 39: console.log("Finished"); 40: })); 41:  42: main().start();   Run our new application. At the beginning we will see the compiled and generated code by Wind. Then we can see the pages were requested one by one, and at the end the finish message was printed. Below is the code Wind generated for us. As you can see the original code, the output code were shown. 
1: // Original: 2: function () { 3: for(var i = 0; i < args.length; i++) { 4: try 5: { 6: var result = $await(requestBodyLengthAsync(args[i])); 7: console.log( 8: "%s: %d.", 9: result.uri, 10: result.length); 11: } 12: catch (ex) { 13: console.log(ex); 14: } 15: } 16: 17: console.log("Finished"); 18: } 19:  20: // Compiled: 21: /* async << function () { */ (function () { 22: var _builder_$0 = Wind.builders["async"]; 23: return _builder_$0.Start(this, 24: _builder_$0.Combine( 25: _builder_$0.Delay(function () { 26: /* var i = 0; */ var i = 0; 27: /* for ( */ return _builder_$0.For(function () { 28: /* ; i < args.length */ return i < args.length; 29: }, function () { 30: /* ; i ++) { */ i ++; 31: }, 32: /* try { */ _builder_$0.Try( 33: _builder_$0.Delay(function () { 34: /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) { 35: /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length); 36: return _builder_$0.Normal(); 37: }); 38: }), 39: /* } catch (ex) { */ function (ex) { 40: /* console.log(ex); */ console.log(ex); 41: return _builder_$0.Normal(); 42: /* } */ }, 43: null 44: ) 45: /* } */ ); 46: }), 47: _builder_$0.Delay(function () { 48: /* console.log("Finished"); */ console.log("Finished"); 49: return _builder_$0.Normal(); 50: }) 51: ) 52: ); 53: /* } */ })   How Wind Works Someone may raise a big concern when you find I utilized “eval” in my code. Someone may assume that Wind utilizes “eval” to execute some code dynamically while “eval” is very low performance. But I would say, Wind does NOT use “eval” to run the code. It only use “eval” as a flag to know which code should be compiled at runtime. When the code was firstly been executed, Wind will check and find “eval(Wind.compile(“async”, function”. So that it knows this function should be compiled. Then it utilized parse-js to analyze the inner JavaScript and generated the anonymous code in memory. Then it rewrite the original code so that when the application was running it will use the anonymous one instead of the original one. Since the code generation was done at the beginning of the application was started, in the future no matter how long our application runs and how many times the async function was invoked, it will use the generated code, no need to generate again. So there’s no significant performance hurt when using Wind.   Wind in My Previous Demo Let’s adopt Wind into one of my previous demonstration and to see how it helps us to make our code simple, straightforward and easy to read and understand. In this post when I implemented the functionality that copied the records from my WASD to table storage, the logic would be like this. 1, Open database connection. 2, Execute a query to select all records from the table. 3, Recreate the table in Windows Azure table storage. 4, Create entities from each of the records retrieved previously, and then insert them into table storage. 5, Finally, show message as the HTTP response. But as the image below, since there are so many callbacks and async operations, it’s very hard to understand my logic from the code. Now let’s use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files into project and mark them as “Copy always”. Add the Wind package into the source code. Pay attention to the variant name, you must use “Wind” instead of “wind”. 
1: var express = require("express"); 2: var async = require("async"); 3: var sql = require("node-sqlserver"); 4: var azure = require("azure"); 5: var Wind = require("wind"); Now we need to create some async functions by using Wind. All async functions should be wrapped so that it can be controlled by Wind which are open database, retrieve records, recreate table (delete and create) and insert entity in table. Below are these new functions. All of them are created by using Wind.Async.Task.create. 1: sql.openAsync = function (connectionString) { 2: return Wind.Async.Task.create(function (t) { 3: sql.open(connectionString, function (error, conn) { 4: if (error) { 5: t.complete("failure", error); 6: } 7: else { 8: t.complete("success", conn); 9: } 10: }); 11: }); 12: }; 13:  14: sql.queryAsync = function (conn, query) { 15: return Wind.Async.Task.create(function (t) { 16: conn.queryRaw(query, function (error, results) { 17: if (error) { 18: t.complete("failure", error); 19: } 20: else { 21: t.complete("success", results); 22: } 23: }); 24: }); 25: }; 26:  27: azure.recreateTableAsync = function (tableName) { 28: return Wind.Async.Task.create(function (t) { 29: client.deleteTable(tableName, function (error, successful, response) { 30: console.log("delete table finished"); 31: client.createTableIfNotExists(tableName, function (error, successful, response) { 32: console.log("create table finished"); 33: if (error) { 34: t.complete("failure", error); 35: } 36: else { 37: t.complete("success", null); 38: } 39: }); 40: }); 41: }); 42: }; 43:  44: azure.insertEntityAsync = function (tableName, entity) { 45: return Wind.Async.Task.create(function (t) { 46: client.insertEntity(tableName, entity, function (error, entity, response) { 47: if (error) { 48: t.complete("failure", error); 49: } 50: else { 51: t.complete("success", null); 52: } 53: }); 54: }); 55: }; Then in order to use these functions we will create a new function which contains all steps for data copying. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: } 4: catch (ex) { 5: console.log(ex); 6: res.send(500, "Internal error."); 7: } 8: })); Let’s execute steps one by one with the “$await” keyword introduced by Wind so that it will be invoked in sequence. First is to open the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: } 7: catch (ex) { 8: console.log(ex); 9: res.send(500, "Internal error."); 10: } 11: })); Then retrieve all records from the database connection. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: } 10: catch (ex) { 11: console.log(ex); 12: res.send(500, "Internal error."); 13: } 14: })); After recreated the table, we need to create the entities and insert them into table storage. 
1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: } 24: } 25: catch (ex) { 26: console.log(ex); 27: res.send(500, "Internal error."); 28: } 29: })); Finally, send response back to the browser. 1: var copyRecords = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage one by one 14: for (var i = 0; i < results.rows.length; i++) { 15: var entity = { 16: "PartitionKey": results.rows[i][1], 17: "RowKey": results.rows[i][0], 18: "Value": results.rows[i][2] 19: }; 20: $await(azure.insertEntityAsync(tableName, entity)); 21: console.log("entity inserted"); 22: } 23: // send response 24: console.log("all done"); 25: res.send(200, "All done!"); 26: } 27: } 28: catch (ex) { 29: console.log(ex); 30: res.send(500, "Internal error."); 31: } 32: })); If we compared with the previous code we will find now it became more readable and much easy to understand. It’s very easy to know what this function does even though without any comments. When user go to URL “/was/copyRecords” we will execute the function above. The code would be like this. 1: app.get("/was/copyRecords", function (req, res) { 2: copyRecords(req, res).start(); 3: }); And below is the logs printed in local compute emulator console. As we can see the functions executed one by one and then finally the response back to me browser.   Scaffold Functions in Wind Wind provides not only the async flow control and compile functions, but many scaffold methods as well. We can build our async code more easily by using them. I’m going to introduce some basic scaffold functions here. In the code above I created some functions which wrapped from the original async function such as open database, create table, etc.. All of them are very similar, created a task by using Wind.Async.Task.create, return error or result object through Task.complete function. In fact, Wind provides some functions for us to create task object from the original async functions. If the original async function only has a callback parameter, we can use Wind.Async.Binding.fromCallback method to get the task object directly. For example the code below returned the task object which wrapped the file exist check function. 
1: var Wind = require("wind"); 2: var fs = require("fs"); 3:  4: fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists); In Node.js a very popular async function pattern is that, the first parameter in the callback function represent the error object, and the other parameters is the return values. In this case we can use another build-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created from the code below. 1: sql.openAsync = Wind.Async.Binding.fromStandard(sql.open); 2:  3: /* 4: sql.openAsync = function (connectionString) { 5: return Wind.Async.Task.create(function (t) { 6: sql.open(connectionString, function (error, conn) { 7: if (error) { 8: t.complete("failure", error); 9: } 10: else { 11: t.complete("success", conn); 12: } 13: }); 14: }); 15: }; 16: */ When I was testing the scaffold functions under Wind.Async.Binding I found for some functions, such as the Azure SDK insert entity function, cannot be processed correctly. So I personally suggest writing the wrapped method manually.   Another scaffold method in Wind is the parallel tasks coordination. In this example, the steps of open database, retrieve records and recreated table should be invoked one by one, but it can be executed in parallel when copying data from database to table storage. In Wind there’s a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task. It will be returned only when all tasks had been completed, or any errors occurred. For example in the code below I used the Task.whenAll to make all copy operation executed at the same time. 1: var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) { 2: try { 3: // connect to the windows azure sql database 4: var conn = $await(sql.openAsync(connectionString)); 5: console.log("connection opened"); 6: // retrieve all records from database 7: var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]")); 8: console.log("records selected. count = %d", results.rows.length); 9: if (results.rows.length > 0) { 10: // recreate the table 11: $await(azure.recreateTableAsync(tableName)); 12: console.log("table created"); 13: // insert records in table storage in parallal 14: var tasks = new Array(results.rows.length); 15: for (var i = 0; i < results.rows.length; i++) { 16: var entity = { 17: "PartitionKey": results.rows[i][1], 18: "RowKey": results.rows[i][0], 19: "Value": results.rows[i][2] 20: }; 21: tasks[i] = azure.insertEntityAsync(tableName, entity); 22: } 23: $await(Wind.Async.Task.whenAll(tasks)); 24: // send response 25: console.log("all done"); 26: res.send(200, "All done!"); 27: } 28: } 29: catch (ex) { 30: console.log(ex); 31: res.send(500, "Internal error."); 32: } 33: })); 34:  35: app.get("/was/copyRecordsInParallel", function (req, res) { 36: copyRecordsInParallel(req, res).start(); 37: });   Besides the task creation and coordination, Wind supports the cancellation solution so that we can send the cancellation signal to the tasks. It also includes exception solution which means any exceptions will be reported to the caller function.   Summary In this post I introduced a Node.js module named Wind, which created by my friend Jeff Zhao. As you can see, different from other async library and framework, adopted the idea from F# and C#, Wind utilizes runtime code generation technology to make it more easily to write async, callback-based functions in a sync-style way. 
By using Wind there are almost no callbacks, and the code is very easy to understand. Wind is still being developed and improved. There might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by - Email: [email protected] - Group: https://groups.google.com/d/forum/windjs - GitHub: https://github.com/JeffreyZhao/wind/issues   Source code can be downloaded here.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • MySQL 5.5 - Lost connection to MySQL server during query

    - by bully
    I have an Ubuntu 12.04 LTS server running at a german hoster (virtualized system). # uname -a Linux ... 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:25:57 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux I want to migrate a Web CMS system, called Contao. It's not my first migration, but my first migration having connection issues with mysql. Migration went successfully, I have the same Contao version running (it's more or less just copy / paste). For the database behind, I did: apt-get install mysql-server phpmyadmin I set a root password and added a user for the CMS which has enough rights on its own database (and only its database) for doing the stuff it has to do. Data import via phpmyadmin worked just fine. I can access the backend of the CMS (which needs to deal with the database already). If I try to access the frontend now, I get the following error: Fatal error: Uncaught exception Exception with message Query error: Lost connection to MySQL server during query (<query statement here, nothing special, just a select>) thrown in /var/www/system/libraries/Database.php on line 686 (Keep in mind: I can access mysql with phpmyadmin and through the backend, working like a charme, it's just the frontend call causing errors). If I spam F5 in my browser I can sometimes even kill the mysql deamon. If I run # mysqld --log-warnings=2 I get this: ... 120921 7:57:31 [Note] mysqld: ready for connections. Version: '5.5.24-0ubuntu0.12.04.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 05:57:37 UTC - mysqld got signal 4 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346679 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f1485db3b20 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f1480041e60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f1483b96459] mysqld(handle_fatal_signal+0x483)[0x7f1483a5c1d3] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f1482797cb0] /lib/x86_64-linux-gnu/libm.so.6(+0x42e11)[0x7f14821cae11] mysqld(_ZN10SQL_SELECT17test_quick_selectEP3THD6BitmapILj64EEyyb+0x1368)[0x7f1483b26cb8] mysqld(+0x33116a)[0x7f148397916a] mysqld(_ZN4JOIN8optimizeEv+0x558)[0x7f148397d3e8] mysqld(_Z12mysql_selectP3THDPPP4ItemP10TABLE_LISTjR4ListIS1_ES2_jP8st_orderSB_S2_SB_yP13select_resultP18st_select_lex_unitP13st_select_lex+0xdd)[0x7f148397fd7d] mysqld(_Z13handle_selectP3THDP3LEXP13select_resultm+0x17c)[0x7f1483985d2c] mysqld(+0x2f4524)[0x7f148393c524] mysqld(_Z21mysql_execute_commandP3THD+0x293e)[0x7f14839451de] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f1483948bef] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1365)[0x7f148394a025] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f14839ec7cd] mysqld(handle_one_connection+0x50)[0x7f14839ec830] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f148278fe9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f1481eba4bd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f1464004b60): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED From /var/log/syslog: Sep 21 07:17:01 s16477249 CRON[23855]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Sep 21 07:18:51 s16477249 kernel: [231923.349159] type=1400 audit(1348204731.333:70): apparmor="STATUS" operation="profile_replace" name="/usr/sbin/mysqld" pid=23946 comm="apparmor_parser" Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23990]: Upgrading MySQL tables if necessary. Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysql' as: /usr/bin/mysql Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[23993]: This installation of MySQL is already upgraded to 5.5.24, use --force if you still need to run mysql_upgrade Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24004]: Checking for insecure root accounts. Sep 21 07:18:53 s16477249 /etc/mysql/debian-start[24009]: Triggering myisam-recover for all MyISAM tables I'm using MyISAM tables all over, nothing with InnoDB there. Starting / stopping mysql is done via sudo service mysql start sudo service mysql stop After using google a little bit, I experimented a little bit with timeouts, correct socket path in the /etc/mysql/my.cnf file, but nothing helped. There are some old (from 2008) Gentoo bugs, where re-compiling just solved the problem. I already re-installed mysql via: sudo apt-get remove mysql-server mysql-common sudo apt-get autoremove sudo apt-get install mysql-server without any results. This is the first time I'm running into this problem, and I'm not very experienced with this kind of mysql 'administration'. So mainly, I want to know if anyone of you could help me out please :) Is it a mysql bug? Is something broken in the Ubuntu repositories? Is this one of those misterious 'use-tcp-connection-instead-of-socket-stuff-because-there-are-problems-on-virtualized-machines-with-sockets'-problem? Or am I completly on the wrong way and I just miss-configured something? 
Remember, phpmyadmin and access to the backend (which uses the database, too) are just fine. Maybe it's something with Apache? What can I do? Any help is appreciated, so thanks in advance :)

    Read the article

  • Simple-Talk development: a quick history lesson

    - by Michael Williamson
    Up until a few months ago, Simple-Talk ran on a pure .NET stack, with IIS as the web server and SQL Server as the database. Unfortunately, the platform for the site hadn’t quite gotten the love and attention it deserved. On the one hand, in the words of our esteemed editor Tony “I’d consider the current platform to be a “success”; it cost $10K, has lasted for 6 years, was finished, end to end in 6 months, and although we moan about it has got us quite a long way.” On the other hand, it was becoming increasingly clear that it needed some serious work. Among other issues, we had authors that wouldn’t blog because our current blogging platform, Community Server, was too painful for them to use. Forgetting about Simple-Talk for a moment, if you ask somebody what blogging platform they’d choose, the odds are they’d say WordPress. Regardless of its technical merits, it’s probably the most popular blogging platform, and it certainly seemed easier to use than Community Server. The issue was that WordPress is normally hosted on a Linux stack running PHP, Apache and MySQL — quite a difference from our Microsoft technology stack. We certainly didn’t want to rewrite the entire site — we just wanted a better blogging platform, with the rest of the existing, legacy site left as is. At a very high level, Simple-Talk’s technical design was originally very straightforward: when your browser sends an HTTP request to Simple-Talk, IIS (the web server) takes the request, does some work, and sends back a response. In order to keep the legacy site running, except with WordPress running the blogs, a different design is called for. We now use nginx as a reverse-proxy, which can then delegate requests to the appropriate application: So, when your browser sends a request to Simple-Talk, nginx takes that request and checks which part of the site you’re trying to access. Most of the time, it just passes the request along to IIS, which can then respond in much the same way it always has. However, if your request is for the blogs, then nginx delegates the request to WordPress. Unfortunately, as simple as that diagram looks, it hides an awful lot of complexity. In particular, the legacy site running on IIS was made up of four .NET applications. I’ve already mentioned one of these applications, Community Server, which handled the old blogs as well as managing membership and the forums. We have a couple of other applications to manage both our newsletters and our articles, and our own custom application to do some of the rendering on the site, such as the front page and the articles. When I say that it was made up of four .NET applications, this might conjure up an image in your mind of how they fit together: You might imagine four .NET applications, each with their own database, communicating over well-defined APIs. Sadly, reality was a little disappointing: We had four .NET applications that all ran on the same database. Worse still, there were many queries that happily joined across tables from multiple applications, meaning that each application was heavily dependent on the exact data schema that each other application used. Add to this that many of the queries were at least dozens of lines long, and practically identical to other queries except in a few key spots, and we can see that attempting to replace one component of the system would be more than a little tricky. However, the problems with the old system do give us a good place to start thinking about desirable qualities from any changes to the platform. 
Specifically:
- Maintainability — the tight coupling between each .NET application made it difficult to update any one application without also having to make changes elsewhere
- Replaceability — the tight coupling also meant that replacing one component wouldn’t be straightforward, especially if it wasn’t on a similar Microsoft stack. We’d like to be able to replace different parts without having to modify the existing codebase extensively
- Reusability — we’d like to be able to combine the different pieces of the system in different ways for different sites
- Repeatable deployments — rather than having to deploy the site manually with a long list of instructions, we should be able to deploy the entire site with a single command, allowing you to create a new instance of the site easily whether on production, staging servers, test servers or your own local machine
- Testability — if we can deploy the site with a single command, and each part of the site is no longer dependent on the specifics of how every other part of the site works, we can begin to run automated tests against the site, and against individual parts, both to prevent regressions and to do a little test-driven development
In the next part, I’ll describe the high-level architecture we now have that hopefully brings us a little closer to these five traits.
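To make the routing described above concrete, a minimal nginx reverse-proxy sketch could look something like the following; the hostnames, ports and the blog path prefix are my own assumptions for illustration, since the article does not give the actual configuration.

    # Hypothetical sketch of the nginx routing described above.
    # Assumes WordPress listens on 8080 and the legacy IIS apps on 8081.
    server {
        listen 80;
        server_name www.simple-talk.com;

        # Blog traffic is delegated to WordPress
        location /blogs/ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        # Everything else passes through to the existing IIS applications
        location / {
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }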

    Read the article

  • NoSQL Java API for MySQL Cluster: Questions & Answers

    - by Mat Keep
The MySQL Cluster engineering team recently ran a live webinar, available now on-demand, demonstrating the ClusterJ and ClusterJPA NoSQL APIs for MySQL Cluster, and how these can be used in building real-time, high scale Java-based services that require continuous availability. Attendees asked a number of great questions during the webinar, and I thought it would be useful to share those here, so others are also able to learn more about the Java NoSQL APIs. First, a little bit about why we developed these APIs and why they are interesting to Java developers.
ClusterJ and ClusterJPA
ClusterJ is a Java interface to MySQL Cluster that provides either a static or dynamic domain object model, similar to the data model used by JDO, JPA, and Hibernate. A simple API gives users extremely high performance for common operations: insert, delete, update, and query. ClusterJPA works with ClusterJ to extend functionality, including:
- Persistent classes
- Relationships
- Joins in queries
- Lazy loading
- Table and index creation from object model
By eliminating data transformations via SQL, users get lower data access latency and higher throughput. In addition, Java developers have a more natural programming method to directly manage their data, with a complete, feature-rich solution for Object/Relational Mapping. As a result, the development of Java applications is simplified with faster development cycles resulting in accelerated time to market for new services.
MySQL Cluster offers multiple NoSQL APIs alongside Java:
- Memcached for a persistent, high performance, write-scalable Key/Value store
- HTTP/REST via an Apache module
- C++ via the NDB API for the lowest absolute latency
Developers can use SQL as well as NoSQL APIs for access to the same data set via multiple query patterns – from simple Primary Key lookups or inserts to complex cross-shard JOINs using Adaptive Query Localization. Marrying NoSQL and SQL access to an ACID-compliant database offers developers a number of benefits. MySQL Cluster’s distributed, shared-nothing architecture with auto-sharding and real time performance makes it a great fit for workloads requiring high volume OLTP. Users also get the added flexibility of being able to run real-time analytics across the same OLTP data set for real-time business insight. OK – hopefully you now have a better idea of why ClusterJ and JPA are available. Now, for the Q&A.
Q & A
Q. Why would I use Connector/J vs. ClusterJ?
A. Partly it's a question of whether you prefer to work with SQL (Connector/J) or objects (ClusterJ). Performance of ClusterJ will be better as there is no need to pass through the MySQL Server. A ClusterJ operation can only act on a single table (e.g. no joins) - ClusterJPA extends that capability.
Q. Can I mix different APIs (ie ClusterJ, Connector/J) in our application for different query types?
A. Yes. You can mix and match all of the API types, SQL, JDBC, ODBC, ClusterJ, Memcached, REST, C++. They all access the exact same data in the data nodes. Update through one API and new data is instantly visible to all of the others.
Q. How many TCP connections would a SessionFactory instance create for a cluster of 8 data nodes?
A. SessionFactory has a connection to the mgmd (management node) but otherwise is just a vehicle to create Sessions. Without using connection pooling, a SessionFactory will have one connection open with each data node. Using optional connection pooling allows multiple connections from the SessionFactory to increase throughput.
Q.
Can you give details of how ClusterJ optimizes sharding to enhance performance of distributed query processing?
A. Each data node in a cluster runs a Transaction Coordinator (TC), which begins and ends the transaction, but also serves as a resource to operate on the result rows. While an API node (such as a ClusterJ process) can send queries to any TC/data node, there are performance gains if the TC is where most of the result data is stored. ClusterJ computes the shard (partition) key to choose the data node where the row resides as the TC.
Q. What happens if we perform two primary key lookups within the same transaction? Are they sent to the data node in one transaction?
A. ClusterJ will send identical PK lookups to the same data node.
Q. How is distributed query processing handled by MySQL Cluster?
A. If the data is split between data nodes then all of the information will be transparently combined and passed back to the application. The session will connect to a data node - typically by hashing the primary key - which then interacts with its neighboring nodes to collect the data needed to fulfil the query.
Q. Can I use Foreign Keys with MySQL Cluster?
A. Support for Foreign Keys is included in the MySQL Cluster 7.3 Early Access release.
Summary
The NoSQL Java APIs are packaged with MySQL Cluster, available for download here so feel free to take them for a spin today!
Key Resources
MySQL Cluster on-line demo
MySQL ClusterJ and JPA On-demand webinar
MySQL ClusterJ and JPA documentation
MySQL ClusterJ and JPA whitepaper and tutorial
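To make the ClusterJ programming model discussed above a little more concrete, here is a minimal, hypothetical sketch of an annotated domain interface and a session performing the basic insert, find, update and delete operations. The table name ("towns"), its columns and the connect string are assumptions for illustration only; they are not taken from the webinar.

    import com.mysql.clusterj.ClusterJHelper;
    import com.mysql.clusterj.Session;
    import com.mysql.clusterj.SessionFactory;
    import com.mysql.clusterj.annotation.Column;
    import com.mysql.clusterj.annotation.PersistenceCapable;
    import com.mysql.clusterj.annotation.PrimaryKey;
    import java.util.Properties;

    public class ClusterJSketch {

        // Domain interface mapped to a hypothetical 'towns' table
        @PersistenceCapable(table = "towns")
        public interface Town {
            @PrimaryKey
            int getId();
            void setId(int id);

            @Column(name = "name")
            String getName();
            void setName(String name);
        }

        public static void main(String[] args) {
            // Connection properties for a hypothetical cluster and database
            Properties props = new Properties();
            props.put("com.mysql.clusterj.connectstring", "localhost:1186");
            props.put("com.mysql.clusterj.database", "test");

            SessionFactory factory = ClusterJHelper.getSessionFactory(props);
            Session session = factory.getSession();

            // Insert: create and persist a new instance
            Town town = session.newInstance(Town.class);
            town.setId(1);
            town.setName("Maidenhead");
            session.persist(town);

            // Primary key lookup
            Town found = session.find(Town.class, 1);
            System.out.println(found.getName());

            // Update and delete
            found.setName("Reading");
            session.updatePersistent(found);
            session.deletePersistent(found);

            session.close();
        }
    }

The point of the sketch is that the domain interface maps straight onto a cluster table, so these common operations go to the data nodes without passing through the MySQL Server layer.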

    Read the article

  • Monitoring Events in your BPEL Runtime - RSS Feeds?

    - by Ramkumar Menon
@10g - It had been a while since I'd tried something different, so here's what I did this week! Whenever our Developers deployed processes to the BPEL runtime, or perhaps when a process gets turned off due to connectivity issues, or maybe someone retired a process, I needed to know. So here's what I did. Step 1: Downloaded the Quartz libraries and went through the documentation to understand what it takes to schedule a recurring job. Step 2: Cranked out two components using Oracle JDeveloper. [Within a new Web Project] a) A simple Java class named FeedUpdater that implements org.quartz.Job. All this class does is connect to your BPEL Runtime [via opmn:ormi] and fetch all events that occurred in the last "n" minutes. Events? - If it doesn't ring a bell - it's right there on the BPEL Console. If you click on "Administration > Process Log" - what you see are events. The API to retrieve the events is
//get the locator reference for the domain you are interested in.
Locator l = ....
//Predicate to retrieve events for last "n" minutes
WhereCondition wc = new WhereCondition(...)
//get all those events you needed.
BPELProcessEvent[] events = l.listProcessEvents(wc);
After you get all these events, write them out into an RSS feed XML structure and stream it into a file that resides either in your Apache htdocs, or wherever it can be accessed via HTTP. You can read all about RSS 2.0 here. At a high level, here is what it looks like.
<?xml version = '1.0' encoding = 'UTF-8'?>
<rss version="2.0">
  <channel>
    <title>Live Updates from the Development Environment</title>
    <link>http://soadev.myserver.com/feeds/</link>
    <description>Live Updates from the Development Environment</description>
    <lastBuildDate>Fri, 19 Nov 2010 01:03:00 PST</lastBuildDate>
    <language>en-us</language>
    <ttl>1</ttl>
    <item>
      <guid>1290213724692</guid>
      <title>Process compiled</title>
      <link>http://soadev.myserver.com/BPELConsole/mdm_product/administration.jsp?mode=processLog&amp;processName=&amp;dn=all&amp;eventType=all&amp;eventDate=600&amp;Filter=++Filter++</link>
      <pubDate>Fri Nov 19 00:00:37 PST 2010</pubDate>
      <description>SendPurchaseOrderRequestService: 3.0 Time : Fri Nov 19 00:00:37 PST 2010</description>
    </item>
    ......
  </channel>
</rss>
For writing out XML content, read through the Oracle XML Parser APIs - [search around for oracle.xml.parser.v2] b) Now that my "Job" was done, my job was half done. Next, I wrote up a simple Scheduler Servlet that schedules the above "Job" class to be executed every "n" minutes. It is very straightforward. Here is the primary section of the code.
try {
    Scheduler sched = StdSchedulerFactory.getDefaultScheduler();
    //get n and make a trigger that executes every "n" seconds
    Trigger trigger = TriggerUtils.makeSecondlyTrigger(n);
    trigger.setName("feedTrigger" + System.currentTimeMillis());
    trigger.setGroup("feedGroup");
    JobDetail job = new JobDetail("SOA_Feed" + System.currentTimeMillis(), "feedGroup", FeedUpdater.class);
    sched.scheduleJob(job, trigger);
} catch(Exception ex) {
    ex.printStackTrace();
    throw new ServletException(ex.getMessage());
}
Look up the Quartz API and documentation. It will make this look much simpler.   Now that both components were ready, I packaged the application into a war file and deployed it onto my Application Server. When the servlet initialized, the "n" second schedule was set.
From then on, the servlet kept populating the RSS feed file. I just ensured that my "Job" code keeps only the 30 latest events within it, so that the feed file stays small and under control [a few KBs]. Next I opened up the feed XML in my browser - it requested a subscription - and here I was, watching new deployments/lifecycle events popping up on my browser toolbar every 5 (actually n) minutes! Well, you could do it in a browser/reader of your choice - or perhaps read them like you read an email in Thunderbird!
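Since the post only shows fragments of the FeedUpdater class, here is a rough sketch of how such a Quartz job might be put together. The feed file path and the buildRssFromEvents helper are placeholders I have invented for illustration; they are not code from the original project.

    import org.quartz.Job;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import java.io.FileWriter;

    // Hypothetical sketch of the FeedUpdater job described above.
    public class FeedUpdater implements Job {

        // Path under the web server's htdocs is an assumption
        private static final String FEED_FILE = "/var/www/html/feeds/bpel.xml";

        public void execute(JobExecutionContext context) throws JobExecutionException {
            try {
                // Fetch the BPEL process events for the last "n" minutes
                // (Locator / WhereCondition / listProcessEvents as shown in the post)
                String rssXml = buildRssFromEvents();

                // Stream the RSS 2.0 document to a file served over HTTP
                FileWriter writer = new FileWriter(FEED_FILE);
                writer.write(rssXml);
                writer.close();
            } catch (Exception ex) {
                throw new JobExecutionException(ex);
            }
        }

        private String buildRssFromEvents() {
            // Placeholder: connect to the BPEL runtime, call listProcessEvents(...)
            // and serialize the newest 30 events into the <rss><channel>... structure.
            return "<?xml version=\"1.0\" encoding=\"UTF-8\"?><rss version=\"2.0\"><channel></channel></rss>";
        }
    }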

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question might be raised about how to consume the various Windows Azure services, such as storage, service bus, access control, etc.. Interacting with Windows Azure services is available in Node.js through the Windows Azure Node.js SDK, which is a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.   Consume Windows Azure Storage Let’s first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code can also be used to consume WAS when hosted on WAWS. But since there are no roles in WAWS, the code for consuming the service runtime mentioned in the next section cannot be used in a WAWS node application. We can use the solution that I created in my last post. Alternatively we can create a new Windows Azure project in Visual Studio with a worker role, add “node.exe” and “index.js”, install the “express” and “node-sqlserver” modules, and mark all files as “Copy always”. In order to use Windows Azure services we need the Windows Azure Node.js SDK, also known as a module named “azure” which can be installed through NPM. Once we have downloaded and installed it, we need to include it in our worker role project and mark it as “Copy always”. You can use my “Copy all always” tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here. The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various Windows Azure services including tables, blobs, queues, service bus and the service runtime. I will not cover all of them but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here. Back in Visual Studio, open “index.js” and let’s continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Now let’s create a new function, copy the records from WASD to table service. 1. Delete the table named “resource”. 2. Create a new table named “resource”. These 2 steps ensures that we have an empty table. 3. Load all records from the “resource” table in WASD. 4. For each records loaded from WASD, insert them into the table one by one. 5. Prompt to user when finished. In order to use table service we need the storage account and key, which can be found from the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variants in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. Also I created another variant stored the table name. In order to work with table service I need to create the storage client for table service. 
This is very similar as the Windows Azure SDK for .NET. As the code below I created a new variant named “client” and use “createTableService”, specified my storage account name and key. 1: var azure = require("azure"); 2: var storageAccountName = "synctile"; 3: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 4: var tableName = "resource"; 5: var client = azure.createTableService(storageAccountName, storageAccountKey); Now create a new function for URL “/was/init” so that we can trigger it through browser. Then in this function we will firstly load all records from WASD. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: } 18: } 19: }); 20: } 21: }); 22: }); When we succeed loaded all records we can start to transform them into table service. First I need to recreate the table in table service. This can be done by deleting and creating the table through table client I had just created previously. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: } 27: }); 28: }); 29: } 30: } 31: }); 32: } 33: }); 34: }); As you can see, the azure SDK provide its methods in callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked “deleteTable” method, provided the name of the table and a callback function which will be performed when the table had been deleted or failed. Underlying, the azure module will perform the table deletion operation in POSIX async threads pool asynchronously. And once it’s done the callback function will be performed. This is the reason we need to nest the table creation code inside the deletion function. If we perform the table creation code after the deletion code then they will be invoked in parallel. Next, for each records in WASD I created an entity and then insert into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post. 
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be: once all entities have been copied to the table service with no error, send the response to the browser; otherwise send an error message to the browser. To do so I need to import another module named “async”, which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its “forEach” method for the asynchronous code that inserts the table entities. The first argument of “forEach” is the array to be processed. The second argument is the operation to perform on each item in the array. And the third argument will be invoked when all items have been processed or any error has occurred. Here is where we can send our response to the browser. 1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: async.forEach(results.rows, 26: // transform the records 27: function (row, callback) { 28: var entity = { 29: "PartitionKey": row[1], 30: "RowKey": row[0], 31: "Value": row[2] 32: }; 33: client.insertEntity(tableName, entity, function (error) { 34: if (error) { 35: callback(error); 36: } 37: else { 38: console.log("entity inserted."); 39: callback(null); 40: } 41: }); 42: }, 43: // send response 44: function (error) { 45: if (error) { 46: error["target"] = "insertEntity"; 47: res.send(500, error); 48: } 49: else { 50: console.log("all done"); 51: res.send(200, "All done!"); 52: } 53: } 54: ); 55: } 56: }); 57: }); 58: } 59: } 60: }); 61: } 62: }); 63: }); Run it locally and now we can see that the response is sent only after all entities have been inserted. Querying entities against the table service is simple as well. Just use the “queryEntity” method of the table service client, providing the partition key and row key. We can also provide more complex query criteria, as in the code here. In the code below I queried an entity by the partition key and row key, and returned the proper localization value in the response. 1: app.get("/was/:key/:culture", function (req, res) { 2: var key = req.params.key; 3: var culture = req.params.culture; 4: client.queryEntity(tableName, culture, key, function (error, entity) { 5: if (error) { 6: res.send(500, error); 7: } 8: else { 9: res.json(entity); 10: } 11: }); 12: }); Then I tested it on the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account back. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.   Consume Service Runtime As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code. 
If you have played with WACS you will know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can define these values in the CSCFG and CSDEF files and the runtime will retrieve the proper values for us. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for both the cloud and local environments. We can also use the endpoints defined in the role environment in our Node.js application. In the Node.js SDK we can get an object from “azure.RoleEnvironment”, which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the configuration variables and then used the SDK to retrieve their values and initialize the table client. 1: var connectionString = ""; 2: var storageAccountName = ""; 3: var storageAccountKey = ""; 4: var tableName = ""; 5: var client; 6:  7: azure.RoleEnvironment.getConfigurationSettings(function (error, settings) { 8: if (error) { 9: console.log("ERROR: getConfigurationSettings"); 10: console.log(JSON.stringify(error)); 11: } 12: else { 13: console.log(JSON.stringify(settings)); 14: connectionString = settings["SqlConnectionString"]; 15: storageAccountName = settings["StorageAccountName"]; 16: storageAccountKey = settings["StorageAccountKey"]; 17: tableName = settings["TableName"]; 18:  19: console.log("connectionString = %s", connectionString); 20: console.log("storageAccountName = %s", storageAccountName); 21: console.log("storageAccountKey = %s", storageAccountKey); 22: console.log("tableName = %s", tableName); 23:  24: client = azure.createTableService(storageAccountName, storageAccountKey); 25: } 26: }); In this way we do not need to amend the code when moving the configuration between the local and cloud environments, since the service runtime takes care of it. At the end of the code we listen on the port retrieved from the SDK as well. 1: azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) { 2: if (error) { 3: console.log("ERROR: getCurrentRoleInstance"); 4: console.log(JSON.stringify(error)); 5: } 6: else { 7: console.log(JSON.stringify(instance)); 8: if (instance["endpoints"] && instance["endpoints"]["nodejs"]) { 9: var endpoint = instance["endpoints"]["nodejs"]; 10: app.listen(endpoint["port"]); 11: } 12: else { 13: app.listen(8080); 14: } 15: } 16: }); But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the Windows Azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was started inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file under the azure project, add a new element named Runtime, and then add an element named EntryPoint which specifies the Node.js command line (a sketch of this fragment is shown below). This gives the Node.js application a connection to the service runtime, so it is able to read the configurations. Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account for the local environment. 
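For reference, a sketch of the CSDEF fragment described above might look roughly like the following; the role name, the server.js file name and the setReadyOnProcessStart flag are assumptions for illustration, not values taken from the original project.
1: <WorkerRole name="WorkerRole1">
2:   <Runtime>
3:     <EntryPoint>
4:       <!-- start node.exe directly so the named pipe to the service runtime
5:            belongs to the Node.js process rather than the worker role class -->
6:       <ProgramEntryPoint commandLine="node.exe server.js" setReadyOnProcessStart="true" />
7:     </EntryPoint>
8:   </Runtime>
9:   <!-- endpoints, configuration settings, etc. omitted -->
10: </WorkerRole>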
And if we publish our application to azure it works with WASD and the storage service through the cloud configurations.   Summary In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set “node.exe” as the entry point. I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Sites and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very young network application platform. But since it is simple, easy to learn and deploy, and uses a single-threaded non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it is all the more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement; for example, “node-sqlserver” does not yet support SQL parameters, and “azure” does not yet support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.   PS, you can download the source code here. You can download the source code of my “Copy all always” tool here.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using Hadoop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  

    Read the article

  • Five development tools I can't live without

    - by bconlon
    When applying to join Geeks with Blogs I had to specify the development tools I use every day. That got me thinking, it's taken a long time to whittle my tools of choice down to the selection I use, so it might be worth sharing. Before I begin, I appreciate we all have our preferred development tools, but these are the ones that work for me. Microsoft Visual Studio Microsoft Visual Studio has been my development tool of choice for more years than I care to remember. I first used this when it was Visual C++ 1.5 (hats off to those who started on 1.0) and by 2.2 it had everything I needed from a C++ IDE. Versions 4 and 5 followed and if I had to guess I would expect more Windows applications are written in VC++ 6 and VB6 than any other language. Then came the not so great versions Visual Studio .Net 2002 (7.0) and 2003 (7.1). If I'm honest I was still using v6. 2005 was better and 2008 was simply brilliant. Everything worked, the compiler was super fast and I was happy again...then came 2010...oh dear. 2010 is a big step backwards for me. It's not encouraging for my upcoming WPF exploits that 2010 is fronted in WPF technology, with the forever growing Find/Replace dialog, the issues with C++ intellisense, and the buggy debugger. That said it is still my tool of choice but I hope they sort the issue in SP1. I've tried other IDEs like Visual Age and Eclipse, but for me Visual Studio is the best. A really great tool. Liquid XML Studio XML development is a tricky business. The W3C standards are often difficult to get to the bottom of so it's great to have a graphical tool to help. I first used Liquid Technologies 5 or 6 years back when I needed to process XML data in C++. Their excellent XML Data Binding tool has an easy to use Wizard UI (as compared to Castor or JAXB command line tools) and allows you to generate code from an XML Schema. So instead of having to deal with untyped nodes like with a DOM parser, instead you get an Object Model providing a custom API in C++, C#, VB etc. More recently they developed a graphical XML IDE with XML Editor, XSLT, XQuery debugger and other XML tools. So now I can develop an XML Schema graphically, click a button to generate a Sample XML document, and click another button to run the Wizard to generate code including a Sample Application that will then load my Sample XML document into the generated object model. This is a very cool toolset. Note: XML Data Binding is nothing to do with WPF Data Binding, but I hope to cover both in more detail another time. .Net Reflector Note: I've just noticed that starting form the end of February 2011 this will no longer be a free tool !! .Net Reflector turns .Net byte code back into C# source code. But how can it work this magic? Well the clue is in the name, it uses reflection to inspect a compiled .Net assembly. The assembly is compiled to byte code, it doesn't get compiled to native machine code until its needed using a just-in-time (JIT) compiler. The byte code still has all of the information needed to see classes, variables. methods and properties, so reflector gathers this information and puts it in a handy tree. I have used .Net Reflector for years in order to understand what the .Net Framework is doing as it sometimes has undocumented, quirky features. This really has been invaluable in certain instances and I cannot praise enough kudos on the original developer Lutz Roeder. 
Smart Assembly In order to stop nosy geeks looking at our code using a tool like .Net Reflector, we need to obfuscate (mess up) the byte code. Smart Assembly is a tool that does this. Again I have used this for a long time. It is very quick and easy to use. Another excellent tool. Coincidentally, .Net Reflector and Smart Assembly are now both owned by Red Gate. Again kudos goes to the original developer Jean-Sebastien Lange. TortoiseSVN SVN (Apache Subversion) is a Source Control System developed as an open source project. TortoiseSVN is a graphical UI wrapper over SVN that hooks into Windows Explorer to enable files to be Updated, Committed, Merged etc. from the right click menu. This is an essential tool for keeping my hard work safe! Many years ago I used Microsoft Source Safe and I disliked CVS type systems. But TortoiseSVN is simply the best source control tool I have ever used. --- So there you have it, my top 5 development tools that I use (nearly) every day and have helped to make my working life a little easier. I'm sure there are other great tools that I wish I used but have never heard of, but if you have not used any of the above, I would suggest you check them out as they are all very, very cool products. #

    Read the article

  • Can't Install php5-msql

    - by user210445
    Hello friends I'm finishing the process of installing Apache/Php/mysql installations but this shows up: # sudo apt-get install mysql-server php5-msql Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package php5-msql After some adjustments this happened: angel@Voix:~$ sudo apt-get install mysql-server php5-mysql Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) angel@Voix:~$ sudo apt-get install mysql-server-5.5 Reading package lists... Done Building dependency tree Reading state information... Done The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... 
debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1) angel@Voix:~$ sudo apt-get install mysql-server php5-mysql Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. php5-mysql is already the newest version. The following packages were automatically installed and are no longer required: gir1.2-ubuntuoneui-3.0 libubuntuoneui-3.0-1 thunderbird-globalmenu Use 'apt-get autoremove' to remove them. The following extra packages will be installed: mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server-5.5 1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 4 not fully installed or removed. Need to get 0 B/8,827 kB of archives. After this operation, 32.7 MB of additional disk space will be used. Do you want to continue [Y/n]? y debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable (Reading database ... dpkg: warning: files list file for package `mysql-server-5.5' missing, assuming package has no files currently installed. (Reading database ... 172971 files and directories currently installed.) Preparing to replace mysql-server-5.5 5.5.34-0ubuntu0.12.04.1 (using .../mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb) ... debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error processing /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb (--unpack): subprocess new pre-installation script returned error exit status 1 debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable dpkg: error while cleaning up: subprocess new post-removal script returned error exit status 1 Errors were encountered while processing: /var/cache/apt/archives/mysql-server-5.5_5.5.34-0ubuntu0.12.04.1_amd64.deb E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Rkhunter 122 suspect files; do I have a problem?

    - by user276166
    I am new to ubuntu. I am using Xfce Ubuntu 14.04 LTS. I have ran rkhunter a few weeks age and only got a few warnings. The forum said that they were normal. But, this time rkhunter reported 122 warnings. Please advise. casey@Shaman:~$ sudo rkhunter -c [ Rootkit Hunter version 1.4.0 ] Checking system commands... Performing 'strings' command checks Checking 'strings' command [ OK ] Performing 'shared libraries' checks Checking for preloading variables [ None found ] Checking for preloaded libraries [ None found ] Checking LD_LIBRARY_PATH variable [ Not found ] Performing file properties checks Checking for prerequisites [ Warning ] /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ OK ] /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ] /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ] /usr/sbin/rsyslogd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ] /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ] /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/cut [ Warning ] /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ] /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ] /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ] /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ] /usr/bin/killall [ OK ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ] /usr/bin/ldd [ Warning ] /usr/bin/less [ OK ] /usr/bin/locate [ OK ] /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ OK ] /usr/bin/mail [ OK ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ OK ] /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ] /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ OK ] /usr/bin/rkhunter [ OK ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ] /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ] /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ] /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ] /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ] /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ] /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ] /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ] /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ] /usr/bin/whereis [ Warning ] /usr/bin/which [ OK ] /usr/bin/who [ Warning ] /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/mawk [ Warning ] /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ OK ] /usr/bin/w.procps [ Warning ] /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ] /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ] /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ] /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ] /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ] /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ] /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ] /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ] /bin/echo [ Warning ] /bin/ed [ OK ] /bin/egrep [ Warning ] /bin/fgrep 
[ Warning ] /bin/fuser [ OK ] /bin/grep [ Warning ] /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ OK ] /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ] /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ] /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ Warning ] /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ] /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ] /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ OK ] /bin/kmod [ Warning ] /bin/dash [ Warning ] [Press <ENTER> to continue] Checking for rootkits... Performing check of known rootkit files and directories 55808 Trojan - Variant A [ Not found ] ADM Worm [ Not found ] AjaKit Rootkit [ Not found ] Adore Rootkit [ Not found ] aPa Kit [ Not found ] Apache Worm [ Not found ] Ambient (ark) Rootkit [ Not found ] Balaur Rootkit [ Not found ] BeastKit Rootkit [ Not found ] beX2 Rootkit [ Not found ] BOBKit Rootkit [ Not found ] cb Rootkit [ Not found ] CiNIK Worm (Slapper.B variant) [ Not found ] Danny-Boy's Abuse Kit [ Not found ] Devil RootKit [ Not found ] Dica-Kit Rootkit [ Not found ] Dreams Rootkit [ Not found ] Duarawkz Rootkit [ Not found ] Enye LKM [ Not found ] Flea Linux Rootkit [ Not found ] Fu Rootkit [ Not found ] Fuck`it Rootkit [ Not found ] GasKit Rootkit [ Not found ] Heroin LKM [ Not found ] HjC Kit [ Not found ] ignoKit Rootkit [ Not found ] IntoXonia-NG Rootkit [ Not found ] Irix Rootkit [ Not found ] Jynx Rootkit [ Not found ] KBeast Rootkit [ Not found ] Kitko Rootkit [ Not found ] Knark Rootkit [ Not found ] ld-linuxv.so Rootkit [ Not found ] Li0n Worm [ Not found ] Lockit / LJK2 Rootkit [ Not found ] Mood-NT Rootkit [ Not found ] MRK Rootkit [ Not found ] Ni0 Rootkit [ Not found ] Ohhara Rootkit [ Not found ] Optic Kit (Tux) Worm [ Not found ] Oz Rootkit [ Not found ] Phalanx Rootkit [ Not found ] Phalanx2 Rootkit [ Not found ] Phalanx2 Rootkit (extended tests) [ Not found ] Portacelo Rootkit [ Not found ] R3dstorm Toolkit [ Not found ] RH-Sharpe's Rootkit [ Not found ] RSHA's Rootkit [ Not found ] Scalper Worm [ Not found ] Sebek LKM [ Not found ] Shutdown Rootkit [ Not found ] SHV4 Rootkit [ Not found ] SHV5 Rootkit [ Not found ] Sin Rootkit [ Not found ] Slapper Worm [ Not found ] Sneakin Rootkit [ Not found ] 'Spanish' Rootkit [ Not found ] Suckit Rootkit [ Not found ] Superkit Rootkit [ Not found ] TBD (Telnet BackDoor) [ Not found ] TeLeKiT Rootkit [ Not found ] T0rn Rootkit [ Not found ] trNkit Rootkit [ Not found ] Trojanit Kit [ Not found ] Tuxtendo Rootkit [ Not found ] URK Rootkit [ Not found ] Vampire Rootkit [ Not found ] VcKit Rootkit [ Not found ] Volc Rootkit [ Not found ] Xzibit Rootkit [ Not found ] zaRwT.KiT Rootkit [ Not found ] ZK Rootkit [ Not found ] [Press <ENTER> to continue] Performing additional rootkit checks Suckit Rookit additional checks [ OK ] Checking for possible rootkit files and directories [ None found ] Checking for possible rootkit strings [ None found ] Performing malware checks Checking running processes for suspicious files [ None found ] Checking for login backdoors [ None found ] Checking for suspicious directories [ None found ] Checking for sniffer log files [ None found ] Performing Linux specific checks Checking loaded kernel modules [ OK ] Checking kernel module names [ OK ] [Press <ENTER> to continue] Checking the network... 
Performing checks on the network ports Checking for backdoor ports [ None found ] Checking for hidden ports [ Skipped ] Performing checks on the network interfaces Checking for promiscuous interfaces [ None found ] Checking the local host... Performing system boot checks Checking for local host name [ Found ] Checking for system startup files [ Found ] Checking system startup files for malware [ None found ] Performing group and account checks Checking for passwd file [ Found ] Checking for root equivalent (UID 0) accounts [ None found ] Checking for passwordless accounts [ None found ] Checking for passwd file changes [ Warning ] Checking for group file changes [ Warning ] Checking root account shell history files [ None found ] Performing system configuration file checks Checking for SSH configuration file [ Not found ] Checking for running syslog daemon [ Found ] Checking for syslog configuration file [ Found ] Checking if syslog remote logging is allowed [ Not allowed ] Performing filesystem checks Checking /dev for suspicious file types [ Warning ] Checking for hidden files and directories [ Warning ] [Press <ENTER> to continue] System checks summary ===================== File properties checks... Required commands check failed Files checked: 137 Suspect files: 122 Rootkit checks... Rootkits checked : 291 Possible rootkits: 0 Applications checks... All checks skipped The system checks took: 5 minutes and 11 seconds All results have been written to the log file (/var/log/rkhunter.log)

    Read the article

  • iOS Support with Windows Azure Mobile Services – now with Push Notifications

    - by ScottGu
    A few weeks ago I posted about a number of improvements to Windows Azure Mobile Services. One of these was the addition of an Objective-C client SDK that allows iOS developers to easily use Mobile Services for data and authentication.  Today I'm excited to announce a number of improvement to our iOS SDK and, most significantly, our new support for Push Notifications via APNS (Apple Push Notification Services).  This makes it incredibly easy to fire push notifications to your iOS users from Windows Azure Mobile Service scripts. Push Notifications via APNS We've provided two complete tutorials that take you step-by-step through the provisioning and setup process to enable your Windows Azure Mobile Service application with APNS (Apple Push Notification Services), including all of the steps required to configure your application for push in the Apple iOS provisioning portal: Getting started with Push Notifications - iOS Push notifications to users by using Mobile Services - iOS Once you've configured your application in the Apple iOS provisioning portal and uploaded the APNS push certificate to the Apple provisioning portal, it's just a matter of uploading your APNS push certificate to Mobile Services using the Windows Azure admin portal: Clicking the “upload” within the “Push” tab of your Mobile Service allows you to browse your local file-system and locate/upload your exported certificate.  As part of this you can also select whether you want to use the sandbox (dev) or production (prod) Apple service: Now, the code to send a push notification to your clients from within a Windows Azure Mobile Service is as easy as the code below: push.apns.send(deviceToken, {      alert: 'Toast: A new Mobile Services task.',      sound: 'default' }); This will cause Windows Azure Mobile Services to connect to APNS (Apple Push Notification Service) and send a notification to the iOS device you specified via the deviceToken: Check out our reference documentation for full details on how to use the new Windows Azure Mobile Services apns object to send your push notifications. Feedback Scripts An important part of working with any PNS (Push Notification Service) is handling feedback for expired device tokens and channels. This typically happens when your application is uninstalled from a particular device and can no longer receive your notifications. With Windows Notification Services you get an instant response from the HTTP server.  Apple’s Notification Services works in a slightly different way and provides an additional endpoint you can connect to poll for a list of expired tokens. As with all of the capabilities we integrate with Mobile Services, our goal is to allow developers to focus more on building their app and less on building infrastructure to support their ideas. Therefore we knew we had to provide a simple way for developers to integrate feedback from APNS on a regular basis.  This week’s update now includes a new screen in the portal that allows you to optionally provide a script to process your APNS feedback – and it will be executed by Mobile Services on an ongoing basis: This script is invoked periodically while your service is active. To poll the feedback endpoint you can simply call the apns object's getFeedback method from within this script: push.apns.getFeedback({       success: function(results) {           // results is an array of objects with a deviceToken and time properties      } }); This returns you a list of invalid tokens that can now be removed from your database. 
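As a hedged sketch of what that cleanup could look like: assuming the device tokens were saved in a table named 'devices' with a deviceToken column, and assuming the tables object is available inside the feedback script (both are assumptions for illustration, not details from the announcement), the success callback might remove the expired registrations like this: push.apns.getFeedback({      success: function (results) {          // results is an array of { deviceToken, time } objects          var devicesTable = tables.getTable('devices');          results.forEach(function (feedbackItem) {              // look up any stored registrations for the expired token and delete them              devicesTable.where({ deviceToken: feedbackItem.deviceToken }).read({                  success: function (registrations) {                      registrations.forEach(function (registration) {                          devicesTable.del(registration);                      });                  }              });          });      } }); The table name and schema are hypothetical; the point is simply that each token returned by getFeedback should be used to purge the matching registration rows so you stop pushing to dead devices.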
iOS Client SDK improvements Over the last month we've continued to work with a number of iOS advisors to make improvements to our Objective-C SDK. The SDK is being developed under an open source license (Apache 2.0) and is available on github. Many of the improvements are behind the scenes to improve performance and memory usage. However, one of the biggest improvements to our iOS Client API is the addition of an even easier login method.  Below is the Objective-C code you can now write to invoke it: [client loginWithProvider:@"twitter"                     onController:self                        animated:YES                      completion:^(MSUser *user, NSError *error) {      // if no error, you are now logged in via twitter }]; This code will automatically present and dismiss our login view controller as a modal dialog on the specified controller.  This does all the hard work for you and makes login via Twitter, Google, Facebook and Microsoft Account identities just a single line of code. My colleague Josh just posted a short video demonstrating these new features which I'd recommend checking out: Summary The above features are all now live in production and are available to use immediately.  If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using Mobile Services today. Visit the Windows Azure Mobile Developer Center to learn more about how to build apps with Mobile Services. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • ClickOnce manifest problem

    - by TWith2Sugars
    We are currently deploying a WPF 4 app via click once and there is a scenario when the installation fails. If the user does not have .Net 4.0 Full install and attempts to install our app the framework installs fine but the app fails to install. If we re-run the installation again the app installs fine. Here is a copy of the log: PLATFORM VERSION INFO Windows : 6.1.7600.0 (Win32NT) Common Language Runtime : 2.0.50727.4927 System.Deployment.dll : 2.0.50727.4927 (NetFXspW7.050727-4900) mscorwks.dll : 2.0.50727.4927 (NetFXspW7.050727-4900) dfdll.dll : 2.0.50727.4927 (NetFXspW7.050727-4900) dfshim.dll : 4.0.31106.0 (Main.031106-0000) SOURCES Deployment url : [URL REMOVED] Server : Apache/2.0.54 Application url : [URL REMOVED] Server : Apache/2.0.54 IDENTITIES Deployment Identity : Graphicly.App.application, Version=0.3.2.0, Culture=neutral, PublicKeyToken=c982228345371fbc, processorArchitecture=msil Application Identity : Graphicly.App.exe, Version=0.3.2.0, Culture=neutral, PublicKeyToken=c982228345371fbc, processorArchitecture=msil, type=win32 APPLICATION SUMMARY * Installable application. ERROR SUMMARY Below is a summary of the errors, details of these errors are listed later in the log. * Dependency Graphicly.WCFClient.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.WCFClient.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Microsoft.Surface.Presentation.Design.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Microsoft.Surface.Presentation.Design.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency GalaSoft.MvvmLight.WPF4.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file GalaSoft.MvvmLight.WPF4.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Infrastructure.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.Infrastructure.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.AutoUpdater.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.AutoUpdater.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency System.Windows.Interactivity.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file System.Windows.Interactivity.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Microsoft.Surface.Presentation.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Microsoft.Surface.Presentation.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Fonts.dll cannot be processed for patching. 
Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.Fonts.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Reader.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.Reader.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Microsoft.Surface.Presentation.Generic.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Microsoft.Surface.Presentation.Generic.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Controls.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.Controls.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.SocialNetwork.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.SocialNetwork.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.Archive.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.Archive.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency Graphicly.App.exe cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file Graphicly.App.exe: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Dependency GalaSoft.MvvmLight.Extras.WPF4.dll cannot be processed for patching. Following failure messages were detected: + Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. * Activation of [URL REMOVED] resulted in exception. Following failure messages were detected: + Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened. + Cannot load internal manifest from component file. COMPONENT STORE TRANSACTION FAILURE SUMMARY No transaction error was detected. WARNINGS * The file named Microsoft.Windows.Design.Extensibility.dll does not have a hash specified in the manifest. Hash validation will be ignored. * The file named Ionic.Zip.Reduced.dll does not have a hash specified in the manifest. Hash validation will be ignored. * The file named Newtonsoft.Json.dll does not have a hash specified in the manifest. Hash validation will be ignored. * The file named Microsoft.WindowsAzure.StorageClient.dll does not have a hash specified in the manifest. Hash validation will be ignored. * The file named Dimebrain.TweetSharp.dll does not have a hash specified in the manifest. Hash validation will be ignored. * The file named Microsoft.Windows.Design.Interaction.dll does not have a hash specified in the manifest. Hash validation will be ignored. 
* The file named HtmlAgilityPack.dll does not have a hash specified in the manifest. Hash validation will be ignored. * The file named Facebook.dll does not have a hash specified in the manifest. Hash validation will be ignored. OPERATION PROGRESS STATUS * [20/05/2010 09:17:33] : Activation of [URL REMOVED] has started. * [20/05/2010 09:17:38] : Processing of deployment manifest has successfully completed. * [20/05/2010 09:17:38] : Installation of the application has started. * [20/05/2010 09:17:39] : Processing of application manifest has successfully completed. * [20/05/2010 09:17:40] : Request of trust and detection of platform is complete. ERROR DETAILS Following errors were detected during this operation. * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.WCFClient.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Microsoft.Surface.Presentation.Design.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file GalaSoft.MvvmLight.WPF4.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.Infrastructure.dll: the manifest may not be valid or the file could not be opened. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.AutoUpdater.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file System.Windows.Interactivity.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Microsoft.Surface.Presentation.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.Fonts.dll: the manifest may not be valid or the file could not be opened. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.Reader.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:40] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Microsoft.Surface.Presentation.Generic.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.Controls.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.SocialNetwork.dll: the manifest may not be valid or the file could not be opened. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.Archive.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file Graphicly.App.exe: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened. - Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.FileDownloader.AddFilesInHashtable(Hashtable hashtable, AssemblyManifest applicationManifest, String applicationFolder) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: * [20/05/2010 09:17:41] System.Deployment.Application.InvalidDeploymentException (ManifestLoad) - Exception occurred loading manifest from file GalaSoft.MvvmLight.Extras.WPF4.dll: the manifest may not be valid or the file could not be opened. 
- Source: System.Deployment - Stack trace: at System.Deployment.Application.Manifest.AssemblyManifest.ManifestLoadExceptionHelper(Exception exception, String filePath) at System.Deployment.Application.Manifest.AssemblyManifest.LoadFromInternalManifestFile(String filePath) at System.Deployment.Application.DownloadManager.ProcessDownloadedFile(Object sender, DownloadEventArgs e) at System.Deployment.Application.FileDownloader.DownloadModifiedEventHandler.Invoke(Object sender, DownloadEventArgs e) at System.Deployment.Application.FileDownloader.PatchSingleFile(DownloadQueueItem item, Hashtable dependencyTable) at System.Deployment.Application.FileDownloader.PatchFiles(SubscriptionState subState) at System.Deployment.Application.FileDownloader.Download(SubscriptionState subState) at System.Deployment.Application.DownloadManager.DownloadDependencies(SubscriptionState subState, AssemblyManifest deployManifest, AssemblyManifest appManifest, Uri sourceUriBase, String targetDirectory, String group, IDownloadNotification notification, DownloadOptions options) at System.Deployment.Application.ApplicationActivator.DownloadApplication(SubscriptionState subState, ActivationDescription actDesc, Int64 transactionId, TempDirectory& downloadTemp) at System.Deployment.Application.ApplicationActivator.InstallApplication(SubscriptionState& subState, ActivationDescription actDesc) at System.Deployment.Application.ApplicationActivator.PerformDeploymentActivation(Uri activationUri, Boolean isShortcut, String textualSubId, String deploymentProviderUrlFromExtension, BrowserSettings browserSettings, String& errorPageUrl) at System.Deployment.Application.ApplicationActivator.ActivateDeploymentWorker(Object state) --- Inner Exception --- System.Deployment.Application.DeploymentException (InvalidManifest) - Cannot load internal manifest from component file. - Source: - Stack trace: COMPONENT STORE TRANSACTION DETAILS No transaction information is available. I'm baffled. Any ideas what this could be? Cheers Tony

    Read the article

  • Implementation of ZipCrypto / Zip 2.0 encryption in java

    - by gomesla
    I'm trying o implement the zipcrypto / zip 2.0 encryption algoritm to deal with encrypted zip files as discussed in http://www.pkware.com/documents/casestudies/APPNOTE.TXT I believe I've followed the specs but just can't seem to get it working. I'm fairly sure the issue has to do with my interpretation of the crc algorithm. The documentation states CRC-32: (4 bytes) The CRC-32 algorithm was generously contributed by David Schwaderer and can be found in his excellent book "C Programmers Guide to NetBIOS" published by Howard W. Sams & Co. Inc. The 'magic number' for the CRC is 0xdebb20e3. The proper CRC pre and post conditioning is used, meaning that the CRC register is pre-conditioned with all ones (a starting value of 0xffffffff) and the value is post-conditioned by taking the one's complement of the CRC residual. Here is the snippet that I'm using for the crc32 public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } Below is my complete code. Can anyone see what I'm doing wrong. package org.apache.commons.compress.archivers.zip; import java.io.IOException; import java.io.InputStream; public class ZipCryptoInputStream extends InputStream { public class PKZIPCRC32 { private static final int CRC32_POLYNOMIAL = 0xdebb20e3; private int crc = 0xffffffff; private int CRCTable[]; public PKZIPCRC32() { buildCRCTable(); } private void buildCRCTable() { int i, j; CRCTable = new int[256]; for (i = 0; i <= 255; i++) { crc = i; for (j = 8; j > 0; j--) if ((crc & 1) == 1) crc = (crc >>> 1) ^ CRC32_POLYNOMIAL; else crc >>>= 1; CRCTable[i] = crc; } } private int crc32(byte buffer[], int start, int count, int lastcrc) { int temp1, temp2; int i = start; crc = lastcrc; while (count-- != 0) { temp1 = crc >>> 8; temp2 = CRCTable[(crc ^ buffer[i++]) & 0xFF]; crc = temp1 ^ temp2; } return crc; } public int crc32(int crc, byte buffer) { return crc32(new byte[] { buffer }, 0, 1, crc); } } private static final long ENCRYPTION_KEY_1 = 0x12345678; private static final long ENCRYPTION_KEY_2 = 0x23456789; private static final long ENCRYPTION_KEY_3 = 0x34567890; private InputStream baseInputStream = null; private final PKZIPCRC32 checksumEngine = new PKZIPCRC32(); private long[] keys = null; public ZipCryptoInputStream(ZipArchiveEntry zipEntry, InputStream inputStream, String passwd) throws Exception { baseInputStream = inputStream; // Decryption // ---------- // PKZIP encrypts the compressed data stream. Encrypted files must // be decrypted before they can be extracted. // // Each encrypted file has an extra 12 bytes stored at the start of // the data area defining the encryption header for that file. The // encryption header is originally set to random values, and then // itself encrypted, using three, 32-bit keys. The key values are // initialized using the supplied encryption password. 
After each byte // is encrypted, the keys are then updated using pseudo-random number // generation techniques in combination with the same CRC-32 algorithm // used in PKZIP and described elsewhere in this document. // // The following is the basic steps required to decrypt a file: // // 1) Initialize the three 32-bit keys with the password. // 2) Read and decrypt the 12-byte encryption header, further // initializing the encryption keys. // 3) Read and decrypt the compressed data stream using the // encryption keys. // Step 1 - Initializing the encryption keys // ----------------------------------------- // // Key(0) <- 305419896 // Key(1) <- 591751049 // Key(2) <- 878082192 // // loop for i <- 0 to length(password)-1 // update_keys(password(i)) // end loop // // Where update_keys() is defined as: // // update_keys(char): // Key(0) <- crc32(key(0),char) // Key(1) <- Key(1) + (Key(0) & 000000ffH) // Key(1) <- Key(1) * 134775813 + 1 // Key(2) <- crc32(key(2),key(1) >> 24) // end update_keys // // Where crc32(old_crc,char) is a routine that given a CRC value and a // character, returns an updated CRC value after applying the CRC-32 // algorithm described elsewhere in this document. keys = new long[] { ENCRYPTION_KEY_1, ENCRYPTION_KEY_2, ENCRYPTION_KEY_3 }; for (int i = 0; i < passwd.length(); ++i) { update_keys((byte) passwd.charAt(i)); } // Step 2 - Decrypting the encryption header // ----------------------------------------- // // The purpose of this step is to further initialize the encryption // keys, based on random data, to render a plaintext attack on the // data ineffective. // // Read the 12-byte encryption header into Buffer, in locations // Buffer(0) thru Buffer(11). // // loop for i <- 0 to 11 // C <- buffer(i) ^ decrypt_byte() // update_keys(C) // buffer(i) <- C // end loop // // Where decrypt_byte() is defined as: // // unsigned char decrypt_byte() // local unsigned short temp // temp <- Key(2) | 2 // decrypt_byte <- (temp * (temp ^ 1)) >> 8 // end decrypt_byte // // After the header is decrypted, the last 1 or 2 bytes in Buffer // should be the high-order word/byte of the CRC for the file being // decrypted, stored in Intel low-byte/high-byte order. Versions of // PKZIP prior to 2.0 used a 2 byte CRC check; a 1 byte CRC check is // used on versions after 2.0. This can be used to test if the password // supplied is correct or not. byte[] encryptionHeader = new byte[12]; baseInputStream.read(encryptionHeader); for (int i = 0; i < encryptionHeader.length; i++) { encryptionHeader[i] ^= decrypt_byte(); update_keys(encryptionHeader[i]); } } protected byte decrypt_byte() { byte temp = (byte) (keys[2] | 2); return (byte) ((temp * (temp ^ 1)) >> 8); } @Override public int read() throws IOException { // // Step 3 - Decrypting the compressed data stream // ---------------------------------------------- // // The compressed data stream can be decrypted as follows: // // loop until done // read a character into C // Temp <- C ^ decrypt_byte() // update_keys(temp) // output Temp // end loop int read = baseInputStream.read(); read ^= decrypt_byte(); update_keys((byte) read); return read; } private final void update_keys(byte ch) { keys[0] = checksumEngine.crc32((int) keys[0], ch); keys[1] = keys[1] + (byte) keys[0]; keys[1] = keys[1] * 134775813 + 1; keys[2] = checksumEngine.crc32((int) keys[2], (byte) (keys[1] >> 24)); } }
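    One thing worth double-checking: the CRC table above is built directly from the "magic number" 0xdebb20e3, but reference ZipCrypto implementations build the table from the reflected polynomial 0xEDB88320 and apply no extra pre/post conditioning inside update_keys(); the decrypt_byte() temporary is also 16 bits wide rather than a Java byte. A minimal sketch of the key schedule under those assumptions (class and method names here are illustrative, not part of the original code):

        // Minimal sketch of the ZipCrypto key schedule, assuming the standard
        // reflected CRC-32 table (polynomial 0xEDB88320) and a 16-bit temporary
        // in decrypt_byte(). Names are illustrative only.
        public final class ZipCryptoKeys {
            private static final int[] CRC_TABLE = new int[256];
            static {
                for (int i = 0; i < 256; i++) {
                    int c = i;
                    for (int k = 0; k < 8; k++) {
                        c = ((c & 1) != 0) ? (0xEDB88320 ^ (c >>> 1)) : (c >>> 1);
                    }
                    CRC_TABLE[i] = c;
                }
            }

            private int key0 = 0x12345678; // 305419896
            private int key1 = 0x23456789; // 591751049
            private int key2 = 0x34567890; // 878082192

            // crc32(oldCrc, char) as used by update_keys: one table lookup,
            // no pre/post conditioning of the register.
            private static int crc32(int oldCrc, byte b) {
                return CRC_TABLE[(oldCrc ^ b) & 0xFF] ^ (oldCrc >>> 8);
            }

            // Mirrors update_keys(char) from APPNOTE.TXT.
            public void updateKeys(byte b) {
                key0 = crc32(key0, b);
                key1 = key1 + (key0 & 0xFF);
                key1 = key1 * 134775813 + 1;
                key2 = crc32(key2, (byte) (key1 >>> 24));
            }

            // decrypt_byte() with a 16-bit temporary.
            public byte decryptByte() {
                int temp = (key2 | 2) & 0xFFFF;
                return (byte) ((temp * (temp ^ 1)) >>> 8);
            }

            public void initFromPassword(String password) {
                for (int i = 0; i < password.length(); i++) {
                    updateKeys((byte) password.charAt(i));
                }
            }
        }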

    Read the article

  • I am getting an error with a oneToMany association when using annotations with gilead for hibernate

    - by user286630
    Hello guys. I'm using Gilead to persist my entities in my GWT project, and I'm using Hibernate annotations as well. My problem is with my one-to-many association. This is my User class, which holds a reference to a list of FileLocations: @Entity @Table(name = "yf_user_table") public class YFUser implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "user_id",nullable = false) private int userId; @Column(name = "username") private String username; @Column(name = "password") private String password; @Column(name = "email") private String email; @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.LAZY) @JoinColumn(name="USER_ID") private List fileLocations = new ArrayList(); This is my FileLocation class: @Entity @Table(name = "fileLocationTable") public class FileLocation implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "locationId", updatable = false, nullable = false) private int ieId; @Column (name = "location") private String location; @ManyToOne(fetch=FetchType.LAZY) @JoinColumn(name="USER_ID", nullable=false) private YFUser uploadedUser; When I persist this data in a normal desktop application it works fine: it creates the tables and I can add and store data. But when I try to persist the data in my GWT application I get errors, which I will show below. This is my ServiceImpl class, which extends PersistentRemoteService: public class TestServiceImpl extends PersistentRemoteService implements TestService { public static final String SESSION_USER = "UserWithinSession"; public TestServiceImpl(){ HibernateUtil util = new HibernateUtil(); util.setSessionFactory(com.example.server.HibernateUtil .getSessionFactory()); PersistentBeanManager pbm = new PersistentBeanManager(); pbm.setPersistenceUtil(util); pbm.setProxyStore(new StatelessProxyStore()); setBeanManager(pbm); } @Override public String registerUser(String username, String password, String email) { Session session = com.example.server.HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); YFUser newUser = new YFUser(); newUser.setUsername(username);newUser.setPassword(password);newUser.setEmail(email); session.save(newUser); session.getTransaction().commit(); return "Thank you For registering "+ username; } This is the error that I am getting. The error goes away (and the session factory builds) when I remove the one-to-many relationship; when I put it back in, the exception is thrown on the buildSessionFactory line in HibernateUtil. My HibernateUtil class itself is fine. 
this is the error java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z Mar 24, 2010 10:03:22 PM org.hibernate.cfg.annotations.Version <clinit> INFO: Hibernate Annotations 3.5.0-Beta-4 Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Environment <clinit> INFO: Hibernate 3.5.0-Beta-4 Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Environment <clinit> INFO: hibernate.properties not found Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Environment buildBytecodeProvider INFO: Bytecode provider name : javassist Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Environment <clinit> INFO: using JDK 1.4 java.sql.Timestamp handling Mar 24, 2010 10:03:22 PM org.hibernate.annotations.common.Version <clinit> INFO: Hibernate Commons Annotations 3.2.0-SNAPSHOT Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Configuration configure INFO: configuring from resource: /hibernate.cfg.xml Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Configuration getConfigurationInputStream INFO: Configuration resource: /hibernate.cfg.xml Mar 24, 2010 10:03:22 PM org.hibernate.cfg.Configuration doConfigure INFO: Configured SessionFactory: null Mar 24, 2010 10:03:22 PM org.hibernate.cfg.search.HibernateSearchEventListenerRegister enableHibernateSearch INFO: Unable to find org.hibernate.search.event.FullTextIndexEventListener on the classpath. Hibernate Search is not enabled. Mar 24, 2010 10:03:22 PM org.hibernate.cfg.AnnotationBinder bindClass INFO: Binding entity from annotated class: com.example.client.Entity1 Mar 24, 2010 10:03:22 PM org.hibernate.cfg.annotations.EntityBinder bindTable INFO: Bind entity com.example.client.Entity1 on table entityTable1 Mar 24, 2010 10:03:22 PM org.hibernate.cfg.AnnotationBinder bindClass INFO: Binding entity from annotated class: com.example.client.YFUser Mar 24, 2010 10:03:22 PM org.hibernate.cfg.annotations.EntityBinder bindTable INFO: Bind entity com.example.client.YFUser on table yf_user_table Initial SessionFactory creation failed.java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z [WARN] Nested in javax.servlet.ServletException: init: java.lang.ExceptionInInitializerError at com.example.server.HibernateUtil.<clinit>(HibernateUtil.java:38) at com.example.server.TestServiceImpl.<init>(TestServiceImpl.java:29) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39 ) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at 
org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488) Caused by: java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z at org.hibernate.cfg.AnnotationBinder.processElementAnnotations(AnnotationBinder.java:1642) at org.hibernate.cfg.AnnotationBinder.bindClass(AnnotationBinder.java:772) at org.hibernate.cfg.AnnotationConfiguration.processArtifactsOfType(AnnotationConfiguration.java:629) at org.hibernate.cfg.AnnotationConfiguration.secondPassCompile(AnnotationConfiguration.java:350) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1373) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:973) at com.example.server.HibernateUtil.<clinit>(HibernateUtil.java:33) ... 26 more [WARN] Nested in java.lang.ExceptionInInitializerError: java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z at org.hibernate.cfg.AnnotationBinder.processElementAnnotations(AnnotationBinder.java:1642) at org.hibernate.cfg.AnnotationBinder.bindClass(AnnotationBinder.java:772) at org.hibernate.cfg.AnnotationConfiguration.processArtifactsOfType(AnnotationConfiguration.java:629) at org.hibernate.cfg.AnnotationConfiguration.secondPassCompile(AnnotationConfiguration.java:350) at org.hibernate.cfg.Configuration.buildSessionFactory(Configuration.java:1373) at org.hibernate.cfg.AnnotationConfiguration.buildSessionFactory(AnnotationConfiguration.java:973) at com.example.server.HibernateUtil.<clinit>(HibernateUtil.java:33) at com.example.server.TestServiceImpl.<init>(TestServiceImpl.java:29) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at java.lang.Class.newInstance0(Class.java:355) at java.lang.Class.newInstance(Class.java:308) at org.mortbay.jetty.servlet.Holder.newInstance(Holder.java:153) at org.mortbay.jetty.servlet.ServletHolder.getServlet(ServletHolder.java:339) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:463) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at 
org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843) at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488)
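    A hint on the stack trace: java.lang.NoSuchMethodError: javax.persistence.OneToMany.orphanRemoval()Z usually means Hibernate Annotations 3.5 (compiled against the JPA 2.0 API) is picking up an older JPA 1.0 persistence-api jar at runtime, where @OneToMany has no orphanRemoval attribute, so the mapping itself is probably fine. A small diagnostic sketch (the class name is hypothetical) that prints which jar the annotation is actually loaded from on the hosted-mode classpath:

        import javax.persistence.OneToMany;

        // Prints which jar javax.persistence.OneToMany comes from. If it points at a
        // JPA 1.0 jar, that jar lacks orphanRemoval() and explains the NoSuchMethodError.
        public class JpaApiCheck {
            public static void main(String[] args) {
                java.security.CodeSource source =
                        OneToMany.class.getProtectionDomain().getCodeSource();
                System.out.println("javax.persistence.OneToMany loaded from: "
                        + (source != null ? source.getLocation() : "<unknown / bootstrap>"));
            }
        }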

    Read the article

  • Sorting and Re-arranging List of HashMaps

    - by HonorGod
    I have a List which is a straightforward representation of a database table. I am trying to sort it and apply some magic after the data is loaded into a List of HashMaps. In my case this is the only hard-and-fast way of doing it, because I have a rules engine that updates the values in the HashMaps after several computations. Here is a sample data representation of the HashMaps (a List of HashMap):
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=21, toDate=Tue Mar 23 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=456}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=20, toDate=Thu Apr 01 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 24 10:54:12 EDT 2010, eventId=22, toDate=Sat Mar 27 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Fri Mar 26 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 31 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Mon Mar 15 10:54:12 EDT 2010, eventId=12, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=567}
    I am trying to achieve a couple of things. 1) Sort the list by actionId and eventId, after which the data would look like this:
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=456}
    {fromDate=Mon Mar 15 10:54:12 EDT 2010, eventId=12, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=567}
    {fromDate=Wed Mar 24 10:54:12 EDT 2010, eventId=22, toDate=Sat Mar 27 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=21, toDate=Tue Mar 23 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=20, toDate=Thu Apr 01 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Fri Mar 26 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 31 10:54:12 EDT 2010, actionId=1234}
    2) If we group the above list by actionId, it resolves into 3 groups: actionId=1234, actionId=567 and actionId=456. Now here is my question: within each group, records having the same eventId need to be updated so that they carry the widest fromDate-to-toDate range. Meaning, if you consider the last two rows, they have the same actionId = 1234 and the same eventId = 11. We can pick the earliest fromDate from those 2 records, which is Wed Mar 17 10:54:12, and the latest toDate, which is Wed Mar 31 10:54:12, and update both records' fromDate and toDate to Wed Mar 17 10:54:12 and Wed Mar 31 10:54:12 respectively. Any ideas? PS: I already have some pseudo-code to start with. 
import java.util.ArrayList; import java.util.Calendar; import java.util.Collections; import java.util.Comparator; import java.util.Date; import java.util.HashMap; import java.util.List; import org.apache.commons.lang.builder.CompareToBuilder; public class Tester { boolean ascending = true ; boolean sortInstrumentIdAsc = true ; boolean sortEventTypeIdAsc = true ; public static void main(String args[]) { Tester tester = new Tester() ; tester.printValues() ; } public void printValues () { List<HashMap<String,Object>> list = new ArrayList<HashMap<String,Object>>() ; HashMap<String,Object> map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(21)) ; map.put("fromDate", getDate(1) ) ; map.put("toDate", getDate(7) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(456)) ; map.put("eventId", new Integer(11)) ; map.put("fromDate", getDate(1)) ; map.put("toDate", getDate(1) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(20)) ; map.put("fromDate", getDate(4) ) ; map.put("toDate", getDate(16) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(22)) ; map.put("fromDate",getDate(8) ) ; map.put("toDate", getDate(11)) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(11)) ; map.put("fromDate",getDate(1) ) ; map.put("toDate", getDate(10) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(1234)) ; map.put("eventId", new Integer(11)) ; map.put("fromDate",getDate(4) ) ; map.put("toDate", getDate(15) ) ; list.add(map); map = new HashMap<String,Object>(); map.put("actionId", new Integer(567)) ; map.put("eventId", new Integer(12)) ; map.put("fromDate", getDate(-1) ) ; map.put("toDate",getDate(1)) ; list.add(map); System.out.println("\n Before Sorting \n "); for(int j = 0 ; j < list.size() ; j ++ ) System.out.println(list.get(j)); Collections.sort ( list , new HashMapComparator2 () ) ; System.out.println("\n After Sorting \n "); for(int j = 0 ; j < list.size() ; j ++ ) System.out.println(list.get(j)); } public static Date getDate(int days) { Calendar cal = Calendar.getInstance(); cal.setTime(new Date()); cal.add(Calendar.DATE, days); return cal.getTime() ; } public class HashMapComparator2 implements Comparator { public int compare ( Object object1 , Object object2 ) { if ( ascending == true ) { return new CompareToBuilder() .append(( ( HashMap ) object1 ).get ( "actionId" ), ( ( HashMap ) object2 ).get ( "actionId" )) .append(( ( HashMap ) object2 ).get ( "eventId" ), ( ( HashMap ) object1 ).get ( "eventId" )) .toComparison(); } else { return new CompareToBuilder() .append(( ( HashMap ) object2 ).get ( "actionId" ), ( ( HashMap ) object1 ).get ( "actionId" )) .append(( ( HashMap ) object2 ).get ( "eventId" ), ( ( HashMap ) object1 ).get ( "eventId" )) .toComparison(); } } } }
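    For the second part (widening the date ranges), one way to finish the job after sorting is to group the rows by the actionId/eventId pair and then give every row in a group the widest date range found in that group. A sketch that reuses the same HashMap shape, assumes fromDate/toDate are java.util.Date values, and uses a hypothetical helper class name:

        import java.util.ArrayList;
        import java.util.Date;
        import java.util.HashMap;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        public class DateRangeWidener {
            // Gives every record sharing the same actionId and eventId the earliest
            // fromDate and the latest toDate found in its group.
            public static void widen(List<HashMap<String, Object>> list) {
                Map<String, List<HashMap<String, Object>>> groups =
                        new LinkedHashMap<String, List<HashMap<String, Object>>>();
                for (HashMap<String, Object> row : list) {
                    String key = row.get("actionId") + "|" + row.get("eventId");
                    List<HashMap<String, Object>> group = groups.get(key);
                    if (group == null) {
                        group = new ArrayList<HashMap<String, Object>>();
                        groups.put(key, group);
                    }
                    group.add(row);
                }
                for (List<HashMap<String, Object>> group : groups.values()) {
                    Date minFrom = null;
                    Date maxTo = null;
                    for (HashMap<String, Object> row : group) {
                        Date from = (Date) row.get("fromDate");
                        Date to = (Date) row.get("toDate");
                        if (minFrom == null || from.before(minFrom)) { minFrom = from; }
                        if (maxTo == null || to.after(maxTo)) { maxTo = to; }
                    }
                    // Apply the widest range to every record in the group.
                    for (HashMap<String, Object> row : group) {
                        row.put("fromDate", minFrom);
                        row.put("toDate", maxTo);
                    }
                }
            }
        }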

    Read the article

  • EKCalendar not added to iCal

    - by Alex75
    I have a strange behavior on my iPhone. I'm creating an application that uses calendar events (EventKit). The class that use is as follows: the .h one #import "GenericManager.h" #import <EventKit/EventKit.h> #define oneDay 60*60*24 #define oneHour 60*60 @protocol CalendarManagerDelegate; @interface CalendarManager : GenericManager /* * metodo che aggiunge un evento ad un calendario di nome Name nel giorno onDate. * L'evento da aggiungere viene recuperato tramite il dataSource che è quindi * OBBLIGATORIO (!= nil). * * Restituisce YES solo se il delegate è conforme al protocollo CalendarManagerDataSource. * NO altrimenti */ + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate; /* * metodo che aggiunge un evento per giorno compreso tra fromDate e toDate ad un * calendario di nome Name. L'evento da aggiungere viene recuperato tramite il dataSource * che è quindi OBBLIGATORIO (!= nil). * * Restituisce YES solo se il delegate è conforme al protocollo CalendarManagerDataSource. * NO altrimenti */ + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate; @end @protocol CalendarManagerDelegate <NSObject> // viene inviato quando il calendario necessita informazioni sull' evento da aggiungere - (void) calendarManagerDidCreateEvent:(EKEvent *) event; @end the .m one // // CalendarManager.m // AppCampeggioSingolo // // Created by CreatiWeb Srl on 12/17/12. // Copyright (c) 2012 CreatiWeb Srl. All rights reserved. // #import "CalendarManager.h" #import "Commons.h" #import <objc/message.h> @interface CalendarManager () @end @implementation CalendarManager + (void)requestToEventStore:(EKEventStore *)eventStore delegate:(id)delegate fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate name:(NSString *)name { if([eventStore respondsToSelector:@selector(requestAccessToEntityType:completion:)]) { // ios >= 6.0 [eventStore requestAccessToEntityType:EKEntityTypeEvent completion:^(BOOL granted, NSError *error) { if (granted) { [self addEventForCalendarWithName:name fromDate: fromDate toDate: toDate inEventStore:eventStore withDelegate:delegate]; } else { } }]; } else if (class_getClassMethod([EKCalendar class], @selector(calendarIdentifier)) != nil) { // ios >= 5.0 && ios < 6.0 [self addEventForCalendarWithName:name fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate]; } else { // ios < 5.0 EKCalendar *myCalendar = [eventStore defaultCalendarForNewEvents]; EKEvent *event = [self generateEventForCalendar:myCalendar fromDate: fromDate toDate: toDate inEventStore:eventStore withDelegate:delegate]; [eventStore saveEvent:event span:EKSpanThisEvent error:nil]; } } /* * metodo che recupera l'identificativo del calendario associato all'app o nil se non è mai stato creato. 
*/ + (NSString *) identifierForCalendarName: (NSString *) name { NSString * confFileName = [self pathForFile:kCurrentCalendarFileName]; NSDictionary *confCalendar = [NSDictionary dictionaryWithContentsOfFile:confFileName]; NSString *currentIdentifier = [confCalendar objectForKey:name]; return currentIdentifier; } /* * memorizza l'identifier del calendario */ + (void) saveCalendarIdentifier:(NSString *) identifier andName: (NSString *) name { if (identifier != nil) { NSString * confFileName = [self pathForFile:kCurrentCalendarFileName]; NSMutableDictionary *confCalendar = [NSMutableDictionary dictionaryWithContentsOfFile:confFileName]; if (confCalendar == nil) { confCalendar = [NSMutableDictionary dictionaryWithCapacity:1]; } [confCalendar setObject:identifier forKey:name]; [confCalendar writeToFile:confFileName atomically:YES]; } } + (EKCalendar *)getCalendarWithName:(NSString *)name inEventStore:(EKEventStore *)eventStore withLocalSource: (EKSource *)localSource forceCreation:(BOOL) force { EKCalendar *myCalendar; NSString *identifier = [self identifierForCalendarName:name]; if (force || identifier == nil) { NSLog(@"create new calendar"); if (class_getClassMethod([EKCalendar class], @selector(calendarForEntityType:eventStore:)) != nil) { // da ios 6.0 in avanti myCalendar = [EKCalendar calendarForEntityType:EKEntityTypeEvent eventStore:eventStore]; } else { myCalendar = [EKCalendar calendarWithEventStore:eventStore]; } myCalendar.title = name; myCalendar.source = localSource; NSError *error = nil; BOOL result = [eventStore saveCalendar:myCalendar commit:YES error:&error]; if (result) { NSLog(@"Saved calendar %@ to event store. %@",myCalendar,eventStore); } else { NSLog(@"Error saving calendar: %@.", error); } [self saveCalendarIdentifier:myCalendar.calendarIdentifier andName:name]; } // You can also configure properties like the calendar color etc. The important part is to store the identifier for later use. 
On the other hand if you already have the identifier, you can just fetch the calendar: else { myCalendar = [eventStore calendarWithIdentifier:identifier]; NSLog(@"fetch an old-one = %@",myCalendar); } return myCalendar; } + (EKCalendar *)addEventForCalendarWithName: (NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate inEventStore:(EKEventStore *)eventStore withDelegate: (id<CalendarManagerDelegate>) delegate { // da ios 5.0 in avanti EKCalendar *myCalendar; EKSource *localSource = nil; for (EKSource *source in eventStore.sources) { if (source.sourceType == EKSourceTypeLocal) { localSource = source; break; } } @synchronized(self) { myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:NO]; if (myCalendar == nil) myCalendar = [self getCalendarWithName:name inEventStore:eventStore withLocalSource:localSource forceCreation:YES]; NSLog(@"End synchronized block %@",myCalendar); } EKEvent *event = [self generateEventForCalendar:myCalendar fromDate:fromDate toDate:toDate inEventStore:eventStore withDelegate:delegate]; [eventStore saveEvent:event span:EKSpanThisEvent error:nil]; return myCalendar; } + (EKEvent *) generateEventForCalendar: (EKCalendar *) calendar fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate inEventStore:(EKEventStore *) eventStore withDelegate:(id<CalendarManagerDelegate>) delegate { EKEvent *event = [EKEvent eventWithEventStore:eventStore]; event.startDate=fromDate; event.endDate=toDate; [delegate calendarManagerDidCreateEvent:event]; [event setCalendar:calendar]; // ricerca dell'evento nel calendario, se ne trovo uno uguale non lo inserisco NSPredicate *predicate = [eventStore predicateForEventsWithStartDate:fromDate endDate:toDate calendars:[NSArray arrayWithObject:calendar]]; NSArray *matchEvents = [eventStore eventsMatchingPredicate:predicate]; if ([matchEvents count] > 0) { // ne ho trovati di gia' presenti, vediamo se uno e' quello che vogliamo inserire BOOL found = NO; for (EKEvent *fetchEvent in matchEvents) { if ([fetchEvent.title isEqualToString:event.title] && [fetchEvent.notes isEqualToString:event.notes]) { found = YES; break; } } if (found) { // esiste già e quindi non lo inserisco NSLog(@"OH NOOOOOO!!"); event = nil; } } return event; } #pragma mark - Public Methods + (BOOL) addEventForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate { BOOL retVal = YES; EKEventStore *eventStore=[[EKEventStore alloc] init]; if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)]) { [self requestToEventStore:eventStore delegate:delegate fromDate:fromDate toDate: toDate name:name]; } else { retVal = NO; } return retVal; } + (BOOL) addEventsForCalendarWithName:(NSString *) name fromDate:(NSDate *)fromDate toDate: (NSDate *) toDate withDelegate:(id<CalendarManagerDelegate>) delegate { BOOL retVal = YES; NSDate *dateCursor = fromDate; EKEventStore *eventStore=[[EKEventStore alloc] init]; if ([delegate conformsToProtocol:@protocol(CalendarManagerDelegate)]) { while (retVal && ([dateCursor compare:toDate] == NSOrderedAscending)) { NSDate *finish = [dateCursor dateByAddingTimeInterval:oneDay]; [self requestToEventStore:eventStore delegate:delegate fromDate: dateCursor toDate: finish name:name]; dateCursor = [dateCursor dateByAddingTimeInterval:oneDay]; } } else { retVal = NO; } return retVal; } @end In practice, on my iphone I get the log: fetch an old-one = (null) 19/12/2012 11:33:09.520 
AppCampeggioSingolo [730:8 b1b] create new calendar 19/12/2012 11:33:09.558 AppCampeggioSingolo [730:8 b1b] Saved calendar EKCalendar
every time I add an event, but when I then look in the iCal calendar I cannot find the event it added. On my friend's iPhone, however, everything works correctly. I doubt that the problem stems from the code, but I just do not understand what it could be. I searched all day yesterday and part of today on Google but have not found anything yet. Any help will be greatly appreciated.
EDIT: I forgot the call, which is [CalendarManager addEventForCalendarWithName: @"myCalendar" fromDate:fromDate toDate: toDate withDelegate:self]; In the delegate method I simply set the title and notes of the event, like this: - (void) calendarManagerDidCreateEvent:(EKEvent *) event { event.title = @"the title"; event.notes = @"some notes"; }

    Read the article

  • How do I get my dependenices inject using @Configurable in conjunction with readResolve()

    - by bmatthews68
    The framework I am developing for my application relies very heavily on dynamically generated domain objects. I recently started using Spring WebFlow and now need to be able to serialize my domain objects that will be kept in flow scope. I have done a bit of research and figured out that I can use writeReplace() and readResolve(). The only catch is that I need to look-up a factory in the Spring context. I tried to use @Configurable(preConstruction = true) in conjunction with the BeanFactoryAware marker interface. But beanFactory is always null when I try to use it in my createEntity() method. Neither the default constructor nor the setBeanFactory() injector are called. Has anybody tried this or something similar? I have included relevant class below. Thanks in advance, Brian /* * Copyright 2008 Brian Thomas Matthews Limited. * All rights reserved, worldwide. * * This software and all information contained herein is the property of * Brian Thomas Matthews Limited. Any dissemination, disclosure, use, or * reproduction of this material for any reason inconsistent with the * express purpose for which it has been disclosed is strictly forbidden. */ package com.btmatthews.dmf.domain.impl.cglib; import java.io.InvalidObjectException; import java.io.ObjectStreamException; import java.io.Serializable; import java.lang.reflect.InvocationTargetException; import java.util.HashMap; import java.util.Map; import org.apache.commons.beanutils.PropertyUtils; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.BeanFactory; import org.springframework.beans.factory.BeanFactoryAware; import org.springframework.beans.factory.annotation.Configurable; import org.springframework.util.StringUtils; import com.btmatthews.dmf.domain.IEntity; import com.btmatthews.dmf.domain.IEntityFactory; import com.btmatthews.dmf.domain.IEntityID; import com.btmatthews.dmf.spring.IEntityDefinitionBean; /** * This class represents the serialized form of a domain object implemented * using CGLib. The readResolve() method recreates the actual domain object * after it has been deserialized into Serializable. You must define * &lt;spring-configured/&gt; in the application context. * * @param <S> * The interface that defines the properties of the base domain * object. * @param <T> * The interface that defines the properties of the derived domain * object. * @author <a href="mailto:[email protected]">Brian Matthews</a> * @version 1.0 */ @Configurable(preConstruction = true) public final class SerializedCGLibEntity<S extends IEntity<S>, T extends S> implements Serializable, BeanFactoryAware { /** * Used for logging. */ private static final Logger LOG = LoggerFactory .getLogger(SerializedCGLibEntity.class); /** * The serialization version number. */ private static final long serialVersionUID = 3830830321957878319L; /** * The application context. Note this is not serialized. */ private transient BeanFactory beanFactory; /** * The domain object name. */ private String entityName; /** * The domain object identifier. */ private IEntityID<S> entityId; /** * The domain object version number. */ private long entityVersion; /** * The attributes of the domain object. */ private HashMap<?, ?> entityAttributes; /** * The default constructor. */ public SerializedCGLibEntity() { SerializedCGLibEntity.LOG .debug("Initializing with default constructor"); } /** * Initialise with the attributes to be serialised. * * @param name * The entity name. * @param id * The domain object identifier. 
* @param version * The entity version. * @param attributes * The entity attributes. */ public SerializedCGLibEntity(final String name, final IEntityID<S> id, final long version, final HashMap<?, ?> attributes) { SerializedCGLibEntity.LOG .debug("Initializing with parameterized constructor"); this.entityName = name; this.entityId = id; this.entityVersion = version; this.entityAttributes = attributes; } /** * Inject the bean factory. * * @param factory * The bean factory. */ public void setBeanFactory(final BeanFactory factory) { SerializedCGLibEntity.LOG.debug("Injected bean factory"); this.beanFactory = factory; } /** * Called after deserialisation. The corresponding entity factory is * retrieved from the bean application context and BeanUtils methods are * used to initialise the object. * * @return The initialised domain object. * @throws ObjectStreamException * If there was a problem creating or initialising the domain * object. */ public Object readResolve() throws ObjectStreamException { SerializedCGLibEntity.LOG.debug("Transforming deserialized object"); final T entity = this.createEntity(); entity.setId(this.entityId); try { PropertyUtils.setSimpleProperty(entity, "version", this.entityVersion); for (Map.Entry<?, ?> entry : this.entityAttributes.entrySet()) { PropertyUtils.setSimpleProperty(entity, entry.getKey() .toString(), entry.getValue()); } } catch (IllegalAccessException e) { throw new InvalidObjectException(e.getMessage()); } catch (InvocationTargetException e) { throw new InvalidObjectException(e.getMessage()); } catch (NoSuchMethodException e) { throw new InvalidObjectException(e.getMessage()); } return entity; } /** * Lookup the entity factory in the application context and create an * instance of the entity. The entity factory is located by getting the * entity definition bean and using the factory registered with it or * getting the entity factory. The name used for the definition bean lookup * is ${entityName}Definition while ${entityName} is used for the factory * lookup. * * @return The domain object instance. * @throws ObjectStreamException * If the entity definition bean or entity factory were not * available. */ @SuppressWarnings("unchecked") private T createEntity() throws ObjectStreamException { SerializedCGLibEntity.LOG.debug("Getting domain object factory"); // Try to use the entity definition bean final IEntityDefinitionBean<S, T> entityDefinition = (IEntityDefinitionBean<S, T>)this.beanFactory .getBean(StringUtils.uncapitalize(this.entityName) + "Definition", IEntityDefinitionBean.class); if (entityDefinition != null) { final IEntityFactory<S, T> entityFactory = entityDefinition .getFactory(); if (entityFactory != null) { SerializedCGLibEntity.LOG .debug("Domain object factory obtained via enity definition bean"); return entityFactory.create(); } } // Try to use the entity factory final IEntityFactory<S, T> entityFactory = (IEntityFactory<S, T>)this.beanFactory .getBean(StringUtils.uncapitalize(this.entityName) + "Factory", IEntityFactory.class); if (entityFactory != null) { SerializedCGLibEntity.LOG .debug("Domain object factory obtained via direct look-up"); return entityFactory.create(); } // Neither worked! SerializedCGLibEntity.LOG.warn("Cannot find domain object factory"); throw new InvalidObjectException( "No entity definition or factory found for " + this.entityName); } }
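    Worth noting: @Configurable only takes effect when the AnnotationBeanConfigurerAspect is actually applied, i.e. with <context:spring-configured/> plus compile-time or load-time AspectJ weaving; without weaving, the annotation does nothing, and since the object is created by deserialization rather than by the container, setBeanFactory() is never called either, which matches the symptoms above. If weaving cannot be enabled, a common fallback is to capture the ApplicationContext in a plain bean and look the factory up from readResolve(). A sketch, with SpringContextHolder as a hypothetical helper declared as an ordinary bean:

        import org.springframework.beans.BeansException;
        import org.springframework.beans.factory.BeanFactory;
        import org.springframework.context.ApplicationContext;
        import org.springframework.context.ApplicationContextAware;

        // Hypothetical helper: declared as a normal bean, it captures the
        // ApplicationContext so that objects created outside the container
        // (e.g. during deserialization) can still look up the entity factory.
        public class SpringContextHolder implements ApplicationContextAware {

            private static volatile ApplicationContext context;

            public void setApplicationContext(ApplicationContext applicationContext)
                    throws BeansException {
                context = applicationContext;
            }

            // ApplicationContext extends BeanFactory, so this can back createEntity()
            // when the field injected via BeanFactoryAware is still null.
            public static BeanFactory getBeanFactory() {
                return context;
            }
        }

    Inside createEntity(), the lookup would then fall back to SpringContextHolder.getBeanFactory() whenever this.beanFactory is null.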

    Read the article

  • Spring, JPA, Hibernate, Jetty 7 Integration

    - by Jewel
    Have anyone successfully run any spring and JPA application in jetty 7? I am getting following exception. This application throws no error in jetty 6. INFO [main] org.eclipse.jetty.util.log - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.eclipse.jetty.util.log) via org.eclipse.jetty.util.log.Slf4jLog INFO [main] org.eclipse.jetty.util.log - jetty-7.1.2.v20100523 INFO [main] org.eclipse.jetty.util.log - Deployment monitor G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\contexts at interval 5 INFO [main] org.eclipse.jetty.util.log - Deployment monitor G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps at interval 5 INFO [main] org.eclipse.jetty.util.log - Deployable added: G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps\gwtrpc-spring.war INFO [main] org.eclipse.jetty.util.log - Copying WEB-INF/lib jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/WEB-INF/lib/ to C:\Documents and Settings\Jewel2\Local Settings\Temp\Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj\webinf\WEB-INF\lib INFO [main] org.eclipse.jetty.util.log - Copying WEB-INF/classes from jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/WEB-INF/classes/ to C:\Documents and Settings\Jewel2\Local Settings\Temp\Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj\webinf\WEB-INF\classes INFO [main] /gwtrpc-spring - Initializing Spring root WebApplicationContext INFO [main] org.springframework.web.context.ContextLoader - Root WebApplicationContext: initialization started INFO [main] org.springframework.web.context.support.XmlWebApplicationContext - Refreshing Root WebApplicationContext: startup date [Thu Jun 10 00:13:32 GMT+06:00 2010]; root of context hierarchy INFO [main] org.springframework.beans.factory.xml.XmlBeanDefinitionReader - Loading XML bean definitions from ServletContext resource [/WEB-INF/applicationContext.xml] INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@467991: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,greetingServiceImpl,testService,testService2,taskExecutor,dataSource,entityManagerFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor#0]; root of factory hierarchy INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'taskExecutor' INFO [main] org.springframework.jdbc.datasource.DriverManagerDataSource - Loaded JDBC driver: com.mysql.jdbc.Driver INFO [main] org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Building JPA container EntityManagerFactory for persistence unit 'gwtrpc-spring-data-source' INFO [main] org.hibernate.cfg.annotations.Version - Hibernate Annotations 3.4.0.GA INFO [main] org.hibernate.cfg.Environment - 
Hibernate 3.3.1.GA INFO [main] org.hibernate.cfg.Environment - hibernate.properties not found INFO [main] org.hibernate.cfg.Environment - Bytecode provider name : javassist INFO [main] org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling INFO [main] org.hibernate.annotations.common.Version - Hibernate Commons Annotations 3.1.0.GA INFO [main] org.hibernate.ejb.Version - Hibernate EntityManager 3.4.0.GA INFO [main] org.hibernate.ejb.Ejb3Configuration - Processing PersistenceUnitInfo [ name: gwtrpc-spring-data-source ...] INFO [main] org.springframework.beans.factory.support.DefaultListableBeanFactory - Destroying singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@467991: defining beans [org.springframework.context.annotation.internalConfigurationAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalPersistenceAnnotationProcessor,greetingServiceImpl,testService,testService2,taskExecutor,dataSource,entityManagerFactory,org.springframework.aop.config.internalAutoProxyCreator,org.springframework.transaction.annotation.AnnotationTransactionAttributeSource#0,org.springframework.transaction.interceptor.TransactionInterceptor#0,org.springframework.transaction.config.internalTransactionAdvisor,transactionManager,org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor#0]; root of factory hierarchy INFO [main] org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Shutting down ExecutorService 'taskExecutor' ERROR [main] org.springframework.web.context.ContextLoader - Context initialization failed org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1403) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:545) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at 
org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:636) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:188) at org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:995) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36) at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182) at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497) at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135) at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:61) at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:436) at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:349) at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306) at org.eclipse.jetty.util.Scanner.start(Scanner.java:242) at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:136) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:562) at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:212) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.Server.doStart(Server.java:209) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1018) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:983) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.jetty.start.Main.invokeMain(Main.java:447) at org.eclipse.jetty.start.Main.start(Main.java:605) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:238) at org.eclipse.jetty.start.Main.main(Main.java:77) Caused by: javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1093) at org.hibernate.ejb.Ejb3Configuration.addClassesToSessionFactory(Ejb3Configuration.java:871) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:758) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:425) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:131) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:225) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:308) at 
org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1460) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) ... 45 more Caused by: java.lang.ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at org.hibernate.util.ReflectHelper.classForName(ReflectHelper.java:135) at org.hibernate.ejb.Ejb3Configuration.classForName(Ejb3Configuration.java:1009) at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1081) ... 53 more WARN [main] org.eclipse.jetty.util.log - Failed startup of context WebAppContext@149f041@149f041/gwtrpc-spring,file:/C:/Documents and Settings/Jewel2/Local Settings/Temp/Jetty_0_0_0_0_8080_gwtrpc.spring.war__gwtrpc.spring__az1wdj/webinf/;jar:file:/G:/_Java/_Jetty/jetty-distribution-7.1.2.v20100523/webapps/gwtrpc-spring.war!/;,G:\_Java\_Jetty\jetty-distribution-7.1.2.v20100523\webapps\gwtrpc-spring.war org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in ServletContext resource [/WEB-INF/applicationContext.xml]: Invocation of init method failed; nested exception is javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1403) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:513) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:450) at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:290) at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:287) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:189) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:545) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:871) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:423) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:196) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:47) at org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:636) at org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:188) at 
org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:995) at org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:579) at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:381) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:36) at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:182) at org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:497) at org.eclipse.jetty.deploy.DeploymentManager.addApp(DeploymentManager.java:135) at org.eclipse.jetty.deploy.providers.ScanningAppProvider$1.fileAdded(ScanningAppProvider.java:61) at org.eclipse.jetty.util.Scanner.reportAddition(Scanner.java:436) at org.eclipse.jetty.util.Scanner.reportDifferences(Scanner.java:349) at org.eclipse.jetty.util.Scanner.scan(Scanner.java:306) at org.eclipse.jetty.util.Scanner.start(Scanner.java:242) at org.eclipse.jetty.deploy.providers.ScanningAppProvider.doStart(ScanningAppProvider.java:136) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.deploy.DeploymentManager.startAppProvider(DeploymentManager.java:562) at org.eclipse.jetty.deploy.DeploymentManager.doStart(DeploymentManager.java:212) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.server.Server.doStart(Server.java:209) at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:55) at org.eclipse.jetty.xml.XmlConfiguration$1.run(XmlConfiguration.java:1018) at java.security.AccessController.doPrivileged(Native Method) at org.eclipse.jetty.xml.XmlConfiguration.main(XmlConfiguration.java:983) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.eclipse.jetty.start.Main.invokeMain(Main.java:447) at org.eclipse.jetty.start.Main.start(Main.java:605) at org.eclipse.jetty.start.Main.parseCommandLine(Main.java:238) at org.eclipse.jetty.start.Main.main(Main.java:77) Caused by: javax.persistence.PersistenceException: [PersistenceUnit: gwtrpc-spring-data-source] class or package not found at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1093) at org.hibernate.ejb.Ejb3Configuration.addClassesToSessionFactory(Ejb3Configuration.java:871) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:758) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:425) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:131) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:225) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:308) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1460) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1400) ... 
45 more Caused by: java.lang.ClassNotFoundException: WEB-INF.classes.org.gwtrpcspring.example.server.Person at java.net.URLClassLoader$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClass(Unknown Source) at java.lang.ClassLoader.loadClassInternal(Unknown Source) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Unknown Source) at org.hibernate.util.ReflectHelper.classForName(ReflectHelper.java:135) at org.hibernate.ejb.Ejb3Configuration.classForName(Ejb3Configuration.java:1009) at org.hibernate.ejb.Ejb3Configuration.addNamedAnnotatedClasses(Ejb3Configuration.java:1081) ... 53 more INFO [main] org.eclipse.jetty.util.log - Started [email protected]:8080 And this is my applicationContext.xml file <?xml version="1.0" encoding="UTF-8"?> <beans> <context:annotation-config /> <context:component-scan base-package="org.gwtrpcspring.example.server" /> <bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor"> <property name="corePoolSize" value="5" /> <property name="maxPoolSize" value="10" /> <property name="queueCapacity" value="25" /> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName" value="com.mysql.jdbc.Driver" /> <property name="url" value="jdbc:mysql://localhost:3306/test" /> <property name="username" value="jewel" /> <property name="password" value="jewel" /> </bean> <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="databasePlatform" value="org.hibernate.dialect.MySQLDialect" /> <!-- <property name="databasePlatform" value="org.hibernate.dialect.HSQLDialect" /> --> <property name="showSql" value="false" /> <property name="generateDdl" value="true" /> </bean> </property> </bean> <tx:annotation-driven /> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> </beans>
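    The ClassNotFoundException names a class literally called WEB-INF.classes.org.gwtrpcspring.example.server.Person, and it is thrown from Ejb3Configuration.addNamedAnnotatedClasses, which resolves the classes listed for the gwtrpc-spring-data-source persistence unit. That strongly suggests the unit's persistence.xml (not shown here, so this is a guess at its contents) lists the entity by its path under WEB-INF/classes instead of by its fully qualified class name. A minimal sketch of the corrected entry, assuming the file lives under META-INF, would be:

        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
            <persistence-unit name="gwtrpc-spring-data-source">
                <!-- Reference entities by fully qualified class name,
                     not by their location under WEB-INF/classes -->
                <class>org.gwtrpcspring.example.server.Person</class>
            </persistence-unit>
        </persistence>

    The same applies if the classes are registered anywhere else by name: only the package-qualified name should appear, never a WEB-INF/classes prefix.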


  • Spring rejecting bean name, no URL paths specified

    - by richever
    I am trying to register an interceptor using an annotation-driven controller configuration. As far as I can tell, I've done everything correctly, but when I test the interceptor nothing happens. After looking in the logs I found the following: 2010-04-04 20:06:18,231 DEBUG [main] support.AbstractAutowireCapableBeanFactory (AbstractAutowireCapableBeanFactory.java:452) - Finished creating instance of bean 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0' 2010-04-04 20:06:18,515 DEBUG [main] handler.AbstractDetectingUrlHandlerMapping (AbstractDetectingUrlHandlerMapping.java:86) - Rejected bean name 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0': no URL paths identified 2010-04-04 20:06:19,109 DEBUG [main] support.AbstractBeanFactory (AbstractBeanFactory.java:241) - Returning cached instance of singleton bean 'org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping#0' Look at the second line of this log snippet. Is Spring rejecting the DefaultAnnotationHandlerMapping bean? And if so, could this be why my interceptor isn't working? Here is my application context: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:p="http://www.springframework.org/schema/p" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd" default-autowire="byName"> <!-- Configures the @Controller programming model --> <mvc:annotation-driven /> <!-- Scan for annotations... --> <context:component-scan base-package=" com.splash.web.controller, com.splash.web.service, com.splash.web.authentication"/> <bean id="authorizedUserInterceptor" class="com.splash.web.handler.AuthorizedUserInterceptor"/> <bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"> <property name="interceptors"> <list> <ref bean="authorizedUserInterceptor"/> </list> </property> </bean> Here is my interceptor: package com.splash.web.handler; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import org.apache.log4j.Logger; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.web.servlet.handler.HandlerInterceptorAdapter; public class AuthorizedUserInterceptor extends HandlerInterceptorAdapter { @Override public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception { log.debug(">>> Operation intercepted..."); return true; } } Does anyone see anything wrong with this? What does the error I mentioned above actually mean, and could it have any bearing on the interceptor not being called? Thanks!
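    A note on that log line: "Rejected bean name ... no URL paths identified" is routine DEBUG output — the detecting handler mapping inspects every bean for URL paths and skips the ones that have none — so it is probably not the real fault. A more likely cause is that <mvc:annotation-driven /> registers its own DefaultAnnotationHandlerMapping, which is consulted ahead of the hand-declared one that carries the interceptor list, so the interceptor never runs. A sketch of one way around this in Spring 3.0, keeping the rest of the context as above, is to drop the extra handler-mapping bean and register the interceptor through the MVC namespace instead:

        <mvc:annotation-driven />

        <mvc:interceptors>
            <bean class="com.splash.web.handler.AuthorizedUserInterceptor"/>
        </mvc:interceptors>

    Registered this way, the interceptor applies to every request handled by the annotation-driven mapping; restricting it to particular paths would need the nested <mvc:interceptor> form.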


  • InteropServices COMException when executing a .net app from a web CGI script on Windows Server 2003

    - by Kurt W. Leucht
    Disclaimer: I'm completely clueless about .net and COM. I have a vendor's application that appears to be written in .net and I'm trying to wrap it with a web form (a cgi-bin Perl script) so I can eventually launch this vendor's app from a separate computer. I'm on a Windows Server 2003 R2 SE SP1 system and I'm using Apache 2.2 for the web server and ActivePerl 5.10.0.1004 for the cgi script. My cgi script calls the vendor's app that resides on the same machine using the Perl backtick operator. ... $result = "Result: " . `$vendorsPath/$vendorsExecutable $arg1 $arg2`; ... Right now I'm just running the IE web browser locally on the server machine and accessing "http://localhost/cgi-bin/myPerlScript.pl". The vendor's app fails and logs a debug message that includes the following stack trace (I changed a couple names so as to not give away the vendor's identity): ... System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.Runtime.InteropServices.COMException (0x80043A1D): 0x80040154 - Class not registered --- End of inner exception stack trace --- at System.RuntimeType.InvokeDispMethod(String name, BindingFlags invokeAttr, Object target, Object[] args, Boolean[] byrefModifiers, Int32 culture, String[] namedParameters) at System.RuntimeType.InvokeMember(String name, BindingFlags invokeAttr, Binder binder, Object target, Object[] args, ParameterModifier[] modifiers, CultureInfo culture, String[] namedParameters) at VendorsTool.Engine.Core.VendorsEngine.LoadVendorsServices(String fileName, String& projectCommPath) ... When I run the vendor's app from the Windows command line on the server machine with the exact same arguments that the cgi script is passing, it runs just fine, so there's something about invoking their app via the web script that is causing a problem. This problem is likely security related because the whole thing runs just fine on a Windows XP Pro machine (both command line and web invocation). I actually developed my web script there and got it completely working there before I tried moving it to the Windows Server 2003 machine. So what's different about the Windows Server 2003 machine that would keep the vendor's .net app from being executed successfully by a web cgi script? Can I fix this problem somehow to make it work on my server, or will the vendor have to make a change to their .net app and ship out a new version? I'm probably the only person in the world who is trying to execute this vendor's app from a separate program, so I hate to bother the vendor with the issue if there's a workaround that I can implement myself here on my server machine. Plus, I'm in kind of a hurry and I don't want to wait 4 or 6 months for the vendor to put in a fix and deploy a new version. Thanks for any advice you can give.
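    One thing that often differs between a command-line run and a CGI launch on Server 2003 is the account and environment the process inherits from Apache (typically a service account rather than the interactive user), and 0x80040154 "Class not registered" frequently comes down to a COM dependency that is registered per-user or otherwise not visible in that context. A small throwaway diagnostic along these lines — a sketch, assuming ActivePerl's bundled Win32 module — can confirm which identity and environment the vendor's app is actually being launched with:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Win32;    # ships with ActivePerl

        print "Content-type: text/plain\n\n";
        # The account Apache runs CGI children under; compare it with the
        # account that works from the command line.
        print "Running as: ", Win32::LoginName(), "\n";
        # The environment the vendor's app will inherit from the CGI.
        print "$_=$ENV{$_}\n" for sort keys %ENV;

    If the account differs from the one that works interactively, re-registering the vendor's COM components machine-wide or running the Apache service under the working account are cheaper things to try before asking the vendor for a code change.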


  • Injecting jms resource in servlet & best practice for MDB

    - by kislo_metal
    Using EJB 3.1, Servlet 3.0 (GlassFish v3). Scenario: I have an MDB that listens for JMS messages and hands the processing off to another (stateless) session bean. A servlet injects the JMS resources. Question 1: Why can't the servlet inject JMS resources when they are declared static? @Resource(mappedName = "jms/Tarturus") private static ConnectionFactory connectionFactory; @Resource(mappedName = "jms/StyxMDB") private static Queue queue; private Connection connection; and @PostConstruct public void postConstruct() { try { connection = connectionFactory.createConnection(); } catch (JMSException e) { e.printStackTrace(); } } @PreDestroy public void preDestroy() { try { connection.close(); } catch (JMSException e) { e.printStackTrace(); } } The error that I get is: [#|2010-05-03T15:18:17.118+0300|WARNING|glassfish3.0|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=35;_ThreadName=Thread-1;|StandardWrapperValve[WorkerServlet]: PWC1382: Allocate exception for servlet WorkerServlet com.sun.enterprise.container.common.spi.util.InjectionException: Error creating managed object for class ua.co.rufous.server.services.WorkerServiceImpl at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:312) at com.sun.enterprise.web.WebContainer.createServletInstance(WebContainer.java:709) at com.sun.enterprise.web.WebModule.createServletInstance(WebModule.java:1937) at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1252) Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Exception attempting to inject Unresolved Message-Destination-Ref ua.co.rufous.server.services.WorkerServiceImpl/[email protected]@null into class ua.co.rufous.server.services.WorkerServiceImpl at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:614) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.inject(InjectionManagerImpl.java:384) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectInstance(InjectionManagerImpl.java:141) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.injectInstance(InjectionManagerImpl.java:127) at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl.createManagedObject(InjectionManagerImpl.java:306) ... 27 more Caused by: com.sun.enterprise.container.common.spi.util.InjectionException: Illegal use of static field private static javax.jms.Queue ua.co.rufous.server.services.WorkerServiceImpl.queue on class that only supports instance-based injection at com.sun.enterprise.container.common.impl.util.InjectionManagerImpl._inject(InjectionManagerImpl.java:532) ... 
    31 more |#] My MDB: /** * asadmin commands * asadmin create-jms-resource --restype javax.jms.ConnectionFactory jms/Tarturus * asadmin create-jms-resource --restype javax.jms.Queue jms/StyxMDB * asadmin list-jms-resources */ @MessageDriven(mappedName = "jms/StyxMDB", activationConfig = { @ActivationConfigProperty(propertyName = "connectionFactoryJndiName", propertyValue = "jms/Tarturus"), @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"), @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue") }) public class StyxMDB implements MessageListener { @EJB private ActivationProcessingLocal aProcessing; public StyxMDB() { } public void onMessage(Message message) { try { TextMessage msg = (TextMessage) message; String hash = msg.getText(); GluttonyLogger.getInstance().writeInfoLog("geted jms message hash = " + hash); } catch (JMSException e) { } } } Everything works fine without the static declaration: @Resource(mappedName = "jms/Tarturus") private ConnectionFactory connectionFactory; @Resource(mappedName = "jms/StyxMDB") private Queue queue; private Connection connection; Question 2: What is the best practice for working with an MDB: process the full request in onMessage(), or have onMessage() call another bean (a stateless bean in my case) to do the processing? The processing includes a few calls to SOAP services, so the full processing time could be around 3 seconds. Thank you.
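    The last "Caused by" in the GlassFish log states the rule directly: static fields are rejected because the container "only supports instance-based injection", so for Question 1 the fix is simply to make the @Resource fields instance fields, as in the working non-static variant. A sketch of the injection target with the same fields and lifecycle callbacks (the class name and URL mapping here are assumptions; the trace points at WorkerServiceImpl, so the same change applies wherever the static fields actually live) would be:

        import javax.annotation.PostConstruct;
        import javax.annotation.PreDestroy;
        import javax.annotation.Resource;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.JMSException;
        import javax.jms.Queue;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;

        @WebServlet("/worker")   // hypothetical mapping
        public class WorkerServlet extends HttpServlet {

            // Instance fields: the container injects per servlet instance;
            // static fields trigger "only supports instance-based injection".
            @Resource(mappedName = "jms/Tarturus")
            private ConnectionFactory connectionFactory;

            @Resource(mappedName = "jms/StyxMDB")
            private Queue queue;

            private Connection connection;

            @PostConstruct
            public void postConstruct() {
                try {
                    connection = connectionFactory.createConnection();
                } catch (JMSException e) {
                    throw new IllegalStateException("Could not open JMS connection", e);
                }
            }

            @PreDestroy
            public void preDestroy() {
                try {
                    if (connection != null) {
                        connection.close();
                    }
                } catch (JMSException e) {
                    // nothing useful to do while shutting down
                }
            }
        }

    For Question 2, keeping onMessage() thin and delegating to the injected stateless bean (the @EJB ActivationProcessingLocal above) is generally the more maintainable pattern: a few SOAP calls taking around three seconds only tie up one MDB instance from the pool, and a runtime exception from the delegate still rolls the delivery back for redelivery under container-managed transactions.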


  • Maven Assembly Plugin - install the created assembly

    - by Walter White
    I have a project that simply consists of files. I want to package those files into a zip and store them in a Maven repository. I have the assembly plugin configured to build the zip file, and that part works just fine, but I cannot figure out how to install the zip file. Also, if I want to use this assembly in another artifact, how would I do that? I intend to call dependency:unpack, but I don't have an artifact in the repository to unpack. How can I get the zip file into my repository so that I can re-use it in another artifact? Parent POM: <build> <plugins> <plugin> <!--<groupId>org.apache.maven.plugins</groupId>--> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <configuration> <filters> <filter></filter> </filters> <descriptors> <descriptor>../packaging.xml</descriptor> </descriptors> </configuration> </plugin> </plugins> </build> Child POM: <parent> <groupId>com. ... .virtualHost</groupId> <artifactId>pom</artifactId> <version>0.0.1</version> <relativePath>../pom.xml</relativePath> </parent> <name>Virtual Host - ***</name> <groupId>com. ... .virtualHost</groupId> <artifactId>***</artifactId> <version>0.0.1</version> <packaging>pom</packaging> I filtered the name out. Is this POM correct? I just want to bundle the files for a particular virtual host together. Thanks, Walter
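    One way to read this: the assembly only lands in the local repository if it is produced as part of the normal build, so a sketch (assuming the parent configuration above, and that packaging.xml declares an assembly <id>) is to bind the plugin's single goal to the package phase; the zip is then attached to the project as a secondary artifact and installed or deployed together with the POM by mvn install / mvn deploy. The goal is named single in the 2.2 line; some older betas used attached for the same purpose.

        <build>
          <plugins>
            <plugin>
              <artifactId>maven-assembly-plugin</artifactId>
              <version>2.2-beta-5</version>
              <configuration>
                <descriptors>
                  <descriptor>../packaging.xml</descriptor>
                </descriptors>
              </configuration>
              <executions>
                <execution>
                  <id>make-zip</id>
                  <phase>package</phase>
                  <goals>
                    <!-- single attaches the resulting zip to the build by default -->
                    <goal>single</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>

    The other build can then reference the zip either as a regular dependency with <type>zip</type> plus the assembly id as classifier (for dependency:unpack-dependencies), or with the same coordinates in an <artifactItem> for dependency:unpack:

        <dependency>
          <!-- groupId/artifactId left elided exactly as in the question -->
          <groupId>com. ... .virtualHost</groupId>
          <artifactId>***</artifactId>
          <version>0.0.1</version>
          <type>zip</type>
          <!-- placeholder: the <id> declared in packaging.xml becomes the classifier -->
          <classifier>YOUR-ASSEMBLY-ID</classifier>
        </dependency>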

