Search Results

Search found 2047 results on 82 pages for 'dude man'.


  • How to route 1 VPN through another on OS X?

    - by Eeep
    Hi everyone. Thanks a lot for your help! I've been tinkering with this for a while and have read many posts along with Googling for help, but my knowledge of TCP/IP is really weak. I have access to two different VPN servers: #1 is set up in Network Settings and connects through PPP; #2 is set up through Tunnelblick and uses OpenVPN. I can connect to either tunnel #1 or tunnel #2, but I can't route one through the other. One of my major to-dos this year is to study TCP/IP, but for now, would you be super-helpful and explain how to fix this really clearly? I have no experience with routing, DNS, gateways or any of that. If you tell me "Set your gateway to XXX.XXX.XX.XXX", can you specify how I get that IP, and off of what interface, so I don't get messed up? I can figure out the terminal just fine if you let me know what to type, and I WILL read the man pages on everything you help me with. Thanks a million!
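
    No answer is attached to this excerpt, but the values answers usually ask for (gateways and interface addresses) can be read with a few standard OS X commands. A sketch only; the interface names ppp0 and tun0 are typical for a PPP VPN and an OpenVPN/Tunnelblick tunnel, but they vary by setup:

        netstat -nr            # print the routing table; the Gateway column shows each route's next hop
        route -n get default   # show the current default route and which interface serves it
        ifconfig ppp0          # address of the PPP VPN interface (name may differ on your machine)
        ifconfig tun0          # address of the OpenVPN tunnel (tunX or utunX depending on the driver)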

    Read the article

  • Apache mod_disk_cache on a separate drive

    - by pavs
    How can I set Apache mod_disk_cache on a separate drive from the one where the OS/Apache is installed? I have set this up in my apache2.conf:

        <IfModule mod_cache_disk.c>
                # Cache cleaning is done by htcacheclean, which can be configured in
                # /etc/default/apache2.
                # For further information, see the comments in that file,
                # /usr/share/doc/apache2/README.Debian, and the htcacheclean(8) man page.

                # This path must be the same as the one in /etc/default/apache2
                CacheRoot /media/cacheHD

                # This will also cache local documents. It usually makes more sense to
                # put this into the configuration for just one virtual host.
                CacheEnable disk /

                # The result of CacheDirLevels * CacheDirLength must not be higher than
                # 20. Moreover, pay attention to file system limits. Some file systems
                # do not support more than a certain number of inodes and
                # subdirectories (e.g. 32000 for ext3).
                CacheDirLevels 2
                CacheDirLength 1
        </IfModule>

    and it doesn't seem to be caching anything at all. The drive itself is a freshly installed SSD formatted with ext4.
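
    A hedged checklist for this kind of "nothing gets cached" situation on Debian/Ubuntu; the module, user and service names below are the usual defaults and may differ on your install:

        sudo a2enmod cache cache_disk                    # mod_cache_disk only works with mod_cache also enabled
        sudo chown -R www-data:www-data /media/cacheHD   # Apache's worker user must be able to write to CacheRoot
        sudo service apache2 restart
        # request a cacheable URL twice, then see whether entries appear under the cache root
        curl -s http://localhost/ > /dev/null
        curl -s http://localhost/ > /dev/null
        ls -R /media/cacheHD | head

    Note also that mod_cache only stores responses it considers cacheable (suitable Cache-Control/Expires headers and so on), so an empty cache directory is not by itself proof of a broken CacheRoot.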

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which covers the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary content, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example from a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitations

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    And then in the backend controller we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if we select a large file, say a 6GB HD movie, then after uploading for a few minutes the request fails (the original post shows the error page) and the upload is terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file, and it is a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer terminates the connection if it exceeds the timeout period; from my tests the timeout looks like 2 to 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the chunk size is controlled by us, which means we can make sure each chunk-upload request does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until HTML5 arrived. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) reads the part of the file from the 512th byte to the 768th byte and returns a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable raw data. The File interface is based on Blob, inheriting blob functionality and extending it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload.
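
    Before diving into the client code, a note on where the server-side caps mentioned above live. A hedged sketch of the standard ASP.NET/IIS settings involved; the element and attribute names are the standard ones, but the values here are only illustrative:

        <!-- Sketch only: standard ASP.NET/IIS request-size settings; values are illustrative. -->
        <system.web>
          <!-- maxRequestLength is in kilobytes (default 4096, i.e. 4MB) -->
          <httpRuntime maxRequestLength="1048576" />
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <!-- maxAllowedContentLength is in bytes (IIS 7+, default 30,000,000) -->
              <requestLimits maxAllowedContentLength="1073741824" />
            </requestFiltering>
          </security>
        </system.webServer>

    Raising these limits only moves the ceiling; it does nothing about the load balancer timeout, which is why the chunked approach below is still the robust one.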
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file in 512KB chunks, we could read it in 512KB blobs by calling File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box specifies the block size in KB, and a button starts the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData on the "window" object. If any of them is "undefined", the condition result is "false", which means your browser doesn't support these features and it's time to get your browser updated. FormData is another new feature we are going to use later: it generates a temporary form for us, and we will use it to build a form carrying the chunk and its associated metadata when invoking the service through AJAX.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index of each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            // ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunk information ready. The next step is to upload the chunks one by one to the server side; on the server side each received chunk will be uploaded as a block into Blob Storage, and finally they will be committed with the index list through BlockBlob.PutBlockList. But all of these invocations are AJAX calls, which means they are asynchronous, so we need to introduce a JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library in depth in this post. We put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, letting it control the execution sequence for us, in series or in parallel. Hence we define an array and push the chunk-upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            // ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                // ... ...

                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see, I use the File.slice method to read each chunk based on the start and end byte index we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            // ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                // ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all of them have executed successfully we invoke another AJAX call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            // ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                // ... ...
                // define the function array and push all chunk upload operations into it
                // ... ...
                // invoke the functions one by one,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:
    - Calculate the start and end byte index of each chunk based on the block size.
    - Define the functions that read a chunk from the file and upload its content to the backend service through AJAX.
    - Execute the functions defined in the previous step with async.js.
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client-side JavaScript code, which uploads the file in chunks to the backend service we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since the client side uploads chunks by invoking an AJAX call to the URL "/Home/UploadInFormData", I created a new action under the Home controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieve the file name, index and chunk content from the Request.Form object, which were passed from our client side.
    Then I use the Windows Azure SDK to create a blob container (in this case a container named "test") and create a blob reference with the blob name (the same as the file name). The chunk is uploaded as a block of this blob with its index, since in Blob Storage each block must have an index (ID) associated with it, so that at the end we can put all blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieve the blob name from Request.Form. I also retrieve the chunk ID list, which is the block ID list, from Request.Form as a string, split it into a list, and then invoke the BlockBlob.PutBlockList method. After that our blob shows up in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need; the whole uploading process is illustrated by a flow diagram in the original post. Below is the full client-side JavaScript code.
        <script type="text/javascript" src="~/Scripts/async.js"></script>
        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                    // assert the browser supports html5
                    if (window.File && window.Blob && window.FormData) {
                        alert("Your browser is awesome, let's rock!");
                    }
                    else {
                        alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                        return;
                    }

                    // start to upload each file in chunks
                    var files = $("#upload_files")[0].files;
                    for (var i = 0; i < files.length; i++) {
                        var file = files[i];
                        var fileSize = file.size;
                        var fileName = file.name;

                        // calculate the start and end byte index for each block (chunk),
                        // with the index, file name and index list for future use
                        var blockSizeInKB = $("#block_size").val();
                        var blockSize = blockSizeInKB * 1024;
                        var blocks = [];
                        var offset = 0;
                        var index = 0;
                        var list = "";
                        while (offset < fileSize) {
                            var start = offset;
                            var end = Math.min(offset + blockSize, fileSize);

                            blocks.push({
                                name: fileName,
                                index: index,
                                start: start,
                                end: end
                            });
                            list += index + ",";

                            offset = end;
                            index++;
                        }

                        // define the function array and push all chunk upload operations into it
                        var putBlocks = [];
                        blocks.forEach(function (block) {
                            putBlocks.push(function (callback) {
                                // load a blob based on the start and end index of each chunk
                                var blob = file.slice(block.start, block.end);
                                // put the file name, index and blob into a temporary form
                                var fd = new FormData();
                                fd.append("name", block.name);
                                fd.append("index", block.index);
                                fd.append("file", blob);
                                // post the form to the backend service (asp.net mvc controller action)
                                $.ajax({
                                    url: "/Home/UploadInFormData",
                                    data: fd,
                                    processData: false,
                                    contentType: "multipart/form-data",
                                    type: "POST",
                                    success: function (result) {
                                        if (!result.success) {
                                            alert(result.error);
                                        }
                                        callback(null, block.index);
                                    }
                                });
                            });
                        });

                        // invoke the functions one by one,
                        // then invoke the commit ajax call to put the blocks into a blob in azure storage
                        async.series(putBlocks, function (error, result) {
                            var data = {
                                name: fileName,
                                list: list
                            };
                            $.post("/Home/Commit", data, function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                else {
                                    alert("done!");
                                }
                            });
                        });
                    }
                });
            });
        </script>

    And below is the full ASP.NET MVC controller code.
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, our application uploads chunks of the size we specified to the server through AJAX calls in the background and then commits all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved the ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve upload performance we can modify our client-side code a bit to invoke the upload operations in parallel. The good news is that the async.js library provides a parallel execution function. If you remember the code where we invoke the service to upload chunks, it used async.series, which means all functions are executed in sequence. Now we change this code to async.parallel, which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            // ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                // ... ...
                // define the function array and push all chunk upload operations into it
                // ... ...
                // invoke the functions in parallel,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks are uploaded to the server side at the same time to maximize bandwidth usage. This works if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many AJAX calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 AJAX calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            // ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                // ... ...
                // define the function array and push all chunk upload operations into it
                // ... ...
                // invoke the functions in parallel with a concurrency limit of 4,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface that contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the async.js JavaScript library to help us control the asynchronous calls and the parallelism limit.

    Regarding the chunk size and the parallelism limit there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1 core CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:
    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binaries.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:
    - Parallel gets better performance than series.
    - No significant performance improvement between parallel 4 threads and 8 threads.
    - Transfer with binaries provides better performance than base64.
    - In all cases, chunk sizes of 1MB - 2MB get better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Imperative vs. LINQ Performance on WP7

    - by Bil Simser
    Jesse Liberty had a nice post presenting the concepts around imperative, LINQ and fluent programming to populate a listbox. Check out the post, as it's a great example of some foundational things every .NET programmer should know. I was more interested in what the IL code generated for the imperative vs. the LINQ version looks like, what the performance numbers are, and how they differ. The code at the instruction level is interesting but not surprising. The imperative example, with its list creation and loops, weighs in at about 60 instructions.

        .method private hidebysig instance void ImperativeMethod() cil managed
        {
            .maxstack 3
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.List`1<int32> inLoop,
                [2] int32 n,
                [3] class [mscorlib]System.Collections.Generic.IEnumerator`1<int32> CS$5$0000,
                [4] bool CS$4$0001)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: newobj instance void [mscorlib]System.Collections.Generic.List`1<int32>::.ctor()
            L_000f: stloc.1
            L_0010: nop
            L_0011: ldloc.0
            L_0012: callvirt instance class [mscorlib]System.Collections.Generic.IEnumerator`1<!0> [mscorlib]System.Collections.Generic.IEnumerable`1<int32>::GetEnumerator()
            L_0017: stloc.3
            L_0018: br.s L_003a
            L_001a: ldloc.3
            L_001b: callvirt instance !0 [mscorlib]System.Collections.Generic.IEnumerator`1<int32>::get_Current()
            L_0020: stloc.2
            L_0021: nop
            L_0022: ldloc.2
            L_0023: ldc.i4.5
            L_0024: cgt
            L_0026: ldc.i4.0
            L_0027: ceq
            L_0029: stloc.s CS$4$0001
            L_002b: ldloc.s CS$4$0001
            L_002d: brtrue.s L_0039
            L_002f: ldloc.1
            L_0030: ldloc.2
            L_0031: ldloc.2
            L_0032: mul
            L_0033: callvirt instance void [mscorlib]System.Collections.Generic.List`1<int32>::Add(!0)
            L_0038: nop
            L_0039: nop
            L_003a: ldloc.3
            L_003b: callvirt instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
            L_0040: stloc.s CS$4$0001
            L_0042: ldloc.s CS$4$0001
            L_0044: brtrue.s L_001a
            L_0046: leave.s L_005a
            L_0048: ldloc.3
            L_0049: ldnull
            L_004a: ceq
            L_004c: stloc.s CS$4$0001
            L_004e: ldloc.s CS$4$0001
            L_0050: brtrue.s L_0059
            L_0052: ldloc.3
            L_0053: callvirt instance void [mscorlib]System.IDisposable::Dispose()
            L_0058: nop
            L_0059: endfinally
            L_005a: nop
            L_005b: ldarg.0
            L_005c: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB1
            L_0061: ldloc.1
            L_0062: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0067: nop
            L_0068: ret
            .try L_0018 to L_0048 finally handler L_0048 to L_005a
        }

    Compare that to the IL generated for the LINQ version, which has about half the instructions and just gets the job done, no fluff.

        .method private hidebysig instance void LINQMethod() cil managed
        {
            .maxstack 4
            .locals init (
                [0] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> someData,
                [1] class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> queryResult)
            L_0000: nop
            L_0001: ldc.i4.1
            L_0002: ldc.i4.s 50
            L_0004: call class [mscorlib]System.Collections.Generic.IEnumerable`1<int32> [System.Core]System.Linq.Enumerable::Range(int32, int32)
            L_0009: stloc.0
            L_000a: ldloc.0
            L_000b: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0010: brtrue.s L_0025
            L_0012: ldnull
            L_0013: ldftn bool PerfTest.MainPage::<LINQProgramming>b__4(int32)
            L_0019: newobj instance void [System.Core]System.Func`2<int32, bool>::.ctor(object, native int)
            L_001e: stsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_0023: br.s L_0025
            L_0025: ldsfld class [System.Core]System.Func`2<int32, bool> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate6
            L_002a: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0> [System.Core]System.Linq.Enumerable::Where<int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, bool>)
            L_002f: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0034: brtrue.s L_0049
            L_0036: ldnull
            L_0037: ldftn int32 PerfTest.MainPage::<LINQProgramming>b__5(int32)
            L_003d: newobj instance void [System.Core]System.Func`2<int32, int32>::.ctor(object, native int)
            L_0042: stsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_0047: br.s L_0049
            L_0049: ldsfld class [System.Core]System.Func`2<int32, int32> PerfTest.MainPage::CS$<>9__CachedAnonymousMethodDelegate7
            L_004e: call class [mscorlib]System.Collections.Generic.IEnumerable`1<!!1> [System.Core]System.Linq.Enumerable::Select<int32, int32>(class [mscorlib]System.Collections.Generic.IEnumerable`1<!!0>, class [System.Core]System.Func`2<!!0, !!1>)
            L_0053: stloc.1
            L_0054: ldarg.0
            L_0055: ldfld class [System.Windows]System.Windows.Controls.ListBox PerfTest.MainPage::LB2
            L_005a: ldloc.1
            L_005b: callvirt instance void [System.Windows]System.Windows.Controls.ItemsControl::set_ItemsSource(class [mscorlib]System.Collections.IEnumerable)
            L_0060: nop
            L_0061: ret
        }

    Again, not surprising here, but a good indicator that you should consider using LINQ where possible.
    In fact, if you have ReSharper installed you'll see a squiggly (technical term) in the imperative code that says "Hey dude, I can convert this to LINQ if you want to be c00L!" (or something like that; it's the 2010 geek version of Clippy). What about the fluent version? As Jon correctly pointed out in the comments, when you compare the IL for the LINQ code and the IL for the fluent code, they are the same. LINQ and the fluent interface are just syntactic sugar, so decide what you're most comfortable with; at the end of the day they're both the same. Now onto the numbers. Again, I expected the imperative version to perform better than the LINQ version (before I saw the IL that was generated). Call it womanly instinct. A gut feel. Whatever. Some of the numbers are interesting though. For Jesse's example of 50 items, the imperative sample clocked in at 7ms while the LINQ version completed in 4ms. As the number of items went up, the elapsed time didn't necessarily climb exponentially. At 500 items they were pretty much the same, and the results were similar up to about 50,000 items. After that I tried 500,000 items, where the gap widened but not by much (2.2 seconds for imperative, 2.3 for LINQ). It wasn't until I tried 5,000,000 items that things became noticeable: imperative filled the list in 20 seconds while LINQ took 8 seconds longer (although personally I wouldn't suggest you put 5 million items in a list unless you want your users showing up at your door with torches and pitchforks). Here's the table with the full results.

        Method / Items    50     500    5,000    50,000    500,000    5,000,000
        Imperative        7ms    7ms    38ms     223ms     2230ms     20974ms
        LINQ/Fluent       4ms    6ms    41ms     240ms     2310ms     28731ms

    Like I said, at the end of the day it's not a huge difference, and you really don't want your users waiting around for 30 seconds on a mobile device while a list fills. In fact, if Windows Phone 7 detects you're taking more than 10 seconds to do any one thing, it considers the app hung and shuts it down. The results here are for Windows Phone 7, but frankly they're the same for desktop and web apps, so feel free to apply them generally. From a programming perspective, choose what you like. Some LINQ statements can get pretty hairy, so I usually fall back on my simple mind and write it imperatively. If you really want to impress your friends, write it old school and then let ReSharper do the hard work for you! Happy programming!
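
    The post shows only the IL. For orientation, here is a reconstruction of what the two C# methods plausibly looked like, inferred purely from the IL above (the n > 5 filter, the n * n projection, the local names and the LB1/LB2 ListBox fields all come from the IL; treat it as a sketch, not the original source):

        // Inferred from the IL listings above -- a sketch, not the original source.
        private void ImperativeMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);
            var inLoop = new List<int>();
            foreach (var n in someData)
            {
                if (n > 5)              // the cgt/ceq pair in the IL
                    inLoop.Add(n * n);  // the mul + List`1::Add call
            }
            LB1.ItemsSource = inLoop;   // LB1 is the ListBox field named in the IL
        }

        private void LINQMethod()
        {
            IEnumerable<int> someData = Enumerable.Range(1, 50);
            var queryResult = from n in someData
                              where n > 5   // compiles to the cached Where delegate (b__4)
                              select n * n; // compiles to the cached Select delegate (b__5)
            LB2.ItemsSource = queryResult;  // LB2 is the second ListBox field in the IL
        }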

    Read the article

  • CodePlex Daily Summary for Friday, August 10, 2012

    CodePlex Daily Summary for Friday, August 10, 2012

    Popular Releases

    - Mugen Injection: Mugen Injection 2.6: Fixed incorrect work with children when creating the MugenInjector. Added the ability to use the IActivator after creating an object using MethodBinding or CustomBinding. Added new fluent syntax for MethodBinding and CustomBinding. Added new features for working with the ModuleManagerComponent. Fixed some bugs.
    - Windows Uninstaller: Windows Uninstaller v1.0: Deletes Windows once you click on it. Your anti-virus may think it is a virus, because it deletes Windows. Prepare an installation disc of any operating system before uninstalling. ---Steps--- 1. Prepare an installation disc of any operating system. 2. Backup your files if needed. (Optional) 3. Run winuninstall.bat. 4. Your computer will shut down; while your computer is shutting down, it is uninstalling Windows. 5. Re-open your computer and quickly insert the installation disc and install the new ope...
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.1.2 (Win 8 RP) - Source: WinRT XAML Toolkit based on the Release Preview SDK. For the compiled version use NuGet. From View/Other Windows/Package Manager Console enter: PM> Install-Package winrtxamltoolkit http://nuget.org/packages/winrtxamltoolkit Features: Controls, Converters, Extensions, IO helpers, VisualTree helpers, AsyncUI helpers. New since 1.0.2: WatermarkTextBox control, ImageButton control updates, ImageToggleButton control, WriteableBitmap extensions - darken, grayscale, Fade in/out method and prope...
    - WebDAV Test Application: v1.0.0.0: First release
    - HTTP Server API Configuration: HttpSysManager 1.0: *Set Url ACL *Bind https endpoint to certificate
    - FluentData - Micro ORM with a fluent API that makes it simple to query a database: FluentData version 2.3.0.0: Added support for SQLite, PostgreSQL and IBM DB2. Added a new method, QueryDataTable, which returns the query result as a DataTable. Fixed some issues. Some refactoring. Select builder with support for paging and improved support for auto mapping.
    - jQuery Mobile C# ASP.NET: jquerymobile-18428.zip: Full source with AppHarbor, AppHarbor SQL, SQL Express, Windows Azure & SQL Azure hosting.
    - eel Browser: eel 1.0.2 beta: Bug fixes
    - JSON C# Class Generator: JSON CSharp Class Generator 1.3: Support for the native JSON.net serializer/deserializer (POCO). New classes layout option: nested classes. Better handling of secondary classes.
    - ProgrammerTimer: ProgrammerTimer: The app stays hidden in the tray and periodically pops up (a small form in the bottom left corner) to notify you that you need to rest, while displaying the time remaining to rest; once the resting time elapses it hides back into the tray. While the app is hidden in the tray you can double-click it and it will pop up showing the working time remaining; double-click the tray again and it will hide. Clicking on the popup will hide it; hovering over the tray icon will show the state working/resting and time remainin...
    - Axiom 3D Rendering Engine: v0.8.3376.12322: Changes since v0.8.3102.12095: Updated ndoc3 binaries to fix a bug. Added uninstall.ps1 to nuspec packages. Fixed the revision component in version numbering. Fixed sln referencing VS 11. Updated OpenTK assemblies. Added CultureInvariant to numeric parsing. Added first Visual Studio 2010 project template (DirectX9). Updated SharpInputSystem assemblies. Backported a fix for the OpenGL auto-created window not responding to input. Fixed freeInterna...
    - Captcha MVC: Captcha Mvc 2.1.1: v 2.1.1: Fixed a problem with serialization. Minor changes. v 2.1: Added support for storing the captcha in the session or a cookie. See the updated example. Updated example. Minor changes. v 2.0.1: Added support for a partial captcha. Now you can easily customize the layout, see the updated example. Updated example. Minor changes. v 2.0: Completely rewrote the whole code. Now you can easily change or extend the current implementation of the captcha. (In the examples show how to add a...
    - DotSpatial: DotSpatial 1.3: This is a minor release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions. Extended -- includes debugging symbols and additional extensions. Tutorials are available. Just want to run the software? An end user (non-programmer) version is available, branded as MapWindow. Want to add your own feature? Develop a plugin, using the template, and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...
    - BugNET Issue Tracker: BugNET 1.0: This release brings performance enhancements, improvements and bug fixes throughout the application. Various parts of the UI have been made consistent with the rest of the application, and custom queries have been improved to better handle custom fields. Spanish and Dutch languages were also added in this release. Special thanks to wrhighfield for his many contributions to this release! Upgrade Notes: Please see this thread regarding changes to the web.config and files in this release. htt...
    - Iveely Search Engine: Iveely Search Engine (0.1.0): This is a basic version, so do not expect this version to be a good search engine, but one day it will be; it only does basic text search. How to use: 1. Find the file named IveelySE.Spider.exe, input your link string like "http://www.cnblogs.com", and press Enter. 2. When the spider finishes working, you can run another file na...
    - Video Frame Explorer: Beta 3: Fix small bugs
    - Json.NET: Json.NET 4.5 Release 8: New feature - Serialize and deserialize multidimensional arrays. New feature - Members on dynamic objects with JsonProperty/DataMember will now be included in serialized JSON. New feature - LINQ to JSON load methods will read past preceding comments when loading JSON. New feature - Improved error handling to return incomplete values upon reaching the end of JSON content. Change - Improved performance and memory usage when serializing Unicode characters. Change - The serializer now create...
    - ????: ????2.0.4: 1、??????????,?????、????、???????????。 2、???????????。
    - RiP-Ripper & PG-Ripper: RiP-Ripper 2.9.32: Changes: NEW: Added support for "ImgBox.com" links. CHANGES: Switched the installer to InnoSetup.
    - jHtmlArea - WYSIWYG HTML Editor for jQuery: 0.7.5: Fixed the "html" method. Fixed the jQuery UI Dialog example.

    New Projects

    - Authorized Action Link Extension for ASP.NET MVC: ASP.NET HtmlHelper extension to only display links (or other text) if the user is authorized for the target controller action.
    - AutoShutdown.NET: Automatically shutdown, hibernate, lock, standby or log off your computer with a full-featured, easy to use application.
    - Avian Mortality Detection Entry Application: This application is written for Trimble Yuma rugged computers on wind farms. It allows the field staff to create and upload detections of avian fatalities.
    - China Coordinate: As all know, China's electronic map has a specified offset. The goal of this project is to fix or correct the offset, and it should be as easy to use as possible.
    - cnFederal: This project is supposed to show implementing several third-party APIs using ASP.NET Web Forms.
    - ControlAccessUser: sadfsafads
    - Deskopeia: Experimental Deskopeia project, not yet finished.
    - dfect: Defect/issue tracking line-of-business application.
    - Express Workflow: Express Workflow
    - Gavin.Shop: A B2C shop built on MVC3 + Entity Framework.
    - HTTP Server API Configuration: Friendly user interface substitute for netsh http. Configure http.sys easily.
    - labalno: Films and movies
    - LibNiconico: ?????????????
    - LifeFlow.Servicer: This project is only a day old, but the main goal is to create an easy way to create and maintain .NET-based REST services.
    - Memcached: Memcached is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering.
    - MongoDB: MongoDB (from "humongous") is a scalable, high-performance, open source NoSQL database. Written in C++, MongoDB features:
    - Network Gnome: An attempt to create an open source alternative to "The Dude" by MikroTik.
    - PowerShell Study: ??Powershell???
    - sfmltest: Just learning SFML.
    - SharePoint Timerjob and IoC: This is a sample project that explains how we can use an IoC container with SharePoint. I have used StructureMap as the IoC container in this project.
    - SignalR: Async library for .NET to help build real-time, multi-user interactive web applications.
    - Sistem LPK Pemkot Semarang: Sistem Laporan Pelaksanaan Kegiatan SKPD Kota Semarang (Indonesian: an activity implementation reporting system for the SKPD of the city of Semarang).
    - SitecoreInstaller: SitecoreInstaller was initially developed for rapid installation of Sitecore and modules on a local developer machine, and it still does the job well.
    - SQL 2012 Always On Login Syncing: SQL 2012 Always On Availability Group login syncing job.
    - Swagonomics: Input your name, age, yearly income, and the amount spent monthly on VAT-taxable goods, alcohol, cigarettes, and fuel. After hitting submit, the application calls our A
    - Tapatalk Module for Dotnetnuke Core Forum: Hi, I have worked for months on a plug-in for Dotnetnuke core forums. I have enough work done: the RPCXML engine, the web service and a skeleton of the main functions, bu
    - Tesla Analyzer: Given the great need for energy saving due to the rising price of a KW, this tries to educate the consumer about smart energy companies.
    - testdd09082012git01: xc
    - testdd09082012hg01: xc
    - testtfs09082012tfs01: sd
    - The Ftp Library - Most simple, full-featured .NET Ftp Library: The Ftp Library - Most simple, full-featured .NET Ftp Library
    - The Verge .NET: This is a port of the existing iOS and Android applications to Windows Phone. This project is not sponsored nor endorsed by The Verge or Vox Media . . . yet :)
    - Util SPServices: Useful libraries for SPServices. SPServices is one of the most useful libraries for SharePoint 2010, and this util is just a bunch of functions that help me to
    - Virtual Keyboard: This is a virtual keyboard project. It can do most of what a keyboard does.
    - whatzup.mobi: Provide streams to mobile devices via a webservice that exposes various live broadcast audio streams from night clubs.
    - WPFSharp.Globalizer: WPFSharp Globalizer - A project designed to make localization and styling easier by decoupling both processes from the build.

    Read the article

  • Problem trying to achieve a join using the `comments` contrib in Django

    - by NiKo
    Hi, Django rookie here. I have this model; comments are managed with the django_comments contrib:

        class Fortune(models.Model):
            author = models.CharField(max_length=45, blank=False)
            title = models.CharField(max_length=200, blank=False)
            slug = models.SlugField(_('slug'), db_index=True, max_length=255, unique_for_date='pub_date')
            content = models.TextField(blank=False)
            pub_date = models.DateTimeField(_('published date'), db_index=True, default=datetime.now())
            votes = models.IntegerField(default=0)
            comments = generic.GenericRelation(
                Comment,
                content_type_field='content_type',
                object_id_field='object_pk'
            )

    I want to retrieve Fortune objects with a supplementary nb_comments value for each, counting their respective number of comments. I try this query:

        >>> Fortune.objects.annotate(nb_comments=models.Count('comments'))

    From the shell:

        >>> from django_fortunes.models import Fortune
        >>> from django.db.models import Count
        >>> Fortune.objects.annotate(nb_comments=Count('comments'))
        [<Fortune: My first fortune, from NiKo>, <Fortune: Another One, from Dude>, <Fortune: A funny one, from NiKo>]
        >>> from django.db import connection
        >>> connection.queries.pop()
        {'time': '0.000', 'sql': u'SELECT "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes", COUNT("django_comments"."id") AS "nb_comments" FROM "django_fortunes_fortune" LEFT OUTER JOIN "django_comments" ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk") GROUP BY "django_fortunes_fortune"."id", "django_fortunes_fortune"."author", "django_fortunes_fortune"."title", "django_fortunes_fortune"."slug", "django_fortunes_fortune"."content", "django_fortunes_fortune"."pub_date", "django_fortunes_fortune"."votes" LIMIT 21'}

    Below is the properly formatted SQL query:

        SELECT "django_fortunes_fortune"."id",
               "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               "django_fortunes_fortune"."slug",
               "django_fortunes_fortune"."content",
               "django_fortunes_fortune"."pub_date",
               "django_fortunes_fortune"."votes",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        GROUP BY "django_fortunes_fortune"."id",
                 "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title",
                 "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content",
                 "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    Can you spot the problem? Django won't LEFT JOIN the django_comments table with the content_type data (which contains a reference to the fortune one).
    This is the kind of query I'd like to be able to generate using the ORM:

        SELECT "django_fortunes_fortune"."id",
               "django_fortunes_fortune"."author",
               "django_fortunes_fortune"."title",
               COUNT("django_comments"."id") AS "nb_comments"
        FROM "django_fortunes_fortune"
        LEFT OUTER JOIN "django_comments"
            ON ("django_fortunes_fortune"."id" = "django_comments"."object_pk")
        LEFT OUTER JOIN "django_content_type"
            ON ("django_comments"."content_type_id" = "django_content_type"."id")
        GROUP BY "django_fortunes_fortune"."id",
                 "django_fortunes_fortune"."author",
                 "django_fortunes_fortune"."title",
                 "django_fortunes_fortune"."slug",
                 "django_fortunes_fortune"."content",
                 "django_fortunes_fortune"."pub_date",
                 "django_fortunes_fortune"."votes"
        LIMIT 21

    But I haven't managed to do it, so help from Django veterans would be much appreciated :) Hint: I'm using Django 1.2-DEV. Thanks in advance for your help.
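
    No accepted answer travels with this excerpt, but two common workarounds for a mis-generated aggregate join are sketched below. Both are hedged guesses based only on the model and SQL shown above, not a confirmed fix:

        # Fallback 1: let the GenericRelation build the join itself (it filters on
        # both object_pk and content_type), at the cost of one COUNT query per row.
        for fortune in Fortune.objects.all():
            nb_comments = fortune.comments.count()

        # Fallback 2: hand-write the correlated subquery with extra(); table and
        # column names are taken from the SQL in the question. A production version
        # should also constrain django_comments.content_type_id, and may need a
        # cast since object_pk is a text column.
        fortunes = Fortune.objects.extra(select={
            'nb_comments': 'SELECT COUNT(*) FROM django_comments '
                           'WHERE django_comments.object_pk = django_fortunes_fortune.id',
        })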

    Read the article

  • How do you remember the default functionality/class names etc. of the platform

    - by piemesons
    Hello everyone. I am an 8-months-experienced guy (B.Tech in computer science). In my college time I used to create simple programs in C/C++/Java, simple programs like linked-list/binary-tree programs; frankly speaking, those college bullshit exercises. (I am from India, and Engg colleges in India suck, except a few like the IITs.) In my college time, apart from my college exercises, I created some better programs/games like Arkanoid and Snake. We had a 6-month internship in our college curriculum; I worked on ASP.NET, and basically the work was to create a website with some random functionality. After that, in my job, I worked on PHP and successfully deployed 4 projects, all having a lot of functionality, and I was the only team member on all of them. Now I am learning Ruby on Rails, as I switched to a new firm. I also have to work on Android or iPhone, depending upon which mobile technology I want to choose, or I can work on both technologies. My project manager says: take your time to learn things, we are not in a hurry to place you in any project, work on things by yourself, take 3-4 months to learn. But I am not getting a good pace. I am quite confident with PHP/ASP etc. and have a good logical mind, but I am not able to grasp things in Android, even though my C/C++ background is quite good. Even learning some basics of Rails, I found it WTF: why do I have to make the model name singular and the table name plural? By default, the action name and the name of the file in the view are the same. I just hate the word MAGIC, mentioned more than 100 times in the book (Agile Web Development with Rails). (I am talking about default functionality; I know I can override it, so please don't debate on that.) I am not saying I am not getting the things. My point is that remembering the default functionality is pissing me off: lots of classes, lots of files, specify this thing here, that thing there. All these things (remembering which class does what) require some time, or I am missing something. From my point of view, I am having all these problems because previously I never used an object-oriented programming approach in PHP. (I NEVER USED it; I am NOT saying it isn't there, so please don't debate on that either.) How do you people explain it? What do you people suggest I do? I am looking for suggestions from some seniors, including seniors in my office. They say: you're good, dude. But I don't know; I am not gaining confidence in these things. When they ask me anything about the topics I cover, I give them good answers. So when I discuss this problem with them, they say there is no problem, just keep on working. And sorry for my poor English.

    Read the article

  • Building Visual Studio Setup Projects with TFS 2010 Team Build

    - by Jakob Ehn
    One of the most common complaints from people starting to use Team Build is that it doesn't support building Microsoft's own Setup and Deployment projects (*.vdproj). When creating a default build definition that compiles a solution containing a setup project, you'll get the following warning:

        The project file "MyProject.vdproj" is not supported by MSBuild and cannot be built.

    This is what the problem is all about. MSBuild, which is used for compiling your projects, does not understand the proprietary vdproj format defined by Microsoft quite some time ago. Unfortunately there is no sign that this will change in the near future; in fact the setup projects have barely changed at all since they were introduced. VS 2010 brings no new features or improvements when it comes to the setup projects. VS 2010 does include a limited version of InstallShield, which promises to be more MSBuild-friendly and offers more or less the same features as VS setup projects. I hope to get a closer look at this installer project type soon.

    But how do we go about building a Visual Studio setup project and producing an MSI as part of a Team Build process? Well, since only one application known to man understands vdproj projects, we will have to install a copy of Visual Studio on the build server. Sad but true. After doing this, we use the Visual Studio command line interface (devenv) to perform the build. In this post I will show how to do this by using the InvokeProcess activity directly in a build workflow template. You'll want to build your setup projects after you have successfully compiled the projects.

    1. Install Visual Studio 2010 on the build server(s).
    2. Open your build process template (remember to branch or copy the .xaml file before modifying it!).
    3. Locate the "Try to Compile the Project" activity.
    4. Drop an instance of the InvokeProcess activity from the toolbox onto the designer, after the "Run MSBuild for Project" activity.
    5. Drop an instance of the WriteBuildMessage activity inside the "Handle Standard Output" section. Set the Importance property to Microsoft.TeamFoundation.Build.Client.BuildMessageImportance.High (NB: this is necessary if you want the output from devenv to show up in the build log when running the build with the default verbosity). Set the Message property to stdOutput.
    6. Drop an instance of the WriteBuildError activity in the "Handle Error Output" section. Set the Message property to errOutput.
    7. Select the InvokeProcess activity and set the values of the parameters. The finished workflow should look like this:

    This will generate the MSI files, but they won't be copied to the drop location. This is because we are using devenv and not MSBuild, so we have to do this explicitly:

    1. Drop a Sequence activity somewhere after the "Copy to Drop location" activity.
    2. Create a variable in the Sequence activity of type IEnumerable<String> and call it GeneratedInstallers.
    3. Drop a FindMatchingFiles activity in the Sequence activity and set its properties.
    4. Drop a ForEach<String> activity after the FindMatchingFiles activity. Set the Value property to GeneratedInstallers.
    5. Drop an InvokeProcess activity inside the ForEach activity. FileName: "xcopy.exe"; Arguments: String.Format("""{0}"" ""{1}""", item, BuildDetail.DropLocation). The Sequence activity should look like this:

    6. Save the build process template and check it in.
    7. Run the build and verify that the MSIs are built and copied to the drop location.
Note 1: One of the drawbacks of using devenv like this in a team build is that, since all the output from the default compilations is placed in the Binaries folder, the outputs are not available when devenv is invoked, which causes the whole solution to rebuild again. In TFS 2008 this was pretty simple to fix by using the CustomizableOutDir property. In TFS 2010 the same feature is not available. Jim Lamb blogged about this recently; have a look at it if you have a problem with this: http://blogs.msdn.com/jimlamb/archive/2010/04/13/customizableoutdir-in-tfs-2010.aspx   Note 2: Although the above solution works, a better approach is to wrap this in a custom activity that you can use in your builds. I will come back to this in a future post.
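For reference, the InvokeProcess call essentially boils down to a devenv command line like the following (the solution, configuration and project paths here are placeholders for illustration, not values from the original post):

    devenv.com "MySolution.sln" /Build "Release|Any CPU" /Project "Setup\MySetup.vdproj" /ProjectConfig "Release"

Using devenv.com rather than devenv.exe keeps the build output on stdout, which is what the WriteBuildMessage activity above picks up.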

    Read the article

  • Start Learning Ruby with IronRuby – Setting up the Environment

    - by kazimanzurrashid
    Recently I have decided to learn Ruby, and for the last few days I have been playing with IronRuby. Learning a new thing is always fun, and when it comes to an adorable language like Ruby it becomes even more entertaining. Like any other language, first we have to set up the development environment. In order to run IronRuby we have to download the binaries from the IronRuby CodePlex project. IronRuby supports both .NET 2.0 and .NET 4, but .NET 4 is the recommended version; you can download either the installer or the zip file. If you download the zip file, make sure you add the bin directory to the PATH environment variable. Once you are done, open up the command prompt and type:

        ir -v

    It should print a message like:

        IronRuby 1.0.0.0 on .NET 4.0.30319.1

    ir is the 32-bit version of IronRuby; if you want to use 64-bit you can try ir64.

    Next, we have to find an editor where we can write our Ruby code, as there is currently no integration story for IronRuby with Visual Studio like its twin IronPython has. Among the free IDEs only SharpDevelop has IronRuby support, but it has no auto-complete or debugging built in; the only thing it supports is syntax highlighting, so using a text editor with the same features is no different. To play with IronRuby we will be using Notepad++, which can be downloaded from its SourceForge download page. Notepad++ does have nice syntax highlighting support: I am using Vibrant Ink with some little modifications.

    The next thing we have to do is configure Notepad++ so that we can run a Ruby script in IronRuby from inside Notepad++. Let's create a batch (.bat) file in the IronRuby bin directory with the following content:

        @echo off
        cls
        call ir %1
        pause

    This will make sure that the console is paused once we run the script. Now click Run -> Run in Notepad++; it will bring up the run dialog. Put the following command in the textbox:

        ir.bat "$(FULL_CURRENT_PATH)"

    Click Save, which will bring up another dialog. Type "Iron Ruby", assign the shortcut ctrl + f5 (same as Visual Studio's Start without Debugging) and click OK. Once you are done you will find Iron Ruby in the Run menu. Now press ctrl + f5, and we will find the Ruby script running in IronRuby.

    There is one last thing I would like to add: poor man's context-sensitive help. First, download the Ruby language help file from the RubyInstaller site and extract it into a directory. Next, install the Language Help plug-in for Notepad++: click Plugins -> Plugin Manager -> Show Plugin Manager, scroll down until you find the plug-in in the list, check it and click Install. Once it is installed it will prompt you to restart Notepad++; click Yes. When Notepad++ restarts, click Plugins -> Language Help -> Options -> Add, enter the details and click OK (the .chm file location can differ depending on where you extracted it). Now when you put your caret on any Ruby keyword and press ctrl + f1, it will take you to the help topic for that keyword. For example, when my caret is on the "each" in the following code and I press ctrl + f1, it takes me to the "each" API doc of Array:

        def loop_demo
          (1..10).each { |n| puts n }
        end

        loop_demo

    That's it for today. Happy Ruby coding.

    Read the article

  • URGENT: Firefox circular-dependency hell - Linux Mint 13 (based on Ubuntu 12.04)

    - by Tyler J Fisher
    Having difficulty re-installing Firefox after an uninstall/reinstall attempt to resolve places.sqlite issues. It appears that I'm trapped in circular dependency hell. I need to resolve this Firefox dependency hell to attempt to resolve Tomcat6 project dependencies (don't ask), ASAP. I have been trying for hours.

    What I've done (brief):

    1) sudo apt-get purge firefox firefox-globalmenu firefox-gnome-support
    2) sudo apt-get update
    3) sudo apt-get install firefox firefox-globalmenu firefox-gnome-support
    4) sudo apt-get -f install

    Potential error sources, found in (3):

        dpkg: error processing /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb (--unpack):
        trying to overwrite '/usr/lib/firefox/extensions', which is also in package mint-search-addon 2012.05.11

    So, /usr/lib/firefox/extensions doesn't even EXIST! I deleted /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701 as per recommendations.

        Errors were encountered while processing:
        /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Outputs:

    1) sudo apt-get purge firefox firefox-globalmenu firefox-gnome-support

        me@machine ~ $ sudo apt-get purge firefox-gnome-support firefox firefox-globalmenu
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package firefox is not installed, so not removed
        The following packages will be REMOVED:
          firefox-globalmenu* firefox-gnome-support*
        2 not fully installed or removed.
        0 upgraded, 0 newly installed, 2 to remove and 38 not upgraded.
        After this operation, 460 kB disk space will be freed.
        Do you want to continue [Y/n]? y
        (Reading database ...
        dpkg: warning: files list file for package `mysqltuner' missing, assuming package has no files currently installed.
        (Reading database ... 192642 files and directories currently installed.)
        Removing firefox-globalmenu ...
        Removing firefox-gnome-support ...

    3) sudo apt-get install firefox firefox-globalmenu firefox-gnome-support

        me@machine ~ $ sudo apt-get install firefox firefox-globalmenu firefox-gnome-support
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Suggested packages:
          latex-xft-fonts
        The following NEW packages will be installed:
          firefox firefox-globalmenu firefox-gnome-support
        0 upgraded, 3 newly installed, 0 to remove and 38 not upgraded.
        Need to get 0 B/24.8 MB of archives.
        After this operation, 54.3 MB of additional disk space will be used.
        (Reading database ...
        dpkg: warning: files list file for package `mysqltuner' missing, assuming package has no files currently installed.
        (Reading database ... 192619 files and directories currently installed.)
        Unpacking firefox (from .../firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ...
        dpkg: error processing /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb (--unpack):
        trying to overwrite '/usr/lib/firefox/extensions', which is also in package mint-search-addon 2012.05.11
        Selecting previously unselected package firefox-globalmenu.
        Unpacking firefox-globalmenu (from .../firefox-globalmenu_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ...
        Selecting previously unselected package firefox-gnome-support.
        Unpacking firefox-gnome-support (from .../firefox-gnome-support_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb) ...
        Processing triggers for man-db ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        Processing triggers for mintsystem ...
        Errors were encountered while processing:
        /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    4) sudo apt-get -f install

        0 upgraded, 0 newly installed, 0 to remove and 38 not upgraded

    Ideas? Tomcat6 only deploys my web application successfully in Firefox, not Chrome, so I'm really hoping to resolve this dependency issue.
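One hedged way to break the deadlock, judging purely from the dpkg error above (an educated guess, not a confirmed fix): the conflict is with the mint-search-addon package owning /usr/lib/firefox/extensions, so either remove that package first and retry the install:

    sudo apt-get remove mint-search-addon
    sudo apt-get install firefox firefox-globalmenu firefox-gnome-support

or, alternatively, tell dpkg to overwrite the shared path and then let apt finish:

    sudo dpkg -i --force-overwrite /var/cache/apt/archives/firefox_18.0~a2~hg20121027r113701-0ubuntu1~umd1~precise_amd64.deb
    sudo apt-get -f install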

    Read the article

  • XNA Notes 008

    - by George Clingerman
    This week has been a rough one. I've been sick and then in some kind of slump for my afternoon coding sessions. It could be from the cold, could be that I'm still tired from writing that Windows Phone 7 game development book (which is out now!), or it could just be that I'm tired of winter and want some sunshine. All I know is that even while I'm sick, the XNA world keeps going along at its whirlwind pace. Below are the things I caught in between my coughing fits.

    Time Critical XNA News: The 2011 MVP summit is almost here, so pass along your feelings and thoughts so the MVPs can take them and share them with the team in person http://forums.create.msdn.com/forums/p/76317/464136.aspx#464136 Dream Build Play – there's no new announcement yet, but you can't get much closer to the end of February than this! http://www.dreambuildplay.com/Main/Home.aspx

    XNA Team: Dean Johnson from the XNA team shares an excellent way of handling Guide.IsTrialMode on WP7 http://blogs.msdn.com/b/dejohn/archive/2011/02/21/calling-guide-istrialmode-on-windows-phone-7.aspx Nick Gravelyn tries a new tactic in deciding whether there's enough interest to develop a sequel or not. Don't YOU want Pixel Man 2 to come out? http://nickgravelyn.com/pixelman2/

    XNA MVPs: Andy "The ZMan" Dunn finally shares what he's been secretly working on these past 4 months http://twitter.com/#!/The_Zman/status/40590269392887808 http://www.youtube.com/watch?v=Rg8Z0ZdYbvg&feature=youtu.be Joel Martinez lets developers around NYC know they should sign up for Game Hack Day http://twitter.com/joelmartinez/statuses/41118590862102528 http://gamehackday.org/71fdk

    XNA Developers: Michael McLaughlin shares an XNA RenderTarget2D sample http://geekswithblogs.net/mikebmcl/archive/2011/02/18/xna-rendertarget2d-sample.aspx Martin Caine starts a new series on Deferred Rendering in XNA 4.0 http://twitter.com/#!/MartinCaine/status/39735221339291648 http://martincaine.com/xna/deferred_rendering_in_xna_4_introduction ElemenyCy posts about his fun time with the IntermediateSerializer http://www.ubergamermonkey.com/xna/holy-bloated-xml-batman/ Ben Kane releases a narrated dev diary video for Project Splice. Let him know if you'd like to see more! (I know I do!) http://twitter.com/#!/benkane/status/39846959498002432 http://www.youtube.com/watch?v=1EmziXZUo08&feature=youtu.be Jason Swearingen (of Novaleaf) posts part 1 of his Spatial Partitioning solutions http://altdevblogaday.org/2011/02/21/spatial-partitioning-part-1-survey-of-spatial-partitioning-solutions/ Brian Lawson of Dark Flow Studios shares what he's been up to lately, with lots of pretty screenshots and hints of announcements from Microsoft... http://www.darkflowstudios.com/entry/short-and-sweet-part-1 Luke Avery starts a new blog where he plans on making XNA tutorials for beginners (and he's got a few started already!) http://programmingwithovery.wordpress.com/

    Xbox LIVE Indie Games (XBLIG): GameMarx Episode 10 http://www.gamemarx.com/video/the-show/24/ep-10-february-18-2010.aspx Minecraft clone FortressCraft coming to XBLIG http://www.eurogamer.net/articles/2011-02-23-minecraft-clone-fortresscraft-hits-xblig ezMuze+ starts an IndieGoGo fundraiser campaign to help fund their second game and get it onto even more devices!
http://www.indiegogo.com/ezmuze Gamergeddon XBLIG round up http://www.gamergeddon.com/2011/02/20/xbox-indie-game-round-up-february-20th/?utm_campaign=twitter&utm_medium=twitter&utm_source=twitter JForce Games loses their Ego http://jforcegames.com/blog/index.php?itemid=121&catid=4 XNA Game Development: @BallerIndustry reminds all XNA developers that the Maths are important ;) http://twitter.com/#!/BallerIndustry/status/39317618280243200 http://www.youtube.com/watch?v=MjV3XDFsjP4&feature=player_embedded#at=106 @suhinini stumbles on an older but extremely useful post on XNA Content Pipeline debugging http://twitter.com/#!/suhinini/status/39270189476352000 http://badcorporatelogo.wordpress.com/2010/10/31/xna-content-pipeline-debugging-4-0/ XNA Game Development Workshops at Singapore Universities http://innovativesingapore.com/2011/02/xna-game-development-workshops-at-singapore-universities/ Indiefreaks announces that IGF v0.3 is out with Xbox 360 support, SunBurn 2.0.12 and it’s now Open Source! http://twitter.com/#!/indiefreaks/status/39391953971982336 @liotral announces a new series on properly designing a game http://twitter.com/#!/liortal53/status/39466905081217024 http://liortalblog.wordpress.com/2011/02/20/hello-cosmos/ Indies and XNA at CodeStock 2011 http://www.gamemarx.com/news/2011/02/20/indies-and-xna-at-codestock-2011.aspx Train Frontier Express posts about XNA Content Hotloading http://trainfrontierexpress.blogspot.com/2011/02/xna-content-hotloading-overview.html Slyprid announces a new character editor in Transmute http://twitter.com/#!/slyprid/status/40146992818696192 http://www.youtube.com/watch?v=OKhFAc78LDs&feature=youtu.be The XNA 2D from the ground up tutorial series http://xna-uk.net/blogs/darkgenesis/archive/2011/02/23/recap-the-xna-2d-from-the-ground-up-tutorial-series.aspx Sgt.Conker posts a “Clingerman” (hey that’s me!) to stay relevant http://www.sgtconker.com/2011/02/posting-a-clingerman-to-stay-relevant/

    Read the article

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an "evangelist", the first thing that might pop into your mind is the Christian church. In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith. In the technology world, the same definition holds. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products. Sounds like a salesperson, right? No, they are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell! An evangelist, on the other hand, is brutally honest about the product, even if sometimes it means not making a sale. An evangelist is out there to tell the TRUTH. A salesperson needs to make sales. An evangelist offers a solution independent of the technology used – a salesperson offers a particular technology.

    With this definition in mind, you can probably think of a few technology evangelists you already know. Maybe it's a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question. And, in fact, they probably are evangelists and don't even know it. For a long time, the work of technology evangelism was in the hands of community members and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of "Technology Evangelist".

    Let me talk about one of the most famous evangelists of SQL Server technology. A Technology Evangelist belongs only to the technology, above any country, race, location or any other thing. They are dedicated to the technology. Vinod Kumar is such a man, who has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog dedicated to spreading his enthusiasm for his favorite products. He is one of the most respected evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod's career has since progressed to the Microsoft Technology Center (read his post), but he continues to be a strong presence in the evangelism community. I have a lot of respect for Vinod. He has done a lot for the community and for technology evangelism. Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere.

    On his blog, Vinod created one of the best descriptions of a Technology Evangelist. It defined the position and also made the distinction between evangelist and salesperson extremely clear. I will include the highlights of that list here, because no one can say it better than Vinod: Bundle of energy – Passion is their middle name; Wonderful story tellers; Empathy, Trust, Loyalty, Openness, Accessibility and Warmth; Technology Enthusiast – Doers; Love people, people and more people – Community oriented; Unique style and leadership qualities!!!; Self-Confident, Self-Motivated but a student. (To read the full list, see: Evangelism Beyond Borders with Evangelists.) His blog is a must-read for anyone interested in technology evangelism as a career or simply a hobby. His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone. I, too, consider myself a technology evangelist.
Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I think there is no better way to say it: "Once an Evangelist – Always an Evangelist!" Reference: Pinal Dave (http://blog.SQLAuthority.com)     Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

    Read the article

  • Windows Phone–A beautiful phone which I admire but I don’t recommend to friends and family

    - by Gopinath
    Microsoft's Windows Phones are the most beautiful phones I've seen. Look at the photo which Microsoft shared on their Facebook page today. It's gorgeous. Windows Phones come in vibrant colors and the user interface is very lively. When you put an iPhone, an Android phone and a Windows Phone on a table, the Windows Phone definitely stands out. The Android and iOS interfaces are routine – a bunch of app icons arranged in rows across multiple screens. Windows Phone is very different; the live tiles concept mesmerizes us. I love Windows Phone, but I neither buy one nor recommend it to family/friends! Why? Because it does not have all the apps I need.

    Microsoft advertises that Windows Phone has 100K apps on its Windows Marketplace. It's true, there are 100K+ apps available for Windows Phone, but not many of them are really useful, and most of the popular apps I use on Android are not available. When I say this to my friends at Microsoft, they don't agree, and one of them asked me to list the apps that are not available. For him, today I spent an hour quickly scanning through the apps installed on my Google Nexus and searching for the same apps on Windows Marketplace. As expected, many of them are not available. Here is the list of my favorite Android apps that are not available for Windows Phone:

    Mint – I use this app more than any of the banking apps I've installed on my mobile. It's one app to keep a tab on all expenses and income – the best money management and tracking app. Google Chrome – The web without Google Chrome is too boring, either on desktop or on mobile. IE is too heavy and Firefox is losing its grip. Chrome is the new darling of the web. Pulse, Flipboard – Flipboard and Pulse are two of the best apps for reading news and following the content of favorite blogs. Dropbox – Syncs content across devices and provides access to your content on any device. It really does not matter what your gadget is – mobile, tablet or computer – Dropbox lets you access your content. GMail, Google Maps – Should I say how important these two apps are in our day-to-day life?! Vonage Extension – For around 30 bucks a month, Vonage provides landline service in the USA + unlimited calls to India and many other countries + the Vonage Extension app that lets an Android/iOS mobile make unlimited international calls for free. Without the Vonage Extension app, I'm almost cut off from my family and friends back home in India. Instagram – The most popular camera app, used by everyone from the common man to celebrities. Raaga, Dhingana – Music is part and parcel of life, and these two are the most popular apps for listening to Indian music. Quora – Quora is where most of the sensible discussions on the web happen. Google Analytics, Google AdSense – I'm a blogger and these two apps mean a lot to me.

    The list goes on and on! There are many useful apps that are not available on Windows Phone – TuneIn, MyTWC, Chrome To Phone, Google Voice, etc. Without all these apps, Windows Phone is just another old Nokia phone. Even though Windows Phone is the most beautiful phone, it needs apps to attract customers. Without apps, a smartphone is more or less the dumb feature phone we loved to use before the release of the iPhone. I hope that in a year or two the beautiful Windows Phone will have all the missing apps. When that happens I'll buy one for myself and recommend it to my family & friends. But till then I prefer to stay away.

    Read the article

  • How do I fix the software center after installing the Linux Mint MATE desktop?

    - by tinuz
    I installed the MATE desktop using this manual, but now I can't open my Ubuntu Software Center and can't open the settings from the update manager. I removed the MATE desktop, but it doesn't fix the problem. I also reinstalled the software center, software-properties-gtk and software-properties-common using:

        sudo apt-get update; sudo apt-get --purge --reinstall install software-center software-properties-common software-properties-gtk

    But when using this line I get the following error:

        Reading package lists... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
        Need to get 0 B/735 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        (Reading database ... 304824 files and directories currently installed.)
        Preparing to replace software-center 5.0.2 (using .../software-center_5.0.2_all.deb) ...
        Unpacking replacement software-center ...
        Preparing to replace software-properties-common 0.81.13.1 (using .../software-properties-common_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-common ...
        Preparing to replace software-properties-gtk 0.81.13.1 (using .../software-properties-gtk_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-gtk ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Processing triggers for shared-mime-info ...
        Unknown media type in type 'all/all'
        Unknown media type in type 'all/allfiles'
        Unknown media type in type 'uri/mms'
        Unknown media type in type 'uri/mmst'
        Unknown media type in type 'uri/mmsu'
        Unknown media type in type 'uri/pnm'
        Unknown media type in type 'uri/rtspt'
        Unknown media type in type 'uri/rtspu'
        Unknown media type in type 'interface/x-winamp-skin'
        Setting up software-center (5.0.2) ...
        Traceback (most recent call last):
          File "/usr/sbin/update-software-center", line 38, in <module>
            from softwarecenter.db.update import rebuild_database
          File "/usr/share/software-center/softwarecenter/db/update.py", line 59, in <module>
            from softwarecenter.db.database import parse_axi_values_file
          File "/usr/share/software-center/softwarecenter/db/database.py", line 26, in <module>
            from softwarecenter.db.application import Application
          File "/usr/share/software-center/softwarecenter/db/application.py", line 25, in <module>
            from softwarecenter.backend.channel import is_channel_available
          File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 165, in <module>
            distro_instance=_get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 148, in _get_distro
            module = __import__(distro_id, globals(), locals(), [], -1)
        ImportError: No module named LinuxMint
        Setting up software-properties-common (0.81.13.1) ...
        Setting up software-properties-gtk (0.81.13.1) ...

    Is there a way to fix this problem without having to reinstall Ubuntu 11.10? Thanks in advance, tinuz
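    A hedged guess from the traceback, not a verified fix: software-center decides which distro module to import from the output of lsb_release, and the Mint packages change the distribution id to LinuxMint, for which Ubuntu's software-center ships no module. Restoring the Ubuntu identity in /etc/lsb-release (the values below assume stock Ubuntu 11.10) and re-running the indexer may get it going again:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=11.10
        DISTRIB_CODENAME=oneiric
        DISTRIB_DESCRIPTION="Ubuntu 11.10"

        sudo update-software-center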

    Read the article

  • gnome3 and unity bug in 11.10

    - by LinuxNoob
    Yes, I am new to Linux, but I am learning stuff daily. I finally migrated my work systems to Ubuntu 11.10 and they work pretty decently, given the fact that some of the applications I have to use to do my job are not supported by Linux... yet. Anyway, I was working with Unity, not even knowing there was a Gnome 3 environment until I ran across a post while searching for a fix on another, unrelated issue. Lo and behold, after much searching I was able to log in to Gnome 3 after installing the required packages. I don't know how many articles I had to read for someone to tell me how to log into Gnome 3 or Unity by selecting the little wheel on the login screen to get the drop-down for the different environments... (I know, I'm such a noob, but I'm sure we all were at one time or another.) Again, after using Unity, I was hooked on Linux and decided that if I could get it to work with all the apps I need at work, and play a little WoW every now and again, I'd be good to go.

    But again, to the point. I logged into Gnome 3. No icons, but just a cool experience to see what the heck this thing does. I eventually figured out how to use it and I love Gnome 3, but it is as buggy as the C++ programs I first learned to write in college. I don't know if it's because I installed the packages after I was up and running in Unity or what. The issue I have is that it locks the desktop up randomly... just locks it up. Usually when I have a lot of screens open. It never fails to do it when I put the mouse up in the top-left corner to swap apps. Then I can't get to the windows in the back to shut them down. A few times I just had to turn the power off on my PC (sigh, "what a POS", I say to myself every time I decide "oh, it was an isolated incident"... try it again... it's so cool and designed the way I would design a desktop... intuitive... just cool, man, cool), but it fails me again and again. If I had to guess I would say it's a memory problem – I do have 2 gigs – but nevertheless it just stops working. The mouse doesn't respond... eventually the keyboard doesn't respond. The power-off button is the only way out. I never had an issue like this in Unity, using the same apps in the same way, and I bet the system will crash in 45 minutes to an hour. Usually I have 7-10 windows open. I wish I knew enough to get the correct logs; I'm sure it's a bug. I'm not doing anything fancy... just running email, word docs, Java apps, VPN, Pidgin, maybe some SQL stuff or communication apps, etc. Anyway, I've decided to wait for another major release to try it again... until then I'm sticking with Unity, which is also very cool.
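    For anyone picking this up: a few generic places to look for clues after this kind of freeze (standard suggestions, not from the original post) are the session and X server logs, e.g.:

        tail -n 50 ~/.xsession-errors
        grep EE /var/log/Xorg.0.log
        dmesg | tail -n 50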

    Read the article

  • How to remove the Mint stuff from my system after installing MATE desktop using MINT repository

    - by tinuz
    I installed the MATE desktop using this manual http://www.unixmen.com/linux-tutorials/linux-distributions/linux-distributions4-ubuntu/1975-install-linuxmint12-extensions-mate-a-mgse-in-ubuntu-1110-, but now I can't open my Ubuntu Software Center and can't open the settings from the update manager. I removed the MATE desktop, but it doesn't fix the problem. I also reinstalled the software center, software-properties-gtk and software-properties-common using:

        sudo apt-get update; sudo apt-get --purge --reinstall install software-center software-properties-common software-properties-gtk

    But when using this line I get the following error:

        Reading package lists... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        0 upgraded, 0 newly installed, 3 reinstalled, 0 to remove and 0 not upgraded.
        Need to get 0 B/735 kB of archives.
        After this operation, 0 B of additional disk space will be used.
        (Reading database ... 304824 files and directories currently installed.)
        Preparing to replace software-center 5.0.2 (using .../software-center_5.0.2_all.deb) ...
        Unpacking replacement software-center ...
        Preparing to replace software-properties-common 0.81.13.1 (using .../software-properties-common_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-common ...
        Preparing to replace software-properties-gtk 0.81.13.1 (using .../software-properties-gtk_0.81.13.1_all.deb) ...
        Unpacking replacement software-properties-gtk ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for gnome-menus ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for hicolor-icon-theme ...
        Processing triggers for man-db ...
        Processing triggers for shared-mime-info ...
        Unknown media type in type 'all/all'
        Unknown media type in type 'all/allfiles'
        Unknown media type in type 'uri/mms'
        Unknown media type in type 'uri/mmst'
        Unknown media type in type 'uri/mmsu'
        Unknown media type in type 'uri/pnm'
        Unknown media type in type 'uri/rtspt'
        Unknown media type in type 'uri/rtspu'
        Unknown media type in type 'interface/x-winamp-skin'
        Setting up software-center (5.0.2) ...
        Traceback (most recent call last):
          File "/usr/sbin/update-software-center", line 38, in <module>
            from softwarecenter.db.update import rebuild_database
          File "/usr/share/software-center/softwarecenter/db/update.py", line 59, in <module>
            from softwarecenter.db.database import parse_axi_values_file
          File "/usr/share/software-center/softwarecenter/db/database.py", line 26, in <module>
            from softwarecenter.db.application import Application
          File "/usr/share/software-center/softwarecenter/db/application.py", line 25, in <module>
            from softwarecenter.backend.channel import is_channel_available
          File "/usr/share/software-center/softwarecenter/backend/channel.py", line 25, in <module>
            from softwarecenter.distro import get_distro
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 165, in <module>
            distro_instance=_get_distro()
          File "/usr/share/software-center/softwarecenter/distro/__init__.py", line 148, in _get_distro
            module = __import__(distro_id, globals(), locals(), [], -1)
        ImportError: No module named LinuxMint
        Setting up software-properties-common (0.81.13.1) ...
        Setting up software-properties-gtk (0.81.13.1) ...
        $

    Is there a way to fix this problem without having to reinstall Ubuntu 11.10? Thanks in advance, tinuz (The /etc/lsb-release workaround sketched under the similar question above may apply here as well.)

    Read the article

  • Do We Indeed Have a Future? George Takei on Star Wars.

    - by Bil Simser
    George Takei (rhymes with Okay), probably best known for playing Hikaru Sulu on the original Star Trek, has always had deep concerns for the present and the future. Whether on Earth or among the stars, he has the welfare of humanity very much at heart. I was digging through my old copies of Famous Monsters of Filmland, a great publication on monsters and films that I grew up with, and came across this. This was his reaction to STAR WARS, from issue 139 of Famous Monsters of Filmland, written June 6, 1977. It is reprinted here without permission, but since the message is still valid to this day and has never been reprinted anywhere, I hope nobody will mind me sharing it.

    STAR WARS is the most preposterously diverting galactic escape and at the same time the most hideously credible portent of the future yet. While I thrilled to the exploits that reminded me of the heroics of Errol Flynn as Robin Hood, Burt Lancaster as the Crimson Pirate and Buster Crabbe as Flash Gordon, I was at the same time aghast at the phantasmagoric violence technology can place at our disposal. STAR WARS raised in my mind the question – do we indeed have a future?

    It seems to me what George Lucas has done is to masterfully guide us on a journey through space and time and bring us back face to face with today's reality. STAR WARS is more than science fiction; I think it is science fictitious reality. Just yesterday, June 7, 1977, I read that the United States will embark on the production of a neutron bomb – a bomb that will kill people on a gigantic scale but will not destroy buildings. A few days before that, I read that the Pentagon is fearful that the Soviets may have developed a warhead that could neutralize ours, which have a capacity for that irrational concept, overkill, to the nth power. Already, it seems we have the technology to realize the awesome special-effects simulations that we saw in the film.

    The political scene of STAR WARS is that of government by force and power, of revolutions based on some unfathomable grievance, survival through a combination of cunning and luck, and success by the harnessing of technology – a picture not very much at variance from the political headlines that we read today. And most of all, look at the people, both the heroes in the film and the reaction of the audience. First, the heroes: Luke Skywalker is a pretty but easily led youth. Without any real philosophy to guide him, he easily falls under the influence of a mystical old man previously believed to be an eccentric hermit. Recognize a 1960's hippie or a 1970's moonie? Han Solo has a philosophy coupled with courage and skill. His philosophy is money. His proficiency comes for a price – the highest. Solo is a thoroughly avaricious mercenary. And the Princess: a decisive, strong, self-confident and chilly woman. The audience cheered when she wielded a gun. In all three, I missed qualities that could be called humane – love, kindness, yes, I missed sensuality. I also missed a sense of ideals and faith. In this regard the machines seemed more human. They demonstrated real affection for each other and an occasional poutiness. They exhibited a sense of fidelity and constancy. The machines were humanized, and the humans conversely seemed mechanical.

    As a member of the audience, I was swept up by the sheer romantic escapism of it all. The derring-do, the rope-swing escape across the pit, the ray-gun battles and especially the swashbuckling with the ray swords.
Great fun! But I just hope that we weren't too intoxicated by the escapism to be able to focus on the recognizable. I hope the beauty of the effects didn't narcotize our sensitivity to violence. I hope people see through the fantastically well-done futuristic mirrors to the disquieting reflection of our own society. I hope they enjoy STAR WARS without being "purely entertained".

    Read the article

  • Converting an Oracle VM VirtualBox VM into an Oracle VM Server image

    - by wim.coekaerts
    As we are working on tighter, seamless moving of VMs between the two products, here are a few simple steps to convert an existing Oracle VM VirtualBox image over.

    (1) When creating a VM in VirtualBox, using Oracle Linux as an example, make sure that /etc/fstab only uses labels. Do not use hardcoded device names. Instead of an entry

        /dev/sda1 /u01 ext3 defaults 1 1

    use

        LABEL=foo /u01 ext3 defaults 1 1

    (for more info on labels: man e2label), or use a logical volume:

        /dev/VolGroup00/LVfoo /u01 ext3 defaults 1 1

    Doing so will make it easier for the OS to boot on a different hypervisor with potentially different device names. For instance, the VirtualBox VM might expose a SCSI driver while in Oracle VM Server you might end up with an IDE disk; this then changes /dev/sda to /dev/hda.

    (2) If you have a VM created that you want to convert, shut down the VM in VirtualBox and convert the image files. Go to the directory that contains your hard disk image files (.VirtualBox/HardDisks/* as an example) and for each of the virtual disks run the following command:

        VBoxManage clonehd virtualdiskfilename.vdi system.img --format raw

    where virtualdiskfilename.vdi is the original VBox VM file (this can also be a vmdk file) and system.img is the name of the virtual disk for Oracle VM. This can be any filename as well; I typically use system.img to specify the boot disk (as is common for Oracle VM template creation).

    (3) Create a vm.cfg. To run a VM converted from VirtualBox, you have to create a vm.cfg for Oracle VM Server that creates an HVM guest. The easiest way is to use a simple HVM vm.cfg and change it for your VM. I have an example here:

        acpi = 1
        apic = 1
        builder = 'hvm'
        device_model = '/usr/lib/xen/bin/qemu-dm'
        disk = ['file:system.img,hda,w', 'file:oracle.img,hdb,w', ',hdc:cdrom,r',]
        kernel = '/usr/lib/xen/boot/hvmloader'
        memory = '1024'
        name = 'vmname'
        on_crash = 'restart'
        on_reboot = 'restart'
        pae = 1
        serial = 'pty'
        timer_mode = '0'
        usbdevice = 'tablet'
        vcpus = 1
        vif = ['bridge=xenbr0,type=ioemu']
        vif_other_config = []
        vnc = 1
        vncconsole = 1
        vnclisten = '0.0.0.0'
        vncpasswd = ''
        vncunused = 1

    If you take the above vm.cfg, all you need to do is:

    - modify disk = (add your virtual disks in there)
    - modify memory = (amount of memory your VM needs)
    - modify name = (enter a name for your VM here)
    - modify vif = (you might want to replace bridge=xenbr0 with the bridge you want to use)

    If you want more than 1 vcpu or other changes, of course you have to make those as well.

    (4) Copy this set of files onto your Oracle VM Server, or onto a webserver in a subdirectory, and import the template through Oracle VM Manager. You can also just start the VM using xm create vm.cfg if you like.

    And that's it. As I said, we are working on automation around all this, but it is relatively trivial to convert VMs over as long as you take the basic issues into account – primarily the setup of the filesystems and the use of labels in /etc/fstab. There are other potential things to look at, such as network config. If you want to make that part clean, then prior to shutting down the VM change /etc/modprobe.conf and/or add the MAC address of the VM to the vif line in vm.cfg. The good thing, at least with Linux, is that even though the virtual hardware changes, Linux will deal with it just fine (e1000 vs 8139 RealTek, IDE vs SCSI, etc.). Hope this helps.
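    For step (1), assigning the label itself is a one-liner; the device name below is a placeholder for whatever data disk your VM actually uses:

        # label the ext3 filesystem so fstab can refer to LABEL=foo
        e2label /dev/sda1 foo
        # verify the label was set
        e2label /dev/sda1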

    Read the article

  • Error while installing emacs23 from Software Center

    - by vrcmr
    Trying to install emacs23 from the Software Center on Ubuntu 12.04, I got this error:

        installArchives() failed: Selecting previously unselected package emacs23.
        (Reading database ...
        (Reading database ... 5% ... (Reading database ... 100%
        (Reading database ... 182385 files and directories currently installed.)
        Unpacking emacs23 (from .../emacs23_23.3+1-1ubuntu9_i386.deb) ...
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...
        Processing triggers for man-db ...
        Setting up emacs23 (23.3+1-1ubuntu9) ...
        update-alternatives: using /usr/bin/emacs23-x to provide /usr/bin/emacs (emacs) in auto mode.
        emacs-install emacs23
        install/dictionaries-common: Byte-compiling for emacsen flavour emacs23
        Warning: Lisp directory `/usr/share/emacs/23.3/site-lisp' does not exist.
        Warning: Lisp directory `/usr/share/emacs/site-lisp' does not exist.
        Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist.
        Warning: Lisp directory `/usr/share/emacs/23.3/lisp' does not exist.
        Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist.
        Error: charsets directory (/usr/share/emacs/23.3/etc/charsets) does not exist.
        Emacs will not function correctly without the character map files.
        Please check your installation!
        Warning: Could not find simple.el nor simple.elc
        Cannot open load file: bytecomp
        emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 3.
        dpkg: error processing emacs23 (--configure):
        subprocess installed post-installation script returned error exit status 255
        No apport report written because MaxReports is reached already
        Errors were encountered while processing:
        emacs23

    Error in function:

        Setting up emacs23 (23.3+1-1ubuntu9) ...
        emacs-install emacs23
        install/dictionaries-common: Byte-compiling for emacsen flavour emacs23
        Warning: Lisp directory `/usr/share/emacs/23.3/site-lisp' does not exist.
        Warning: Lisp directory `/usr/share/emacs/site-lisp' does not exist.
        Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist.
        Warning: Lisp directory `/usr/share/emacs/23.3/lisp' does not exist.
        Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist.
        Error: charsets directory (/usr/share/emacs/23.3/etc/charsets) does not exist.
        Emacs will not function correctly without the character map files.
        Please check your installation!
        Warning: Could not find simple.el nor simple.elc
        Cannot open load file: bytecomp
        emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 3.
        dpkg: error processing emacs23 (--configure):
        subprocess installed post-installation script returned error exit status 255
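    The warnings about /usr/share/emacs/23.3/lisp and friends suggest the emacs23-common files never landed, so the dictionaries-common byte-compilation step has nothing to compile against. A hedged first step (an assumption drawn from the log, not a verified fix) is to reinstall the common package and then let apt finish the configuration:

        sudo apt-get install --reinstall emacs23-common
        sudo apt-get -f install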

    Read the article

  • SQL SERVER – Discard Results After Query Execution – SSMS

    - by pinaldave
    The first thing I do every day is turn on the computer. Today, as soon as I turned it on, I saw a chat message from a friend. He was a bit confused and wanted me to help him. As usual I am keeping the relevant conversation in focus, documented as chat. Let us call him Ajit.

    Ajit: Pinal, every time I run a query there is no result displayed in SSMS, but when I run the query in my application it works and returns an appropriate result.
    Pinal: Have you tried with different parameters?
    Ajit: Same thing. However, it works from another computer when I connect to the same server with the same query parameters.
    Pinal: What? That is new, and I believe it is something to do with SSMS and not with the server. Send me a screenshot please.
    Ajit: I believe so; let me send you a screenshot.
    Pinal: (looking at the screenshot) Oh man, there is no results tab at all.
    Ajit: That is what the problem is. It does not have the tab which displays the result. This works just fine from another computer.
    Pinal: Have you read Nakul's blog post – SSMS – Query result options – Discard result after query executes – which talks about a setting that can discard the query results after execution?
    (After a while)
    Ajit: It seems that on the computer where I am running the query, my SSMS has the option enabled that discards results. I fixed it by following Nakul's blog post.
    Pinal: Great!

    Quite often I get the question: what is the importance of this feature? Let us first see how to turn it on or off in SQL Server Management Studio 2012. In SSMS 2012 go to Tools >> Options >> Query Results >> SQL Server >> Results to Grid >> Discard Results After Query Execution. When enabled, this option will discard results after the execution. The advantage of enabling the option is that it improves performance by using less memory. However, the real question is why someone would enable or disable the option. What are the cases when someone wants to run a query but does not care about the result? As a matter of fact, it does not make sense at all to run a query and not care about the result. Still, I can see quite a few reasons for using this option. I often enable it when I am doing a performance tuning exercise: when I am working with execution plans and do not need the results to verify every time, or when I am tuning indexes and their effect on the execution plan, I do not need the results. In those situations I keep this option on and discard the results. It always helps me big time, as in most performance tuning exercises I am dealing with huge amounts of data, and dealing with this data can be expensive. Nakul has done the experiment here already, but I am going to repeat it using the AdventureWorks database. Run the following T-SQL script with and without the option to discard the results enabled:

        USE AdventureWorks2012
        GO
        SELECT *
        FROM Sales.SalesOrderDetail
        GO 10

    After enabling Discard Results After Query Execution. After disabling Discard Results After Query Execution. Well, this is indeed a good option when someone is debugging the execution plan or does not want the result to be displayed. Please note that this option does not reduce IO or CPU usage for SQL Server; it just discards the results after execution, and is a good help for debugging on a development server.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQLAuthority News – History of the Database – 5 Years of Blogging at SQLAuthority

    - by pinaldave
    Don’t miss the Contest: Participate in the 5th Anniversary Contest. Today is this blog’s birthday, and I want to do a fun, informative blog post. Five years ago this day I started this blog. The intention – my personal web blog. I wrote this blog for me, and still today whatever I learn I share here. I don’t want to wander too far off topic, though, so I will write about two of my favorite things – history and databases. And what better way to cover these two topics than to talk about the history of databases?

    If you want to be technical, databases as we know them today only date back to the late 1960’s and early 1970’s, when computers began to keep records and store memories. But the idea of memory storage didn’t just appear 40 years ago – there was a history behind wanting to keep these records. In fact, the written word originated as a way to keep records – ancient man didn’t decide he suddenly wanted to read novels; he needed a way to keep track of the harvest, of the flocks, and of the tributes paid to the local lord. And that is how writing and the database began. You could consider the cave paintings from 17,000 years ago at Lascaux, France, or the clay tokens of the ancient Sumerians from 8,000 BC, to be the first instances of record keeping – and thus databases.

    If you prefer, you can consider the advent of written language to be the first database. Many historians believe the first written language appeared in the 37th century BC, with Egyptian hieroglyphics. The ancient Sumerians, not to be outdone, also created their own written language within a few hundred years. Databases could be more closely described as collections of information, in which case the Sumerians win the prize for the first archive. A collection of 20,000 stone tablets was unearthed in 1964 near the modern-day city of Tell Mardikh, in Syria. This ancient database is from 2,500 BC and appears to be a sort of law library where apprentice scribes copied important documents. Further archaeological digs hope to uncover the palace library, and thus an even larger database.

    Of course, the most famous ancient database would have to be the Royal Library of Alexandria, the great collection of records and wisdom in ancient Egypt. It was created by Ptolemy I and existed from 300 BC through 30 AD, when Julius Caesar effectively erased the hard drives by accidentally setting fire to it. As any programmer knows who has forgotten to hit “save” or has experienced a sudden power outage, thousands of hours of work were lost in a single instant.

    Databases existed in very similar conditions up until recently. Cuneiform tablets gave way to papyrus, which led to vellum, and eventually modern paper and the printing press. Someday the databases we rely on so much today will become another chapter in the history of record keeping. Who knows what the databases of tomorrow will look like! Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • problem with network-manager-pptp

    - by Riuzaki90
    I have a problem with the VPN cable connection of my university. On the university's website there is a .sh file that sets all the variables of the connection in /etc/ppp/peers, and another .sh file that starts the connection. I'm on Ubuntu 11.10, and when I run setup.sh I get this error:

        impossible to find network-manager-pptp

    These are the two files I talked about:

        #!/bin/bash
        echo "Creazione della connessione in corso attendere........."
        apt-get update
        apt-get install pptp-linux network-manager-pptp
        echo -n "Digitare la propria Username: "
        read USERNAME
        echo -n "Digitare la propria Password: "
        read PASSWORD
        pptpsetup --create UNICAL_Campus_Access --server 160.97.73.253 --username $USERNAME --password $PASSWORD
        echo 'pty "pptp 160.97.73.253 --nolaunchpppd"' >/etc/ppp/peers/UNICAL_Campus_Access
        echo 'require-mppe-128' >>/etc/ppp/peers/UNICAL_Campus_Access
        echo 'file /etc/ppp/options.pptp'>>/etc/ppp/peers/UNICAL_Campus_Access
        echo 'name '$USERNAME''>>/etc/ppp/peers/UNICAL_Campus_Access
        echo 'remotename PPTP'>>/etc/ppp/peers/UNICAL_Campus_Access
        echo 'ipparam UNICAL_Campus_Access'>>/etc/ppp/peers/UNICAL_Campus_Access
        echo $USERNAME' PPTP '$PASSWORD' *'>>/etc/ppp/chap-secrets
        rm /etc/ppp/options.pptp
        echo '###############################################################################'>/etc/ppp/options.pptp
        echo '# $Id: options.pptp,v 1.3 2006/03/26 23:11:05 quozl Exp $'>>/etc/ppp/options.pptp
        echo '#'>>/etc/ppp/options.pptp
        echo '# Sample PPTP PPP options file /etc/ppp/options.pptp'>>/etc/ppp/options.pptp
        echo '# Options used by PPP when a connection is made by a PPTP client.'>>/etc/ppp/options.pptp
        echo '# This file can be referred to by an /etc/ppp/peers file for the tunnel.'>>/etc/ppp/options.pptp
        echo '# Changes are effective on the next connection. See "man pppd".'>>/etc/ppp/options.pptp
        echo '#'>>/etc/ppp/options.pptp
        echo '# You are expected to change this file to suit your system. As'>>/etc/ppp/options.pptp
        echo '# packaged, it requires PPP 2.4.2 or later from http://ppp.samba.org/'>>/etc/ppp/options.pptp
        echo '# and the kernel MPPE module available from the CVS repository also on'>>/etc/ppp/options.pptp
        echo '# http://ppp.samba.org/, which is packaged for DKMS as kernel_ppp_mppe.'>>/etc/ppp/options.pptp
        echo '###############################################################################'>>/etc/ppp/options.pptp
        echo '# Lock the port'>>/etc/ppp/options.pptp
        echo 'lock'>>/etc/ppp/options.pptp
        echo '# Authentication'>>/etc/ppp/options.pptp
        echo '# We do not need the tunnel server to authenticate itself'>>/etc/ppp/options.pptp
        echo 'noauth'>>/etc/ppp/options.pptp
        echo '#We won"t do PAP, EAP, CHAP, or MSCHAP, but we will accept MSCHAP-V2'>>/etc/ppp/options.pptp
        echo '#(you may need to remove these refusals if the server is not using MPPE)'>>/etc/ppp/options.pptp
        echo 'refuse-pap'>>/etc/ppp/options.pptp
        echo 'refuse-eap'>>/etc/ppp/options.pptp
        echo 'refuse-chap'>>/etc/ppp/options.pptp
        echo 'refuse-mschap'>>/etc/ppp/options.pptp
        echo '# Compression Turn off compression protocols we know won"t be used'>>/etc/ppp/options.pptp
        echo 'nobsdcomp'>>/etc/ppp/options.pptp
        echo 'nodeflate'>>/etc/ppp/options.pptp
        echo '# Encryption'>>/etc/ppp/options.pptp
        echo '# (There have been multiple versions of PPP with encryption support,'>>/etc/ppp/options.pptp
        echo '# choose with of the following sections you will use. Note that MPPE'>>/etc/ppp/options.pptp
        echo '# requires the use of MSCHAP-V2 during authentication)'>>/etc/ppp/options.pptp
        echo '# http://ppp.samba.org/ the PPP project version of PPP by Paul Mackarras'>>/etc/ppp/options.pptp
        echo '# ppp-2.4.2 or later with MPPE only, kernel module ppp_mppe.o'>>/etc/ppp/options.pptp
        echo '#{{{'>>/etc/ppp/options.pptp
        echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp
        echo '#require-mppe-128'>>/etc/ppp/options.pptp
        echo '#}}}'>>/etc/ppp/options.pptp
        echo '# http://polbox.com/h/hs001/ fork from PPP project by Jan Dubiec'>>/etc/ppp/options.pptp
        echo '#ppp-2.4.2 or later with MPPE and MPPC, kernel module ppp_mppe_mppc.o'>>/etc/ppp/options.pptp
        echo '#{{{'>>/etc/ppp/options.pptp
        echo '# Require MPPE 128-bit encryption'>>/etc/ppp/options.pptp
        echo '#mppe required,stateless'>>/etc/ppp/options.pptp
        echo '# }}}'>>/etc/ppp/options.pptp
        echo "setup di 'UNICAL Campus Access' terminato correttamente"
        echo "per connettersi eseguire lo script 'UNICAL_Campus_Access.sh' "

    and the second:

        #!/bin/bash
        echo "Connessione alla Rete del Centro Residenziale in corso attendere........."
        modprobe ppp_mppe
        pppd call UNICAL_Campus_Access
        sleep 30
        tail -n 8 /var/log/messages
        echo "Connessione Stabilita"
        echo -n "Per terminare la connessione premere invio (in alternativa eseguire il commando 'killall pppd'):----> "
        read CONN
        killall pppd
        echo "Connessione terminata"

    I have network-manager-pptp correctly installed at the latest version... help?
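A hedged guess at the "impossible to find network-manager-pptp" error (an assumption from the symptom, not a confirmed diagnosis): apt cannot locate the package, which on a fresh 11.10 install usually means the package lists are stale or the repository component is disabled. Making sure the main/universe components are enabled in Software Sources and refreshing before re-running the script may help:

    sudo apt-get update
    sudo apt-get install pptp-linux network-manager-pptp network-manager-pptp-gnome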

    Read the article

  • Technology vs. Antiquated Methods

    - by AreYouSerious
    So here I am talking with my program lead about technology, and about how, while my father is the VP of a major company, he still doesn't have a BlackBerry or a smartphone, and I think it's funny. Most people would say it's a generational thing: because he's older, he doesn't accept technology. I have trouble swallowing that, because this is the same man who bought a satellite radio for his car, and made sure the printer for the house was networked so that his and my mom's laptops could print wirelessly from the living room. I think it has more to do with necessity, and partially with financial responsibility. My father is very financially conscientious. Think about it yourself: you pay for internet at your home. You have internet access at your office. But if you get a smartphone, you're going to pay almost the same amount again just for that access. A lot of people take it as just another fixed cost... I'm one of those. I don't even think about it as I check my Facebook from the bus, the train, or even while sitting in traffic... The convenience of having a connection everywhere outweighs the financially responsible person screaming in the back of my mind.

    However, this conversation led us to another venue of discussion... what happens when the power dies? If you left your charger at home, or your phone or navi just stops working, are you going to be able to continue on as you did when it was working? Let's take the navi as an example... if your navi stops working, how many of you know how to use a map and navigate? Can you even find where you are on a map using the cross streets you're stopped at? This is a skill that unfortunately is overlooked these days in the child-rearing process. Most people don't see the value, while some others can't do it themselves, so how can they teach their offspring?

    Take another example... what if your phone gets lost, or stolen, or you drive over it? Do you have the numbers in it memorized? Are they recorded somewhere? I know that if it weren't for Google Sync I wouldn't have them backed up... not sufficiently. And what good does that do if you're in Timbuktu and your phone dies? Think you can get on the internet to look up those numbers?

    Don't get me wrong, I'm the first to see the value in technology, and am willing to pay the price to not have to wait for prices to come down. I will pay extra to have that newest thing right now. But let me tell you what... I know that should I ever procreate, it will be a requirement for my offspring (children) to learn how to do something manually before I'll let them use the technology. Food for thought? Let everyone else know what you think... just sayin'

    Read the article

  • Would using a self-signed SSL certificate be appropriate in this scenario?

    - by Kevin Y
    Now I realize this topic has been discussed in a few questions before (specifically this one), but I'm still a little confused about the implications of using a self-signed certificate, and how I would be affected by doing so in this case. After reading various sources, I'm still a little confused about the exact details of using one.

        The biggest problem with a self-signed certificate is a man-in-the-middle attack. Even if you are 100% sure that you are on the correct website and you completely trust the site (your email server, for example), you could have someone intercept the connection and present you with their own self-signed certificate. You would think that you are using a secure connection with your email server but you are really using a secure connection to an attacker's email server. – SSL Shopper

    So somebody could switch out my self-signed certificate with their own, and I wouldn't be able to detect it? The way this site phrases it makes it sound worse to install a self-signed certificate than to leave your site without a certificate at all.

        Self-signed certificates cannot (by nature) be revoked, which may allow an attacker who has already gained access to monitor and inject data into a connection to spoof an identity if a private key has been compromised. CAs, on the other hand, have the ability to revoke a compromised certificate if alerted, which prevents its further use. – Wikipedia

    Does this mean that the only way someone could switch out their own certificate for mine is for them to find out the private key? I suppose this is more secure, but I'm still slightly confused about what exactly results from using a self-signed certificate. Is the only issue that obnoxious security warning that pops up in your browser when directed to the site, or is there more to it?

    Now in my case, I want to add an SSL certificate to a minuscule WordPress blog I run that I don't expect anyone else will read anytime soon; I mainly started it to get into the habit of blogging, and to learn more about the process of administrating a site (e.g. what to do in situations like this one). Whenever I go to the login page and there's an HTTP:// instead of HTTPS://, I cringe a little. Submitting my password feels like I'm shouting my password out loud with hundreds of people listening. I don't plan on adding any other authors to the site, so I am the only person who would ever need to log in. This isn't a site I'm trying to get page views from, or one that handles e-commerce or any sensitive info like that; it's simply my username and password to log in with. One of the concerns (that I've gathered so far) about a self-signed certificate is that non-technical users might be scared off by the security warning, but that would not be an issue in my case.

    TL;DR: If scaring visitors away isn't a concern (which it isn't in my case), is it acceptable to use a self-signed certificate for the purpose of encrypting my WordPress blog's password, or are there added security issues I should be aware of? Essentially, I'm wondering whether adding a self-signed certificate will be safer than leaving my login page the way it is now, or if it adds the potential for more security breaches than leaving it sans SSL.
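    For reference, generating such a certificate is a one-liner with openssl. The sketch below is only an illustration of the mechanics being discussed, assuming a typical Linux host; blog.example.com and the output paths are placeholders, not details from the question:

        # Create a 2048-bit RSA key and a self-signed certificate valid for one year
        openssl req -x509 -newkey rsa:2048 -nodes \
            -keyout /etc/ssl/private/blog.key \
            -out /etc/ssl/certs/blog.crt \
            -days 365 -subj "/CN=blog.example.com"

    The browser warning exists because no CA vouches for the certificate, but the connection is still encrypted; if you are the only user and you verify the certificate (or its fingerprint) the first time your browser presents it, a passive eavesdropper can no longer read the password off the wire.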

    Read the article

  • Pondering New Technology

    - by MOSSLover
    So I have been standing at a fork for a year, peering down one path and then the other, trying to figure out where I fit in.  I was so enthusiastic and excited about Silverlight when it came out.  It was this amazing, awesome technology that had this really cool animation and webcam/multimedia piece.  I thought if I put my money on Silverlight it was going somewhere; then HTML5 came out and the wind shifted.  I realized times were changing.

    I have been working with web technologies since I was 15 years old, playing with HTML and JavaScript, and even CSS back when it first came out.  In tech years, 15 years is forever.  Things change so quickly and so often.  So I guess the question is: where am I heading?  The answer is mobile technology.  For some reason I was resisting the change, and I have no idea why.  I guess I really wanted to see more than one player.  I didn't quite feel that Microsoft was ready with Windows Phone 7.  It was a great start, but it just didn't feel like they went all the way.  Now with Windows 8 it feels like they are at version 2.0.  It's like hitting Silverlight 2.0, where they finally added the .NET bits.  The path is paved, but we don't know where it's leading.  Then we had 3.0, 4.0, and 5.0 to mature the technology into what it is today (man, I'm hoping they are going to roll some of the cool bits into other tech if they don't exist).

    Anyway, I'm on board, but I'm not buying a Windows tablet just yet.  I was hoping for a 7-inch screen from Apple around $300 or just above, and a 7-inch screen from the MS side around the same price.  What I got was the Apple side, but nothing from Microsoft.  I was pretty disappointed with the $500 market price on the RT version.  I realized Microsoft is close, but not quite where Apple is today.  Yes, the devices come with Office, but the sticker is just too much for a first-generation device.  If you remember correctly, the first-generation iPad was quite expensive too.  I guess for a first-generation device $500 is pretty good.

    So I guess what I'm trying to say is that I am shifting my focus entirely away from Silverlight and more towards mobile.  I will be doing a lot of posts on iOS, Windows 8, and Windows Phone 8 with SharePoint 2013.  Since I don't have a tablet, and don't foresee myself buying one just yet, it might be mostly on the phones for right now.  I want to do a bunch of testing on various devices, on what needs to be done in apps on each device.  I might add a bit on porting code from one to the other.  I think it's going to be a lot of fun and make things flow a little better for me.  In a way it's kind of like Star Trek VI, where they talk about the Undiscovered Country.  I'm going to jump forward completely and see where I land. Technorati Tags: SharePoint 2013,Mobile,Windows 8,iOS

    Read the article
