Search Results

Search found 4451 results on 179 pages for 'fred 22'.


  • Parsing an MS Project 2007 XML project file

    - by fred-22
    Has anyone got any idea how to read the XML file saved by MS Project 2007? The standard binary format is .MPP, but I'd like to view a project in a different viewer, so I've saved the project spec as XML. The viewer I'm using needs the parent task ID for each task. Where can I find that in the rather huge amount of XML data created by MS Project?
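
    Not an answer from the thread, but for what it's worth: in the Project XML schema, a task's parent is not stored directly. Each <Task> element carries a <UID> and an <OutlineLevel>, and the parent is the nearest preceding task one outline level up. Below is a minimal C# sketch of deriving parent UIDs; the namespace is the standard http://schemas.microsoft.com/project one, but treat the element names as assumptions to verify against your own file.

        using System.Collections.Generic;
        using System.Xml.Linq;

        // Sketch: derive each task's parent UID from outline levels.
        XNamespace ns = "http://schemas.microsoft.com/project";
        var doc = XDocument.Load("project.xml");
        var parents = new Dictionary<int, int?>();       // task UID -> parent UID (null = top level)
        var stack = new Stack<(int level, int uid)>();   // open ancestor chain

        foreach (var task in doc.Root.Element(ns + "Tasks").Elements(ns + "Task"))
        {
            int uid = (int)task.Element(ns + "UID");
            int level = (int)task.Element(ns + "OutlineLevel");

            // close any tasks at the same or deeper outline level
            while (stack.Count > 0 && stack.Peek().level >= level)
                stack.Pop();

            parents[uid] = stack.Count > 0 ? stack.Peek().uid : (int?)null;
            stack.Push((level, uid));
        }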


  • OpenCV - DLL missing, but it's not?

    - by charles-22
    I am trying just a basic program with OpenCV, with the following code:

        #include "cv.h"
        #include "highgui.h"

        int main()
        {
            IplImage* newImg;
            newImg = cvLoadImage("~/apple.bmp", 1);
            cvNamedWindow("Window", 1);
            cvShowImage("Window", newImg);
            cvWaitKey(0);
            cvDestroyWindow("Window");
            cvReleaseImage(&newImg);
            return 0;
        }

    When I run this, I get:

        The program can't start because libcxcore200.dll is missing from your computer. Try reinstalling the program to fix this problem.

    However, I can see this DLL. It exists. I have added the following to the input dependencies for my linker:

        C:\OpenCV2.0\lib\libcv200.dll.a
        C:\OpenCV2.0\lib\libcvaux200.dll.a
        C:\OpenCV2.0\lib\libcxcore200.dll.a
        C:\OpenCV2.0\lib\libhighgui200.dll.a

    What gives? I'm using Visual Studio 2008.
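
    Not from the thread, but a common explanation for this symptom: the .dll.a files are MinGW import libraries used only at link time. At run time, Windows still has to locate libcxcore200.dll itself, and it searches the executable's folder and PATH, not the lib folder you linked against. A sketch of two things to try (the bin path is an assumption based on a default OpenCV 2.0 install):

        rem Option 1: put the OpenCV DLL folder on PATH for this session
        set PATH=C:\OpenCV2.0\bin;%PATH%

        rem Option 2: copy the DLLs next to the built .exe
        copy C:\OpenCV2.0\bin\*.dll Debug\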


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people use Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common case. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary data, how to upload it into blob storage in blocks through the .NET SDK. But the question is, how can we upload these large files from the client side, for example, from a browser? This came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer a file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of the file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. This code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);
                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if we select a large file, say a 6 GB HD movie, then after uploading for a few minutes the page errors out and the upload is terminated. In ASP.NET there is a limit on the request length: the maximum request length is defined in the web.config file, and it is a number less than about 4 GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer terminates the connection if it exceeds the timeout period; from my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can control the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, so we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    Uploading a big file in chunks was a big challenge until HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution. In HTML5, the File interface was improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) reads the part of the file from the 512th byte to the 768th byte and returns a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing chunked upload.
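
    As a side note on the ASP.NET request-length limit mentioned above, the relevant settings live in web.config. The snippet below is a minimal sketch, not part of the original code; the values are illustrative only (maxRequestLength is in kilobytes, maxAllowedContentLength is in bytes and applies to IIS 7+ request filtering):

        <system.web>
          <!-- sketch: 64 MB request limit, 5 minute execution timeout -->
          <httpRuntime maxRequestLength="65536" executionTimeout="300" />
        </system.web>
        <system.webServer>
          <security>
            <requestFiltering>
              <requestLimits maxAllowedContentLength="67108864" />
            </requestFiltering>
          </security>
        </system.webServer>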
    We will use the File interface to represent the file the user selected in the browser and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10 MB file with 512 KB chunks, we can read it in 512 KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box to specify the block size in KB, and a button to start the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function to upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object: if any of them is "undefined", the condition result will be "false", which means your browser doesn't support these premium features and it's time to get your browser updated. FormData is another new feature we are going to use later; it generates a temporary form for us, and we will use it to create a form with the chunk and its associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that, we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into the blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, when a chunk is received, it will be uploaded as a block into Blob Storage and finally committed with the index list through BlockBlob.PutBlockList. But all of these invocations are ajax calls, which means they are not synchronous, so we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library much in this post. We will put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js to let it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk-upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see below, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all functions have executed successfully I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            ... ...
            // invoke the functions one by one,
            // then invoke the commit ajax call to put the blocks into the blob in azure storage
            async.series(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        });

    That's all on the client side. The outline of our logic is:
    - Calculate the start and end byte index for each chunk based on the block size.
    - Define the functions that read a chunk from the file and upload the content to the backend service through ajax.
    - Execute the functions defined in the previous step with async.js.
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client-side JavaScript code. It uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service; it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, and finally commit them as one blob. Since on the client side we upload chunks by making an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Home controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side.
    I then used the Windows Azure SDK to create a blob container (in this case we use the container named "test") and create a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it so that we can finally put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that, our blob appears in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need. (Diagram of the whole upload process omitted.) Below is the full client-side JavaScript code.
        <script type="text/javascript" src="~/Scripts/async.js"></script>
        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                    // assert the browser supports html5
                    if (window.File && window.Blob && window.FormData) {
                        alert("Your browser is awesome, let's rock!");
                    }
                    else {
                        alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                        return;
                    }

                    // start to upload each file in chunks
                    var files = $("#upload_files")[0].files;
                    for (var i = 0; i < files.length; i++) {
                        var file = files[i];
                        var fileSize = file.size;
                        var fileName = file.name;

                        // calculate the start and end byte index for each block (chunk),
                        // with the index, file name and index list for future use
                        var blockSizeInKB = $("#block_size").val();
                        var blockSize = blockSizeInKB * 1024;
                        var blocks = [];
                        var offset = 0;
                        var index = 0;
                        var list = "";
                        while (offset < fileSize) {
                            var start = offset;
                            var end = Math.min(offset + blockSize, fileSize);

                            blocks.push({
                                name: fileName,
                                index: index,
                                start: start,
                                end: end
                            });
                            list += index + ",";

                            offset = end;
                            index++;
                        }

                        // define the function array and push all chunk upload operations into it
                        var putBlocks = [];
                        blocks.forEach(function (block) {
                            putBlocks.push(function (callback) {
                                // load a blob based on the start and end index of each chunk
                                var blob = file.slice(block.start, block.end);
                                // put the file name, index and blob into a temporary form
                                var fd = new FormData();
                                fd.append("name", block.name);
                                fd.append("index", block.index);
                                fd.append("file", blob);
                                // post the form to the backend service (asp.net mvc controller action)
                                $.ajax({
                                    url: "/Home/UploadInFormData",
                                    data: fd,
                                    processData: false,
                                    contentType: "multipart/form-data",
                                    type: "POST",
                                    success: function (result) {
                                        if (!result.success) {
                                            alert(result.error);
                                        }
                                        callback(null, block.index);
                                    }
                                });
                            });
                        });

                        // invoke the functions one by one,
                        // then invoke the commit ajax call to put the blocks into the blob in azure storage
                        async.series(putBlocks, function (error, result) {
                            var data = {
                                name: fileName,
                                list: list
                            };
                            $.post("/Home/Commit", data, function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                else {
                                    alert("done!");
                                }
                            });
                        });
                    }
                });
            });
        </script>

    And below is the full ASP.NET MVC controller code.
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, we can watch our application upload chunks of the size we specified to the server through background ajax calls and then commit all the chunks as one blob. We can then find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solves both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout, but it might introduce a performance problem, since we upload the chunks in sequence. To improve upload performance we can modify our client-side code a bit to invoke the upload operations in parallel. The good news is that the async.js library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks uses async.series, which means all functions are executed in sequence. Now we change this code to async.parallel, which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            ... ...
            // invoke the functions in parallel,
            // then invoke the commit ajax call to put the blocks into the blob in azure storage
            async.parallel(putBlocks, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        });

    This way all chunks are uploaded to the server at the same time to maximize bandwidth usage. This works if the file is not very large and the chunk size is not very small. But for a large file it might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            ... ...
            // invoke the functions in parallel with a concurrency limit of 4,
            // then invoke the commit ajax call to put the blocks into the blob in azure storage
            async.parallelLimit(putBlocks, 4, function (error, result) {
                var data = {
                    name: fileName,
                    list: list
                };
                $.post("/Home/Commit", data, function (result) {
                    if (!result.success) {
                        alert(result.error);
                    }
                    else {
                        alert("done!");
                    }
                });
            });
        });

    Summary

    In this post we discussed how to upload files in chunks to a backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
    - File.slice: reads part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally arrived at the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to use the async.js JavaScript library to help us control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limit, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the cores, memory and bandwidth of the server side (the Windows Azure Cloud Service virtual machine). Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core CPU, 1.75 GB memory, 100 Mbps bandwidth).
    The test cases were:
    - Chunk size: 512 KB, 1 MB, 2 MB, 4 MB.
    - Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100 MB.
    - Each case was tested 3 times.

    (Test result chart omitted.)

    Some thoughts, but not guidance or best practice:
    - Parallel gets better performance than sequential.
    - There is no significant performance improvement between parallel with 4 threads and with 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1 MB - 2 MB gets the better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Remove duplicates from a list of nested dictionaries

    - by user2924306
    I'm writing my first Python program to manage users in Atlassian OnDemand using their RESTful API. I call the users/search?username= API to retrieve lists of users, which returns JSON. The result is a list of complex dictionaries that look something like this:

        [
            {
                "self": "http://www.example.com/jira/rest/api/2/user?username=fred",
                "name": "fred",
                "avatarUrls": {
                    "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=fred",
                    "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=fred",
                    "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=fred",
                    "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=fred"
                },
                "displayName": "Fred F. User",
                "active": false
            },
            {
                "self": "http://www.example.com/jira/rest/api/2/user?username=andrew",
                "name": "andrew",
                "avatarUrls": {
                    "24x24": "http://www.example.com/jira/secure/useravatar?size=small&ownerId=andrew",
                    "16x16": "http://www.example.com/jira/secure/useravatar?size=xsmall&ownerId=andrew",
                    "32x32": "http://www.example.com/jira/secure/useravatar?size=medium&ownerId=andrew",
                    "48x48": "http://www.example.com/jira/secure/useravatar?size=large&ownerId=andrew"
                },
                "displayName": "Andrew Anderson",
                "active": false
            }
        ]

    I'm calling this multiple times and thus getting duplicate people in my results. I have been searching and reading but cannot figure out how to deduplicate this list. I figured out how to sort the list using a lambda function. I realize I could sort the list, then iterate and delete duplicates, but I'm thinking there must be a more elegant solution. Thank you!
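
    One possible approach (a sketch, not from the original question): the dicts themselves are unhashable, but each user record carries a "name" field, so you can key a dictionary by it and keep the values. That "name" is unique per user is my assumption here.

        # Sketch: dedupe a list of user dicts by the "name" field
        # (assumes "name" uniquely identifies a user).
        def dedupe_users(users):
            return list({user["name"]: user for user in users}.values())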


  • Flash doesn't connect to socket even though policy allows it

    - by Bart van Heukelom
    In my Flash app, I'm connecting to my server like this:

        Security.loadPolicyFile("xmlsocket://example.com:12860");
        socket = new Socket("example.com", 12869);
        socket.writeByte(...);
        ...
        socket.flush();

    On port 12860 I'm running a socket policy server, which (according to this document) correctly serves up my policy like this:

        00000000  3c 70 6f 6c 69 63 79 2d 66 69 6c 65 2d 72 65 71  <policy- file-req
        00000010  75 65 73 74 2f 3e 00                             uest/>.
        00000000  3c 63 72 6f 73 73 2d 64 6f 6d 61 69 6e 2d 70 6f  <cross-d omain-po
        00000010  6c 69 63 79 3e 3c 73 69 74 65 2d 63 6f 6e 74 72  licy><si te-contr
        00000020  6f 6c 20 70 65 72 6d 69 74 74 65 64 2d 63 72 6f  ol permi tted-cro
        00000030  73 73 2d 64 6f 6d 61 69 6e 2d 70 6f 6c 69 63 69  ss-domai n-polici
        00000040  65 73 3d 22 6d 61 73 74 65 72 2d 6f 6e 6c 79 22  es="mast er-only"
        00000050  20 2f 3e 3c 61 6c 6c 6f 77 2d 61 63 63 65 73 73   /><allo w-access
        00000060  2d 66 72 6f 6d 20 64 6f 6d 61 69 6e 3d 22 2a 22  -from do main="*"
        00000070  20 74 6f 2d 70 6f 72 74 73 3d 22 31 32 38 36 39   to-port s="12869
        00000080  22 20 2f 3e 3c 2f 63 72 6f 73 73 2d 64 6f 6d 61  " /></cr oss-doma
        00000090  69 6e 2d 70 6f 6c 69 63 79 3e 00                 in-polic y>.

    I get no security warnings, which I used to get before the policy server was in place. Still, the connection to port 12869 doesn't work. It is established (I can see it in Wireshark and on the server), but no data is sent by Flash. It might be worth knowing that the SWF itself is served from example.com as well.


  • Oracle Database Core Tech Seminar: Oracle Data Guard, Oracle Recovery Manager (RMAN), Flashback

    - by user788995
    Date: 2012/05/14
    An Oracle Database Core Tech Seminar session covering Oracle Data Guard, Oracle Recovery Manager (RMAN) and Oracle Flashback Technology, including Active Data Guard. Materials:
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/D3-22.wmv
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/mp4/D3-22.mp4
    http://www.oracle.com/technetwork/jp/ondemand/database/db-new/d3-22-dl-1626591-ja.pdf


  • More SharePoint 2010 Expression Builders

    - by Ricardo Peres
    Introduction

    Following my last post, I decided to publish the whole set of expression builders that I use with SharePoint. For those who don't know about expression builders: they allow us to employ a declarative approach, so that we don't have to write code for "gluing" things together, like getting a value from the query string, the page's underlying SPListItem or the current SPContext and assigning it to a control's property. These expression builders cover some quite common scenarios, I use them quite often, and I hope you find them useful as well.

    SPContextExpression

    This expression builder allows us to specify an expression to be processed on the SPContext.Current property object. For example:

        <asp:Literal runat="server" Text="<%$ SPContextExpression:Site.RootWeb.Lists[0].Author.LoginName %>"/>

    It is identical to having the following code:

        String authorName = SPContext.Current.Site.RootWeb.Lists[0].Author.LoginName;

    SPFarmProperty

    Returns a property stored at the farm level:

        <asp:Literal runat="server" Text="<%$ SPFarmProperty:SomeProperty %>"/>

    Identical to:

        Object someProperty = SPFarm.Local.Properties["SomeProperty"];

    SPField

    Returns the value of a field of the page's underlying list item:

        <asp:Literal runat="server" Text="<%$ SPField:Title %>"/>

    Does the same as:

        String title = SPContext.Current.ListItem["Title"] as String;

    SPIsInAudience

    Checks if the current user belongs to an audience:

        <asp:CheckBox runat="server" Checked="<%$ SPIsInAudience:SomeAudience %>"/>

    Equivalent to:

        AudienceManager audienceManager = new AudienceManager(SPServiceContext.Current);
        Audience audience = audienceManager.Audiences["SomeAudience"];
        Boolean isMember = audience.IsMember(SPContext.Current.Web.CurrentUser.LoginName);

    SPIsInGroup

    Checks if the current user belongs to a group:

        <asp:CheckBox runat="server" Checked="<%$ SPIsInGroup:SomeGroup %>"/>

    The equivalent C# code is:

        SPContext.Current.Web.CurrentUser.Groups.OfType<SPGroup>().Any(x => String.Equals(x.Name, "SomeGroup", StringComparison.OrdinalIgnoreCase));

    SPProperty

    Returns the value of a user profile property for the current user:

        <asp:Literal runat="server" Text="<%$ SPProperty:LastName %>"/>

    The same code in C# would be:

        UserProfileManager upm = new UserProfileManager(SPServiceContext.Current);
        UserProfile u = upm.GetUserProfile(false);
        Object property = u["LastName"].Value;

    SPQueryString

    Returns a value passed on the query string:

        <asp:GridView runat="server" PageIndex="<%$ SPQueryString:PageIndex %>" />

    Is equivalent to (no SharePoint code this time):

        Int32 pageIndex = (Int32) Convert.ChangeType(HttpContext.Current.Request.QueryString["PageIndex"], typeof(Int32));

    SPWebProperty

    Returns the value of a property stored at the site level:

        <asp:Literal runat="server" Text="<%$ SPWebProperty:__ImagesListId %>"/>

    You can get the same result with:

        String imagesListId = SPContext.Current.Web.AllProperties["__ImagesListId"] as String;

    Code

    OK, let's move to the code.
    First, a common abstract base class, mainly for inheriting the conversion method:

        public abstract class SPBaseExpressionBuilder : ExpressionBuilder
        {
            #region Protected static methods
            protected static Object Convert(Object value, PropertyInfo propertyInfo)
            {
                if (value != null)
                {
                    if (propertyInfo.PropertyType.IsAssignableFrom(value.GetType()) == false)
                    {
                        if (propertyInfo.PropertyType.IsEnum == true)
                        {
                            value = Enum.Parse(propertyInfo.PropertyType, value.ToString(), true);
                        }
                        else if (propertyInfo.PropertyType == typeof(String))
                        {
                            value = value.ToString();
                        }
                        else if ((typeof(IConvertible).IsAssignableFrom(propertyInfo.PropertyType) == true) && (typeof(IConvertible).IsAssignableFrom(value.GetType()) == true))
                        {
                            value = System.Convert.ChangeType(value, propertyInfo.PropertyType);
                        }
                    }
                }

                return (value);
            }
            #endregion

            #region Public override methods
            public override CodeExpression GetCodeExpression(BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                if (String.IsNullOrEmpty(entry.Expression) == true)
                {
                    return (new CodePrimitiveExpression(String.Empty));
                }
                else
                {
                    return (new CodeMethodInvokeExpression(new CodeMethodReferenceExpression(new CodeTypeReferenceExpression(this.GetType()), "GetValue"), new CodePrimitiveExpression(entry.Expression.Trim()), new CodePropertyReferenceExpression(new CodeArgumentReferenceExpression("entry"), "PropertyInfo")));
                }
            }
            #endregion

            #region Public override properties
            public override Boolean SupportsEvaluate
            {
                get
                {
                    return (true);
                }
            }
            #endregion
        }

    Next, the code for each expression builder:

        [ExpressionPrefix("SPContext")]
        public class SPContextExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String expression, PropertyInfo propertyInfo)
            {
                SPContext context = SPContext.Current;
                Object expressionValue = DataBinder.Eval(context, expression.Trim().Replace('\'', '"'));

                expressionValue = Convert(expressionValue, propertyInfo);

                return (expressionValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPFarmProperty")]
        public class SPFarmPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                Object propertyValue = SPFarm.Local.Properties[propertyName];

                propertyValue = Convert(propertyValue, propertyInfo);

                return (propertyValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPField")]
        public class SPFieldExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String fieldName, PropertyInfo propertyInfo)
            {
                Object fieldValue = SPContext.Current.ListItem[fieldName];

                fieldValue = Convert(fieldValue, propertyInfo);

                return (fieldValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPIsInAudience")]
        public class SPIsInAudienceExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String audienceName, PropertyInfo info)
            {
                audienceName = audienceName.Trim();

                if ((audienceName.StartsWith("'") == true) && (audienceName.EndsWith("'") == true))
                {
                    audienceName = audienceName.Substring(1, audienceName.Length - 2);
                }

                AudienceManager manager = new AudienceManager();
                Object value = manager.IsMemberOfAudience(SPControl.GetContextWeb(HttpContext.Current).CurrentUser.LoginName, audienceName);

                if (info.PropertyType == typeof(String))
                {
                    value = value.ToString();
                }

                return (value);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPIsInGroup")]
        public class SPIsInGroupExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String groupName, PropertyInfo info)
            {
                groupName = groupName.Trim();

                if ((groupName.StartsWith("'") == true) && (groupName.EndsWith("'") == true))
                {
                    groupName = groupName.Substring(1, groupName.Length - 2);
                }

                Object value = SPControl.GetContextWeb(HttpContext.Current).CurrentUser.Groups.OfType<SPGroup>().Any(x => String.Equals(x.Name, groupName, StringComparison.OrdinalIgnoreCase));

                if (info.PropertyType == typeof(String))
                {
                    value = value.ToString();
                }

                return (value);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPProperty")]
        public class SPPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String propertyName, System.Reflection.PropertyInfo propertyInfo)
            {
                SPServiceContext serviceContext = SPServiceContext.GetContext(HttpContext.Current);
                UserProfileManager upm = new UserProfileManager(serviceContext);
                UserProfile up = upm.GetUserProfile(false);
                Object propertyValue = (up[propertyName] != null) ? up[propertyName].Value : null;

                propertyValue = Convert(propertyValue, propertyInfo);

                return (propertyValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPQueryString")]
        public class SPQueryStringExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String parameterName, PropertyInfo propertyInfo)
            {
                Object parameterValue = HttpContext.Current.Request.QueryString[parameterName];

                parameterValue = Convert(parameterValue, propertyInfo);

                return (parameterValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

        [ExpressionPrefix("SPWebProperty")]
        public class SPWebPropertyExpressionBuilder : SPBaseExpressionBuilder
        {
            #region Public static methods
            public static Object GetValue(String propertyName, PropertyInfo propertyInfo)
            {
                Object propertyValue = SPContext.Current.Web.AllProperties[propertyName];

                propertyValue = Convert(propertyValue, propertyInfo);

                return (propertyValue);
            }
            #endregion

            #region Public override methods
            public override Object EvaluateExpression(Object target, BoundPropertyEntry entry, Object parsedData, ExpressionBuilderContext context)
            {
                return (GetValue(entry.Expression, entry.PropertyInfo));
            }
            #endregion
        }

    Registration

    You probably know how to register them, but here it goes again: add the following snippet to your Web.config file, inside the configuration/system.web/compilation/expressionBuilders section:

        <add expressionPrefix="SPContext" type="MyNamespace.SPContextExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPFarmProperty" type="MyNamespace.SPFarmPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPField" type="MyNamespace.SPFieldExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPIsInAudience" type="MyNamespace.SPIsInAudienceExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPIsInGroup" type="MyNamespace.SPIsInGroupExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPProperty" type="MyNamespace.SPPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPQueryString" type="MyNamespace.SPQueryStringExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
        <add expressionPrefix="SPWebProperty" type="MyNamespace.SPWebPropertyExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />

    I'll leave it up to you to figure out the best way to deploy this to your server!
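
    For reference, a sketch of the standard enclosing Web.config structure those <add> elements live in (the prefix/type values are the same placeholders as above):

        <configuration>
          <system.web>
            <compilation>
              <expressionBuilders>
                <add expressionPrefix="SPField"
                     type="MyNamespace.SPFieldExpressionBuilder, MyAssembly, Culture=neutral, Version=1.0.0.0, PublicKeyToken=xxx" />
                <!-- ... the remaining builders go here ... -->
              </expressionBuilders>
            </compilation>
          </system.web>
        </configuration>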


  • SerialPort ReadLine() after Thread.Sleep() goes crazy

    - by Mat
    Hi everybody, I've been fighting with this issue for a day and I can't find an answer for it. I am trying to read data from a GPS device through a COM port in Compact Framework C#. I am using the SerialPort class (actually my own ComPort class wrapping SerialPort, but it only adds two fields I need, nothing special). Anyway, I am running a while loop in a separate thread which reads a line from the port, analyzes the NMEA data, prints it, catches all exceptions and then sleeps the thread for 200 ms, because I need the CPU for other threads. Without the Sleep it works fine, but uses 100% CPU. With the Sleep, after a few minutes the output from the COM port looks like this:

        GPGSA,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
        GSA,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
        A,A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
        ,18,12,271,24,24,05,020,24,14,04,326,25,11,03,023,*76
        A,3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
        3,09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
        09,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F
        ,12,22,17,15,27,,,,,,,2.6,1.6,2.1*3F

    As you can see, the same message is read several times, but truncated. I wonder what I'm doing wrong... My port configuration:

        port.ReadBufferSize = 4096;
        port.BaudRate = 4800;
        port.DataBits = 8;
        port.Parity = Parity.None;
        port.StopBits = StopBits.One;
        port.NewLine = "\r\n";
        port.ReadTimeout = 1000;
        port.ReceivedBytesThreshold = 100000;

    And my reading function:

        private void processGps()
        {
            while (!closing)
            {
                // reconnect if needed
                try
                {
                    string sentence = port.ReadLine();
                    // here print the sentence
                    // analyze the sentence (this takes some time, 50-100 ms)
                }
                catch (TimeoutException)
                {
                    Thread.Sleep(0);
                }
                catch (IOException ioex)
                {
                    // handle the IO exception (show some info on the screen)
                }
                Thread.Sleep(200);
            }
        }

    There is some more code in this function, like reconnecting if the device is lost, but it is not called when the GPS is connected properly. I have tried port.DiscardInBuffer() after some blocks of code (in the TimeoutException handler, after the read...). Did anyone have a similar problem? I really don't know what I'm doing wrong. The only way to get rid of it is removing the last Sleep... Thanks in advance! Best Regards, Mat
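
    Not from the question, but one commonly suggested alternative worth sketching (untested against the Compact Framework, and the buffering helper names are mine): let the port notify you instead of polling, so the loop neither spins nor sleeps while data piles up. This would replace processGps inside the same class:

        using System.Text;

        // Sketch: event-driven reading with a line buffer instead of polling + Sleep.
        // SerialPort.DataReceived fires on a pool thread whenever bytes arrive.
        private StringBuilder lineBuffer = new StringBuilder();

        private void StartGps()
        {
            port.ReceivedBytesThreshold = 1;          // fire as soon as any data arrives
            port.DataReceived += PortDataReceived;
            port.Open();
        }

        private void PortDataReceived(object sender, SerialDataReceivedEventArgs e)
        {
            lineBuffer.Append(port.ReadExisting());   // drain everything available now
            string buffered = lineBuffer.ToString();
            int newline;
            while ((newline = buffered.IndexOf("\r\n")) >= 0)
            {
                string sentence = buffered.Substring(0, newline);
                buffered = buffered.Substring(newline + 2);
                // analyze the NMEA sentence here
            }
            lineBuffer = new StringBuilder(buffered); // keep the partial tail for next time
        }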


  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
    Background

    Users can pick dates as shown in the following screen shot. Any starting month/day and ending month/day combination is valid, such as:
    - Mar 22 to Jun 22
    - Dec 1 to Feb 28

    The second combination is difficult (I call it the "tricky date scenario") because the ending month/day falls in the year after the starting month/day. That is to say, for the year 1900 (also shown selected in the screen shot above), the full dates would be:
    - Dec 22, 1900 to Feb 28, 1901
    - Dec 22, 1901 to Feb 28, 1902
    - ...
    - Dec 22, 2007 to Feb 28, 2008
    - Dec 22, 2008 to Feb 28, 2009

    Problem

    Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year-wrapping problem.

    Inputs

    The query receives as parameters:
    - Year1, Year2: the full range of years, independent of the month/day combination.
    - Month1, Day1: the starting day within the year to gather data.
    - Month2, Day2: the ending day within the year (or the next year) to gather data.

    Previous Attempt

    Consider the following MySQL code (that worked):

        end_year = start_year + greatest(
          -1 * sign(
            datediff(
              date( concat_ws('-', year, end_month, end_day ) ),
              date( concat_ws('-', year, start_month, start_day ) )
            )
          ), 0 )

    How it works, with respect to the tricky date scenario:
    - Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900.
    - Count the difference, in days, between the two dates.
    - If the result is negative, the year for the second date must be incremented by 1. In this case: add 1 to the current year and create a new end date, Feb 28, 1901.
    - If the result is positive, the dates were provided in chronological order and nothing special needs to be done.
    - Check whether the date range for the data falls between the start and the calculated end date.

    This worked in MySQL because the difference between dates could be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of the relative chronological order.

    Question

    How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)?

        SELECT m.amount
          FROM measurement m
         WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1)
           AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2)

    Any thoughts, comments, or questions? (The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.)

    Versions

    PostgreSQL 8.4.4 and PHP 5.2.10
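
    A sketch of one possible direction (mine, not from the question): encode each month/day pair as a single mm*100+dd number, then switch between a BETWEEN and an OR depending on whether the range wraps the year boundary. The :month1-style placeholders stand in for the query parameters.

        -- mmdd encoding: Mar 22 -> 322, Dec 1 -> 1201
        SELECT m.amount
          FROM measurement m
         WHERE CASE
                 WHEN (:month1 * 100 + :day1) <= (:month2 * 100 + :day2) THEN
                   -- normal range: start and end fall within the same year
                   extract(MONTH FROM m.taken) * 100 + extract(DAY FROM m.taken)
                     BETWEEN (:month1 * 100 + :day1) AND (:month2 * 100 + :day2)
                 ELSE
                   -- wrapped range (e.g. Dec 22 .. Feb 28): on/after the start OR on/before the end
                   extract(MONTH FROM m.taken) * 100 + extract(DAY FROM m.taken) >= (:month1 * 100 + :day1)
                   OR
                   extract(MONTH FROM m.taken) * 100 + extract(DAY FROM m.taken) <= (:month2 * 100 + :day2)
               END;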


  • iOS receivedData from NSURLConnection is nil

    - by yhl
    I was wondering if anyone could point out why I'm not able to capture a web reply. My NSLog shows that my receivedData NSMutableData has a length of 0 for the entire run of the connection. The script that I hit when I click my login button returns a string. My NSLog output is pasted below, and after that I've pasted both the .h and .m files that I have.

    NSLog output:

        2012-11-28 23:35:22.083 [12548:c07] Clicked on button_login
        2012-11-28 23:35:22.090 [12548:c07] theConnection is succesful
        2012-11-28 23:35:22.289 [12548:c07] didReceiveResponse
        2012-11-28 23:35:22.290 [12548:c07] didReceiveData
        2012-11-28 23:35:22.290 [12548:c07] 0
        2012-11-28 23:35:22.290 [12548:c07] connectionDidFinishLoading
        2012-11-28 23:35:22.290 [12548:c07] 0

    ViewController.h:

        #import <UIKit/UIKit.h>

        @interface ViewController : UIViewController

        // Create an Action for the button.
        - (IBAction)button_login:(id)sender;

        // Add property declaration.
        @property (nonatomic, assign) NSMutableData *receivedData;

        @end

    ViewController.m:

        #import "ViewController.h"

        @interface ViewController ()
        @end

        @implementation ViewController

        @synthesize receivedData;

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            NSLog(@"didReceiveResponse");
            [receivedData setLength:0];
        }

        - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data
        {
            NSLog(@"didReceiveData");
            [receivedData appendData:data];
            NSLog(@"%d", [receivedData length]);
        }

        - (void)connectionDidFinishLoading:(NSURLConnection *)connection
        {
            NSLog(@"connectionDidFinishLoading");
            NSLog(@"%d", [receivedData length]);
        }

        - (IBAction)button_login:(id)sender
        {
            NSLog(@"Clicked on button_login");

            NSString *loginScriptURL = [NSString stringWithFormat:@"http://www.website.com/app/scripts/login.php?"];
            NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:loginScriptURL]];
            NSString *postString = [NSString stringWithFormat:@"&paramUsername=user&paramPassword=pass"];
            NSData *postData = [postString dataUsingEncoding:NSASCIIStringEncoding allowLossyConversion:YES];
            [theRequest setHTTPMethod:@"POST"];
            [theRequest setHTTPBody:postData];

            // Create the actual connection using the request.
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];

            // Capture the response
            if (theConnection) {
                NSLog(@"theConnection is succesful");
            } else {
                NSLog(@"theConnection failed");
            }
        }

        @end
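
    Not an official answer, but a likely culprit worth checking: receivedData is declared but never allocated, so it stays nil, and every -appendData: silently does nothing because messages to nil are no-ops in Objective-C. A minimal sketch of the fix:

        // Allocate the buffer when the response starts (and consider declaring the
        // property strong/retain rather than assign, so it is not released early):
        // @property (nonatomic, strong) NSMutableData *receivedData;

        - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response
        {
            self.receivedData = [[NSMutableData alloc] init];
        }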


  • Penetration testing with Nikto, unknown results found

    - by heldrida
    I've scanned my new webserver and I'm surprised to find that the results mention programs that I never installed. This is a fresh install of Ubuntu 12.04, and I just installed PHP 5.3, MySQL, fail2ban, Apache2, git and a few other things. Not sure if it's related, but I've got Wordpress installed; this doesn't have anything to do with myphpnuke, does it? I'd like to understand why I am getting these results:

        + OSVDB-27071: /phpimageview.php?pic=javascript:alert(8754): PHP Image View 1.0 is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-3931: /myphpnuke/links.php?op=search&query=[script]alert('Vulnerable);[/script]?query=: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-3931: /myphpnuke/links.php?op=MostPopular&ratenum=[script]alert(document.cookie);[/script]&ratetype=percent: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + /modules.php?op=modload&name=FAQ&file=index&myfaq=yes&id_cat=1&categories=%3Cimg%20src=javascript:alert(9456);%3E&parent_id=0: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + /modules.php?letter=%22%3E%3Cimg%20src=javascript:alert(document.cookie);%3E&op=modload&name=Members_List&file=index: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-4598: /members.asp?SF=%22;}alert('Vulnerable');function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-2946: /forum_members.asp?find=%22;}alert(9823);function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.

    Thanks for looking!


  • What limit should be used for iptables rate limiting for a webserver?

    - by Registered User
    I see on my webserver some logs as follows:

    203.252.157.98 - :25:02 "GET //phpmyadmin/ HTTP/1.1" 404 393 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:03 "GET //phpMyAdmin/ HTTP/1.1" 404 394 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:03 "GET //pma/ HTTP/1.1" 404 388 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:04 "GET //dbadmin/ HTTP/1.1" 404 391 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:05 "GET //myadmin/ HTTP/1.1" 404 391 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:06 "GET //phppgadmin/ HTTP/1.1" 404 394 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:06 "GET //PMA/ HTTP/1.1" 404 389 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:07 "GET //admin/ HTTP/1.1" 404 389 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :25:08 "GET //MyAdmin/ HTTP/1.1" 404 392 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :27:36 "GET //phpmyadmin/ HTTP/1.1" 404 393 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :27:42 "GET //phpMyAdmin/ HTTP/1.1" 404 394 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :27:42 "GET //pma/ HTTP/1.1" 404 388 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - :27:43 "GET //dbadmin/ HTTP/1.1" 404 391 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"
    203.252.157.98 - - "GET //myadmin/ HTTP/1.1" 404 391 "-" "Made by ZmEu @ WhiteHat Team - www.whitehat.ro"

    and some more as follows:

    118.219.234.254 - - [19/Oct/2010:22:57:41 "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:41 "GET /scripts/setup.php HTTP/1.1" 404 397 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:42 "GET /sqlweb/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:42 "GET /web/phpMyAdmin/scripts/setup.php HTTP/1.1" 404 408 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:43 "GET /web/phpmyadmin/scripts/setup.php HTTP/1.1" 404 408 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:44 "GET /web/scripts/setup.php HTTP/1.1" 404 400 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:44 "GET /webadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:45 "GET /webdb/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:22:57:45 "GET /websql/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:51 "GET /admin/phpmyadmin/scripts/setup.php HTTP/1.1" 404 407 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:52 "GET /admin/pma/scripts/setup.php HTTP/1.1" 404 404 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:52 "GET /admin/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:53 "GET /db/scripts/setup.php HTTP/1.1" 404 399 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:54 "GET /dbadmin/scripts/setup.php HTTP/1.1" 404 402 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:54 "GET /myadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:55 "GET /mysql/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:55 "GET /mysqladmin/scripts/setup.php HTTP/1.1" 404 405 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:56 "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 405 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:56 "GET /phpadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:57 "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 404 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:57 "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:58 "GET /scripts/setup.php HTTP/1.1" 404 397 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:58 "GET /sqlweb/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:59 "GET /web/phpMyAdmin/scripts/setup.php HTTP/1.1" 404 408 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:38:59 "GET /web/phpmyadmin/scripts/setup.php HTTP/1.1" 404 408 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:39:00 "GET /web/scripts/setup.php HTTP/1.1" 404 400 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:39:01 "GET /webadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:39:01 "GET /webdb/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    118.219.234.254 - - [19/Oct/2010:05:39:02 "GET /websql/scripts/setup.php HTTP/1.1" 404 401 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"

    I have 2 questions:

    1) When such an attack happens on my site, how do I detect it while the scanning is still going on (i.e. in as little time as possible)?

    2) I have decided to rate limit connections with iptables, so as to reduce to some extent such DoS attacks by script kiddies scanning for vulnerabilities in phpMyAdmin or some other script. How much should it be limited so that genuine users do not get kicked out? What is the best practice here?
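    For question 2, a minimal sketch of per-source connection-rate limiting with the iptables recent module; the port and the 20-hits-per-60-seconds threshold are illustrative assumptions, not a recommendation for any particular site:

        # track new connections to the web server per source address
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --set --name HTTP --rsource
        # drop sources that open more than 20 new connections within 60 seconds
        iptables -A INPUT -p tcp --dport 80 -m state --state NEW -m recent --update --seconds 60 --hitcount 20 --name HTTP --rsource -j DROP

    For question 1, a log watcher such as fail2ban covers detection: a filter matching the scanner signature in the access log (here, the "ZmEu" user agent or a burst of 404s) can raise an alert and ban the source within seconds, with no manual monitoring.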

    Read the article

  • iptables -P FORWARD DROP makes port forwarding slow

    - by Isaac
    I have three computers, linked like this:

    box1 (ubuntu)      box2, router & gateway (debian)      box3 (opensuse)
    [10.0.1.1] ---- [10.0.1.18,10.0.2.18,10.0.3.18] ---- [10.0.3.15]
                             |
                        box4, www
                        [10.0.2.1]

    Among other things I want box2 to do NAT and port forwarding, so that I can do ssh -p 2223 box2 to reach box3. For this I have the following iptables script:

    #!/bin/bash
    # flush
    iptables -F INPUT
    iptables -F FORWARD
    iptables -F OUTPUT
    iptables -t nat -F PREROUTING
    iptables -t nat -F POSTROUTING
    iptables -t nat -F OUTPUT

    # default
    default_action=DROP
    for chain in INPUT OUTPUT;do
        iptables -P $chain $default_action
    done
    iptables -P FORWARD DROP

    # allow ssh to local computer
    allowed_ssh_clients="10.0.1.1 10.0.3.15"
    for ip in $allowed_ssh_clients;do
        iptables -A OUTPUT -p tcp --sport 22 -d $ip -j ACCEPT
        iptables -A INPUT -p tcp --dport 22 -s $ip -j ACCEPT
    done

    # allow DNS
    iptables -A OUTPUT -p udp --dport 53 -m state \
        --state NEW,ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p udp --sport 53 -m state \
        --state ESTABLISHED,RELATED -j ACCEPT

    # allow HTTP & HTTPS
    iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -j ACCEPT
    iptables -A INPUT -p tcp -m multiport --sports 80,443 -j ACCEPT

    #
    # ROUTING
    #

    # allow routing
    echo 1 >/proc/sys/net/ipv4/ip_forward

    # nat
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    # http
    iptables -A FORWARD -p tcp --dport 80 -j ACCEPT
    iptables -A FORWARD -p tcp --sport 80 -j ACCEPT

    # ssh redirect
    iptables -t nat -A PREROUTING -p tcp -i eth1 --dport 2223 -j DNAT \
        --to-destination 10.0.3.15:22
    iptables -A FORWARD -p tcp --sport 22 -j ACCEPT
    iptables -A FORWARD -p tcp --dport 22 -j ACCEPT
    iptables -A FORWARD -p tcp --sport 1024:65535 -j ACCEPT
    iptables -A FORWARD -p tcp --dport 1024:65535 -j ACCEPT
    iptables -I FORWARD -j LOG --log-prefix "iptables denied: "

    While this works, it takes about 10 seconds to get a password prompt from my ssh command. Afterwards, the connection is as responsive as could be. If I change the default policy for my FORWARD chain to "ACCEPT", then the password prompt is there immediately. I have tried analysing the logs, but I cannot spot a difference in the logs for ACCEPT/DROP in my FORWARD chain. Also I have tried allowing all the unprivileged ports, as box1 uses those for doing ssh to box2. Any hints? (If the whole setup seems strange to you - the point of the exercise is to understand iptables ;))
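    The shape of this delay (about 10 seconds, then a perfectly responsive session) is typical of sshd on box3 doing a reverse-DNS lookup whose packets die silently: the FORWARD rules above only ACCEPT TCP, so box3's UDP DNS queries to the outside are dropped and have to time out before the password prompt appears. A hedged sketch of forward rules that would confirm or fix this (stateless on purpose, to keep the exercise simple):

        # let forwarded DNS queries and replies through
        iptables -A FORWARD -p udp --dport 53 -j ACCEPT
        iptables -A FORWARD -p udp --sport 53 -j ACCEPT

    Alternatively, setting UseDNS no in box3's /etc/ssh/sshd_config skips the lookup altogether; if either change removes the delay, the diagnosis is confirmed.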

    Read the article

  • Why can blocked IPs get through my iptables? What's wrong with this configuration?

    - by NeedSomeHelp
    (Why can/How are) blocked IPs (get/getting) through my iptables? Hello and thanks for your consideration... I have configured iptables and included (below) output from the command "iptables --line-numbers -n -L" yet IP addresses (like 31.41.219.180) from IP blocks I have already blocked are getting through. Please take a look and share any input you may have. Thank you. P.S. The initial ACCEPT IP addresses are for CloudFlare. . Chain INPUT (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 32267 14M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 149 8570 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 434 25606 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0 5 0 0 ACCEPT udp -- * * 103.21.244.0/22 0.0.0.0/0 6 0 0 ACCEPT udp -- * * 103.22.200.0/22 0.0.0.0/0 7 0 0 ACCEPT udp -- * * 103.31.4.0/22 0.0.0.0/0 8 0 0 ACCEPT udp -- * * 104.16.0.0/12 0.0.0.0/0 9 0 0 ACCEPT udp -- * * 108.162.192.0/18 0.0.0.0/0 10 0 0 ACCEPT udp -- * * 141.101.64.0/18 0.0.0.0/0 11 0 0 ACCEPT udp -- * * 162.158.0.0/15 0.0.0.0/0 12 0 0 ACCEPT udp -- * * 173.245.48.0/20 0.0.0.0/0 13 0 0 ACCEPT udp -- * * 188.114.96.0/20 0.0.0.0/0 14 0 0 ACCEPT udp -- * * 190.93.240.0/20 0.0.0.0/0 15 0 0 ACCEPT udp -- * * 197.234.240.0/22 0.0.0.0/0 16 0 0 ACCEPT udp -- * * 198.41.128.0/17 0.0.0.0/0 17 0 0 ACCEPT udp -- * * 199.27.128.0/21 0.0.0.0/0 18 0 0 ACCEPT tcp -- * * 103.21.244.0/22 0.0.0.0/0 19 9 468 ACCEPT tcp -- * * 103.22.200.0/22 0.0.0.0/0 20 0 0 ACCEPT tcp -- * * 103.31.4.0/22 0.0.0.0/0 21 0 0 ACCEPT tcp -- * * 104.16.0.0/12 0.0.0.0/0 22 858 44616 ACCEPT tcp -- * * 108.162.192.0/18 0.0.0.0/0 23 376 19552 ACCEPT tcp -- * * 141.101.64.0/18 0.0.0.0/0 24 0 0 ACCEPT tcp -- * * 162.158.0.0/15 0.0.0.0/0 25 257 13364 ACCEPT tcp -- * * 173.245.48.0/20 0.0.0.0/0 26 0 0 ACCEPT tcp -- * * 188.114.96.0/20 0.0.0.0/0 27 0 0 ACCEPT tcp -- * * 190.93.240.0/20 0.0.0.0/0 28 0 0 ACCEPT tcp -- * * 197.234.240.0/22 0.0.0.0/0 29 0 0 ACCEPT tcp -- * * 198.41.128.0/17 0.0.0.0/0 30 92 4784 ACCEPT tcp -- * * 199.27.128.0/21 0.0.0.0/0 31 0 0 DROP tcp -- * * 1.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 32 0 0 DROP tcp -- * * 101.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 33 0 0 DROP tcp -- * * 102.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 34 0 0 DROP tcp -- * * 103.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 35 18 1080 DROP tcp -- * * 109.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 36 0 0 DROP tcp -- * * 112.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 37 12 656 DROP tcp -- * * 113.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 38 0 0 DROP tcp -- * * 114.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 39 0 0 DROP tcp -- * * 115.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 40 8 352 DROP tcp -- * * 116.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 41 0 0 DROP tcp -- * * 117.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 42 0 0 DROP tcp -- * * 118.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 43 2 120 DROP tcp -- * * 119.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 44 0 0 DROP tcp -- * * 120.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 45 0 0 DROP tcp -- * * 121.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 46 4 160 DROP tcp -- * * 122.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 47 4 240 DROP tcp -- * * 123.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 48 0 0 DROP tcp -- * * 125.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 49 0 0 DROP tcp -- * * 134.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 50 0 0 DROP tcp -- * * 146.185.0.0/16 0.0.0.0/0 tcp dpts:1:50000 51 6 360 DROP tcp -- * * 148.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 52 0 0 DROP tcp -- * * 151.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 53 0 0 
DROP tcp -- * * 175.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 54 0 0 DROP tcp -- * * 176.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 55 0 0 DROP tcp -- * * 177.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 56 46 2696 DROP tcp -- * * 178.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 57 0 0 DROP tcp -- * * 179.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 58 4 224 DROP tcp -- * * 180.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 59 0 0 DROP tcp -- * * 181.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 60 0 0 DROP tcp -- * * 182.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 61 34 2040 DROP tcp -- * * 183.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 62 0 0 DROP tcp -- * * 185.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 63 0 0 DROP tcp -- * * 186.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 64 0 0 DROP tcp -- * * 187.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 65 18 912 DROP tcp -- * * 188.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 66 0 0 DROP tcp -- * * 189.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 67 0 0 DROP tcp -- * * 190.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 68 2 120 DROP tcp -- * * 192.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 69 0 0 DROP tcp -- * * 196.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 70 0 0 DROP tcp -- * * 197.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 71 5 300 DROP tcp -- * * 198.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 72 0 0 DROP tcp -- * * 2.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 73 0 0 DROP tcp -- * * 200.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 74 0 0 DROP tcp -- * * 201.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 75 6 360 DROP tcp -- * * 202.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 76 0 0 DROP tcp -- * * 203.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 77 4 160 DROP tcp -- * * 210.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 78 0 0 DROP tcp -- * * 211.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 79 2 96 DROP tcp -- * * 212.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 80 4 240 DROP tcp -- * * 213.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 81 0 0 DROP tcp -- * * 214.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 82 0 0 DROP tcp -- * * 215.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 83 0 0 DROP tcp -- * * 216.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 84 0 0 DROP tcp -- * * 217.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 85 4 172 DROP tcp -- * * 218.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 86 12 576 DROP tcp -- * * 219.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 87 7 372 DROP tcp -- * * 220.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 88 0 0 DROP tcp -- * * 222.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 89 0 0 DROP tcp -- * * 27.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 90 12 608 DROP tcp -- * * 31.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 91 11 528 DROP tcp -- * * 37.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 92 0 0 DROP tcp -- * * 41.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 93 0 0 DROP tcp -- * * 42.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 94 0 0 DROP tcp -- * * 43.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 95 8 480 DROP tcp -- * * 46.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 96 0 0 DROP tcp -- * * 49.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 97 6 360 DROP tcp -- * * 5.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 98 0 0 DROP tcp -- * * 58.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 99 0 0 DROP tcp -- * * 60.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 100 4 160 DROP tcp -- * * 61.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 101 32 1848 DROP tcp -- * * 62.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 102 0 0 DROP tcp -- * * 63.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 103 20 1200 DROP tcp -- * * 64.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 104 0 0 DROP tcp -- * * 65.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 105 266 15960 DROP tcp -- * * 66.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 106 3 180 DROP tcp -- * * 69.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 107 5 272 DROP tcp -- * * 72.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 108 0 0 DROP tcp -- * * 78.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 109 0 0 DROP tcp -- * * 81.0.0.0/8 
0.0.0.0/0 tcp dpts:1:50000 110 3 180 DROP tcp -- * * 82.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 111 0 0 DROP tcp -- * * 83.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 112 8 384 DROP tcp -- * * 84.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 113 0 0 DROP tcp -- * * 85.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 114 0 0 DROP tcp -- * * 86.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 115 6 360 DROP tcp -- * * 87.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 116 7 408 DROP tcp -- * * 88.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 117 0 0 DROP tcp -- * * 89.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 118 0 0 DROP tcp -- * * 90.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 119 0 0 DROP tcp -- * * 91.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 120 3 152 DROP tcp -- * * 92.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 121 20 992 DROP tcp -- * * 93.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 122 9 512 DROP tcp -- * * 94.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 123 5 272 DROP tcp -- * * 95.0.0.0/8 0.0.0.0/0 tcp dpts:1:50000 124 0 0 DROP udp -- * * 1.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 125 0 0 DROP udp -- * * 101.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 126 0 0 DROP udp -- * * 102.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 127 0 0 DROP udp -- * * 103.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 128 0 0 DROP udp -- * * 109.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 129 0 0 DROP udp -- * * 112.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 130 0 0 DROP udp -- * * 113.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 131 0 0 DROP udp -- * * 114.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 132 1 112 DROP udp -- * * 115.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 133 0 0 DROP udp -- * * 116.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 134 0 0 DROP udp -- * * 117.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 135 0 0 DROP udp -- * * 118.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 136 0 0 DROP udp -- * * 119.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 137 0 0 DROP udp -- * * 120.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 138 0 0 DROP udp -- * * 121.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 139 0 0 DROP udp -- * * 122.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 140 0 0 DROP udp -- * * 123.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 141 0 0 DROP udp -- * * 125.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 142 0 0 DROP udp -- * * 134.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 143 0 0 DROP udp -- * * 146.185.0.0/16 0.0.0.0/0 udp dpts:1:50000 144 0 0 DROP udp -- * * 148.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 145 0 0 DROP udp -- * * 151.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 146 0 0 DROP udp -- * * 175.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 147 0 0 DROP udp -- * * 176.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 148 1 70 DROP udp -- * * 177.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 149 0 0 DROP udp -- * * 178.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 150 0 0 DROP udp -- * * 179.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 151 0 0 DROP udp -- * * 180.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 152 0 0 DROP udp -- * * 181.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 153 0 0 DROP udp -- * * 182.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 154 0 0 DROP udp -- * * 183.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 155 0 0 DROP udp -- * * 185.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 156 1 74 DROP udp -- * * 186.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 157 0 0 DROP udp -- * * 187.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 158 0 0 DROP udp -- * * 188.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 159 0 0 DROP udp -- * * 189.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 160 0 0 DROP udp -- * * 190.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 161 0 0 DROP udp -- * * 192.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 162 0 0 DROP udp -- * * 196.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 163 0 0 DROP udp -- * * 197.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 164 0 0 DROP udp -- * * 198.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 165 0 0 DROP udp -- * * 2.0.0.0/8 0.0.0.0/0 udp 
dpts:1:50000 166 0 0 DROP udp -- * * 200.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 167 0 0 DROP udp -- * * 201.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 168 0 0 DROP udp -- * * 202.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 169 0 0 DROP udp -- * * 203.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 170 0 0 DROP udp -- * * 210.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 171 0 0 DROP udp -- * * 211.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 172 0 0 DROP udp -- * * 212.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 173 0 0 DROP udp -- * * 213.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 174 0 0 DROP udp -- * * 214.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 175 0 0 DROP udp -- * * 215.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 176 0 0 DROP udp -- * * 216.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 177 0 0 DROP udp -- * * 217.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 178 1 80 DROP udp -- * * 218.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 179 0 0 DROP udp -- * * 219.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 180 0 0 DROP udp -- * * 220.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 181 0 0 DROP udp -- * * 222.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 182 0 0 DROP udp -- * * 27.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 183 0 0 DROP udp -- * * 31.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 184 0 0 DROP udp -- * * 37.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 185 0 0 DROP udp -- * * 41.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 186 0 0 DROP udp -- * * 42.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 187 0 0 DROP udp -- * * 43.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 188 0 0 DROP udp -- * * 46.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 189 0 0 DROP udp -- * * 49.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 190 0 0 DROP udp -- * * 5.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 191 0 0 DROP udp -- * * 58.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 192 0 0 DROP udp -- * * 60.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 193 0 0 DROP udp -- * * 61.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 194 0 0 DROP udp -- * * 62.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 195 0 0 DROP udp -- * * 63.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 196 0 0 DROP udp -- * * 64.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 197 0 0 DROP udp -- * * 65.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 198 0 0 DROP udp -- * * 66.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 199 0 0 DROP udp -- * * 69.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 200 0 0 DROP udp -- * * 72.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 201 0 0 DROP udp -- * * 78.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 202 0 0 DROP udp -- * * 81.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 203 0 0 DROP udp -- * * 82.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 204 0 0 DROP udp -- * * 83.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 205 0 0 DROP udp -- * * 84.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 206 0 0 DROP udp -- * * 85.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 207 0 0 DROP udp -- * * 86.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 208 0 0 DROP udp -- * * 87.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 209 0 0 DROP udp -- * * 88.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 210 0 0 DROP udp -- * * 89.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 211 0 0 DROP udp -- * * 90.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 212 0 0 DROP udp -- * * 91.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 213 0 0 DROP udp -- * * 92.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 214 2 72 DROP udp -- * * 93.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 215 0 0 DROP udp -- * * 94.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 216 0 0 DROP udp -- * * 95.0.0.0/8 0.0.0.0/0 udp dpts:1:50000 217 0 0 DROP tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:12443 218 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11443 219 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:11444 220 23 1104 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8447 221 24 1152 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8443 222 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8880 
223 207 11096 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 224 19 996 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 225 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:21 226 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22 227 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:587 228 4 216 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:25 229 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:465 230 14 840 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:110 231 2 120 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:995 232 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:143 233 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:993 234 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:106 235 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306 236 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:5432 237 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9008 238 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:9080 239 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:137 240 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:138 241 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:139 242 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:445 243 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:1194 244 0 0 ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 udp dpt:53 245 0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:53 246 73 4488 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmp type 8 code 0 247 77 23598 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 0 0 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 0 0 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 0 0 ACCEPT all -- lo lo 0.0.0.0/0 0.0.0.0/0 5 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy DROP 0 packets, 0 bytes) num pkts bytes target prot opt in out source destination 1 31004 25M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED 2 1 333 REJECT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp flags:!0x17/0x02 state NEW reject-with tcp-reset 3 0 0 DROP all -- * * 0.0.0.0/0 0.0.0.0/0 state INVALID 4 434 25606 ACCEPT all -- * lo 0.0.0.0/0 0.0.0.0/0 5 328 21324 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0
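    A note on reading the listing above: iptables is first-match-wins, and the CloudFlare ACCEPT rules (5-30) sit in front of every DROP rule, so any request that reaches the site through CloudFlare arrives with a CloudFlare source address and is accepted before the per-/8 blocks are ever consulted. Blocking 31.41.219.180 in iptables therefore cannot stop that visitor's proxied traffic. A hedged diagnostic sketch (the address is the one from the question) to see which rule a given client actually hits:

        # temporarily log that source at the very top of INPUT, then watch the per-rule counters
        iptables -I INPUT 1 -s 31.41.219.180 -j LOG --log-prefix "dbg-31.41: "
        iptables -L INPUT -v -n --line-numbers | less

    If nothing is logged while the visitor keeps arriving, the packets carry a CloudFlare source address, and the block has to happen at the CloudFlare layer (or by the web server matching the CF-Connecting-IP header) rather than in iptables. Note also that every DROP above is limited to dpts 1:50000, so destination ports above 50000 fall through to the later ACCEPT rules.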

    Read the article

  • 5.5.0 smtp;554 transaction failed spam message not queued

    - by Miguel
    Some users are trying to send email to certain domains using Exchange Server 2003, but the message is always rejected and the following message is shown:

    5.5.0 smtp;554 Transaction Failed Spam Message not queued

    The IP is not in a black list (checked using http://whatismyipaddress.com/blacklist-check and is clean - not listed). The emails were checked using smtpdiag ("a troubleshooting tool designed to work directly on a Windows server with IIS/SMTP service enabled or with Exchange Server installed") and the connection using port 25 is ok. Also, an nslookup with set type=ptr shows (names and IP changed, ">" means I typed something):

    C:\Documents and Settings\administrator>nslookup
    Default Server: publicdns.isp.net
    Address: 10.10.10.10
    > server publicdns.isp.net
    Default Server: publicdns.isp.net
    Address: 10.10.10.10
    > set type=ptr
    > mydomain.com
    Server: publicdns.isp.net
    Address: 10.10.10.10
    mydomain.com
        primary name server = publicdns.isp.net
        responsible mail addr = root.isp.net
        serial = 2011061301
        refresh = 10800 (3 hours)
        retry = 3600 (1 hour)
        expire = 604800 (7 days)
        default TTL = 86400 (1 day)
    > 20.21.22.23
    Server: publicdns.isp.net
    Address: 10.10.10.10
    23.22.21.20.in-addr.arpa name = mail.mydomain.com
    20.21.in-addr.arpa nameserver = publicdns.isp.net
    20.21.in-addr.arpa nameserver = publicdns2.isp.net
    publicdns2.isp.net internet address = 10.10.10.11
    publicdns.isp.net internet address = 10.10.10.10
    Server: publicdns.isp.net
    Address: 10.10.10.10
    23.22.21.20.in-addr.arpa name = mail.mydomain.com
    20.21.in-addr.arpa nameserver = publicdns.isp.net
    20.21.in-addr.arpa nameserver = publicdns2.isp.net
    publicdns2.isp.net internet address = 10.10.10.11
    publicdns.isp.net internet address = 10.10.10.10
    > set type=mx
    > mydomain.com
    Server: publicdns.isp.net
    Address: 10.10.10.10
    mydomain.com MX preference = 10, mail exchanger = mail.mydomain.com
    mydomain.com nameserver = publicdns.isp.net
    mydomain.com nameserver = publicdns2.isp.net
    mail.mydomain.com internet address = 20.21.22.23
    publicdns2.isp.net internet address = 10.10.10.11
    publicdns.isp.net internet address = 10.10.10.10
    > set type=a
    > mydomain.com
    Server: publicdns.isp.net
    Address: 10.10.10.10
    Nombre: mydomain.com
    Address: 20.21.22.23

    When I test the spf record with http://www.mxtoolbox.com it shows:

    TXT mydomain.com 24 hrs v=spf1 a mx ptr ip4:20.21.22.23 mx:mail.mydomain.com -all

    Any clues of what's happening here?
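    One detail worth a second look is the SPF record itself: the ptr mechanism is discouraged by the SPF specification and is scored as suspicious by some receiving filters, and mx:mail.mydomain.com is redundant next to the mx and ip4 mechanisms already present. A hedged sketch of a tightened record, assuming 20.21.22.23 is the only host that sends mail for the domain:

        v=spf1 ip4:20.21.22.23 mx -all

    Checking what outside resolvers actually see (set type=txt in nslookup, or dig TXT mydomain.com from another network) is worth doing after any change; if DNS, PTR and SPF all check out, the 554 usually has to be chased with the receiving domain's postmaster, since it is their filter producing the rejection text.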

    Read the article

  • Apache2 cgi's crash on odbc db access (but run fine from shell)

    - by Martin
    Problem overview (details below): I'm having an apache2 + ruby integration problem when trying to connect to an ODBC data source. The main problem boils down to the fact that scripts that run fine from an interactive shell crash ruby on the database connect line when run as a cgi from apache2. Ruby cgi's that don't try to access the ODBC datasource work fine. And (again) ruby scripts that connect to a database with ODBC do fine when executed from the command line (or cron). This behavior is identical when I use perl instead of ruby. So, the issue seems to be with the environment provided for ruby (perl) by apache2, but I can't figure out what is wrong or what to do about it. Does anyone have any suggestions on how to get these cgi scripts to work properly? I've tried many different things to get this to work, and I'm happy to provide more detail of any aspect if that will help.

    Details:

    Mac OS X Server 10.5.8, Xserve, 2 x 2.66 Dual-Core Intel Xeon (12 GB)
    Apache 2.2.13
    ruby 1.8.6 (2008-08-11 patchlevel 287) [universal-darwin9.0]
    ruby-odbc 0.9997
    dbd-odbc (0.2.5)
    dbi (0.4.3)
    mod_ruby 1.3.0
    Perl -- 5.8.8
    DBI -- 1.609
    DBD::ODBC -- 1.23
    odbc driver: DataDirect SequeLink v5.5 (/Library/ODBC/SequeLink.bundle/Contents/MacOS/ivslk20.dylib)
    odbc datasource: FileMaker Server 10 (v10.0.2.206)

    A minimal version of a script (anonymized) that will crash under Apache but run successfully from a shell:

    #!/usr/bin/ruby
    require 'cgi'
    require 'odbc'
    cgi = CGI.new("html3")
    aConnection = ODBC::connect('DBFile', "username", 'password')
    aQuery = aConnection.prepare("SELECT zzz_kP_ID FROM DBTable WHERE zzz_kP_ID = 81044")
    aQuery.execute
    aRecord = aQuery.fetch_hash.inspect
    aQuery.drop
    aConnection.disconnect
    # aRecord = '{"zzz_kP_ID"=>81044.0}'
    cgi.out{ cgi.html{ cgi.body{ "<pre>Primary Key: #{aRecord}</pre>" } } }

    Example of running this from a shell:

    gamma% ./minimal.rb
    (offline mode: enter name=value pairs on standard input)
    Content-Type: text/html
    Content-Length: 134

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><HTML><BODY><pre>Primary Key: {"zzz_kP_ID"=>81044.0}</pre></font></BODY></HTML>%
    gamma%

    Typical crash log lines:

    Dec 22 14:02:38 gamma ReportCrash[79237]: Formulating crash report for process perl[79236]
    Dec 22 14:02:38 gamma ReportCrash[79237]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140237_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
    Dec 22 14:03:13 gamma ReportCrash[79256]: Formulating crash report for process perl[79253]
    Dec 22 14:03:13 gamma ReportCrash[79256]: Saved crashreport to /Library/Logs/CrashReporter/perl_2009-12-22-140311_HTCF.crash using uid: 0 gid: 0, euid: 0 egid: 0
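    Since the scripts behave from a shell and cron but crash only under Apache, the likeliest difference is the environment: the ODBC driver manager locates its DSNs and driver libraries through variables such as ODBCINI and ODBCSYSINI (and, on Mac OS X, DYLD_LIBRARY_PATH for the driver's dylibs), none of which Apache passes to CGI children by default. A hedged sketch for httpd.conf; the paths are assumptions based on the SequeLink location quoted above, and mod_env must be enabled:

        SetEnv ODBCINI /Library/ODBC/odbc.ini
        SetEnv ODBCSYSINI /Library/ODBC
        SetEnv DYLD_LIBRARY_PATH /Library/ODBC/SequeLink.bundle/Contents/MacOS

    Comparing `env` from an interactive shell with the environment printed by a trivial CGI is a quick way to see exactly which variable the crashing connect call is missing.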

    Read the article

  • Apache 2.4.3 php-fpm mod_fast_cgi and mod_cache

    - by Anjia
    Did anybody successfully configure mod_cache in Apache 2.4 with php-fpm and fastcgi? My cgi config:

    <IfModule mod_fastcgi.c>
        Alias /php5.fastcgi /var/www/fastcgi/php5.fastcgi
        AddHandler php-script .php
        FastCGIExternalServer /var/www/fastcgi/php5.fastcgi -socket /mnt/tmp/fast/php-fpm.sock -idle-timeout 1600 -pass-header Authorization
        Action php-script /php5.fastcgi virtual
    </IfModule>

    My php-fpm config is standard and I am loading mod_cache and mod_disk_cache in Apache. However, Apache does not seem to cache any content. The debug log file:

    [Fri Sep 07 23:22:59.691333 2012] [cache:debug] [pid 35623:tid 123613201929984] mod_cache.c(161): [client 10.0.0.22:21938] AH00750: Adding CACHE_SAVE filter for /index.html
    [Fri Sep 07 23:22:59.691345 2012] [cache:debug] [pid 35623:tid 123613201929984] mod_cache.c(171): [client 10.0.0.22:21938] AH00751: Adding CACHE_REMOVE_URL filter for /index.html
    [Fri Sep 07 23:23:01.326598 2012] [cache:debug] [pid 35623:tid 123613185144576] cache_storage.c(626): [client 10.0.0.110:5414] AH00698: cache: Key for entity /index.html?(null) is `http://10.0.1.16:8080/index.html?`
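    Worth checking first: in Apache 2.4 the disk backend was renamed from mod_disk_cache to mod_cache_disk, so a 2.2-era LoadModule line can silently leave the cache without a storage provider. Beyond that, a minimal cache configuration sketch (the directives are standard mod_cache/mod_cache_disk ones; the paths are assumptions):

        LoadModule cache_module modules/mod_cache.so
        LoadModule cache_disk_module modules/mod_cache_disk.so
        CacheRoot /var/cache/apache2
        CacheEnable disk /
        CacheIgnoreNoLastMod On

    Also note the cache key in the log ends in `?`: mod_cache only stores responses it considers cacheable, and PHP output without validators (Last-Modified/ETag) or explicit Cache-Control/Expires headers is normally skipped, which matches "does not cache any content" even though the CACHE_SAVE filter is being added.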

    Read the article

  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a java app and ES. Both are on the same server and communicate with each other. The problem I am having is that my firewall configuration prevents java from connecting to ES; I am not sure why, really. I have tried a lot of things, like opening the port range 9200:9400 to the server IP, without any luck, but from what I know all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside, but it should be accessible from this java app, and ES uses the port range 9200:9400. This is my iptables script:

    echo -e Deleting rules for INPUT chain
    iptables -F INPUT
    echo -e Deleting rules for OUTPUT chain
    iptables -F OUTPUT
    echo -e Deleting rules for FORWARD chain
    iptables -F FORWARD
    echo -e Setting by default the drop policy on each chain
    iptables -P INPUT DROP
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD DROP
    echo -e Open all ports from/to localhost
    iptables -A INPUT -i lo -j ACCEPT
    echo -e Open SSH port 22 with brute force security
    iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
    iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
    iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
    iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
    iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
    echo -e Open NGINX port 80
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    echo -e Open NGINX SSL port 443
    iptables -A INPUT -p tcp --dport 443 -j ACCEPT
    echo -e Enable DNS
    iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
    iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    And I get this in the java app when this config is in place:

    org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
        at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
        at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
        at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
        at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
        at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
        at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
        at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
        at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
        at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
        at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)

    Do any of you see any problem with this configuration and ES? Thanks in advance.
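    The "no master" part of the exception points at cluster discovery rather than at the client connection on 9200:9400: Elasticsearch of that era elects a master through zen multicast discovery by default (UDP to group 224.2.2.4, port 54328), and that multicast traffic is not matched by the lo ACCEPT rule above, so the node never forms a cluster. Two hedged options; the group/port are the old ES defaults and worth confirming against the installed version:

        # option 1: allow the multicast discovery traffic in
        iptables -A INPUT -d 224.2.2.4 -p udp --dport 54328 -j ACCEPT

        # option 2: avoid multicast entirely, in elasticsearch.yml:
        #   discovery.zen.ping.multicast.enabled: false
        #   discovery.zen.ping.unicast.hosts: ["127.0.0.1"]

    For a single-node setup, option 2 is the usual choice, since the node then discovers only itself over loopback, which the firewall already permits.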

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a rackspace cloud server and installed virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from filezilla:

    Status:   Connecting to 1.1.1.1:21...
    Status:   Connection established, waiting for welcome message...
    Response: 220 FTP Server ready.
    Command:  USER username
    Response: 331 Password required for username.
    Command:  PASS ***************
    Response: 230 User username logged in.
    Status:   Connected
    Status:   Retrieving directory listing...
    Command:  PWD
    Response: 257 "/" is current directory.
    Command:  TYPE I
    Response: 200 Type set to I
    Command:  PASV
    Response: 227 Entering Passive Mode (1,1,1,1,216,214)
    Command:  LIST
    Error:    Connection timed out
    Error:    Failed to retrieve directory listing

    and I get this from my /var/secure/log file:

    Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
    Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
    Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
    Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
    Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated; I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur and how exactly to fix them, so the more detail the better! cheers
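    The session dying right after PASV (login and PWD succeed, LIST times out) is the classic signature of passive-mode data connections being blocked, not of a PAM or permissions problem: the server advertises a random high port in the 227 reply, and the client can never reach it. A hedged sketch of the usual fix; the directives are standard ProFTPD ones, while the port range and address are assumptions to adapt:

        # /etc/proftpd.conf
        PassivePorts 49152 65534
        MasqueradeAddress 1.1.1.1    # the server's public IP, if it sits behind NAT

        # and allow that range through the firewall
        iptables -A INPUT -p tcp --dport 49152:65534 -j ACCEPT

    The IPV6_V6ONLY line in the log is a known, generally harmless warning on kernels without that socket option and is unrelated to the timeout.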

    Read the article

  • SSH over HTTPS with proxytunnel and nginx

    - by Thermionix
    I'm trying to set up an ssh over https connection using nginx. I haven't found any working examples, so any help would be appreciated!

    ~$ cat .ssh/config
    Host example.net
    Hostname example.net
    ProtocolKeepAlives 30
    DynamicForward 8118
    ProxyCommand /usr/bin/proxytunnel -p ssh.example.net:443 -d localhost:22 -E -v -H "User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)"

    ~$ ssh [email protected]
    Local proxy ssh.example.net resolves to 115.xxx.xxx.xxx
    Connected to ssh.example.net:443 (local proxy)
    Tunneling to localhost:22 (destination)
    Communication with local proxy:
    -> CONNECT localhost:22 HTTP/1.0
    -> Proxy-Connection: Keep-Alive
    -> User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Win32)
    <- <html>
    <- <head><title>400 Bad Request</title></head>
    <- <body bgcolor="white">
    <- <center><h1>400 Bad Request</h1></center>
    <- <hr><center>nginx/1.0.5</center>
    <- </body>
    <- </html>
    analyze_HTTP: readline failed: Connection closed by remote host
    ssh_exchange_identification: Connection closed by remote host

    Nginx config on the server:

    ~$ cat /etc/nginx/sites-enabled/ssh
    upstream tunnel {
        server localhost:22;
    }

    server {
        listen 443;
        server_name ssh.example.net;

        location / {
            proxy_pass http://tunnel;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
        }

        ssl on;
        ssl_certificate /etc/ssl/certs/server.cer;
        ssl_certificate_key /etc/ssl/private/server.key;
    }

    ~$ tail /var/log/nginx/access.log
    203.xxx.xxx.xxx - - [08/Feb/2012:15:17:39 +1100] "CONNECT localhost:22 HTTP/1.0" 400 173 "-" "-"
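    The access log shows the crux: proxytunnel sends an HTTP CONNECT request, and nginx's proxy module is a reverse proxy that does not implement CONNECT, so it answers 400 no matter how the proxy_pass block is tuned. On nginx 1.0.5 the practical routes are a front end that does speak CONNECT (for example Apache with mod_proxy_connect) or a protocol multiplexer such as sslh on port 443. As a sketch of a more modern alternative (an assumption beyond the question: it needs nginx 1.9+ with the stream module), nginx can terminate TLS and hand the raw bytes straight to sshd:

        stream {
            server {
                listen 443 ssl;
                ssl_certificate     /etc/ssl/certs/server.cer;
                ssl_certificate_key /etc/ssl/private/server.key;
                proxy_pass 127.0.0.1:22;
            }
        }

    with a client-side ProxyCommand along the lines of "openssl s_client -quiet -connect ssh.example.net:443" instead of proxytunnel, since no CONNECT handshake is involved anymore.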

    Read the article

  • cygwin sshd times out for remote login

    - by reve_etrange
    I have configured SSHD using Cygwin on Windows 7. I have checked and double-checked all of the following points:

    - Port forwarding is correctly configured
    - Windows Firewall is configured to pass port 22
    - Local login attempts (using Cygwin SSH) succeed
    - sshd_config has UseDNS No
    - Using nmap from remote machine confirms port 22 is accessible
    - /etc/passwd and /etc/group are correctly populated

    However, remote login attempts time out. This includes from the local network.

    user@host:~$ ssh -vvv [email protected]
    OpenSSH_5.5p1 Debian-4ubuntu6, OpenSSL 0.9.8o 01 Jun 2010
    debug1: Reading configuration data /home/user/.ssh/config
    debug1: Applying options for *
    debug1: Reading configuration data /etc/ssh/ssh_config
    debug1: Applying options for *
    debug2: ssh_connect: needpriv 0
    debug1: Connecting to the.ip.add.ress [the.ip.add.ress] port 22.
    debug1: connect to address the.ip.add.ress port 22: Connection timed out
    ssh: connect to the.ip.add.ress port 22: Connection timed out

    No messages are logged to /var/log/sshd.log. I suspect that there is a permissions issue with a particular file somewhere, however I have checked the permissions of all my Cygwin binaries, DLLs and the particular files important to Cygwin sshd, including all of:

    - /etc/passwd
    - /etc/group
    - /var
    - /var/log/sshd.log
    - /var/empty

    Others who have reported this or similar errors appear to have missed one of the points enumerated above. Can anyone point me to a possible solution?
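    Two details in the trace narrow this down: the remote client never completes the TCP handshake (Connection timed out, not Connection refused), and sshd logs nothing, so the SYN most likely never reaches the daemon. A hedged pair of checks worth running on the Windows box before touching sshd itself (generic diagnostics, not taken from the question):

        # confirm sshd is listening on all interfaces, not just 127.0.0.1
        netstat -ano | grep ":22"

        # confirm the Cygwin sshd service is installed and running
        cygrunsrv -Q sshd

    If the listener shows 0.0.0.0:22 and the service is running, the timeout is almost certainly upstream of the machine: a firewall rule bound to the wrong network profile (Windows Firewall distinguishes private/public), third-party security software, or a router forwarding rule pointing at a stale LAN address.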

    Read the article

  • Root directory permissions on Mac OS X 10.6?

    - by Agos
    Hi, I was wondering if it's normal that the root directory / should be owned by “root”. I get asked for my password every time I want to do something there (e.g. save a file, create a directory) and I don't remember this happening before (though this may just be my faulty memory). Here's the relevant terminal output:

    MacBook:~ ago$ ls -lah /
    total 37311
    drwxr-xr-x@  35 root  staff  1,2K 22 Mar 12:34 .
    drwxr-xr-x@  35 root  staff  1,2K 22 Mar 12:34 ..
    -rw-rw-r--@   1 root  admin   21K 22 Mar 10:21 .DS_Store
    drwx------    3 root  admin  102B 28 Feb  2008 .Spotlight-V100
    d-wx-wx-wt    2 root  admin   68B 31 Ago  2009 .Trashes
    -rw-r--r--@   1 ago   501     45K 23 Gen  2008 .VolumeIcon.icns
    srwxrwxrwx    1 root  staff    0B 22 Mar 12:34 .dbfseventsd
    ----------    1 root  admin    0B 23 Giu  2009 .file
    drwx------   27 root  admin  918B 22 Mar 10:55 .fseventsd
    -rw-r--r--@   1 ago   admin   59B 30 Ott  2007 .hidden
    -rw-------    1 root  wheel  320K 30 Nov 11:42 .hotfiles.btree
    drwxr-xr-x@   2 root  wheel   68B 18 Mag  2009 .vol
    drwxrwxr-x+ 276 root  admin  9,2K 19 Mar 18:28 Applications
    drwxrwxr-x@  21 root  admin  714B 14 Nov 12:01 Developer
    drwxrwxr-t+  74 root  admin  2,5K 18 Dic 22:14 Library
    drwxr-xr-x@   2 root  wheel   68B 23 Giu  2009 Network
    drwxr-xr-x    4 root  wheel  136B 13 Nov 17:49 System
    drwxr-xr-x    6 root  admin  204B 31 Ago  2009 Users
    drwxrwxrwt@   4 root  admin  136B 22 Mar 12:35 Volumes
    drwxr-xr-x@  39 root  wheel  1,3K 13 Nov 17:44 bin
    drwxrwxr-t@   2 root  admin   68B 23 Giu  2009 cores
    dr-xr-xr-x    3 root  wheel  5,1K 17 Mar 11:29 dev
    lrwxr-xr-x@   1 root  wheel   11B 31 Ago  2009 etc -> private/etc
    dr-xr-xr-x    2 root  wheel    1B 17 Mar 11:30 home
    drwxrwxrwt@   3 root  wheel  102B 31 Ago  2009 lost+found
    -rw-r--r--@   1 root  wheel   18M  3 Nov 19:40 mach_kernel
    dr-xr-xr-x    2 root  wheel    1B 17 Mar 11:30 net
    drwxr-xr-x@   3 root  admin  102B 24 Nov  2007 opt
    drwxr-xr-x@   6 root  wheel  204B 31 Ago  2009 private
    drwxr-xr-x@  64 root  wheel  2,1K 13 Nov 17:44 sbin
    lrwxr-xr-x@   1 root  wheel   11B 31 Ago  2009 tmp -> private/tmp
    drwxr-xr-x@  17 root  wheel  578B 12 Set  2009 usr
    lrwxr-xr-x@   1 root  wheel   11B 31 Ago  2009 var -> private/var

    Are these ownerships / permissions ok? Should I chmod/chown something? Thanks in advance
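    A hedged way to sanity-check this rather than guessing at chmod values: Snow Leopard ships a permissions database describing what system paths should look like, and diskutil can compare the disk against it and reset anything that drifted (the exact expected owner and mode for / vary between install generations, so verifying before repairing is the safer order):

        sudo diskutil verifyPermissions /
        sudo diskutil repairPermissions /

    If the verify pass reports / itself as modified, the repair pass will put it back; if it reports nothing, the password prompts are expected behaviour (writing into / as a non-root user normally requires authentication) and nothing needs changing.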

    Read the article

  • how to display this text on blackberry and how to show the hyperlinks

    - by Changqi
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Strict//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>A history of Canoe Cove /</title>
    </head>
    <body>
    <div class="tei">
      <p> A History of </p>
      <p> The General Stores </p>
      <p> There were several general stores in our
        <a class="search orgName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.orgNameTERM:%22Cove%22+AND+dc.type:collection">Cove</a>
        at different times. The one that lasted longest was at the Corner across from the school and it had many owners.
        Who established it is unclear but John MacKenzie, the piper, who was also a shoe maker lived there.
        He was a relative of the present day MacKenzies of
        <a class="search placeName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.placeNameTERM:%22Canoe Cove%22+AND+dc.type:collection">Canoe Cove</a>.
        William MacKay who married Christena MacLean was operating it when it burned down and a store which had belonged to Neil "Cooper" MacLean was moved across to the site.
        This was later bought by
        <span class="persName"><a class="search persName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.persNameTERM:%22MacCannell+Neil%22+AND+dc.type:collection"> Neil MacCannell </a></span>
        of Long Creek , a schoolteacher who taught in the
        <a class="search orgName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.orgNameTERM:%22Cove%22+AND+dc.type:collection">Cove</a>
        for a few years.
        <span class="persName"><a class="search persName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.persNameTERM:%22MacNevin+Hector%22+AND+dc.type:collection"> Hector MacNevin </a></span>
        from
        <a class="search placeName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.placeNameTERM:%22St. Catherines%22+AND+dc.type:collection">St. Catherines</a>
        operated it for a year while it still belonged to
        <span class="persName"><a class="search persName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.persNameTERM:%22MacCannell+Neil%22+AND+dc.type:collection"> Neil MacCannell </a></span>
        because Neil had accepted a job in
        <a class="search placeName" target="_blank" href="http://islandlives.net/fedora/ilives_book_search/tei.placeNameTERM:%22Charlottetown%22+AND+dc.type:collection">Charlottetown</a>
        as clerk of the Court.
        Later Mrs. John Angus Darrach bought it and she and her son George ran it for years until both had health problems, and had to close the store after which closing it never reopened.
        After George died and his wife Hazel moved to Montague to live with her family the building was sold to Robert Patterson .
        Rob lived in it for a few years, making many improvements then sold it to Kirk McAleer. </p>
    </div>

    Read the article
