Search Results

Search found 82071 results on 3283 pages for 'file url'.


  • SEO URL structure for tag search on site

    - by Theo G
    I am looking to add tags to each product on my site, e.g. brown, x products under £x, second hand x, refurbished x, etc. Once you click one of these tags it will then search for other products with similar tags. I was thinking of using a URL structure of www.site.com/tags/this%is%the%tag%name and then simply having a page that shows the results of all the products with that tag. I heard a while back that Google generally ignores or downgrades anything with 'search' in the URL, and was wondering if anyone has any experience with this? Also, would you say /tags/ is a valid destination, or is it better to break it down and add more levels, e.g. /product-type/product%variation? Thanks in advance!


  • How do I word my URL so that it doesn't get blocked or appear spammy?

    - by user18681
    I'm creating a fairly large site. Will my links appear spammy if I use the same word in the URL as in the file path? For example:
    www.example.com/apples/great-apple-recipes
    www.example.com/apples/fresh-apple-pie
    www.example.com/apples/delicious-apple-turnovers
    I do not want my links to appear spammy, but is it OK if the keyword is almost always the same as in the file path on a huge site? Does the file path count as part of the keyword? Also, how many words in total should a URL (including the file path, etc.) be?


  • How to optimise the URL for search engines?

    - by phpheini
    I have a PHP template which has one index.php, and all the different pages (content1.html, content2.html, etc.) are shown on the index.php page. So, for example, I can open www.example.com/index.php?content1 and it will show content1.html. Now what I would like is this: often you see websites where the URL looks like www.example.com/this-is-the-content. I know how to do this with a question mark, like www.example.com/?content1 <-- here you just don't write the index.php. But how can I use a URL name which is completely different from the filename? For example, www.example.com/this-is-some-page would show the content of index.php?content1.
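
    A minimal mod_rewrite sketch of the usual approach, assuming an Apache .htaccess; the slug and the generic "slug" query parameter are illustrative, not taken from the question:

        RewriteEngine On
        # map one friendly URL to the real script
        # ("this-is-some-page" and "content1" are hypothetical examples)
        RewriteRule ^this-is-some-page$ /index.php?content1 [L]

        # or, more generally, hand any slug to PHP and look it up there
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^([a-z0-9-]+)$ /index.php?slug=$1 [L,QSA]

    With the second rule, index.php receives the slug in $_GET['slug'] and can translate it to the right content file itself, so the URL no longer has to match any filename.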


  • Will URL-encoding the image names affect Google?

    - by TheGateKeeper
    Just wondering if it makes any difference to Google whether or not I URL-encode the image names when linking to them. For example, if I have an image named "test-1234-!.jpg", does it make a difference if I refer to it as "test-1234-%21.jpg"? The reason I am asking is that I am making a major shift in the way my website works, and while all new image names will not be URL-encoded, all of the past ones are. I want to see if it is worth renaming all of them or if I should just leave them as they are.


  • PHP File Downloading Questions

    - by nsearle
    Hey all! I am currently running into some problems with users downloading a file stored on my server. I have code set up to auto-download a file once the user hits the download button. It works for all files, but when the size gets larger than 30 MB it has issues. Is there a limit on user downloads? Also, I have supplied my example code and am wondering if there is a better practice than using the PHP function file_get_contents. Thank you all for the help!

        $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
        $filename = 'filename.zip';
        $filesize = filesize($path . $filename);
        @header("Content-type: application/zip");
        @header("Content-Disposition: attachment; filename=$filename");
        @header("Content-Length: $filesize");
        echo file_get_contents($path . $filename);
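
    A likely cause of failures around this size is PHP's memory_limit: file_get_contents loads the whole file into memory before echoing it. A hedged sketch of the usual alternative — stream the file in small chunks so memory use stays flat (the path and filename are the question's own placeholders):

        $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
        $filename = 'filename.zip';
        $filesize = filesize($path . $filename);

        header("Content-Type: application/zip");
        header("Content-Disposition: attachment; filename=$filename");
        header("Content-Length: $filesize");

        // stream in 8 KB chunks instead of holding the whole file in a string
        $handle = fopen($path . $filename, 'rb');
        while (!feof($handle)) {
            echo fread($handle, 8192);
            flush();
        }
        fclose($handle);

    readfile($path . $filename); does the same thing in one call and is also worth trying; either way the script never needs 30 MB+ of memory for one download.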


  • Using Multiple File Handles for a Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with 2 file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part.

        IN1 = open("myfile.dat","r")
        IN2 = open("myfile.dat","r")
        for line1 in IN1:
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    The result:

        Hello Hello
        Hello World
        Hello This
        Hello is
        Hello an
        Hello Example
        Hello of
        Hello Using
        Hello Two
        Hello File
        Hello Pointers
        Hello to
        Hello Read
        Hello One
        Hello File

    The output should contain 15^2 lines.
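
    What happens here: the inner handle is exhausted after the first outer iteration, so the nested loop only runs in full once. A minimal sketch of one fix, rewinding the second handle on every pass (kept in the same Python 2 style as the question):

        IN1 = open("myfile.dat", "r")
        IN2 = open("myfile.dat", "r")
        for line1 in IN1:
            IN2.seek(0)  # rewind so the inner loop sees every line again
            for line2 in IN2:
                print line1.strip(), line2.strip()
        IN1.close()
        IN2.close()

    Reading the file into a list first (lines = open("myfile.dat").readlines()) and looping over lines twice is the other common fix, and avoids re-reading the file n times.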


  • How to place search params in the URL?

    - by sa125
    Hi - how do I get a form to submit its params in the URL, such that the rendered page will contain the query (Rails 2.3)? Something like this: example.com/search?name=john&age=25&city=atlanta Simple, I know, but I'm not sure how to do it... :) Thanks.
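
    Query-string params come from an HTTP GET form, so a hedged Rails 2.3 sketch would point form_tag at the search action with :method => :get (the path and field names are taken from the example URL, not from real code):

        <% form_tag('/search', :method => :get) do %>
          <%= text_field_tag :name %>
          <%= text_field_tag :age %>
          <%= text_field_tag :city %>
          <%= submit_tag 'Search' %>
        <% end %>

    Submitting renders example.com/search?name=...&age=...&city=..., and the controller reads the values from params as usual.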


  • How to beautify the URL?

    - by mysqllearner
    I am sick of this kind of URL: www.domain.com/something/?id=person&photos=photoID&variable1=others&... I am using Apache and learning to write .htaccess now. Can anyone show me the basic code for this one?
    Ugly: www.domain.com/something/?id=person&photos=photoID
    Goal: www.domain.com/something/person/photoID
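
    A minimal .htaccess sketch for exactly this mapping, assuming the real handler lives at /something/index.php (the script name is an assumption; adjust it to whatever actually serves the page):

        RewriteEngine On
        # /something/person/photoID  ->  /something/?id=person&photos=photoID
        RewriteRule ^something/([^/]+)/([^/]+)/?$ /something/index.php?id=$1&photos=$2 [L,QSA]

    The rewrite is internal, so the pretty URL stays in the address bar, and existing links to the ugly form keep working unchanged.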


  • Rewriting a single URL

    - by Rich
    Hi, I'm trying to rewrite the following URL: index.php?route=checkout/cart to /cart using:

        RewriteRule ^index.php?route=checkout/cart$ /basket [L]

    However it doesn't seem to work. Anyone know what I'm doing wrong? Thanks
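
    The likely culprit: RewriteRule only ever matches the path, never the query string, so ^index.php?route=...$ can never match. A hedged sketch of the usual two-part fix, assuming Apache .htaccess context — redirect the old URL to the clean one, then map the clean URL back internally:

        RewriteEngine On

        # 1) redirect direct requests for the old URL to /cart
        #    (matching THE_REQUEST, not the rewritten URL, avoids a redirect loop;
        #     the trailing "?" strips the old query string from the target)
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.php\?route=checkout/cart\ HTTP
        RewriteRule ^index\.php$ /cart? [R=301,L]

        # 2) serve /cart from the real script without changing the address bar
        RewriteRule ^cart$ index.php?route=checkout/cart [L]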


  • Rewriting URLs in ASP.NET (simple question)

    - by Tom Gullen
    On my master page I have:

        <link rel="stylesheet" href="css/default.css" />

    I also have a page "blog.aspx" in the root directory, and I have the rule:

        <rewrite url="~/blog/blog.aspx" to="~/blog.aspx" />

    My questions are:
    - Am I meant to make all the links in my site point to blog/blog.aspx now, instead of just blog.aspx where it is physically located?
    - What is the best way to cope with the paths of the stylesheets etc. now being messed up because they appear to be one directory up?
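
    On the stylesheet question, the usual cure is to stop using document-relative paths on the master page and let ASP.NET resolve an application-root path instead. A hedged sketch of two common options:

        <%-- option 1: with <head runat="server">, ASP.NET resolves the
             app-relative href for whatever URL the page is rendered under --%>
        <head runat="server">
            <link rel="stylesheet" href="~/css/default.css" />
        </head>

        <%-- option 2: resolve it explicitly in markup --%>
        <link rel="stylesheet" href="<%= ResolveUrl("~/css/default.css") %>" />

    Either way the stylesheet reference no longer depends on how deep the rewritten URL looks, so /blog/blog.aspx and /blog.aspx both get a working path.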


  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files: we can upload the blocks in parallel and provide a pause-resume feature.

    There are many documents, articles and blog posts that describe how to upload a block blob. Most of them focus on the server side: once you have received a big file, stream or binary data, how to upload it into blob storage in blocks through the .NET SDK. But the problem is: how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network-disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed and save them as blocks in Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit them. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, say a 6 GB HD movie, then after uploading for a few minutes the request fails and the upload is terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file and must be less than about 4 GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate a connection that exceeds the timeout period; from my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, a simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break through the limitations mentioned above, we will upload the large file in chunks. This gives us some benefits:
    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the chunk size is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution.

    In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of a file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of the file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable raw data. The File interface is based on Blob, inheriting blob functionality and extending it to support files on the user's system. File and Blob are very useful for implementing the chunked upload.
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we want to upload a 10 MB file with 512 KB chunks, we can read it in 512 KB blobs by using File.slice in a loop.

    Assume we have a web page as below: the user can select a file, an input box specifies the block size in KB, and a button starts the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have a JavaScript function upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object: if any of them is "undefined", the condition result will be "false", which means the browser doesn't support these features and it's time to get it updated. FormData is another new feature we are going to use; it generates a temporary form for us. We will use this interface to create a form containing the chunk and its associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Please update to a modern browser before trying this cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation; currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that, we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start and end byte index for each chunk based on the size the user specified in the browser. We put them into an array along with the file name and the index, which will be used when we upload the chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time, we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side, which, when it receives a chunk, will upload it as a block into Blob Storage and finally commit them all with the index list through BlockBlob.PutBlockList. But since all these invocations are ajax calls, they are not synchronized, so we need to introduce a JavaScript library to help coordinate the asynchronous operations, named "async.js" (it and its documentation are available online). I will not explain this library too much in this post. We will put all procedures we want to execute into a function array and pass it into the proper function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence, we define an array and push the chunk-upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see below, I use the File.slice method to read each chunk based on the start and end byte index we calculated previously, and construct a temporary HTML form with the file name, chunk index and chunk data through another new HTML5 feature named FormData. Then I post this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load the blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all functions have executed successfully I invoke another ajax call to the backend service to commit all these chunks (blocks) as one blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions one by one,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of the logic is:
    - Calculate the start and end byte index of each chunk based on the block size.
    - Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service so they are assembled in Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client-side JavaScript code; it uploads the file in chunks to the backend service, which we implement in this step. We will use ASP.NET MVC as our backend service: it receives the chunks, uploads them into Windows Azure Blob Storage as blocks, and finally commits them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the controller that only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieve the file name, index and chunk content from the Request.Form object, which was passed from our client side.
    Then I use the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name), and upload the chunk as a block of this blob with the index. In Blob Storage each block must have an ID associated with it, so that at the end we can put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into a blob once all chunks have been uploaded. Similarly, I retrieve the blob name from the Request.Form. I also retrieve the chunk ID list - the block ID list - from the Request.Form as a string, split it into a list, and then invoke the BlockBlob.PutBlockList method. After that, the blob appears in the container and is ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need. Below is the full client-side JavaScript code.
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we now select a file in the browser, the application uploads chunks of the size we specified to the server through ajax calls in the background, then commits all chunks into one blob, which we can find in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded the file in chunks. This solves the ASP.NET MVC request size limitation as well as the Windows Azure load balancer timeout, but it might introduce a performance problem, since the chunks are uploaded in sequence. To improve upload performance, we can modify the client-side code a bit to run the upload operations in parallel. The good news is that the "async.js" library provides a parallel execution function. The code that invokes the service to upload chunks used "async.series", which executes all functions in sequence; now we change it to "async.parallel", which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way, all chunks are uploaded to the server at the same time to maximize bandwidth usage. This works if the file is not very large and the chunk size is not very small, but for a large file it might introduce another problem: too many ajax calls are sent to the server at once. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel with a concurrency limit of 4,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to a backend service and then store them in Windows Azure Blob Storage as blocks. We focused on the frontend and leveraged three new features introduced in HTML5:
    - File.slice: read part of a file by specifying the start and end byte index.
    - Blob: a file-like interface that contains part of the file content.
    - FormData: a temporary form element through which we can pass a chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we came to the solution of uploading chunks in parallel with a concurrency limit, and demonstrated how to utilize the "async.js" JavaScript library to control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limit, there is no "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine's cores, and the server side's (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1 core ~1.7 GHz CPU, 1.75 GB memory, 100 Mbps bandwidth).
    The test cases were:
    - Chunk size: 512 KB, 1 MB, 2 MB, 4 MB.
    - Upload mode: sequential, parallel (unlimited), parallel with a limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100 MB.
    - Each case was tested 3 times.

    Some thoughts from the results, though not guidance or best practice:
    - Parallel gets better performance than sequential.
    - There is no significant performance improvement between parallel with 4 threads and with 8 threads.
    - Transferring binary provides better performance than base64.
    - In all cases, a chunk size of 1 MB - 2 MB gets the best performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • How to handle existing indexed mixed-case URLs?

    - by marcusstarnes
    I have an ASP.NET Web Forms application that has been live for a number of years and as such has quite a lot of content indexed on Google. Ideally, I'd prefer that all URLs for the website be lowercase, but I understand that having 2 versions of the same content indexed in search engines (MixedCase.aspx and mixedcase.aspx) will be bad for SEO. I was wondering:
    a) Should I just leave everything in its current mixed-case form and never change it? OR
    b) I can change the code so everything is lowercase from here on in, BUT is there a way of doing this so that the search engines are aware of the change and don't penalise me?
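
    The usual answer to (b) is a permanent (301) redirect from the mixed-case URL to the lowercase one, which tells search engines to transfer the indexed entry rather than treat the two as duplicates. A hedged Global.asax sketch, assuming .NET 4's Response.RedirectPermanent (on earlier versions, set the 301 status code and Location header by hand):

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // a sketch: permanently redirect any mixed-case path to its lowercase twin
            string path = Request.Url.AbsolutePath;
            string lower = path.ToLowerInvariant();
            if (Request.HttpMethod == "GET" && !string.Equals(path, lower, StringComparison.Ordinal))
            {
                Response.RedirectPermanent(lower + Request.Url.Query);  // HTTP 301
            }
        }

    In practice you would also exclude static resources and postback URLs from the check, but the principle - one canonical lowercase URL per page, enforced by 301 - is what keeps the search engines happy.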


  • ASP.NET MVC support for URLs with hyphens

    - by John
    Is there an easy way to get the MvcRouteHandler to convert all hyphens in the action and controller sections of an incoming URL to underscores, since hyphens are not supported in method or class names? This would be so that I could support structures such as sample.com/test-page/edit-details mapping to action edit_details and controller test_pagecontroller, while continuing to use the MapRoute method. I understand I can specify an ActionName attribute, and can support hyphens in controller names by manually adding routes, but I am looking for an automated way, to avoid errors when adding new controllers and actions.
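
    One commonly suggested approach, as a hedged sketch: subclass MvcRouteHandler and rewrite the matched route values before the request is dispatched, then register the route with that handler instead of plain MapRoute. The class name and route pattern below are illustrative:

        public class HyphenatedRouteHandler : MvcRouteHandler
        {
            protected override IHttpHandler GetHttpHandler(RequestContext requestContext)
            {
                // translate URL hyphens into the underscores used by class/method names
                var values = requestContext.RouteData.Values;
                values["controller"] = values["controller"].ToString().Replace("-", "_");
                values["action"] = values["action"].ToString().Replace("-", "_");
                return base.GetHttpHandler(requestContext);
            }
        }

        // in RegisterRoutes, in place of routes.MapRoute(...):
        routes.Add(new Route("{controller}/{action}/{id}",
            new RouteValueDictionary(new { controller = "Home", action = "Index", id = UrlParameter.Optional }),
            new HyphenatedRouteHandler()));

    Outbound URL generation would still emit underscores, so a matching translation (underscore to hyphen) is usually added on that side too if the generated links must look hyphenated.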


  • How to handle URLs with diacritic characters

    - by user359650
    I am wondering how to handle URLs which correspond to strings containing diacritics (á, ü, ...). I believe what we see mostly are URLs where the diacritic characters have been converted to their closest ASCII equivalent, for instance Rånades på Skyttis i Ö-vik converted to ranades-pa-skyttis-i-o-vik. However, depending on the language, such a conversion might be incorrect. For instance in German, ü should be converted to ue and not just u, as seen in the below URL representing the Bayern München string as bayern-muenchen: http://www.bundesliga.de/en/liga/clubs/fc-bayern-muenchen/index.php
    However, what I've also noticed is that browsers can render non-ASCII characters when they are percent-encoded in the URL, which is the approach Wikipedia has chosen, for instance http://de.wikipedia.org/wiki/FC_Bayern_M%C3%BCnchen, which the browser displays with the proper characters. Therefore I'm considering the following approach for creating URL slugs:
    - (1) convert strings while replacing non-ASCII characters with their recommended ASCII representation: Bayern München -> bayern-muenchen
    - (2) also convert strings to percent encoding: Bayern München -> bayern_m%C3%BCnchen
    - create a 301 redirect from version (1) to version (2)
    Version (1) URLs could be used for marketing purposes (e.g. mywebsite.com/bayern-muenchen), but the URLs that would end up being displayed in the browser bar would be version (2) URLs (e.g. mywebsite.com/bayern-münchen). Can you foresee particular problems with this approach? (Wikipedia is not doing it and I wonder why, apart from the fact that they don't need to market their URLs.)
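
    A small PHP sketch of the two slug flavours described above; the transliteration map is deliberately tiny and illustrative, since the "recommended ASCII representation" is language-specific (and the script is assumed to be saved as UTF-8):

        $name = 'Bayern München';

        // version (1): language-aware ASCII transliteration (the map is a stub)
        $ascii = strtolower(strtr($name, array('ü' => 'ue', 'ö' => 'oe', 'ä' => 'ae', ' ' => '-')));
        echo $ascii;               // bayern-muenchen

        // version (2): keep the original characters, percent-encoded as UTF-8
        $slug = strtolower(str_replace(' ', '-', $name));
        echo rawurlencode($slug);  // bayern-m%C3%BCnchen

    Note that the percent-encoded form and the form the browser displays are the same URL; the browser only decodes it for display, which is why version (2) can be the canonical target of the 301.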


  • Rewrite a dynamic URL to a new dynamic URL

    - by Jmino14
    I am new to the RewriteEngine and have not been able to find an answer to the following issue. I run an ecommerce site with an ever-changing catalog of product SKUs, and our URLs are dynamic. The question is: what if I want one dynamic variable to redirect to a different dynamic variable? For instance, I want http://www.mydomain.com/product.jhtm?id=12345 to now go to www.mydomain.com/product.jhtm?id=78910. How can I do this through the .htaccess? Thanks in advance.
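
    Since RewriteRule never sees the query string, the id has to be matched with a RewriteCond. A hedged .htaccess sketch for this one SKU (repeat the pair for each retired SKU):

        RewriteEngine On
        # redirect ?id=12345 to ?id=78910; giving the substitution its own
        # query string replaces the old one (no QSA flag)
        RewriteCond %{QUERY_STRING} ^id=12345$
        RewriteRule ^product\.jhtm$ /product.jhtm?id=78910 [R=301,L]

    For a large, ever-changing catalog, Apache's RewriteMap is the cleaner way to maintain the old-id to new-id table, though it can only be declared in the server or virtual-host config, not in .htaccess.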


  • .htaccess: mask a dynamic URL with a static URL

    - by Enkay
    I'm trying to do something with .htaccess that I'm not sure can be done. The first thing I did was hide the .php extensions using the following code:

        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}\.php -f
        RewriteRule ^(.*)$ $1.php

    Works great. Now what I'm trying to do, and can't seem to figure out, is the following: when a user types "mywebsite.com/products?id=12345" into the browser address bar, I want the server to serve the right product page according to the ID, but display it in the address bar as "mywebsite.com/product" no matter what the product ID is. Is this possible to do? If yes, how? Thanks
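
    One thing the rewrite engine cannot do is silently change what is already in the visitor's address bar; the address only changes on an external redirect, which would have to carry the ID somewhere. The usual inversion, as a hedged sketch (the script and parameter names are assumed from the question): publish the clean URL and rewrite it internally to the PHP script.

        RewriteEngine On
        # browser shows mywebsite.com/product/12345,
        # the server silently runs products.php?id=12345
        RewriteRule ^product/([0-9]+)$ /products.php?id=$1 [L,QSA]

    Hiding the ID completely (a bare /product for every product) would require the ID to travel in a cookie or session instead of the URL, which breaks bookmarking and sharing and is rarely worth it.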


  • URLRewriter.net fails relative paths when using more than one substring in URL

    - by Andreas Strandfelt
    Hi. I have installed URLRewriter on my server, and it works fine, but I have a rather big problem: relative links in hyperlinks, CSS links, images etc. don't work when I have URLs with more than one path segment. E.g. (sorry, no http:// in front, as I do not have enough reputation): dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft leads to the path dkbyg.strandweb.dk/Workers.aspx and works just fine. But dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft/Midtjylland leads to dkbyg.strandweb.dk/Workers.aspx?Region=Midtjylland using this line in the Web.config:

        <rewrite url="~/Leje-og-udlejning-arbejdskraft/(.+)" to="~/Workers.aspx?Region=$1"/>

    It rewrites just fine, but my relative links don't work anymore. CSS, images, links and so on think my root is now http://dkbyg.strandweb.dk/Leje-og-udlejning-arbejdskraft, which of course doesn't exist. Can't this be fixed? All my links are correctly set using the ~/, like this:

        <asp:HyperLink ID="HyperLink3" CssClass="black_text" NavigateUrl="~/Forgot-Password" runat="server">I have forgotten my password</asp:HyperLink>
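
    Server controls with ~/ resolve correctly, so the broken references are usually the plain HTML ones (stylesheets, images) that the browser resolves against the virtual /Leje-og-udlejning-arbejdskraft/ folder. Two hedged options, sketched below: make those references root-relative, or declare a <base> so the browser resolves every relative URL from the site root (the host name is taken from the question):

        <head>
            <!-- option 1: root-relative, immune to the rewritten path depth -->
            <link rel="stylesheet" href="/css/default.css" />

            <!-- option 2: one base URL for every relative reference on the page -->
            <base href="http://dkbyg.strandweb.dk/" />
        </head>

    The <base> route is the blunter instrument, since it affects every relative link on the page, including anchors; root-relative paths fix only what is broken.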


  • PHP Form POST to external URL with Redirect to another URL

    - by Marlon
    So, what I am trying to accomplish is to have a self-posting PHP form POST to an external page (using cURL), which in turn redirects to another page. Currently, once I click "Submit" on the form (in contact.php), it POSTs to itself (as it is a self-posting form). The script then prepares the POST using cURL and performs it. The external page does its processing, and is then supposed to redirect back to another page on the referring domain. However, what happens instead is that the contact.php page seems to load the HTML from the page the external page redirected to, and then contact.php's own HTML loads after that, ON THE SAME PAGE. The effect is what looks like two separate pages rendered as one. Naturally, I just want to perform the POST and have the browser render the page the external page redirects to. Here is the code I have so far:

        <?php
        if (isset($_POST['submit'])) {
            doCURLPost();
        }

        function doCURLPost() {
            $emailid = "2, 4";
            $hotel = $_POST['hotel'];

            // you will need to set up an array of fields to post with,
            // then create the post string
            $data = array(
                "recipient" => $emailid,
                "subject" => "Hotel Contact Form",
                "redirect" => "http://www.localhost.com/thanx.htm",
                "Will be staying in hotel: " => $_POST['hotel'],
                "Name" => $_POST['Name'],
                "Phone" => $_POST['Phone'],
                "Comments" => $_POST['Comments']);

            $ch = curl_init();
            curl_setopt($ch, CURLOPT_URL, "http://www.externallink.com/external.aspx");
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
            curl_setopt($ch, CURLOPT_HTTPHEADER, array("Referer: http://www.localhost.com/contact.php"));
            $output = curl_exec($ch);
            $info = curl_getinfo($ch);
            curl_close($ch);
        }
        ?>
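
    A hedged sketch of the usual fix: let cURL post but not follow the redirect, then pull out the Location target and redirect the browser there, so contact.php never prints the remote page's HTML. (CURLINFO_REDIRECT_URL needs PHP 5.3.7+; on older versions, enable CURLOPT_HEADER and parse the Location header from $output instead.)

        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, false); // don't fetch the redirect target
        $output = curl_exec($ch);
        $target = curl_getinfo($ch, CURLINFO_REDIRECT_URL); // where the external page wanted to go
        curl_close($ch);

        if ($target) {
            header("Location: " . $target); // send the browser there instead
            exit;                           // stop before contact.php renders its own HTML
        }

    The key design point: a server-side cURL request and the visitor's browser are two different HTTP clients, so a redirect the external page issues to cURL has to be re-issued to the browser explicitly.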


  • Grails - Language prefix in URL mappings

    - by Art79
    Hi there, I'm having a problem with language mappings. The way I want it to work is that the language is encoded in the URL, like /appname/de/mycontroller/whatever. If you go to /appname/mycontroller/action it should check your session, and if there is no session, pick a language based on the browser preference and redirect to the language-prefixed site. If you have a session, then it will display English. English does not have an en prefix (to make it harder). So I created mappings like this:

        class UrlMappings {
            static mappings = {
                "/$lang/$controller/$action?/$id?" {
                    constraints {
                        lang(matches: /pl|en/)
                    }
                }
                "/$lang/store/$category" {
                    controller = "storeItem"
                    action = "index"
                    constraints {
                        lang(matches: /pl|en/)
                    }
                }
                "/$lang/store" {
                    controller = "storeItem"
                    action = "index"
                    constraints {
                        lang(matches: /pl|en/)
                    }
                }
                "/$controller/$action?/$id?" {
                    lang = "en"
                    constraints {
                    }
                }
                "/store/$category" {
                    lang = "en"
                    controller = "storeItem"
                    action = "index"
                }
                "/store" {
                    lang = "en"
                    controller = "storeItem"
                    action = "index"
                }
                "/"(view: "/index")
                "500"(view: '/error')
            }
        }

    It's not fully working, and the languages are hardcoded just for now. I think I did something wrong. Some of the reverse mappings work, but some don't add the language. If I use a link tag and pass params:[lang:'pl'] then it works, but if I add params:[lang:'pl', page:2] then it does not. In the second case both lang and the page number become parameters in the query string. What is worse, they don't affect the locale, so the page shows in English. Can anyone please point me to documentation on the rules of reverse mappings, or even better, explain how to implement such a language prefix in a good way? THANKS


  • URL naming conventions

    - by LookitsPuck
    So, this may be a can of worms, but I'm curious what your practices are. For example, let's say your website consists of the following needs (very basic):
    - A landing page
    - An information page for an event (static)
    - A listing of places for that event (dynamic)
    - An information page for each place
    With that said, how would you design your URLs? Typically, I'd do something like the following:
    www.domain.com/ - landing page [also accessible via www.domain.com/home]
    www.domain.com/event - event information page
    www.domain.com/places - listing of all places
    www.domain.com/places/{id} - place information page
    Now, here's a question. Just grammatically speaking, I have a hang-up about referring to a given place in a URL as plural. Doesn't it make more sense to go with www.domain.com/place/{id} as opposed to www.domain.com/places/{id}? In some frameworks you have a convention to follow by default (for example, ASP.NET MVC). Yes, you can define custom routes to have /place/{id} route to the PlacesController; however, I'm trying to keep this discussion a bit abstract. With that said, suppose that on another page of your site you have a link that, when clicked, opens a modal popup populated with place information. Where would you place that endpoint? We could go with something like www.domain.com/ajax/places/{id}, OR www.domain.com/places/{id} and serve based on the request header (that is, if the request asks for JSON, return JSON). Finally, for SEO reasons, I typically use a slug associated with a given resource, so something like www.domain.com/ajax/places/{id}/london, where london is only there to decorate the link for SEO reasons. Is this sound? I ask all of these questions because these are practices I've been using for a while, and I'd just like to see what other developers are doing or if I'm approaching things incorrectly. Thanks!


  • jQuery / Loading content into a div and changing URLs (working but buggy)

    - by Bruno
    This is working, but I'm not able to set an index.html file on my server root where I can specify the first page to go to. It also gets very buggy in some situations. Basically it's a common site (menu + content), but the idea is to load the content without refreshing the page, defining the div to load the content into, and making each page accessible by URL. One of the biggest problems here is dealing with all the URL situations that may occur. The ideal would be to have a rel="divToLoadOn" and then pass it to my loadContent() function... so I would like ideas/solutions for this please. Thanks in advance!

        // if the page comes from the URL
        if (window.location.hash != '') {
            var url = window.location.hash;
            url = '..' + url.substr(1, url.length);
            loadContent(url);
        }

        // if the page comes from an internal link
        $("a:not([target])").click(function (e) {
            e.preventDefault();
            var url = $(this).attr("href");
            if (url != '#') {
                loadContent($(this).attr("href"));
            }
        });

        // LOAD CONTENT
        function loadContent(url) {
            var contentContainer = $("#content");

            // set the load animation
            $(contentContainer).ajaxStart(function () {
                $(this).html('loading...');
            });

            $.ajax({
                url: url,
                dataType: "html",
                success: function (data) {
                    // store data globally so it can be used on complete
                    window.data = data;
                },
                complete: function () {
                    var content = $(data).find("#content").html();
                    var contentTitle = $(data).find("title").text();

                    // change the url
                    var parsedUrl = url.substr(2, url.length);
                    window.location.hash = parsedUrl;

                    // change the title
                    var titleRegex = /<title>(.*)<\/title>/.exec(data);
                    contentTitle = titleRegex[1];
                    document.title = contentTitle;

                    // renew the content
                    $(contentContainer).fadeOut(function () {
                        $(this).html(content).fadeIn();
                    });
                }
            });
        }
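
    For the rel idea, a hedged sketch: let each link name its target container in rel, and give loadContent a second parameter that falls back to #content when rel is absent (the attribute value "content" here is illustrative):

        // e.g. <a href="../about.html" rel="content">About</a>
        $("a:not([target])").click(function (e) {
            e.preventDefault();
            var url = $(this).attr("href");
            var target = "#" + ($(this).attr("rel") || "content");
            if (url != '#') {
                loadContent(url, target);
            }
        });

        function loadContent(url, target) {
            var contentContainer = $(target || "#content");
            // ... same ajax logic as above, using contentContainer ...
        }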


  • URL rewrite problem (multiple directories)

    - by marharépa
    Hello! I'd like to make an .htaccess file which can make a good structure for my websites. My .htaccess is now:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !\.(jpg|jpeg|gif|png|css|js|pl|txt)$
        RewriteCond %{REQUEST_URI} !^/admin/*
        RewriteRule ^(.*)$ index.php?q=$1 [QSA]

    (based on Sombat's comment) And I want it to do this, for every element except (jpg|jpeg|gif|png|css|js|pl|txt): if the request is for domain.xx/admin, serve it from the domain.xx/admin directory and don't rewrite at all - i.e. let me use domain.xx/admin/index.php?asd=1&asdd=2 - otherwise rewrite everything to index.php as in rule one. Thanks for the help.
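
    A hedged sketch of one way to express that intent; the first rule short-circuits everything under /admin/ before the catch-all runs. (Note that !^/admin/* in the original means "not /admin followed by zero or more slashes", which is probably not what was intended.)

        RewriteEngine On

        # leave everything under /admin/ completely alone
        RewriteRule ^admin(/.*)?$ - [L]

        # everything else, except the listed static extensions, goes to index.php
        RewriteCond %{REQUEST_FILENAME} !\.(jpg|jpeg|gif|png|css|js|pl|txt)$
        RewriteRule ^(.*)$ index.php?q=$1 [QSA,L]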


  • How to suppress PHPSESSID in URL for Googlebot?

    - by Roque Santa Cruz
    I use cookie-based sessions, and they work for normal interaction with our site. However, when Googlebot comes crawling, our PHP framework, Yii, appends ?PHPSESSID to each URL, which doesn't look good in the SERP. Are there any ways to suppress this behavior? PS. I tried to utilize ini_set('session.use_only_cookies', '1');, but it does not work. PPS. To get an impression of the SERP, the results look like this: http://www.google.com/search?q=site:wwwdup.uni-leipzig.de+inurl:jobportal
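
    The ID lands in the URL through PHP's transparent session ID (trans-sid) machinery, which kicks in when a client, like Googlebot, presents no session cookie; use_only_cookies alone doesn't always switch it off if trans-sid was enabled elsewhere. A hedged sketch of the settings usually combined, applied before session_start() (or set in php.ini):

        // refuse to ever put the session id in URLs
        ini_set('session.use_only_cookies', 1);
        ini_set('session.use_trans_sid', 0);

        // and stop PHP from rewriting links/forms with the id
        ini_set('url_rewriter.tags', '');

    If Yii itself opens the session, these may need to go into php.ini or the framework's session component configuration so they take effect before the session starts.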

