Search Results

Search found 108384 results on 4336 pages for 'input type file'.


  • Help with possible Haskell type inference quiz questions

    - by Linda Cohen
    foldr :: (a -> b -> b) -> b -> [a] -> b map :: (a -> b) -> [a] -> [b] mys :: a -> a (.) :: (a -> b) -> (c -> a) -> c -> b What is the inferred type of: a. map mys :: b. mys map :: c. foldr map :: d. foldr map.mys :: I've tried to create mys myself using mys n = n + 2 but the type of that is mys :: Num a => a -> a. What's the difference between Num a => a -> a and just a -> a? Or what does 'Num a =>' mean? Is it that mys would only take Num types? So anyways, a) I got [a] -> [a], I think, because it would just take a list and return it +2'd according to my definition of mys. b) (a -> b) -> [a] -> [b], I think, because map still needs to take both arguments, like (*3) and a list, then returns [a] which goes to mys and returns [b]. c) I'm not sure how to do this one. d) I'm not sure how to do this one either, but map.mys means do mys first then map, right? Are my answers and thoughts correct? If not, why not? THANKS!
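
    A quick way to check answers like these is to let GHCi infer the types with :t. Below is a minimal sketch, assuming mys is given the unconstrained quiz signature (defined as the identity here so no Num constraint appears); the types shown follow from the list-typed foldr given in the quiz (newer GHCs generalise foldr over Foldable, so GHCi's output may mention Foldable t instead of a plain list).

        -- mys exactly as the quiz declares it: a -> a, with no Num constraint
        mys :: a -> a
        mys n = n

        -- In GHCi (up to renaming of type variables):
        --   :t map mys           ==>  [a] -> [a]
        --   :t mys map           ==>  (a -> b) -> [a] -> [b]
        --   :t foldr map         ==>  [b] -> [b -> b] -> [b]
        --   :t foldr (map . mys) ==>  [b] -> [b -> b] -> [b]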

    Read the article

  • Convert List of one type to Array of another type using Dozer

    - by aheu
    I'm wondering how to convert a List of one type to an array of another type in Java using Dozer. The two types have all the same property names/types. For example, consider these two classes. public class A{ private String test = null; public String getTest(){ return this.test; } public void setTest(String test){ this.test = test; } } public class B{ private String test = null; public String getTest(){ return this.test; } public void setTest(String test){ this.test = test; } } I've tried this with no luck. List<A> listOfA = getListofAObjects(); Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance(); B[] bs = mapper.map(listOfA, B[].class); I've also tried using the CollectionUtils class. CollectionUtils.convertListToArray(listOfA, B.class) Neither is working for me; can anyone tell me what I am doing wrong? The mapper.map function works fine if I create two wrapper classes, one containing a List<A> and the other a B[]. See below: public class C{ private List<A> items = null; public List<A> getItems(){ return this.items; } public void setItems(List<A> items){ this.items = items; } } public class D{ private B[] items = null; public B[] getItems(){ return this.items; } public void setItems(B[] items){ this.items = items; } } This works, oddly enough... List<A> listOfA = getListofAObjects(); C c = new C(); c.setItems(listOfA); Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance(); D d = mapper.map(c, D.class); B[] bs = d.getItems(); How do I do what I want to do without using the wrapper classes (C & D)? There has got to be an easier way... Thanks!
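
    One approach that avoids the wrapper classes is to map each element individually and fill the array yourself. A minimal sketch, assuming the same A, B and getListofAObjects() as above and using only Dozer's per-object Mapper.map(Object, Class) call:

        List<A> listOfA = getListofAObjects();
        Mapper mapper = DozerBeanMapperSingletonWrapper.getInstance();

        // Map element by element instead of asking Dozer to convert the collection itself.
        B[] bs = new B[listOfA.size()];
        for (int i = 0; i < listOfA.size(); i++) {
            bs[i] = mapper.map(listOfA.get(i), B.class);
        }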

    Read the article

  • Rogue black-box java application not responding to standard input redirect

    - by Stefan Kendall
    I have an external Java application (a black box), which requires authentication. I need to run this application in a batch setting, but it seems to be reading from standard input in some nonstandard way. That is, if I invoke the program with STDIN redirected from a file (... < password.txt) or pipe data to it (echo mypassword | ...), it does not recognize the input. As I run it, it also seems to intercept Ctrl+C, Ctrl+D and Ctrl+Z as legitimate password characters, so it must be doing something odd and not just reading from standard input. Any idea what this application could be doing to read in input? I need to be able to send it information programmatically, and I'm stumped for the moment.
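
    One common culprit (an assumption about the black-box application, not something confirmed here) is java.io.Console: Console.readPassword() reads from the controlling terminal rather than the stdin stream, so redirected or piped input never reaches it. A minimal sketch of that behaviour:

        import java.io.Console;

        public class PasswordPrompt {
            public static void main(String[] args) {
                // System.console() is typically null when stdin/stdout are redirected or piped.
                Console console = System.console();
                if (console == null) {
                    System.out.println("No interactive console attached (input was redirected).");
                    return;
                }
                // Reads directly from the terminal with echo disabled; a pipe cannot feed this.
                char[] password = console.readPassword("Password: ");
                System.out.println("Read " + password.length + " characters from the terminal.");
            }
        }

    If that is what the application does, feeding it programmatically usually means giving it a pseudo-terminal, for example by driving it with a tool such as expect.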

    Read the article

  • Defining Form Input with Dialog

    - by Workoholic
    Hi. I want my php application to allow the user to select a file from their computer. I then want to store the file's path, but I don't want to upload the file. With the <input type="file" />, the file is uploaded, which would take too long. Is there a way to do this? Maybe if I change the input's type to text with jquery right before the form is sent? I could just let the user type the path, but that is not very convenient...

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
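
    SCP does not normally take a lock that the Windows side could observe; it simply writes into the destination file as the data arrives. A common workaround on the receiving side is to treat a file as complete only once its size has stopped changing; a minimal sketch of that idea (the path and wait time are placeholders):

        import os
        import time

        def is_transfer_complete(path, wait_seconds=30):
            """Consider the file complete if its size is unchanged after a short wait."""
            size_before = os.path.getsize(path)
            time.sleep(wait_seconds)
            return size_before == os.path.getsize(path)

        if is_transfer_complete(r"C:\incoming\daily_feed.dat"):
            print("Size is stable; safe to process.")

    A more robust convention, if the sender can be changed, is to have it upload to a temporary name and rename the file (or drop a small "done" marker) once the copy finishes.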

    Read the article

  • Handling keyboard and mouse input (Win API)

    - by Deluxe
    There are a number of ways to catch mouse or keyboard input under Windows. I have tried some of them, but each has its own advantages and drawbacks. I want to ask you: which method do you use? I've tried these: WM_KEYDOWN/WM_KEYUP - The main disadvantage is that I can't distinguish between the left and right variants of keys like ALT, CONTROL or SHIFT. GetKeyboardState - This solves the problem of the first method, but there is a new one: when I get that the Right-ALT key is pressed, I also get that the Left-Control key is down. This behaviour happens only when using a localized keyboard layout (Czech - CS). WM_INPUT (Raw Input) - This method also doesn't distinguish left and right keys (if I remember correctly), and for mouse movement it sometimes generates messages with zero delta values for the mouse position.
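
    For the first point, WM_KEYDOWN does carry enough information to tell the sides apart: the extended-key bit of lParam marks the right-hand CTRL/ALT, and the scancode distinguishes left from right SHIFT. A minimal sketch (Win32, inside the window procedure; error handling omitted):

        case WM_KEYDOWN:
        {
            UINT scancode = (lParam >> 16) & 0xFF;   // bits 16-23: hardware scancode
            BOOL extended = (lParam >> 24) & 0x1;    // bit 24: extended-key flag
            UINT vk = (UINT)wParam;

            switch (vk)
            {
            case VK_CONTROL: vk = extended ? VK_RCONTROL : VK_LCONTROL; break;
            case VK_MENU:    vk = extended ? VK_RMENU    : VK_LMENU;    break;   /* ALT */
            case VK_SHIFT:   vk = MapVirtualKey(scancode, MAPVK_VSC_TO_VK_EX);   break;
            }

            /* vk now holds the side-specific virtual key code */
            break;
        }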

    Read the article

  • Input Signal Out of Range 1920x1080

    - by Zach
    I've recently installed Ubuntu 12.04 on my computer. Now, every time I boot and when I shut down, my monitor goes blank and says "Input Signal Out of Range - Change Setting to 1920x1080 60Hz." Once the computer gets to the login screen, it's okay again. This problem also happens when I try to open any 3D app. My graphics card is an NVIDIA GeForce 6150 SE. I tried updating the drivers, but it broke everything and I had to reinstall Ubuntu. Any help?

    Read the article

  • Do any languages other than Haskell/Agda have a Hindley-Milner type system and type classes?

    - by Jimmy Hoffa
    In pondering what makes Haskell so mentally painful to become proficient in, the main things I can think of are Monads, Applicatives and Functors, and gaining an intuition for how a list or Maybe will behave with regard to sequence, alternative, bind, etc. But why haven't other languages presented these same concepts, given the usefulness of monads/applicatives/etc.? It occurs to me that type classes are the key, so the question is: have any languages other than Haskell/Agda actually implemented type classes in the same or a similar way?

    Read the article

  • Webpage loading with wrong content-type after setting up CloudFlare

    - by Daniel Little
    I recently migrated my blog to the Ghost service, I've also setup an alias DNS record with CloudFlare. While showing the blog to a colleague I discovered one of the posts wasn't loading properly and would instead prompt to be downloaded with an application/octet-stream content-type. I can view all the pages without any issues and I believe we're both on the same network as well. Has anyone received a wrong content type like application/octet-stream using CloudFlare, or know what I can do to correct this?

    Read the article

  • Why doesn't Haskell have type-level lambda abstractions?

    - by Petr Pudlák
    Are there some theoretical reasons for that (like that type checking or type inference would become undecidable), or practical reasons (too difficult to implement properly)? Currently, we can wrap things into a newtype like newtype Pair a = Pair (a, a) and then have Pair :: * -> *, but we cannot do something like λ(a:*). (a,a). (There are some languages that have them; for example, Scala does.)
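
    A minimal sketch of the workaround the question mentions: the newtype names the shape, giving it kind * -> *, so it can be used where a type-level lambda would otherwise be needed, for instance to write a Functor instance.

        newtype Pair a = Pair (a, a) deriving Show

        -- Pair :: * -> *, so it can receive instances that a bare λ(a:*). (a,a) could not.
        instance Functor Pair where
            fmap f (Pair (x, y)) = Pair (f x, f y)

        doubled :: Pair Int
        doubled = fmap (* 2) (Pair (1, 2))   -- Pair (2, 4)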

    Read the article

  • Webservice Return Generic Result Type or Purposed Result Type

    - by hanzolo
    I'm building a webservice which returns JSON / XML / SOAP at the moment, and I'm not entirely sure which approach for returning results is best. Which would be a better return value? A generic "transfer" type structure, which carries generic properties, or a purposed type with distinct properties: class GenericTransferObject{ public string returnVal; public string returnType; } VS class PurposedTransferObject_1{ public string Property1; } //and then building additional "types" for additional values class PurposedTransferObject_2 { public string PropertyA; public string PropertyB; } Now, this would be serialized and returned from a web service call via some client technology, jQuery in this example. So if I called /GetDaysInWeek/ I would either get back: {"returnType": "DaysInWeek", "returnVal": "365" } OR {"DaysInWeek": "365"} And then it would go from there. On the one hand there's flexibility with the 1st example. I can add "returnTypes" without needing to adjust the client other than referencing an additional "index"... but if I had to add a property, now I'm changing a structure definition... Is there an obvious choice in this situation?

    Read the article

  • Input Handling and Game loop

    - by Bob Coder
    So, I intercept WM_KEYDOWN and other messages. The thing is, my game can't/shouldn't react to these messages just yet, since my game might be currently drawing to the screen or in the middle of updating my game entities. So the idea is to keep a keyboard state and a mouse state, which are updated by the part of my code that intercepts the Windows messages. These states just keep track of which keys/buttons are currently pressed. Then, at the start of my game's update function, I access these keyboard and mouse states and my game reacts to the user input. Now, what is the best way to access these states? I assume that Windows messages can be sent at any time, so the keyboard/mouse states are constantly being edited. Accessing, say, a list of currently pressed keys in the keyboard state at the same time another part of the code is editing the list would cause problems. Should I make a deep copy of a state and act on that? How would I deal with the garbage generated, though, since this would take place every frame?
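
    A minimal sketch of the snapshot idea (the class and member names are hypothetical): the message handler writes into a pending buffer, and once per frame the update loop copies it into a current buffer under a short lock. Gameplay code then reads a stable state for the whole frame, and because both arrays are pre-allocated there is no per-frame garbage.

        using System;

        public sealed class KeyboardSnapshot
        {
            private readonly object gate = new object();
            private readonly bool[] pending = new bool[256];  // written by WM_KEYDOWN/WM_KEYUP handling
            private readonly bool[] current = new bool[256];  // read by game update code

            public void OnKeyDown(int virtualKey) { lock (gate) { pending[virtualKey] = true; } }
            public void OnKeyUp(int virtualKey)   { lock (gate) { pending[virtualKey] = false; } }

            // Call once at the start of Update(): copy rather than allocate, so held keys
            // keep their state and the GC never sees any per-frame work.
            public void Capture()
            {
                lock (gate) { Array.Copy(pending, current, pending.Length); }
            }

            public bool IsDown(int virtualKey) { return current[virtualKey]; }
        }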

    Read the article

  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held or was only pressed briefly. So far I have this: KEY_INTERACTION_THRESHOLD = 400 (ms) and, inside a constructor, shouldMeasure = true; @Override public void keyPressed(KeyEvent e) { if (shouldMeasure) { startTime = System.currentTimeMillis(); shouldMeasure = false; return; } System.out.println("Button is held down"); e.consume(); } @Override public void keyReleased(KeyEvent e) { if (System.currentTimeMillis() - startTime < KEY_INTERACTION_THRESHOLD) { System.out.println("Button was only pressed briefly"); } startTime = 0; shouldMeasure = true; e.consume(); } Now this works, but the problem is that there is a delay between when I press a key to hold it and when the message 'Button is held down' gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there will be a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.
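
    The delay is the OS keyboard auto-repeat delay: the second and later keyPressed events only arrive once auto-repeat kicks in. A minimal sketch of one way around it (assuming a Swing/AWT KeyListener as in the question): record only the first press, then poll the elapsed time from your own loop or timer instead of waiting for the next event.

        import java.awt.event.KeyEvent;
        import java.awt.event.KeyListener;

        public class HoldDetector implements KeyListener {
            private static final long KEY_INTERACTION_THRESHOLD = 400; // ms
            private volatile long pressedAt = 0;
            private volatile boolean down = false;

            @Override public void keyPressed(KeyEvent e) {
                if (!down) {                 // ignore auto-repeated keyPressed events
                    down = true;
                    pressedAt = System.currentTimeMillis();
                }
                e.consume();
            }

            @Override public void keyReleased(KeyEvent e) {
                long heldFor = System.currentTimeMillis() - pressedAt;
                System.out.println(heldFor < KEY_INTERACTION_THRESHOLD
                        ? "Button was only pressed briefly" : "Button was held down");
                down = false;
                e.consume();
            }

            @Override public void keyTyped(KeyEvent e) { }

            // Poll this from a javax.swing.Timer or game loop to report "held" as soon as
            // the threshold passes, without waiting for the key to be released.
            public boolean isHeldDown() {
                return down && System.currentTimeMillis() - pressedAt >= KEY_INTERACTION_THRESHOLD;
            }
        }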

    Read the article

  • Input/Output console window in XNA

    - by Will Bagley
    I am currently making a simple game in XNA but am at a point where testing various aspects gets a bit tricky, especially when you have to wait until you have a score of 1000 to see if your animation is playing correctly, etc. Of course I could just edit the starting variable in the code before I launch, but I have recently been interested in trying to implement a console-style window which can print out values and take input to alter public variables during run-time. I am aware that VS has the Immediate Window, which achieves a similar thing, but I would prefer mine to be an actual part of the game, with the intention that the user may have limited access to it in the future. Some of the key things I have yet to find an answer to after looking around for a while are: how I would support free text entry, how I would access variables during runtime, and how I would edit these variables. I have also read about using a property grid from Windows Forms apps (and partially reflection), which looked like it could simplify a lot of things, but I am not sure how I would get that running inside my XNA game window or how I would get it to not look out of place (as its visual aspect seems to be aimed just at development-time viewing). All in all I'm quite open to any suggestions on how to approach this task, as currently I'm not sure where to start. Thanks in advance.
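
    For the "access and edit variables during runtime" part, reflection is the usual building block: given a name typed into the console, look up a public field or property on a target object and assign the parsed value. A minimal sketch (the class and member names are hypothetical, not XNA APIs):

        using System;
        using System.Reflection;

        public static class ConsoleCommands
        {
            // e.g. SetMember(player, "Score", "1000") for a console command "set Score 1000"
            public static void SetMember(object target, string name, string value)
            {
                Type type = target.GetType();

                FieldInfo field = type.GetField(name, BindingFlags.Public | BindingFlags.Instance);
                if (field != null)
                {
                    field.SetValue(target, Convert.ChangeType(value, field.FieldType));
                    return;
                }

                PropertyInfo prop = type.GetProperty(name, BindingFlags.Public | BindingFlags.Instance);
                if (prop != null && prop.CanWrite)
                {
                    prop.SetValue(target, Convert.ChangeType(value, prop.PropertyType), null);
                }
            }
        }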

    Read the article

  • PHP File Downloading Questions

    - by nsearle
    Hey all! I am currently running into some problems with users downloading a file stored on my server. I have code set up to auto-download a file once the user hits the download button. It is working for all files, but when the size gets larger than 30 MB it is having issues. Is there a limit on user downloads? Also, I have supplied my example code and am wondering if there is a better practice than using the PHP function 'file_get_contents'. Thank you all for the help! $path = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/'; $filename = 'filename.zip'; $filesize = filesize($path . $filename); @header("Content-type: application/zip"); @header("Content-Disposition: attachment; filename=$filename"); @header("Content-Length: $filesize"); echo file_get_contents($path . $filename);
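
    The usual suspect at around that size is PHP's memory_limit: file_get_contents() pulls the whole file into memory before echoing it. A minimal sketch of streaming the file in small chunks instead (same placeholder path and filename as above), which keeps memory use flat regardless of file size:

        <?php
        $path     = $_SERVER['DOCUMENT_ROOT'] . '../path/to/file/';
        $filename = 'filename.zip';
        $filesize = filesize($path . $filename);

        header("Content-Type: application/zip");
        header("Content-Disposition: attachment; filename=$filename");
        header("Content-Length: $filesize");

        // Stream the file 8 KB at a time instead of loading it all with file_get_contents().
        $handle = fopen($path . $filename, 'rb');
        while (!feof($handle)) {
            echo fread($handle, 8192);
            flush();
        }
        fclose($handle);

    readfile() is a one-line alternative that also avoids loading the whole file into memory, as long as output buffering is not accumulating the output.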

    Read the article

  • Using Multiple File Handles for Single File

    - by Ryan Rosario
    I have an O(n^2) operation that requires me to read line i from a file, and then compare line i to every line in the file. This repeats for all i. I wrote the following code to do this with 2 file handles, but it does not yield the result I am looking for. I imagine this is a simple error on my part. IN1 = open("myfile.dat","r") IN2 = open("myfile.dat","r") for line1 in IN1: for line2 in IN2: print line1.strip(), line2.strip() IN1.close() IN2.close() The result: Hello Hello Hello World Hello This Hello is Hello an Hello Example Hello of Hello Using Hello Two Hello File Hello Pointers Hello to Hello Read Hello One Hello File The output should contain 15^2 lines.
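
    The inner loop only runs fully once: after the first pass, IN2 has been read to the end, so every later outer iteration sees an exhausted file iterator. A minimal sketch of one fix is to rewind (or reopen) the second handle for each outer line:

        with open("myfile.dat", "r") as in1, open("myfile.dat", "r") as in2:
            for line1 in in1:
                in2.seek(0)                      # rewind the second handle each time
                for line2 in in2:
                    print(line1.strip(), line2.strip())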

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - No significant performance improvement between parallel with 4 threads and with 8 threads. - Transferring binaries provides better performance than base64 strings. - In all cases, chunk sizes of 1MB - 2MB get better performance.   Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Replace input type=file by an image

    - by nikospkrk
    Hi, Like a lot of people, I'd like to customize the ugly input type=file, and I know that it can't be done without some hacks and/or JavaScript. But the thing is that in my case the upload buttons are just for uploading images (jpeg|jpg|png|gif), so I was wondering if I could use a "clickable" image which would act exactly like an input type file (show the dialog box, and give the same $_FILES on the submitted page). I found some workarounds here, and this interesting one too (but it does not work on Chrome =/). What do you guys do when you want to add some style to your file buttons? If you have any point of view about it, just hit the answer button ;) Cheers, Nicolas
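
    A minimal sketch of the common trick (the file names here are placeholders): keep a real file input in the form so $_FILES still works on submit, hide it, and forward clicks from a styled image to it. Older browsers were inconsistent about programmatic .click() on file inputs, so it is worth testing on your target browsers.

        <form action="upload.php" method="post" enctype="multipart/form-data">
          <!-- The real input stays in the form, just hidden from view -->
          <input type="file" id="real-file-input" name="picture" style="display: none;" />

          <!-- The styled, clickable image simply triggers the hidden input -->
          <img src="upload-button.png" alt="Upload an image"
               onclick="document.getElementById('real-file-input').click();" />

          <input type="submit" value="Upload" />
        </form>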

    Read the article

  • Apache serving wrong Content-Type for Rails files

    - by NudeCanalTroll
    Apache keeps serving up my Rails files with a Content-Type of 'text/plain' in the header. I have mod_mime installed, a mime.types file with all the correct MIME assignments, and the following code in my configuration. Any thoughts? DefaultType text/plain <IfModule mime_module> TypesConfig /etc/apache2/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule>

    Read the article

  • Is conditional return type ever a good idea?

    - by qegal
    So I have a method that's something like this: -(BOOL)isSingleValueRecord And another method like this: -(Type)typeOfSingleValueRecord And it occurred to me that I could combine them into something like this: -(id)isSingleValueRecord And have the implementation be something like this: -(id)isSingleValueRecord { //If it is single value if(self.recordValue == 0) { //Do some stuff to determine type, then return it return typeOfSingleValueRecord; } //If it's not single value else { //Return "NO" return [NSNumber numberWithBool:NO]; } } So combining the two methods makes it more efficient but makes the readability go down. In my gut, I feel like I should go with the two-method version, but is that really right? Is there any case where I should go with the combined version?

    Read the article

  • .NET Type Conversion Issue: Simple but difficult

    - by jaderanderson
    Well, the question is kinda simple. I have an object defined as: public class FullListObject : System.Collections.ArrayList, IPagedCollection And when I try: IPagedCollection pagedCollection = (IPagedCollection)value; it doesn't work... value is a FullListObject... this is my new code trying to get around an issue with the "is" operator. When the system tests (value is IPagedCollection) it never evaluates to true for a FullListObject. How do I cast the object to another object with an interface type?

    Read the article

  • How to change SMP affinity of an IRQ on Ubuntu domU inside Xen XCP?

    - by Alexander Gladysh
    I'd like to change IRQ SMP affinity for reasons, outlined in this question: CPU0 is swamped with eth1 interrupts But I can't — I see Input/output error when I try to write to /proc/irq/*/smp_affinity. Please point me to the HOWTO on the matter. (A formal reference on /proc/irq/*/ would be cool as well.) Gory details: Note that this is a VM inside an Ubuntu-based Xen XCP host. $ uname -a Linux MYHOST 2.6.38-15-virtual #59-Ubuntu SMP Fri Apr 27 16:40:18 UTC 2012 i686 i686 i386 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.04 Release: 11.04 Codename: natty $ sudo cat /proc/irq/*/smp_affinity 01 01 01 01 01 80 80 80 80 80 80 40 40 40 40 40 40 20 20 20 20 20 20 10 10 10 10 10 10 08 08 08 08 08 08 04 04 04 04 04 04 02 02 02 02 02 02 01 01 01 01 01 01 Update. The error details: $ N=$(grep -c processor /proc/cpuinfo) $ echo $N 8 $ printf %x $((2**N-1)) ff $ printf %x $((2**N-1)) | sudo tee /proc/irq/*/smp_affinity fftee: /proc/irq/288/smp_affinity: Input/output error tee: /proc/irq/289/smp_affinity: Input/output error tee: /proc/irq/290/smp_affinity: Input/output error tee: /proc/irq/291/smp_affinity: Input/output error tee: /proc/irq/292/smp_affinity: Input/output error tee: /proc/irq/293/smp_affinity: Input/output error tee: /proc/irq/294/smp_affinity: Input/output error tee: /proc/irq/295/smp_affinity: Input/output error tee: /proc/irq/296/smp_affinity: Input/output error tee: /proc/irq/297/smp_affinity: Input/output error tee: /proc/irq/298/smp_affinity: Input/output error tee: /proc/irq/299/smp_affinity: Input/output error tee: /proc/irq/300/smp_affinity: Input/output error tee: /proc/irq/301/smp_affinity: Input/output error tee: /proc/irq/302/smp_affinity: Input/output error tee: /proc/irq/303/smp_affinity: Input/output error tee: /proc/irq/304/smp_affinity: Input/output error tee: /proc/irq/305/smp_affinity: Input/output error tee: /proc/irq/306/smp_affinity: Input/output error tee: /proc/irq/307/smp_affinity: Input/output error tee: /proc/irq/308/smp_affinity: Input/output error tee: /proc/irq/309/smp_affinity: Input/output error tee: /proc/irq/310/smp_affinity: Input/output error tee: /proc/irq/311/smp_affinity: Input/output error tee: /proc/irq/312/smp_affinity: Input/output error tee: /proc/irq/313/smp_affinity: Input/output error tee: /proc/irq/314/smp_affinity: Input/output error tee: /proc/irq/315/smp_affinity: Input/output error tee: /proc/irq/316/smp_affinity: Input/output error tee: /proc/irq/317/smp_affinity: Input/output error tee: /proc/irq/318/smp_affinity: Input/output error tee: /proc/irq/319/smp_affinity: Input/output error tee: /proc/irq/320/smp_affinity: Input/output error tee: /proc/irq/321/smp_affinity: Input/output error tee: /proc/irq/322/smp_affinity: Input/output error tee: /proc/irq/323/smp_affinity: Input/output error tee: /proc/irq/324/smp_affinity: Input/output error tee: /proc/irq/325/smp_affinity: Input/output error tee: /proc/irq/326/smp_affinity: Input/output error tee: /proc/irq/327/smp_affinity: Input/output error tee: /proc/irq/328/smp_affinity: Input/output error tee: /proc/irq/329/smp_affinity: Input/output error tee: /proc/irq/330/smp_affinity: Input/output error tee: /proc/irq/331/smp_affinity: Input/output error tee: /proc/irq/332/smp_affinity: Input/output error tee: /proc/irq/333/smp_affinity: Input/output error tee: /proc/irq/334/smp_affinity: Input/output error tee: /proc/irq/335/smp_affinity: Input/output error Update. 
irqbalance is running: $ sudo service irqbalance status irqbalance start/running, process 560

    Read the article

  • input / output error, drives randomly refusing to read / write

    - by ILMV
    I have an issue with one of our servers running Ubuntu 10.04, it is running BackupPC and collects backups from various machines / servers around the building. On the 8th minute (12:08, 12:18, 12:28 etc) the backups are transferred to an external hard drive, we have three and rotate one drive for another everyday. The problem we are having is we are randomly experiencing input / output errors, when this happens you cannot read / write to the drive, it hasn't unmounted so I can cd to the mount point /media/backup1. The drives are not faulty as it's happening on all of them, so I'm at a loss as to what the problem could be, here is an example of the many errors we get: gzip: stdout: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error ls: cannot access /media/backup1/Tue/incr_1083_host1.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_1088_host1.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_1089_host1.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 39: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_1090_host1.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error ls: cannot access /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_591_tech2.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error ls: cannot access /media/backup1/Tue/incr_592_tech3.something.co.uk.tar.gz: Input/output error ls: cannot access /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 44: /media/backup1/Tue/offline.log: Input/output error /var/lib/backuppc/backuppc_offline: line 45: /media/backup1/Tue/incr_593_tech3.something.co.uk.tar.gz: Input/output error /var/lib/backuppc/backuppc_offline: line 47: /media/backup1/Tue/offline.log: Input/output error EDIT » Resolved So it turns out Quamis was right, even though I didn't think it was possible it was actually a problem with the drive. You see we have three drives all formatted to ext2, on two of them we were getting I/O errors frequently, I cam back to Quamis' answer and discovered the fsck command, so ran it against the problems drives: fsck /dev/sdb1 This found and fixed a load of problems on the drive, most probably caused by power outages / unsafe removal of drives etc, as the drives are in the xt2 format they aren't journalled and thus aren't protected against such issues. Drives are now working beautifully, thanks all! :D

    Read the article

  • No USB 04b4:0307 mike input after 10.04

    - by Papou
    I am using a USB phone that is in fact a so-called "sound card" based on the 04b4:0307 chip. In fact, I have two different phones using 04b4:0307, and in fact I have a USB sound key too. This, I believe, is the start of why they call 04b4:0307 "ubiquitous" (instead of oh four bee...). But not "eternal". The mike worked in Ubuntu 9.10 and 10.04 but not later ([email protected]). "Not working" means that 04b4:0307 shows in Sound Preferences but that its vu-meter is mute. I have posted the full lsusb output and the results of tests on various systems here: http://www.papou.byethost9.com/tmp/1043601.html Note: tests were done on VirtualBox (thankfully). But UbuntUnity no longer works on VB, so I used the LinuxMint equivalent. That's fate. I could not find any problem report close enough. What should be my next step? I believe the problem occurs in the module snd-usb-audio. One thing I might try, if I knew how, is hacking a DEB with the 10.04 source. I can hack DEBs. Any hint is welcome on how to make a DEB overriding a kernel-packaged module. I mean that the newly installed module should have precedence and be loaded instead of the module installed with the kernel. TIA!

    Read the article
