Search Results

Search found 41882 results on 1676 pages for 'png files'.


  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per-iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and MapViewOfFile() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work. If anyone here knows the O/S implementation details behind memory mapped files I would appreciate comments on the following theory: If two processes share a memory mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. The number of soft page faults is therefore directly proportional to the size of the data written. My experiments seem to bear out the above theory. When I share data I write one contiguous block of data. In other words, the entire shared memory area is overwritten each time. If I make the block bigger, the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files, because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory mapped file instead of a TCP socket connection because I thought it would be more efficient. Note: if the soft page faults are harmless, please say so. I've heard that if the number becomes excessive, the system's performance can suffer. If soft page faults are not intrinsically harmful, I'd like to hear any guidelines as to what number per second is "excessive". Thanks.

    Read the article

  • Trouble when changing pixel data with alpha on png on iphone --okay on simulator

    - by Ted
    I'm trying to change the color of the pixels (lighten or darken) without changing the value of the alpha channel, using CGDataProviderCopyData. I leave every 4th data byte untouched. It works fine on the iPhone simulator, however on the real thing the alpha goes white as I increase the values of the other pixels. I've tried changing just the first byte, or the second, or the third. Does anybody have any idea what is going on? The basic code is borrowed from Jorge. I like this simple approach --I'm new to this. But I want to make it work with png images with some transparency. Here is most of the code by Jorge: CFDataRef CopyImagePixels(CGImageRef inImage){ return CGDataProviderCopyData(CGImageGetDataProvider(inImage)); } CGImageRef img=originalImage.CGImage; CFDataRef dataref=CopyImagePixels(img); UInt8 *data=(UInt8 *)CFDataGetBytePtr(dataref); int length=CFDataGetLength(dataref); int value=50; /* amount to lighten; assumed, the original snippet omits this definition */ for(int index=0;index<length;index+=4){ for(int i=0;i<3;i++){ if(data[index+i]+value>255){ data[index+i]=255; }else{ data[index+i]+=value; } } } size_t width=CGImageGetWidth(img); size_t height=CGImageGetHeight(img); size_t bitsPerComponent=CGImageGetBitsPerComponent(img); size_t bitsPerPixel=CGImageGetBitsPerPixel(img); size_t bytesPerRow=CGImageGetBytesPerRow(img); CGColorSpaceRef colorspace=CGImageGetColorSpace(img); CGBitmapInfo bitmapInfo=CGImageGetBitmapInfo(img); CGImageAlphaInfo alphaInfo=CGImageGetAlphaInfo(img); NSLog(@"bitmapinfo: %d",bitmapInfo); CFDataRef newData=CFDataCreate(NULL,data,length); CGDataProviderRef provider=CGDataProviderCreateWithCFData(newData); CGImageRef newImg=CGImageCreate(width,height,bitsPerComponent,bitsPerPixel,bytesPerRow,colorspace,bitmapInfo,provider,NULL,true,kCGRenderingIntentDefault); [iv setImage:[UIImage imageWithCGImage:newImg]]; CGImageRelease(newImg); CGDataProviderRelease(provider);

    Read the article

  • LI Background Images (.PNG) not appearing in IE6

    - by Balkar
    Hi, I am using the following CSS, but it never shows the background images in IE6. If I remove the filter ... AlphaImageLoader rule, then the image shows, but with a grey background. Here is my CSS code: .fg-block1 ul, .fg-block3 ul { list-style:none; } .fg-block1 ul li, .fg-block3 ul li { padding-left:28px; background:url(images/bullet-2.png) no-repeat left top; font-family:Verdana, Arial, Helvetica, sans-serif; font-size:11px; border-bottom:1px dotted #fff; text-align:left; background-position:1px 0; line-height:16px; padding-bottom:5px; margin-bottom:5px; } .fg-block3 ul li { border-bottom:none; } .fg-block1 ul li a, .fg-block3 ul li a { color:#fff; text-decoration:none; } .fg-block1 ul li a:hover, .fg-block3 ul li a:hover { color:#fff; text-decoration:underline; }

    Read the article

  • How can I split my conkeror-rc config over multiple files?

    - by Ryan Thompson
    Short version: can you help me fill in this code? var conkeror_settings_dir = ".conkeror.mozdev.org/settings"; function load_all_js_files_in_dir (dir) { var full_path = get_home_directory().appendRelativePath(dir); // YOUR CODE HERE } load_all_js_files_in_dir(conkeror_settings_dir); Background: I'm trying out Conkeror for web browsing. It's an emacs-like browser running on Mozilla's rendering engine, using JavaScript as its configuration language (filling the role that elisp plays for emacs). In my emacs config, I have split my customizations into a series of files, where each file is a single unit of related options (for example, all my perl-related settings might be in perl-settings.el). All these settings files are loaded automatically by a function in my .emacs that simply loads every elisp file under my "settings" directory. I am looking to structure my Conkeror config in the same way, with my main conkeror-rc file basically being a stub that loads all the js files under a certain directory relative to my home directory. Unfortunately, I am much less literate in JavaScript than I am in elisp, so I don't even know how to "source" a file.

    Read the article

  • use a sprite png in li-class and display in div

    - by bonny
    Hello, I have a problem displaying a sprite in a div that is inside an li. The structure is: <li id="aa"> <div><a href="#">one</a></div> </li> and the CSS: li{ width: 120px; height: 18px; margin-top: 5px; } li div{ width: 20px; height: 10px; background-image:url(../images/sprite.png); background-repeat: no-repeat; margin-left: 0px; font-weight:bolder; border: 1px solid #fff; } When I use this I can see the sprite even outside the div, so I tried adding background-image: none; to the li, but that makes the image in the div not visible either. If there is someone who knows about this I would really appreciate it. Thanks a lot.

    Read the article

  • PNG Textures not loading on HTC desire

    - by Matthew Tatum
    Hi, I'm developing a game for Android using OpenGL ES and have hit a problem: my game loads fine in the emulator (Windows XP and Vista, from Eclipse), and it also loads fine on a T-Mobile G2 (HTC Hero), however when I load it on my new HTC Desire none of the textures appear to load correctly (or at all). I suspect the BitmapFactory.decode method, although I have no evidence that that is the problem. All of my textures are power of 2 and JPG textures seem to load (although the quality doesn't look great), but anything that is GIF or PNG just doesn't load at all, except for a 2x2 red square which loads fine, and one texture that maps to a 3D object but seems to fill each triangle of the mesh with the nearest colour. This is my code for loading images: AssetManager am = androidContext.getAssets(); BufferedInputStream is = null; try { is = new BufferedInputStream(am.open(fileName)); Bitmap bitmap; bitmap = BitmapFactory.decodeStream(is); GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); bitmap.recycle(); } catch(IOException e) { Logger.global.log(Level.SEVERE, e.getLocalizedMessage()); } finally { try { is.close(); } catch(Exception e) { // Ignore. } } Thanks

    Read the article

  • JQuery cycle plugin + PNG overlay + Image alt text as title conflict

    - by Dave
    I am trying to achieve the following: create an image gallery using the jQuery Cycle plugin that shows images with titles (taken from the alt text). Each image has a PNG absolutely positioned over the top to achieve a rounded-corners effect. Here's my gallery HTML: <div id="slideshow" class="pics"> <div class="photo-container" > <img src="/path/to/image" alt="alt text as title" /> <img class="mask" src="path/to/mask" /> </div> </div><!-- /slideshow --> <div id="title"></div> Here's my jQuery gallery function: $(function() { $('#slideshow').after('<div id="nav" class="nav"><span style="margin: 0 5px 0 30px;">next image</span>').cycle({ fx: 'fade', timeout: 0, cleartypeNoBg: true, pager: '#nav', next: '#slideshow', before: onBefore }); function onBefore() { $('#title').html(this.alt); } $('#nav a').after('<span>&gt;</span>') }); Here is my CSS that handles the mask: .photo-container { position: relative; display: block; overflow:hidden; border: none; } img.mask { position: absolute; top: 0; left: 0; overflow:hidden; border: none; } The above does not output the alt text into the "title" div. When I remove the mask, it works: <div id="slideshow" class="pics"> <img src="/path/to/image" alt="alt text as title" /> </div><!-- /slideshow --> <div id="title"></div> Any ideas why the additional div / image is causing the title to not display? Thank you

    Read the article

  • Where to create/keep secret files for license information/trials on Windows/Mac OS X/Linux?

    - by BastiBense
    I'm writing a commercial product which uses a simple registration mechanism and allows the user to use the application for a demo period before purchasing. My application must store the registration information (if entered) somewhere, and/or the date of the first launch, to calculate whether the user is still within the demo/trial period. While I'm pretty much finished with the registration mechanism itself, I now have to find a good way to store the registration information on the user's disk. The most obvious idea would be to store the trial period in the preferences file, but since users tend to delete/tinker with those from time to time, it might be a good idea to keep the registration information in a separate, more hidden file. So here's my question: what is the best place/strategy to keep and create such hidden files on Windows, Mac OS X and Linux? Here is what came to my mind so far: Linux/Mac OS X Most Unix-like systems are rather locked down when it comes to places a user can write files to. In most cases this is only the /tmp directory and the user's home directory. I guess the easiest here is probably to create a file with a dot prefix to make it less visible, then give it a name that won't make it obvious that it's associated with my application. Windows Probably much like Linux/Mac OS X - more recent Windows versions are becoming more restrictive when it comes to file system permissions. Anyway, I'd like to hear your ideas and thoughts. Even better if you have already implemented something similar in the past. Thanks! Update For me, where to put such files is more relevant than a discussion of whether this kind of copy protection is good or bad.

    Read the article

  • php png image transparency

    - by user1334130
    I have been working with some code to draw a circle but I am having problems with removing the black background from the shape. I am using imagecopyresampled for its AA features in order to draw a smooth circle, so I can't use a different drawing function. Thanks. <?php $img_2 = imagecreatetruecolor(200, 200); $red = imagecolorallocate($img_2, 255, 0, 0); imagefill($img_2, 0, 0, $red); //set circle values $xPos = 50; $yPos = 80; $diameter = 40; $img_1 = imagecreatetruecolor(($diameter + 2) * 2, ($diameter + 2) * 2); $green = imagecolorallocate($img_1, 0, 255, 0); //draw the circle imagefilledarc($img_1, $diameter+1, $diameter+1, ($diameter + 2) * 2, ($diameter + 2) * 2, 0, 360, $green, IMG_ARC_PIE); imagecopyresampled($img_2, $img_1, $xPos, $yPos, 0, 0, $diameter+2, $diameter+2, ($diameter + 2) * 2, ($diameter + 2) * 2); header("Content-type: image/png"); imagepng($img_2); imagedestroy($img_1); imagedestroy($img_2); ?>
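
    A common cause of the black box is that imagecreatetruecolor() starts out as an opaque black canvas. One way around it (a minimal sketch, not necessarily the only fix for the code above; the sizes and output are illustrative) is to fill the canvas with a fully transparent colour and keep the alpha channel when writing the PNG:

      <?php
      // Minimal sketch: a truecolor canvas with a transparent background instead of black.
      $img = imagecreatetruecolor(200, 200);
      imagealphablending($img, false);                       // write alpha values directly
      $clear = imagecolorallocatealpha($img, 0, 0, 0, 127);  // 127 = fully transparent
      imagefill($img, 0, 0, $clear);
      imagesavealpha($img, true);                            // keep the alpha channel in the PNG
      imagealphablending($img, true);                        // blend subsequent drawing normally
      $green = imagecolorallocate($img, 0, 255, 0);
      imagefilledarc($img, 100, 100, 80, 80, 0, 360, $green, IMG_ARC_PIE);
      header('Content-type: image/png');
      imagepng($img);
      imagedestroy($img);
      ?>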

    Read the article

  • Local server updates for the network

    - by Brendon
    I have set up one computer on our network as the file server. Because Internet access here in Tanzania is both slow and expensive, I would like that one system to download all the updates and then the other 10 computers on the network to get those update files from the server. I'm a bit of a newbie to Ubuntu, but I really want to learn how to get this working smoothly so as to help other NGOs and schools here in Tanzania. Brendon

    Read the article

  • How to clean and add options to the Open With list of apps

    - by Luis Alvarado
    After installing several PPAs (Wine, PoL) and opening several files with other apps (like changing from Totem to VLC) I discovered that the Open With option had 2 problems: Many items on the list are duplicated (as seen on the image for "A Wine Program") Sometimes the app I want to use to open the file is not shown there (for example, VirtualBox or VLC) So how can I edit this list to remove the duplicates and add the missing apps?

    Read the article

  • "Ghost" output from locate?

    - by Hailwood
    I deleted some files, but they seem to still exist. Can anyone please explain the output of this: m@work:~$ locate cfx.css | xargs rm m@work:~$ locate cfx.css /var/www/wfox/hbr.co.nz/cfx/a/c/cfx.css /var/www/wfox/modules/gallery/cfx/a/c/cfx.css /var/www/wfox/phoenix/fp.co.nz/cfx/a/c/cfx.css /var/www/wfox/tmp.co.nz/cfx/a/c/cfx.css m@work:~$ cat /var/www/wfox/hbr.co.nz/cfx/a/c/cfx.css cat: /var/www/wfox/hbr.co.nz/cfx/a/c/cfx.css: No such file or directory

    Read the article

  • Application Crash cleared the content of the Folder

    - by Ameya
    Recently, while working with LinuxDC++ over the network, the application crashed while downloading files. Now my Downloads folder, which had at least 60-80GB of data, is completely empty, but the system is not reporting the correct free space. Is there a way to restore the contents of just that folder, as the solutions available are for the whole partition? I just want to recover the contents of one folder.

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in a blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very well suited to uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side, which means: once you have received a big file, stream or binary data, how to upload it into blob storage in blocks through the .NET SDK.  But the problem is, how can we upload these large files from the client side, for example a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and then the files are stored in blob storage by the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks at high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitations The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and then commit them. The code has been well covered in the community. 
    1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, let's say a 6GB HD movie, then after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limit on the request length, and the maximum request length is defined in the web.config file. It's a value less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to get past the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we should be able to make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. Uploading a big file in chunks was a big challenge until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob are very useful for implementing the chunked upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
    The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - No significant performance improvement between parallel with 4 threads and with 8 threads. - Transferring binaries provides better performance than base64 strings. - In all cases, a chunk size of 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Why opening Ubuntu's default wallpaper (warty-final-ubuntu.png) in Image Viewer always fails?

    - by Kush
    Every time I try to open Ubuntu's default wallpaper (any version, since 8.04 with which I started), named "warty-final-ubuntu.png", I get the following error. I have also reported a bug for this more than a year ago, but it is still unresolved. Also, I don't get why the default wallpaper is still named "warty-final-ubuntu.png" instead of being prefixed with the code name of the release the wallpaper actually belongs to, e.g. "precise-final-ubuntu.png" and so on. General Thoughts Lots of community effort goes into the development of this marvelous distribution, but we still fail to fix such silly issues, which directly or indirectly affects the number of new adopters.

    Read the article

  • How to create a theme with a picture panel svg or png for GNOME Flashback (Compiz), Unity, Gnome Classic?

    - by user285802
    How do I create a theme with a panel image (SVG or PNG) for GNOME Flashback (Compiz), GNOME Flashback (Metacity) and Unity in Ubuntu 14.04? And how do I create a theme with a panel image (SVG or PNG) for Unity and Gnome Classic in Ubuntu 12.04? I cannot get gnome-panel.css and unity.css to display an image on the panel. This is easily done in GTK 2. Is it possible in GTK 3? Please point me to example themes which correctly display the panel with an SVG or PNG image.

    Read the article

  • Would Like Multiple Checkboxes to Update World PNG Image Using Mogrify to FloodFill Countries With C

    - by socrtwo
    Hi. I'm asking in another forum whether the best way to do this is with JavaScript or Ajax, but I'm wondering if there is an even easier, simpler way. I'm trying to create a web service where users can check which countries they have visited from a list of 175 or so, and a world map image would then instantly update with a filled color. There are other similar services, but I'm envisioning mine updating both from checks in checkboxes and from clicking on the target country in the displayed image, say with an image map. Additionally, other solutions display all the visited countries in the same color. I would like different colors for different countries, or at least for those countries that touch. Eventually I would like to include a feature that enables the choice of which colors to assign countries. I found a SourceForge project called pwmfccd. It's simply an open source image of the world and the coordinates on the PNG image for all the countries. You can use mogrify from ImageMagick and floodfill to fill the countries with color. I have done this successfully, locally, with batch files. My ISP has told me where mogrify is located, basically "/usr/bin/mogrify". I now have a horrendously complicated CGI script which, if it worked, would redraw the world map image with each checkbox. It's here. It also redraws the whole web page with each check. The web page starts here. Of course this is not at all efficient, and I think probably the real way to go is Ajax or JavaScript, so that maybe just the image gets changed and redrawn, not the whole web page. Sorry, I don't even know the difference between JavaScript and Ajax and their relative merits at this point. I suppose you could make just one part of the image update with each check or click on the image instead of even just the image redrawing, but I have never heard even a hint of being able to do that for irregularly shaped image elements like countries. So I guess an image map and sister checkbox entries, tied to mogrify calls that redraw the user's personal copy of the image followed by an image refresh, would be the only way to go. So how do you do this with something other than JavaScript or Ajax, or is that definitely the way to go, and if so, how would you do it? Or can you, after all, cut up a web-based image into irregular puzzle-shaped pieces which you can redraw individually at will? Thanks in advance for reading and considering answering this post.
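
    For reference, the flood fill that the batch files perform can also be driven from PHP by shelling out to ImageMagick. A rough sketch (the map path, coordinates and colour below are hypothetical placeholders; /usr/bin/mogrify is the path mentioned above):

      <?php
      // Hypothetical sketch: flood-fill one country on a per-user copy of the world map.
      $map   = 'maps/user_42.png';   // per-user copy of the base map (placeholder path)
      $x     = 605;                  // any point inside the visited country (placeholder)
      $y     = 325;
      $color = '#4a90d9';
      $cmd = sprintf(
          '/usr/bin/mogrify -fill %s -draw %s %s',
          escapeshellarg($color),
          escapeshellarg("color $x,$y floodfill"),
          escapeshellarg($map)
      );
      exec($cmd, $output, $status);
      if ($status !== 0) {
          error_log('mogrify failed: ' . implode("\n", $output));
      }
      ?>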

    Read the article

  • iPhone: Changing CGImageAlphaInfo of CGImage

    - by TechZen
    I have a PNG image that has an unsupported bitmap graphics context pixel format. Whenever I attempt to resize the image, CGBitmapContextCreate() chokes on the unsupported format (error formatted for easy reading): CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component colorspace; kCGImageAlphaLast; 1344 bytes/row. The list of supported pixel formats definitely does not include this combination. It appears I need to redraw the image and move the alpha channel information to kCGImageAlphaPremultipliedFirst or kCGImageAlphaPremultipliedLast. I have no idea how to go about doing this. There is nothing unusual about the PNG file and it isn't corrupted. It works in all other contexts just fine. I encountered this error just by chance, but obviously my users might have similarly formatted files, so I will have to check my app's imported images and correct for this problem.

    Read the article

  • Need to get .obj file names of the executable (the one currently executing) at runtime programmatically

    - by Usman
    Suppose I have a VC++ project containing a number of (say, 5) source files (.cpp files); it will generate 5 .obj files (the .obj files corresponding to my .cpp files, not all the kernel and OS layer .obj files). E.g. my project includes xyz_1.cpp, xyz_2.cpp, xyz_3.cpp and xyz_4.cpp; it will correspond to 4 respective .obj files. How can I programmatically get the names of these 4 .obj files at runtime? (At runtime I need to check how many .obj files there are and the names of those .obj files.) Remember, I don't need all the kernel and OS layer .obj files, I only need the .obj files for my .cpp files. Regards Usman

    Read the article

  • How to embed images in a single HTML / PHP file?

    - by Tatu Ulmanen
    Hi, I am creating a lightweight, single-file database administration tool and I would like to bundle some small icons with it. What is the best way to embed images in a HTML/PHP file? I know a method using PHP where I would call the same file with a GET parameter that would output hardcoded binary data with the correct header, but that seems a bit complicated. Can I use something to pass the image directly in a CSS background-image declaration? This would allow me to utilize the CSS sprite technique. Browser support isn't an issue here, so more exotic methods are welcome also. EDIT: Does someone have a link/example of how to generate data URLs properly with PHP? I figure echo 'data:image/png;base64,'.base64_encode(file_get_contents('image.png')) would suffice, but I could be wrong.
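
    For what it's worth, the base64 approach in the EDIT above does work; here is a small sketch of wrapping it into a CSS declaration emitted by the same PHP file (the icon path and class name are illustrative):

      <?php
      // Minimal sketch, assuming a small PNG icon sits next to the script.
      function data_uri($path, $mime = 'image/png') {
          return 'data:' . $mime . ';base64,' . base64_encode(file_get_contents($path));
      }
      ?>
      <style>
      .icon-db {
          background-image: url("<?php echo data_uri('icons/db.png'); ?>");
          background-repeat: no-repeat;
          width: 16px;
          height: 16px;
      }
      </style>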

    Read the article

  • Is there any simple way to test two PNGs for equality?

    - by Mason Wheeler
    I've got a bunch of PNG images, and I'm looking for a way to identify duplicates. By duplicates I mean, specifically, two PNG files whose uncompressed image data are identical, not necessarily whose files are identical. This means I can't do something simple like compare CRC hash values. I figure this can actually be done reliably since PNGs use lossless compression, but I'm worried about speed. I know I can winnow things down a little by testing for equal dimensions first, but when it comes time to actually compare the images against each other, is there any way to do it reasonably efficiently? (ie. faster than the "double-for-loop checking pixel values against each other" brute-force method?)
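
    The question doesn't name a language, but as an illustration of the dimension-check-then-compare idea, here is a rough PHP/GD sketch: reject on size mismatch first, then walk the decoded pixels with an early exit. Note that GD stores alpha in 7 bits, so this is only an approximation for images with fine-grained alpha:

      <?php
      // Rough sketch: are two PNGs identical pixel-for-pixel once decoded?
      function png_pixels_equal($fileA, $fileB) {
          $a = imagecreatefrompng($fileA);
          $b = imagecreatefrompng($fileB);
          // Normalise palette images to truecolor so the comparison is apples-to-apples.
          if (!imageistruecolor($a)) { imagepalettetotruecolor($a); }
          if (!imageistruecolor($b)) { imagepalettetotruecolor($b); }
          $w = imagesx($a); $h = imagesy($a);
          if ($w !== imagesx($b) || $h !== imagesy($b)) {
              return false;                                  // cheap rejection on dimensions
          }
          for ($y = 0; $y < $h; $y++) {
              for ($x = 0; $x < $w; $x++) {
                  if (imagecolorat($a, $x, $y) !== imagecolorat($b, $x, $y)) {
                      return false;                          // first mismatch ends the scan
                  }
              }
          }
          return true;
      }
      ?>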

    Read the article

  • Playing around with Eclipse features - Project files are now hidden?

    - by Daddy Warbox
    I don't even remember how, but somehow I managed to make all of my project's source files hidden in Eclipse's Package and Project Explorer panels. Go figure. 'Show Filtered Children (alt+click)' temporarily reveals the files, and only in Package Explorer can I double-click to reopen them from this view. They go back into hiding after I select another item, though. Plus, now I'm getting other annoyances, such as all of the folded non-hidden trees altogether expanding when I click on any item, and the entire file folder tree of my project now being shown in these panels (including my .svn subversion folders... which shouldn't be any of Eclipse's business, presently). Long story short, my Package/Project Explorers' just blew up on me, and I want to know how to fix this. Thanks in advance. P.S. What's a good guide I can use to learn my way around this silly contraption, anyway?

    Read the article

  • How can I download all files of a specific type from a website using PHP?

    - by CheeseConQueso
    I want to get all midi (*.mid) files from a site that's set up pretty simple in terms of directory tree structure. I wish we had wget installed here, but that's another party.... The site is VGMusic.com and the path containing all of the midi files is: http://www.vgmusic.com/music/console/nintendo/nes/ I tried glob'ing it out, but I suppose that glob only works locally? Here is what I wrote to try to make it happen (doesn't work.. obviously..): <?php echo 'not a blizzard<br>'; foreach(glob('http://www.vgmusic.com/music/console/nintendo/nes/*.mid') as $filename) { echo $filename.'<br>'; //$newfile = 'http://www.mydomain.com/nes/'.$filename; //copy($filename, $newfile) } ?> I tried it also without the http:// in there with no luck.
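
    Since glob() cannot list a remote directory, one workaround is to fetch the index page over HTTP and pull the .mid links out of the HTML. A rough, untested sketch (assumes allow_url_fopen is enabled, that the listing is plain HTML, and that a local nes/ folder already exists):

      <?php
      $base = 'http://www.vgmusic.com/music/console/nintendo/nes/';
      $html = file_get_contents($base);
      if ($html === false) {
          die('could not fetch the index page');
      }
      preg_match_all('/href="([^"]+\.mid)"/i', $html, $matches);
      foreach (array_unique($matches[1]) as $name) {
          $name = basename($name);          // drop any path component in the link
          echo $name . '<br>';
          // file names containing spaces would need rawurlencode() here
          copy($base . $name, __DIR__ . '/nes/' . $name);
      }
      ?>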

    Read the article

  • PHP + GD: imagetruecolortopalette not keeping transparency

    - by AlexMax
    I am using GD to output an image that is a truecolor+alpha channel PNG file using imagepng just fine. However, I would like the ability to also output an IE6-compatible 256-color PNG. According to the official documentation for imagetruecolortopalette: The code has been modified to preserve as much alpha channel information as possible in the resulting palette, in addition to preserving colors as well as possible. However, I am finding that the results of this function do not properly preserve any transparency at all. I used this Firefox image with text superimposed on top of it as a test, and all the function did was give me a white background and a weird dark blue border. I know that I can't hope to preserve the full alpha channel, but surely this function would at least pick up on the transparent background. Is there something I'm missing? Are there any alternative approaches that I can take?
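
    If single-colour (GIF-style) transparency is enough for the IE6 copy, one approach that sometimes works is to convert to a palette and then mark the palette entry that the transparent area mapped to. A sketch, assuming the top-left pixel of the source image is fully transparent (the file name is illustrative, and any other pixels that map to the same palette entry will also turn transparent):

      <?php
      $img = imagecreatefrompng('input.png');   // truecolor + alpha original (illustrative name)
      imagetruecolortopalette($img, true, 255); // reduce to a 255-colour palette
      $bg = imagecolorat($img, 0, 0);           // palette index the transparent corner mapped to
      imagecolortransparent($img, $bg);         // mark that index as the transparent colour
      header('Content-type: image/png');
      imagepng($img);
      imagedestroy($img);
      ?>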

    Read the article
