Search Results

Search found 77950 results on 3118 pages for 'large file upload'.


  • HTML Form Upload PDF with other form contents using PHP

    - by Sev
    I have an HTML form with fields such as name, address, notes, etc. I also have a field to upload a PDF, and the uploaded PDF gets stored on the file system. How can I accomplish this when the PDF files may be larger than 2 MB? Also, for some reason, uploading a PDF (< 2 MB) works fine in Chrome, but not in IE: in IE the upload doesn't even begin, while in Chrome it completes fine. The form header looks like: method='post' ENCTYPE='multipart/formdata'. Edit: the ini setting didn't help. The HTML:

        <input type='text' name='user' />
        <input type='file' name='userfile' />

    The basic PHP (I do some preg matching above that I haven't included):

        $uploaddir = 'uploads/';
        $uploadfile = $uploaddir . basename($_FILES['userfile']['name']);
        if (move_uploaded_file($_FILES['userfile']['tmp_name'], $uploadfile)) {
            echo "File is valid, and was successfully uploaded.\n";
        } else {
            echo "File uploading failed.\n";
        }
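
    Two things are worth checking here; this is a sketch, not a verified fix. First, the standard enctype value is multipart/form-data (with a hyphen); a browser that does not recognize the value in the form may fall back to the default urlencoded encoding, which could explain an upload that never starts in one browser yet works in another. Second, PHP's default upload_max_filesize is 2M, which matches the 2 MB ceiling described above; both it and post_max_size are PHP_INI_PERDIR settings, so they must be raised in php.ini, a .htaccess file, or the vhost config rather than with ini_set() at runtime (which may be why "the ini setting didn't help"). The file name upload.php below is just an illustration:

        <!-- corrected form -->
        <form method="post" action="upload.php" enctype="multipart/form-data">
            <input type="text" name="user" />
            <input type="file" name="userfile" />
            <input type="submit" value="Upload" />
        </form>

        ; php.ini (or equivalent php_value lines in .htaccess / the vhost); values are examples
        upload_max_filesize = 20M
        post_max_size = 21M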

    Read the article

  • HTML Upload Form will only upload files found in the directory of the PHP file.

    - by vette982
    I have an image uploader that uses the imgur.com API and jQuery's .ajax() function to upload images to their servers. However, if I browse for an image using the <input type="file"/> element of the form, the upload only succeeds when the image file sits in the same directory as the page.php file that contains the form (shown below). How can I allow the form to upload images from any directory on my computer? page.php:

        <form action="page.php" method="post">
            <input type="file" name="doc" id="doc" /><br/>
            <input type="image" src="go.gif" name="submit" id="submit" />
        </form>
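
    That symptom usually means the server-side code is opening the file by its bare client-side name (which only resolves when a copy happens to sit next to page.php) instead of reading the uploaded temporary copy. A rough sketch of the idea, assuming the script forwards the upload itself: add enctype="multipart/form-data" to the form and work with $_FILES['doc']['tmp_name']. The endpoint URL and the 'key' parameter below are placeholders, not the real imgur API values.

        <?php
        // page.php -- sketch only; consult the imgur API docs for the real endpoint
        // and parameter names.
        if (isset($_FILES['doc']) && is_uploaded_file($_FILES['doc']['tmp_name'])) {
            $ch = curl_init('http://api.example.com/2/upload.json');   // placeholder URL
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, array(
                'key'   => 'YOUR_API_KEY',                             // placeholder
                'image' => base64_encode(file_get_contents($_FILES['doc']['tmp_name'])),
            ));
            $reply = curl_exec($ch);
            curl_close($ch);
        }
        ?>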

    Read the article

  • FileReference and HttpService: browse an image, modify it, then upload it

    - by user177787
    Hello, I am trying to build an image uploader where the user can:
    - browse a local file with button.browse
    - select one and save it as a FileReference
    - then call FileReference.load() and bind the data to our image control
    - after that, apply a rotation and change the image data
    - and finally, upload it to a server.
    To change the image data I get the matrix of the displayed image, transform it, then bind the new matrix back to my old image:

        private function TurnImage():void {
            // Turn it
            var m:Matrix = _img.transform.matrix;
            rotateImage(m);
            _img.transform.matrix = m;
        }

    Now the matter is that I really don't know how to send the data as a file to my server, because it's not stored in the FileReference, and the data inside FileReference is read-only, so we can't change it or create a new one; that means I can't use .upload(). Then I tried HttpService.send, but I can't figure out how to send a file rather than MXML data.

    Read the article

  • Slow PDF upload for Confluence

    - by JPJedi
    We host a site that runs Atlassian Confluence. The site works great and is in use now, but there is one thing: it seems that when PDFs and GIFs are uploaded, the upload is slower, while smaller files upload fine. Has anyone else had an issue with uploading PDFs into Confluence? I am trying to use Fiddler to track the speed but am not having luck with that. Any information would be greatly appreciated.

    Read the article

  • Windows 7 ssh file server.

    - by Siriss
    Hello all. I have looked at the other posts, but have not quite found an answer. I have a question about Windows file sharing over SSH. I have copssh installed and it is working for Remote Desktop connections. I have port 22 forwarded on my router, etc. I connect from a Mac or PuTTY with this address: ssh -l copsshusername 3391:localhost:3389 [external ip]. That works fine. I would like to configure Windows 7 to give the SSH account I log in with access to certain shared folders; I have documents and videos and things that I would like to be able to download externally. I have done this before on Linux, and a long time ago on XP, but I cannot figure out what I am missing on Windows 7. There is a designated SSH user that copssh uses to run the service and that I use to log in as. I have googled and googled and have not found a solution that does everything I need, which is why I am turning here for ideas. I hope I am explaining this correctly. Thank you very much for your help!

    Read the article

  • Automatic upload of 10KB file to web service?

    - by Joseph Turian
    I am writing an application, similar to Seti@Home, that allows users to run processing on their home machine, and then upload the result to the central server. However, the final result is maybe a 10K binary file. (Processing to achieve this output is several hours.) What is the simplest reliable automatic method to upload this file to the central server? What do I need to do server-side to prevent blocking? Perhaps having the client send mail is simple and reliable? NB the client program is currently written in Python, if that matters.
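
    Since the question asks about the server side as well, here is a minimal sketch of one common option: have the client POST the result as a multipart upload to a plain HTTP endpoint, and have that endpoint do nothing but store the file and return. The sketch below is in PHP purely for illustration (the question does not say what the server runs), and receive.php, the 'result' field name and the storage path are all made up; any heavier processing should be queued rather than done inside the request.

        <?php
        // receive.php -- accept one small uploaded file and store it, nothing more.
        $dest = '/var/results/' . uniqid('result_', true) . '.bin';   // illustrative path
        if (isset($_FILES['result']) &&
            move_uploaded_file($_FILES['result']['tmp_name'], $dest)) {
            http_response_code(200);
            echo "OK";
        } else {
            http_response_code(400);
            echo "upload failed";
        }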

    Read the article

  • creating a custom upload progress bar

    - by michael
    Hi, I have seen all the upload progress bar plugins, widgets, etc. They all suck: they're either too bulky, with too much useless code, or they don't work. What I want to know is where I can read up on how to display a simple upload progress indicator. Most browsers show a status progress bar at the bottom of the window, but relying on just that isn't very professional when dealing with clients. How does the browser do it? I want to understand the internals of how the browser tracks the status of an upload, so that maybe I can build something using PHP & jQuery. Thanks.
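
    The browser knows the total request size and how many bytes it has written to the socket, which is information the page itself cannot see; a server-side hook is needed to report progress back. As one example of the general pattern (assuming PHP 5.4+; older setups used the APC or uploadprogress extensions instead), PHP's session.upload_progress feature records bytes_processed/content_length in the session while an upload is being received, and a small script polled by jQuery can feed that to a progress bar. The field value 'myupload' below is arbitrary.

        <?php
        // progress.php -- polled by the page (e.g. with jQuery) during the upload.
        // The upload form must contain, before its file input, a hidden field whose
        // name is the value of ini_get('session.upload_progress.name') (by default
        // PHP_SESSION_UPLOAD_PROGRESS) and whose value is 'myupload'.
        session_start();
        $key = ini_get('session.upload_progress.prefix') . 'myupload';
        $pct = 0;
        if (isset($_SESSION[$key]) && $_SESSION[$key]['content_length'] > 0) {
            $pct = round($_SESSION[$key]['bytes_processed']
                         / $_SESSION[$key]['content_length'] * 100);
        }
        header('Content-Type: application/json');
        echo json_encode(array('percent' => $pct));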

    Read the article

  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs like this:

        DocumentsService myService = new DocumentsService("");
        myService.setUserCredentials("[email protected]", password);
        DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB it results in an exception: An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll. Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full. How can I upload a large file to Google Docs? I am using Google API ver 2. Thanks.

    Read the article

  • Resize image on upload php

    - by blasteralfred
    Hi, I have a PHP script for image upload, as below:

        <?php
        $LibID = $_POST[name];
        define ("MAX_SIZE","10000");
        function getExtension($str) {
            $i = strrpos($str,".");
            if (!$i) { return ""; }
            $l = strlen($str) - $i;
            $ext = substr($str,$i+1,$l);
            return $ext;
        }
        $errors=0;
        $image=$_FILES['image']['name'];
        if ($image) {
            $filename = stripslashes($_FILES['image']['name']);
            $extension = getExtension($filename);
            $extension = strtolower($extension);
            if (($extension != "jpg") && ($extension != "jpeg")) {
                echo '<h1>Unknown extension!</h1>';
                $errors=1;
                exit();
            } else {
                $size=filesize($_FILES['image']['tmp_name']);
                if ($size > MAX_SIZE*1024) {
                    echo '<h1>You have exceeded the size limit!</h1>';
                    $errors=1;
                    exit();
                }
                $image_name=$LibID.'.'.$extension;
                $newname="uimages/".$image_name;
                $copied = copy($_FILES['image']['tmp_name'], $newname);
                if (!$copied) {
                    echo '<h1>image upload unsuccessfull!</h1>';
                    $errors=1;
                    exit();
                }
            }
        }
        ?>

    It uploads the image file to a folder "uimages" in the root. I have made changes in the HTML file for a compact display of the image by defining "max-height" and "max-width", but I want to resize the image file itself on upload. The image may have a maximum width of 100px and a maximum height of 150px, and the proportions must be constrained: the image may be smaller than these dimensions, but it should not exceed the limit. How can I make this possible? Thanks in advance :) blasteralfred..
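
    A minimal GD-based sketch of the proportional resize step (it assumes JPEG input, matching the extension check above; the function name and quality value are just illustrative). Calling it in place of copy() would store a scaled copy instead of the raw upload.

        <?php
        // Scale a JPEG so it fits inside $maxW x $maxH while keeping its aspect
        // ratio; images that already fit are written out unscaled.
        function resize_to_fit($srcPath, $dstPath, $maxW = 100, $maxH = 150) {
            list($w, $h) = getimagesize($srcPath);
            $scale = min($maxW / $w, $maxH / $h, 1);   // the 1 means: never upscale
            $newW  = (int) round($w * $scale);
            $newH  = (int) round($h * $scale);
            $src = imagecreatefromjpeg($srcPath);
            $dst = imagecreatetruecolor($newW, $newH);
            imagecopyresampled($dst, $src, 0, 0, 0, 0, $newW, $newH, $w, $h);
            imagejpeg($dst, $dstPath, 90);
            imagedestroy($src);
            imagedestroy($dst);
        }

        // e.g. instead of copy($_FILES['image']['tmp_name'], $newname):
        // resize_to_fit($_FILES['image']['tmp_name'], $newname);
        ?>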

    Read the article

  • Joomla: forms with upload and custom fields inside the administration panel

    - by Stathis
    I want a plugin for Joomla, like JForms or ChronoForms, so I can build a form that uploads videos along with other custom fields to the DB and manage them. The only catch is that I want this functionality inside the administrator console, not on a page in my site's frontend. My site does not have a login service, so I need the admin to be able to log in to the administration panel and upload and manage videos from there. Do you know of a plugin which supports this functionality? Thank you in advance.

    Read the article

  • upload multiple images, display thumbnails, manipulate image

    - by robert
    Hi, I need a little help here. I want to be able to upload multiple images; after I upload them all I want to display thumbnails, and when I click on a thumb I want to be able to rotate the image (rotate the original image, not only the thumb). All uploaded images should end up in one PHP array, in the order they were uploaded. Or, if I can, I'd like to change the order of the images; I know that is possible with jQuery. How can I code this? I started but I can't get it done. Here is my project so far: http://www.mediafire.com/?3uzzgx5onzn Any help is welcome! Thanks!
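
    For the "rotate the original, not just the thumb" part, here is a short GD sketch of the idea (the path handling, the hard-coded JPEG assumption and the overwrite-in-place behaviour are illustrative choices, not requirements); after rotating the stored original you would regenerate its thumbnail the same way it was first created.

        <?php
        // Rotate the stored original image in place by the given angle.
        function rotate_original($path, $degrees = 90) {
            $img = imagecreatefromjpeg($path);
            $rot = imagerotate($img, $degrees, 0);   // GD rotates counter-clockwise
            imagejpeg($rot, $path, 90);              // overwrite the original file
            imagedestroy($img);
            imagedestroy($rot);
        }
        ?>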

    Read the article

  • Upload Image to Facebook Objective-C

    - by boopyman
    I'm currently trying to upload an image from my Mac application to Facebook. To do this, I'd like the user to simply enter a username and password and click a button. The only issue is that Facebook doesn't actually have an API for the Mac; it only has one for iOS. This shouldn't be a problem, except that to log in you must use a web view, something I'm not too keen on doing, since I'd like the interface to be two simple text fields. I've also looked into PHFacebook, a class I found online, but it also seems to use an NSWebView. I'm wondering if there's a security issue when you use text fields; indeed, it's slightly strange that no available API offers this function! So, to conclude: is it possible, or is there an API, that lets you upload an image and lets you provide the user's credentials through simple NSStrings?

    Read the article

  • Ubuntu 12.4 - Terminal - Huge/Large text on each command line [closed]

    - by gotqn
    Possible Duplicate: Is it possible to change my terminal window prompt text? I have been using "Ubuntu 12.4" for a few days now (no previous Linux experience at all) and I have noticed that the prompt text on each command line is longer than in many examples on the net. I want to remove the "gotqn-System-Product-Name" part, because it is taking up too much space. What should I do to change this?

    Read the article

  • Android: writing a file to sdcard

    - by Sumit M Asok
    I'm trying to write a file from an HTTP POST reply to a file on the sdcard. Everything works fine until the byte array of data is retrieved. I've tried setting the WRITE_EXTERNAL_STORAGE permission in the manifest and tried many different combinations of tutorials I found on the net. All I could find was using the activity's openFileOutput("", MODE_WORLD_READABLE) method, but my app writes the file from a thread. Specifically, a thread is invoked from another thread when a file has to be written, so passing an activity object didn't work even though I tried it. The app has come a long way and I cannot change how it is currently written. Please, can someone help me? Code:

        File file = new File(bgdmanip.savLocation);
        FileOutputStream filecon = null;
        filecon = new FileOutputStream(file);
        // bgdmanip.savLocation holds the whole file's path
        byte[] myByte;
        myByte = Base64Coder.decode(seReply);
        Log.d("myBytes", String.valueOf(myByte));
        bos.write(myByte);
        filecon.write(myByte);
        myvals = x * 11024;

    seReply is a string reply from the HttpPost response. The second block of code is looped with reference to x. The file is created but remains 0 bytes.

    Read the article

  • AS3 Working With Arbitrarily Large Files

    - by Kekoa
    I am trying to read a very large file in AS3 and am having problems with the runtime just crashing on me. I'm currently using a FileStream to open the file asynchronously. This does not work (it crashes without an Exception) for files bigger than about 300MB.

        _fileStream = new FileStream();
        _fileStream.addEventListener(IOErrorEvent.IO_ERROR, loadError);
        _fileStream.addEventListener(Event.COMPLETE, loadComplete);
        _fileStream.openAsync(myFile, FileMode.READ);

    Looking at the documentation, it sounds like the FileStream class still tries to read the entire file into memory (which is bad for large files). Is there a more suitable class to use for reading large files? I really would like something like a buffered FileStream class that only loads the bytes from the file that are going to be read next. I'm expecting that I may need to write a class that does this for me, but then I would need to read only a piece of the file at a time. I'm assuming that I can do this by setting the position and readAhead properties of the FileStream to read a chunk out of the file at a time. I would love to save some time if a class like this already exists. Is there a good way to process large files in AS3 without loading the entire contents into memory?

    Read the article

  • How to create a large resumable download from a secured location in .NET

    - by Kelvin H
    I need to preface this by saying I'm not a .NET coder at all, but to get partial functionality I modified a TechNet chunkedfilefetch.aspx script that uses a chunked data-reading and streamed-writing method of file transfer, which got me halfway there.

        iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _
            IO.FileAccess.Read, IO.FileShare.Read)
        dataToRead = iStream.Length
        Response.ContentType = "application/octet-stream"
        Response.AddHeader("Content-Length", file.Length.ToString())
        Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload)
        ' Read and send the file 16,000 bytes at a time. '
        While dataToRead > 0
            If Response.IsClientConnected Then
                length = iStream.Read(buffer, 0, 16000)
                Response.OutputStream.Write(buffer, 0, length)
                Response.Flush()
                ReDim buffer(16000) ' Clear the buffer '
                dataToRead = dataToRead - length
            Else
                ' Prevent infinite loop if user disconnects '
                dataToRead = -1
            End If
        End While

    This works great on files up to 2GB and is fully functional now, but there is one problem: it doesn't allow resume. I took the original code, called it fetch.aspx, and pass an orderNUM through the URL: fetch.aspx&ordernum=xxxxxxx. It then reads the filename/location from the database according to the order number and chunks it out from a secured location NOT under the webroot. I need a way to make this resumable; by the nature of the internet and large files, people always get disconnected and would like to resume where they left off. But any resumable articles I've read assume the file is within the webroot, e.g. http://www.devx.com/dotnet/Article/22533/1954 -- a great article that works well, but I need to stream from a secured location. I'm not a .NET coder at all; at best I can do a bit of ColdFusion. If anyone could help me modify a handler to do this, I would really appreciate it.
    Requirements:
    - I have a working fetch.aspx script that functions well and uses the above code snippet as the base for the streamed downloading.
    - Download files are large, around 600MB, and are stored in a secured location outside of the webroot.
    - Users click on the fetch.aspx link to start the download, and would therefore be clicking it again if it were to fail.
    - If the extension is .ASPX and the file being sent is an AVI, clicking on it would completely bypass an IHTTP handler mapped to the .AVI extension, so this confuses me.
    - From what I understand, the browser will read and match the ETag value and file modified date to determine that they are talking about the same file, and then a subsequent Accept-Range is exchanged between the browser and IIS. Since this dialog happens with IIS, we need a handler to intercept and respond accordingly, but clicking on the link sends the request to an ASPX file while the handler would need to be registered for AVI files -- which also confuses me.
    - If there were a way to get the initial HTTP request headers containing the ETag and Accept-Range into the normal .ASPX file, I could read those values and, if they exist, start chunking at that byte value somehow. But I couldn't find a way to pass the HTTP request headers along, since they seem to get lost at the IIS level.
    - The orderNum passed in the URL string is unique and could be used as the ETag: Response.AddHeader("ETag", request("ordernum"))
    - Files need to be resumable and chunked out due to size.
    - File extensions are .AVI, so a handler could be written around it.
    - IIS 6.0 Web Server.
    Any help would really be appreciated. I've been reading and reading and downloading code, but none of the examples given meet my situation, with the original file being streamed from outside of the webroot. Please help me get a handle on these HTTP handlers :)

    Read the article

  • PHP file_put_contents File Locking

    - by hozza
    The Scenario: You have a file with a string (an average sentence's worth) on each line. For argument's sake, let's say this file is 1MB in size (thousands of lines). You have a script that reads the file, changes some of the strings within the document (not just appending but also removing and modifying some lines) and then overwrites all the data with the new data.
    The Questions:
    Does 'the server' (PHP, the OS, httpd, etc.) already have systems in place to stop issues like this (reading/writing halfway through a write)?
    i. If it does, please explain how it works and give examples or links to relevant documentation.
    ii. If not, are there things I can enable or set up, such as locking a file until a write is completed and making all other reads and/or writes fail until the previous script has finished writing?
    My Assumptions and Other Information:
    The server in question is running PHP and Apache or Lighttpd. If the script is called by one user and is halfway through writing to the file and another user reads the file at that exact moment, the user who reads it will not get the full document, as it hasn't been written yet. (If this assumption is wrong please correct me.) I'm only concerned with PHP writing and reading a text file, and in particular the functions "fopen"/"fwrite" and mainly "file_put_contents". I have looked at the "file_put_contents" documentation but have not found the level of detail or a good explanation of what the "LOCK_EX" flag is or does. The scenario is an EXAMPLE of a worst-case scenario where I would assume these issues are more likely to occur, due to the large size of the file and the way the data is edited. I want to learn more about these issues and don't want or need answers or comments such as "use mysql" or "why are you doing that", because I'm not doing that; I just want to learn about file reading/writing with PHP and don't seem to be looking in the right places/documentation, and yes, I understand PHP is not the perfect language for working with files in this way...
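
    As a sketch of how LOCK_EX is normally used (not a guarantee about any particular server setup): LOCK_EX makes file_put_contents() take an exclusive flock() on the file for the duration of the write, and readers cooperate by taking a shared lock before reading. On most platforms these locks are advisory, so a reader that never asks for a lock (a plain file_get_contents(), for example) is not blocked; both sides have to opt in.

        <?php
        // Writer: hold an exclusive lock while the new contents are written.
        file_put_contents('data.txt', $newContents, LOCK_EX);

        // Reader: take a shared lock so it waits out any in-progress write.
        $fp = fopen('data.txt', 'r');
        if ($fp && flock($fp, LOCK_SH)) {
            $contents = stream_get_contents($fp);
            flock($fp, LOCK_UN);
        }
        if ($fp) {
            fclose($fp);
        }
        ?>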

    Read the article

  • How to add an SSH user to my Ubuntu 12 server to upload PHP files

    - by user229209
    I have an Ubuntu 12 VPS and wanted to create a user account to upload and download my PHP code. So, logged in as root, I created a user "chris" and then created a directory /var/www/chris. I want "chris" to be able to upload files to the /var/www/chris directory and run them. Permissions for the chris dir look like this:

        drwxrwxr-x 2 root chris 4096 Aug 20 03:35 chris

    As root I created a sample file called abc.php and put it in the chris dir. It worked fine when I tested it in a browser. I then logged in as chris and uploaded a file called 1234.php. That did not work; I just got a blank PHP page. The code was identical in both files, so it is not the code. The permissions now look like this:

        -rw-r--r-- 1 root chris 59 Aug 20 03:34 1234.php
        -rw-r--r-- 1 root root  49 Aug 20 03:21 abc.php

    How do I allow the "chris" user to upload files and get them to work?

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.
    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).
    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results… we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes (this will be detailed separately).
    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here.
    We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.
    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post.
    For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you are likely aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps yield a sufficiently balanced set of results.
    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see the linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:
    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and doing a parameter sweep on the block size, including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.
    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as is supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB.
    (click chart for full size image)
    The chart above illustrates some interesting points about the results:
    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, the increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.
    (click chart for full size image)
    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.
    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.
    Summary
    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.
    Related Resources
    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests: Experiment Metadata; Experiment Datasets (2KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB uploads); Raw Data
    - OData feeds of raw data from blocked/parallelized transfer tests: Experiment Metadata; Experiment Datasets; Raw Data (256KB, 512KB, 1MB, 2MB, and 4MB blocks)
    - Excel worksheet showing summarizations and comparisons

    Read the article

  • Computing MD5SUM of large files in C#

    - by spkhaira
    I am using the following code to compute the MD5SUM of a file:

        byte[] b = System.IO.File.ReadAllBytes(file);
        string sum = BitConverter.ToString(new MD5CryptoServiceProvider().ComputeHash(b));

    This works fine normally, but if I encounter a large file (~1GB) - e.g. an ISO image or a DVD VOB file - I get an Out of Memory exception. However, I am able to compute the MD5SUM of the same file in cygwin in about 10 seconds. Please suggest how I can get this to work for big files in my program. Thanks.
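
    The usual remedy is to hash the file incrementally instead of loading it all into memory; in .NET, HashAlgorithm.ComputeHash has an overload that takes a Stream, so passing a FileStream opened on the file avoids ReadAllBytes entirely. To keep this page's sketches in one language, here is the same incremental idea expressed in PHP, purely to illustrate the pattern rather than the asker's stack; PHP's md5_file() does the equivalent in a single call.

        <?php
        // Hash a file of any size in fixed-size chunks so memory use stays flat.
        function md5_of_large_file($path, $chunkBytes = 8192) {
            $ctx = hash_init('md5');
            $fp  = fopen($path, 'rb');
            while (!feof($fp)) {
                hash_update($ctx, fread($fp, $chunkBytes));
            }
            fclose($fp);
            return hash_final($ctx);   // hex digest, same result as md5_file($path)
        }
        ?>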

    Read the article

  • Multiple file descriptors to the same file, C

    - by Gigi
    I have a multithreaded application that is opening and reading the same file (not writing). I am opening a different file descriptor for each thread (but they all point to the same file). Each thread then reads the file and may close it and open it again if EOF is reached. Is this OK? If I call fclose() on one file descriptor, does it affect the other file descriptors that point to the same file?

    Read the article

  • Uploadify refuses to upload WMV, FLV and MP4 files - SOLVED

    - by Jon Winstanley
    The Uploadify plugin for jQuery seems very good and works for most file types. However, it allows me to upload all file types apart from the ones I need, namely .WMV, .FLV and .MP4; uploads of any other type work. I have already tried changing the fileExt parameter and also tried removing it altogether. I have tested in Google Chrome, IE7 and Firefox, and none of them work for these file types. I have a ton of local projects already and uploading is not an issue in any other project; I even use the same example files (this is the first time I have used Uploadify). Is there a known reason for this behaviour? EDIT: I have found the issue. I had forgotten to add my usual .htaccess file to the example project.

    Read the article

  • FTP zip upload and unpack

    - by DR.GEWA
    Hi. Whenever I upload finished web sites or projects, I want to do the same thing: make a zip file, upload that one file, and then extract it on the server with default permissions, let's say 755 for folders and 664 for files. With cPanel hosting that's OK, I can do it via the file manager, but for hosting without cPanel I can't. Maybe someone can give a hint how?
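
    When there is no control panel, one common workaround is a small one-off PHP script uploaded next to the archive and then deleted. A rough sketch, assuming PHP with the ZipArchive extension is available on the host (the file name archive.zip is just a placeholder):

        <?php
        // unpack.php -- extract archive.zip into this directory, then apply 0755 to
        // directories and 0664 to files. Delete this script once it has run.
        $zip = new ZipArchive();
        if ($zip->open(__DIR__ . '/archive.zip') === true) {
            $zip->extractTo(__DIR__);
            $zip->close();
        }
        $it = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator(__DIR__, FilesystemIterator::SKIP_DOTS),
            RecursiveIteratorIterator::SELF_FIRST
        );
        foreach ($it as $item) {
            chmod($item->getPathname(), $item->isDir() ? 0755 : 0664);
        }
        ?>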

    Read the article

  • Splitting a file before upload?

    - by Yevgeniy Brikman
    On a webpage, is it possible to split large files into chunks before the file is uploaded to the server? For example, split a 10MB file into 1MB chunks, and upload one chunk at a time while showing a progress bar? It sounds like JavaScript doesn't have any file manipulation abilities, but what about Flash and Java applets? This would need to work in IE6+, Firefox and Chrome. Update: forgot to mention that (a) we are using Grails and (b) this needs to run over https.

    Read the article
