Search Results

Search found 89549 results on 3582 pages for 'large file support'.


  • Uploading many large files to a remote server

    - by TiernanO
    I am in the process of creating an offsite backup and need to do an initial load of data. Currently that's about 400GB, give or take 10GB or so... The backup system produces files that are about 4GB each, plus some other, smaller related files. So, I need to transfer all 400-ish gigs to a remote server, but how? What is the best method? I have full remote access to the server, so I can install anything I need to install. There are Windows, Linux and Solaris VMs running on the box itself, so any of those can be used there, and I have Windows and Linux at home. I have 2 internet connections in the house, 10Mb/s upload on each, so something that could split the transfer across connections would be handy (kind of like GetRight, but in reverse... PutRight?).
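
    One way to get the "PutRight" effect with stock tools is to run two rsync sessions side by side over SSH, each taking half of the file list. Below is a rough Python sketch of that idea; the host and paths are placeholders, and steering each process out a different uplink is assumed to be handled separately (e.g. by policy routing on the sending side). rsync's --partial flag lets an interrupted 4GB file resume instead of restarting.

        import subprocess
        from pathlib import Path

        SRC = Path("/backups")          # hypothetical local backup directory
        DEST = "user@remote:/backups/"  # hypothetical remote target

        files = sorted(p.name for p in SRC.iterdir() if p.is_file())
        halves = (files[0::2], files[1::2])  # interleave so the big files spread out

        procs = []
        for chunk in halves:
            # --files-from=- makes rsync read the list of names from stdin
            p = subprocess.Popen(
                ["rsync", "-av", "--partial", "--files-from=-", str(SRC), DEST],
                stdin=subprocess.PIPE, text=True)
            p.stdin.write("\n".join(chunk) + "\n")
            p.stdin.close()
            procs.append(p)

        for p in procs:
            p.wait()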

    Read the article

  • Getting the most out of My Oracle Support

    - by JanSyss
    Have you often wondered where to go to find the latest information about My Oracle Support? Or are you a new user who simply needs help getting started with My Oracle Support? The My Oracle Support User Resource Center provides easy access to what's new, help and training for commonly used features, frequently asked questions, and more. Here you will find: My Oracle Support Speed Training (each module is less than 10 minutes); Working Effectively with Support best practices (get the most out of your support experience); and the Advisor Webcast Program (product-based training with an interactive forum to ask questions). Additionally, there are many ways to stay informed about My Oracle Support: follow us on Twitter by subscribing to myoraclesupport; set up "Hot Topic" notifications once you log into My Oracle Support (Settings -> Hot Topics); and check out the "Stay Informed" content on the Get Proactive page for your product.

    Read the article

  • Delete temp file during finally vs delete output file during catch

    - by Russell
    This is in Java 6. I've seen more than once that people create a temp file, do something with it, then rename it to the output file. Everything is wrapped in a try-finally block, where the temp file is deleted in the finally clause in case something goes wrong in between:

        try {
            // do something with tempFile
            tempFile.renameTo(outputFile);
        } finally {
            if (tempFile.exists()) tempFile.delete();
        }

    I was wondering what the benefits are of doing that instead of writing to the output file directly and deleting it in case of exceptions:

        try {
            // do something with outputFile
        } catch (Exception e) {
            if (outputFile.exists()) outputFile.delete();
        }

    My guess is that deleting temp files in finally benefits me when the try block can throw many kinds of exceptions. Is my guess right? What else?
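
    For comparison, here is the temp-then-rename pattern sketched in Python rather than Java (a minimal sketch, not the poster's code). It shows one concrete benefit the pattern has over writing the output file directly: until the final rename, no reader can ever observe a half-written output file, and os.replace is atomic on POSIX when both paths are on the same filesystem.

        import os
        import tempfile

        def write_atomically(output_path, data):
            # Create the temp file in the destination directory so the final
            # rename stays on one filesystem (a rename across filesystems is
            # a copy, not an atomic swap).
            dir_name = os.path.dirname(os.path.abspath(output_path))
            fd, tmp_path = tempfile.mkstemp(dir=dir_name)
            try:
                with os.fdopen(fd, "wb") as f:
                    f.write(data)
                os.replace(tmp_path, output_path)  # atomic overwrite
            finally:
                # Only true if something failed before the rename happened.
                if os.path.exists(tmp_path):
                    os.remove(tmp_path)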

    Read the article

  • [c++] How to create a std::ofstream to a temp file?

    - by dehmann
    Okay, mkstemp is the preferred way to create a temp file in POSIX. But it opens the file and returns an int, which is a file descriptor. From that I can only create a FILE*, but not an std::ofstream, which I would prefer in C++. (Apparently, on AIX and some other systems, you can create an std::ofstream from a file descriptor, but my compiler complains when I try that.) I know I could get a temp file name with tmpnam and then open my own ofstream with it, but that's apparently unsafe due to race conditions, and results in a compiler warning (g++ v3.4 on Linux):

        warning: the use of `tmpnam' is dangerous, better use `mkstemp'

    So, is there any portable way to create an std::ofstream to a temp file?
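
    The same descriptor-wrapping move is compact to show in Python (a sketch only; the question itself remains about C++ ofstream): mkstemp creates and opens the file in one atomic step, avoiding the tmpnam race, and the raw descriptor is then wrapped in a stream object instead of reopening the file by name.

        import os
        import tempfile

        fd, path = tempfile.mkstemp(suffix=".tmp")
        with os.fdopen(fd, "w") as out:   # wrap the fd; no second open by name
            out.write("scratch data\n")
        os.remove(path)                   # the caller owns cleanup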

    Read the article

  • Program won't save output to a file, and prints out a string oddly (C++, Linux)

    - by Predictability
    I'm trying to write a password program: the user enters a password, the program saves it to a file in /tmp/, and then it prints the password back out (for me, so I can find bugs). I have included the "string" header and declared the password as a string, but when I print it, the output looks like this: 0x7fffb55baac0password // <-- that's the password I entered. It prints what looks like hex (a memory address, I think) in front of the password, and it won't save the password to the file in /tmp/ that I want it to (or any file in /tmp/). Here's the source code: http://codepad.org/3aamAv7R Thank you for all the help you guys have given me so far.

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is: word [tab] frequency [new line]. The algorithm for sorting is:

    1. Read some of the file, filtering for purely alphabetic words.
    2. Once you have X alphabetic words, call Collections.sort and write the result to a file.
    3. Repeat until you have finished reading the file.
    4. Start reading two sorted files, comparing line by line for the word with the higher frequency, writing to a new file at the same time so as not to load much into memory.
    5. Repeat until all files are merged into one large file.

    Right now I've divided the large file into smaller ones (each sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about it. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, but I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. If I compare them line by line I get file3 = 9,8,8,6,8,5,8,3,8,1, which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks. edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
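
    The trace goes wrong because after taking a line from one file, the next comparison must be against that same file's following line: a merge always compares the current front line of each run and advances only the run it consumed, so the 8 from file1 is held back until every 8 from file2 has been written. Python's heapq.merge implements exactly this k-way merge while keeping only one line per file in memory; a sketch (in Python rather than Java, assuming each chunk file is already sorted by descending frequency in the word-tab-frequency format described above):

        import heapq

        def parse(line):
            word, freq = line.rstrip("\n").split("\t")
            return word, int(freq)

        def merge_chunks(chunk_paths, out_path):
            files = [open(p, encoding="utf-8") for p in chunk_paths]
            try:
                streams = [map(parse, f) for f in files]  # lazy, one line at a time
                with open(out_path, "w", encoding="utf-8") as out:
                    # merge() repeatedly takes the largest head among all runs
                    for word, freq in heapq.merge(*streams,
                                                  key=lambda wf: wf[1],
                                                  reverse=True):
                        out.write(word + "\t" + str(freq) + "\n")
            finally:
                for f in files:
                    f.close()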

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi guys, versions I am running (basically the latest of everything): PHP 5.3.1, MySQL 5.1.41, Apache 2.2.14, OS: CentOS (latest).

    Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to, jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers, including Windows 32-bit, CentOS and Mac, among others. Some files are also stored on employees' desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now, because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search for and locate the correct document(s) effectively; for this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing.

    As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes customer profile management, order and job tracking tools, job/sale creation and management modules, etc., and at the moment any file that is needed at a customer profile level (driver's licence, credit authority, etc.) or at a job/sale level (contracts, voice signatures, etc.) can be uploaded to the server, where it sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file management model. The structure appears as such:

        drivers_license
        |- DL_123.jpg
        voice_signatures
        |- VS_123.wav
        |- VS_4567.wav
        contracts

    So the files are uploaded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is:

        TABLE: FileUploads
        FileID
        CustomerID (the customer the file belongs to; they all have this)
        JobID/SaleID (the id of the associated job/sale, if any)
        FileSize
        FileType
        UploadedDateTime
        UploadedBy
        FilePath (the directory path the file is stored in)
        FileName (current name of the uploaded file; a combination of CustomerID and JobID/SaleID, if applicable)
        FileDescription
        OriginalFileName (original name of the source file when uploaded, including extension)

    As you can see, the file is linked to the database by the file name. When I want to provide a customer's files for download to a user, all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require; with the FilePath and FileName I can provide the link for download: http... server / FilePath / FileName

    There are a number of problems with this method: storing files in this "database unconscious" environment means data integrity is not kept (if a record is deleted, the file may not be deleted also, or vice versa); files are strewn all over the place, on different servers, computers, etc.; the file name is the ONLY thing matching the binary to the database and to the customer profile and records; etc., etc. There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . There is also an interesting article here: sietch.net/ViewNewsItem.aspx?NewsItemID=124 .

    SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this. I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article at this link: dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64kb chunks, storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea, since it alleviates pressure on the server's memory; instead of loading an entire 100mb file into RAM and then sending it to the client, it is done 64kb at a time. I have tried this (and updated his scripts) and it is totally successful, in a very small frame of testing.

    So if you are in agreement that this method is a viable, stable and robust long-term option for storing moderately large files (1kb to a couple hundred megs), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "file management" PHP script that provides an interface for managing files stored in the file system, and converting it to manage files stored in the database. If there is already software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
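
    As a rough illustration of that 64kb-chunk scheme, here is a sketch against SQLite rather than MySQL, purely so it runs self-contained; the table and column names are invented, not taken from the dreamwerx scripts. Storing and streaming both touch only one chunk at a time, so memory use stays flat regardless of file size.

        import sqlite3

        CHUNK = 64 * 1024  # 64kb pieces, as in the article's approach

        db = sqlite3.connect("files.db")
        db.execute("""CREATE TABLE IF NOT EXISTS file_chunks (
                          file_id INTEGER,
                          seq     INTEGER,
                          data    BLOB,
                          PRIMARY KEY (file_id, seq))""")

        def store(file_id, path):
            with open(path, "rb") as f:
                for seq, piece in enumerate(iter(lambda: f.read(CHUNK), b"")):
                    db.execute("INSERT INTO file_chunks VALUES (?, ?, ?)",
                               (file_id, seq, piece))
            db.commit()

        def stream(file_id):
            # Yield the file back 64kb at a time, in order.
            cur = db.execute("SELECT data FROM file_chunks "
                             "WHERE file_id = ? ORDER BY seq", (file_id,))
            for (piece,) in cur:
                yield piece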

    Read the article

  • Get More Value From Your Oracle Premier Support Investment

    - by Get Proactive Customer Adoption Team
    The Return on Investment in Support Training

    I'm a typical software user. I've been using spreadsheets almost daily for the past 10 years or so. I know how to enter simple formulas, format cells, import files, and I can sort and filter. Sometimes I even use a pivot table. I never attended training; I learnt everything I know on the fly. Sometimes it was intuitive and easy, other times I had to spend minutes and even hours searching for a solution. Yet when I see what some other people can do with their spreadsheets, I know I'm utilizing maybe 15% of the functionality. Pity; one day I really have to sign up for training. Why haven't I done it yet? Ah, you know, I'm a busy person, I have work to do. And if I need to use a feature that I am unfamiliar with, I'll spend time on it only when I really need it. Now wait. When I recall how much time I spent trying to figure out how things work compared to the time I spent doing productive work, I realize it was not insignificant. I'm unable to sum up all the time I spent 'learning' on the fly, but I'm sure it's been days or even weeks. And after all this time, I've mastered 15% of its features. If only I had attended training years ago, that investment would have paid back 10 times over!

    Working with My Oracle Support is no different. Our customers typically use simple search, create service requests, and download patches. They think they know how to use My Oracle Support. And they're right: they know something, but often they're utilizing only a fragment of My Oracle Support's potential. For the investment that has been made, using only a small subset of the capabilities offered in My Oracle Support leaves value on the table. There is much more available: dozens of diagnostic tools and proactive health checks will keep verifying your Oracle environments against best practices that Oracle gathers every day thanks to our comprehensive knowledge management process; automated patch recommendations will help prevent known issues; and upgrade planning and more is included in My Oracle Support. Why are you not utilizing all of these best practices, capabilities and tools? Is it because you don't have time to invest 2-3 hours to learn about the features? Simply because you think you can learn on the fly, like I thought I could? Does learning on the fly how to properly use the Service Request escalation process when you already have a critical issue sound like a good idea?

    My advice is: invest your time now to learn how My Oracle Support can help you prevent issues on your systems. Learn how to find answers faster and resolve problems more efficiently. Understand how to properly complete a service request. Invest in Support training, offered at no additional cost to Oracle Premier Support customers. It will pay back quicker than you think. It will bring you more value than you think. Discover your advantage with Oracle Premier Support's Proactive Portfolio.

    Read the article

  • perl + shell script: remove all non-unique (FILE NAME) lines except the first unique one

    - by yael
    Hi, I have the following file. How can I remove, with sed, all duplicate FILE NAME lines, keeping only the first occurrence of each unique FILE NAME? For example, I need to keep only the first occurrence of each of these FILE NAME lines:

        FILE NAME: /dir1/dir2/dir3/dir4/dir5/file
        FILE NAME: /dirA/dirB/dirC/dirD/dirE/file

    The file:

        FILE NAME: /dir1/dir2/dir3/dir4/dir5/file
        PARAMETER NAME: blablabla
        TARGET FILE: 12
        SOURCE FILE: 565
        FILE NAME: /dir1/dir2/dir3/dir4/dir5/file
        PARAMETER NAME: blablabla
        TARGET FILE: 18
        SOURCE FILE: 552
        FILE NAME: /dir1/dir2/dir3/dir4/dir5/file
        PARAMETER NAME: blablabla
        TARGET FILE: 14
        SOURCE FILE: 559
        FILE NAME: /dirA/dirB/dirC/dirD/dirE/file
        PARAMETER NAME: blablabla
        TARGET FILE: 134
        SOURCE FILE: 344
        FILE NAME: /dirA/dirB/dirC/dirD/dirE/file
        PARAMETER NAME: blablabla
        TARGET FILE: 13
        SOURCE FILE: 445
        FILE NAME: /dirA/dirB/dirC/dirD/dirE/file
        PARAMETER NAME: blablabla
        TARGET FILE: 13
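
    The question asks for sed, but the rule itself - pass every line through except FILE NAME lines, and let only the first occurrence of each distinct FILE NAME line through - is easy to state as a small filter. A sketch in Python rather than sed (reading stdin, writing stdout; an equivalent awk/sed one-liner would encode the same logic):

        import sys

        seen = set()
        for line in sys.stdin:
            if line.startswith("FILE NAME:"):
                if line in seen:
                    continue        # drop a repeated FILE NAME line
                seen.add(line)      # remember the first occurrence
            sys.stdout.write(line)  # everything else passes through untouched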

    Read the article

  • How to pass a resource file string to a function in an external .js file

    - by hima
    Hi, there is a Messages.resx file under App_GlobalResources in my solution. Every time I create a string and give it a value, a property like this is generated automatically:

        internal static string Alert_EnterUserName {
            get {
                return ResourceManager.GetString("Alert_EnterUserName", resourceCulture);
            }
        }

    Can anybody guide me on how to use this string in a function that lives in a separate .js file? Thanks in advance

    Read the article

  • File format for animated scene

    - by stephelton
    I've got a custom OpenGL based rendering engine and I'd like to add support for cinema-type scene animation. The artist that is helping me uses primarily 3DSMax. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DSMax itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs.) The way I view animated scene data is basically a bunch of references to [animated] models with keyframe-based matrices describing their orientation over time. And probably some special camera stuff to handle perspective. I might also want some event type stuff for adding/removing objects. Is this a sane concept?
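
    For what it's worth, the "model references plus keyframed matrices" concept is small enough to sketch as a JSON document. Everything below, field names included, is invented for illustration and not taken from any existing interchange format:

        import json

        scene = {
            "models": [{"id": "hero", "file": "hero.mesh"}],
            "tracks": [{
                "target": "hero",
                # keyframes: time in seconds plus a column-major 4x4 matrix
                "keys": [
                    {"time": 0.0, "matrix": [1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1]},
                    {"time": 2.5, "matrix": [1,0,0,0, 0,1,0,0, 0,0,1,0, 4,0,0,1]},
                ],
            }],
            "camera": {
                "fov": 60.0,
                "keys": [{"time": 0.0, "eye": [0, 2, 10], "look": [0, 0, 0]}],
            },
            # add/remove events, as described in the question
            "events": [{"time": 2.5, "op": "remove", "target": "hero"}],
        }

        with open("scene.json", "w") as f:
            json.dump(scene, f, indent=2)

    Existing formats such as COLLADA cover this ground but are far broader; for simple cinematic needs, a custom schema of roughly this shape may be all that is required.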

    Read the article

  • Copying a file from client to server over TCP, preserving the file name and size

    - by user3686570
    This is a client and server program where the client sends a file to the server to save. The issue is that the file is not saved on the server with the same file name and the same file size. Please help me with this.

    Client program:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(("localhost", 9999))
        path = raw_input("Please enter the complete PATH of your file : ")
        f = open(path, "rb")
        l = f.read(4096)          # read and send in fixed-size pieces
        while l:
            s.sendall(l)
            l = f.read(4096)
        f.close()
        s.close()

    Server program:

        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind(("localhost", 9999))
        s.listen(10)
        i = 1
        while True:
            conn, address = s.accept()   # keep the listening socket separate
            print address
            f = open(str(i), 'wb')       # open in binary; name is just a counter
            i = i + 1
            l = conn.recv(4096)
            while l:                     # an empty string means the client closed
                f.write(l)
                l = conn.recv(4096)
            f.close()
            conn.close()
            print 'File received successfully'

    Thanks in advance
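
    The name and size do not survive because the stream carries only raw bytes: the server never learns the filename, and has no way to know where the file ends other than the connection closing. A minimal fix is to frame the transfer, sending the name length, the name, and the file size ahead of the data. A sketch (function names and field layout invented for illustration, not a standard protocol):

        import os
        import socket
        import struct

        def send_file(sock, path):
            name = os.path.basename(path).encode("utf-8")
            size = os.path.getsize(path)
            # header: 2-byte name length, 8-byte file size, then the name
            sock.sendall(struct.pack("!HQ", len(name), size) + name)
            with open(path, "rb") as f:
                for piece in iter(lambda: f.read(4096), b""):
                    sock.sendall(piece)

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                part = sock.recv(n - len(buf))
                if not part:
                    raise RuntimeError("peer closed early")
                buf += part
            return buf

        def recv_file(sock):
            name_len, size = struct.unpack("!HQ", recv_exact(sock, 10))
            # trusting a client-supplied name is unsafe outside a sketch
            name = recv_exact(sock, name_len).decode("utf-8")
            with open(name, "wb") as f:
                remaining = size
                while remaining:
                    piece = sock.recv(min(4096, remaining))
                    if not piece:
                        raise RuntimeError("peer closed early")
                    f.write(piece)
                    remaining -= len(piece)
            return name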

    Read the article

  • How to limit concurrent file access on a Samba share?

    - by JPbuntu
    I have an Ubuntu 12.04 file server running Samba. There are 6 Windows machines that access the server, as well as two people who occasionally access files remotely. The problem I am having is that the CAD/CAM software we use doesn't seem to request file locks, meaning that if two people open a file at the same time, the first person to close the file will get their changes overwritten if the second person saves the file. I tried setting strict locking = yes in smb.conf, but this doesn't seem to have any effect. File locking with Excel works fine, so I know that Samba honours file locks... if they were put on the file in the first place. Is there a way (either in Samba or Ubuntu) to only allow one user to have a file open at a time? If not, does anyone have any suggestions for managing a problem like this?

    Read the article

  • File Access problems with SLES 10 SP2 OES2 SP1

    - by Blackhawk131
    We have identified a couple of repeatable, demonstrable scenarios with unexplained rejected folder access on our servers for Mac users. Hopefully, this can be presented to Novell for a solution.

    What we did to demonstrate scenario 1:
    1. Set up a PC and Mac side by side.
    2. Log in to our server and open up to a central location on both Mac and PC.
    3. On the PC, in that central location, create a folder.
    4. On the Mac, in that central location, drag the created folder to the Mac desktop. This should work fine, no problem.
    5. On the PC, rename that folder.
    6. On the Mac, drag a file to that renamed folder. This should error with the following message:
       a. "You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?"
       b. Select skip; the result is that the filename is copied to the location with a zero or small byte size. Try opening it and you get a "file is corrupted" error message.

    What we did to demonstrate scenario 2:
    1. Set up a PC and Mac side by side.
    2. Log in to our server and open up to a central location on both Mac and PC.
    3. On the PC, in that central location, create a folder, then create a subfolder.
    4. Copy some content into the subfolder.
    5. On the Mac, in that central location, drag the created top-level folder to the Mac desktop. This should work fine, no problem.
    6. On the PC, rename that subfolder.
    7. On the Mac, drag that top-level folder to the Mac desktop. This should error on the Mac with the following:
       a. "The operation cannot be completed because you do not have sufficient privileges for..."
    8. On the Mac, if you open that subfolder you can see the file copied in step 4 above, but you cannot open it; you get the following message if you try:
       a. "There was an error opening this document. You do not have permission to open this file."
    9. On the PC, drag some content into the top-level folder.
    10. On the Mac, you can open that file directly from the server or copy it locally, no problem; however, the subfolder is still corrupted, or locked, whichever it is.
    11. On the PC, rename the top-level folder.
    12. On the Mac, that same file just opened in step 10 above is now not accessible; you get the following message:
       a. "The document could not be opened."

    I have observed some variances in the above. For instance, a change on the PC side may take a moment before you can observe or act on it from the Mac side - kind of like the server is slow to respond. Also, the error message may vary. However, the key point is that once a folder, or subfolder, gets renamed by a PC, the Mac problems commence. The workaround is to create a new folder from a PC and copy the contents of the corrupted folder to the new folder, and not rename the folder. This has to be done on a PC because the corrupted folder is not accessible by a Mac user.

    Another problem that dovetails with the above is that we know certain characters are not allowed in PC folder or file names. If a Mac user creates a folder with a slash in the name, from the PC the user does not see that slash in the name. As soon as the PC user copies a file to that folder, the Mac user is locked out of that folder and will get the following error message:
    - "Sorry, the operation could not be completed because an unexpected error occurred. (Error code -50)"

    In addition to the above-mentioned character issue with folders, the problem is more evil with filenames. If, for example, you create a file with a slash in the filename on a Mac and copy it to the server, you will get the following error message:
    - "You cannot copy some of these items to the destination because their names are too long for the destination. Do you want to skip copying these items and continue copying the other items?"
    Select either the Stop or Skip button; it does not matter which is selected. The file name gets copied to the destination location at a reduced size. Depending on the file type, the icon associated with the file may or may not be present. Furthermore, if you open that file on the server you will get the following message:
    - "Couldn't open the file. It may be corrupt or a file format that isn't recognized."
    From the user's perspective, if they are not observant of the icon or file size, they may disregard the error message and think their file has copied as intended. Only later do they discover the file is corrupt when they open it.

    I want to make a note on this problem: it is the PC causing the issue. You can change folder and file names all day on a Mac without a problem, as long as a forbidden character is not involved. Once you change the file name or folder name from a PC, the entire folder structure from that level down is corrupted, and it has to be resolved from a PC by creating a new folder and copying the contents into it, as stated above. Is something not configured correctly?

    SUSE Linux Enterprise Server 10 (x86_64)
    VERSION = 10
    PATCHLEVEL = 2
    LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64"
    Novell Open Enterprise Server 2.0.1 (x86_64)
    VERSION = 2.0.1
    PATCHLEVEL = 1
    BUILD

    Note: We use Novell clients on all Windows systems to connect to the servers for file access and network storage. We use AFP to allow OS X systems to connect to the servers.

    Read the article


  • Compress Large Video Files with DivX / Xvid and AutoGK

    - by DigitalGeekery
    Have you ever recorded home video on a camcorder only to find the video size is enormous? What if you wanted to share a video clip on YouTube or another video sharing site, but the file size was bigger than the maximum upload size? Today we'll look at a way to compress certain video files, such as MPEG and AVI, with Auto Gordian Knot (AutoGK). AutoGK is a free application that runs on Windows. It supports Mpeg1, Mpeg2, Transport Streams, Vobs, and virtually any codec used for an .AVI file. AutoGK accepts as input the following file types: MPG, MPEG, VOB, VRO, M2V, DAT, IFO, TS, TP, TRP, M2T, and AVI. Files are output as .AVI files and are converted using the DivX or XviD codecs.

    Installing and Using AutoGK

    Download and install AutoGK (link below), then open it. You'll need to navigate a few wizard screens, but you can just accept the defaults. Choose your video file by clicking on the folder to the right of the Input file text box, then browse for your video file and click "Open." For this example, we'll be working with an .AVI file that's 167MB in size. The output file is placed in the same directory as the input file by default, but you can change this if you choose. If the input file is also .AVI, AutoGK appends an _agk to the output file name so that the original is not overwritten. Next, you'll see any audio tracks listed; clear the check box if you'd like to remove an audio track. You can choose one of the predefined size options, or select a custom size in MB or a target quality as a percentage. For our example, we'll be compressing our 167MB file to 35MB. Click on Advanced Settings. Here you can choose your codec, if you have a preference, as well as the output resolution and output audio. If you'd like to use the DivX codec, you'll need to download and install it separately (see link below). Typically you'll want to keep the defaults; click "OK." Now you're ready to add your file conversion job to the job queue: click Add Job. You can add multiple file conversions to the job queue and convert them in one batch. Click Start to begin the conversion process. You'll be able to see the progress in the log window on the bottom left, and when the conversion is complete you'll see "Job finished" and the total time there. Check your output file to see its compressed size, and test your video just to make sure the output quality is satisfactory.

    Note: Conversion times can vary greatly depending on the size of the file and your computer hardware. Files that are several GBs in size may take several hours to compress. AutoGK is no longer being actively developed but is still a wonderful DivX/XviD conversion tool. It can also be used to compress and convert non-copy-protected DVDs.
    Downloads: AutoGordianKnot, DivX (optional)

    Read the article

  • "Unable to open file" when the program tries to open a file in /proc

    - by tristartom
    Hi, I am trying to read /proc/<pid>/status using a C++ program. The code is as follows, and even when I run it with sudo, it keeps printing "Unable to open file". Please let me know if you have any ideas on how to fix this. Thanks, Richard

        #include <cstdio>
        #include <cstdlib>
        #include <fstream>
        #include <iostream>
        #include <string>
        using namespace std;

        int main(int argc, char* argv[]) {
            string line;
            char fileLoc[64];  // a real buffer; the original declared char* fileLoc
                               // and sprintf'd through the uninitialized pointer,
                               // which is undefined behaviour
            if (argc != 2) {
                cout << "a.out file_path" << endl;
                snprintf(fileLoc, sizeof(fileLoc), "/proc/net/dev");
            } else {
                snprintf(fileLoc, sizeof(fileLoc), "/proc/%d/status", atoi(argv[1]));
            }
            cout << fileLoc << endl;
            ifstream myfile(fileLoc);
            if (myfile.is_open()) {
                while (getline(myfile, line))  // getline as the loop condition, not eof()
                    cout << line << endl;
                myfile.close();
            } else {
                cout << "Unable to open file";
            }
            return 0;
        }

    Read the article

  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which can convert videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually, and append the compressed data onto a file. Once all frames are finished being written, a journal which includes meta data for each frame (including their file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file i/o, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so does anyone have any tips for things to look out for? Thanks!
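
    A sketch of what the read side can look like once the journal has been loaded, shown with Python's mmap module only to keep it short (the C++ mmap()/munmap() calls take the same shape). The (offset, size) pair per frame is assumed to come from the journal described above; the OS pages in only the bytes actually touched, which is what makes constant-time random seeks over a multi-gigabyte file cheap.

        import mmap
        import zlib

        def read_frame(path, offset, size):
            # offset/size of the compressed frame, as recorded in the journal
            with open(path, "rb") as f:
                with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
                    return zlib.decompress(mm[offset:offset + size])

    Whether this beats well-buffered stream reads is workload-dependent; mmap tends to help most for random-access reads like these, while sequential writes usually gain little, so it is worth benchmarking both paths.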

    Read the article

  • jQuery: get the file name selected from <input type="file" />

    - by ConNen
    This code should work in IE (don't even test it in Firefox), but it doesn't. What I want is to display the name of the attached file. Any help?

        <html>
        <head>
          <title>example</title>
          <script type="text/javascript" src="../js/jquery.js"></script>
          <script type="text/javascript">
            $(document).ready(function(){
              $("#attach").after("<input id='fakeAttach' type='button' value='attach a file' />");
              $("#fakeAttach").click(function() {
                $("#attach").click();
                $("#maxSize").after("<div id='temporary'><span id='attachedFile'></span><input id='remove' type='button' value='remove' /></div>");
                $('#attach').change(function(){
                  $("#fakeAttach").attr("disabled","disabled");
                  $("#attachedFile").html($(this).val());
                });
                $("#remove").click(function(e){
                  e.preventDefault();
                  $("#attach").replaceWith($("#attach").clone());
                  $("#fakeAttach").attr("disabled","");
                  $("#temporary").remove();
                });
              })
            });
          </script>
        </head>
        <body>
          <input id="attach" type="file" /><span id="maxSize">(less than 1MB)</span>
        </body>
        </html>

    Read the article

  • Openssh sftp-server: .filepart support?

    - by Guillaume Bodi
    I am trying to set up an SFTP server, running on Ubuntu Server 11.04. I installed openssh-server to provide SSH access. What I am trying to do is make file uploads write to a suffixed name (.filepart or whatever), with the suffix removed upon transfer completion. The idea of the flow is:

    1. User uploads cat.jpg
    2. The server starts writing cat.jpg.filepart in the destination directory
    3. Once the upload completes, the server trashes the previous cat.jpg (if any) and renames cat.jpg.filepart to cat.jpg

    This is to make sure that incomplete file uploads do not overwrite existing files. Any idea how I can do this? Thanks
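
    OpenSSH's sftp-server has no built-in .filepart behaviour; the suffix-then-rename dance is normally driven by the client (this is how WinSCP's .filepart transfers work, for example). If the uploads are scripted, the same effect is easy to get client-side. A sketch assuming the third-party paramiko library (posix_rename maps to an OpenSSH rename extension); host, user and paths are placeholders:

        import paramiko

        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect("example.com", username="uploader")
        sftp = client.open_sftp()

        remote = "/incoming/cat.jpg"
        sftp.put("cat.jpg", remote + ".filepart")        # readers never see a half file
        sftp.posix_rename(remote + ".filepart", remote)  # server-side atomic swap

        sftp.close()
        client.close()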

    Read the article
