Search Results

Search found 45804 results on 1833 pages for 'large files'.


  • How do I merge MP4 files without audio going out of sync?

    - by djangofan
    Is there a tool I can use that can merge MP4 files without throwing the audio out of sync? I generated some MP4 files from a DVD using AVIDemux, but whatever tool I try always ends up throwing the audio out of sync with the video: the further you get into the video, the further off-sync the audio is. By themselves the MP4/AAC videos have perfect audio-video sync. Later tonight I might try http://www.headbands.com/gspot/ to examine the files before and after merging to see if anything changed in the media format.
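
    A hedged sketch: if the parts share identical codecs and parameters, ffmpeg's concat demuxer can join them without re-encoding, which usually avoids introducing sync drift (file names here are placeholders):

        # list the parts in playback order
        printf "file '%s'\n" part1.mp4 part2.mp4 > parts.txt
        # concatenate without re-encoding; -c copy just remuxes the streams
        ffmpeg -f concat -safe 0 -i parts.txt -c copy merged.mp4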

    Read the article

  • Should root ever own files in my (linux) home directory?

    - by Darren Cook
    This question started off asking why my history file wasn't working properly. Then I noticed it was -rw------- 1 root root and hadn't been updated since 2012-09-11. I changed the ownership; problem fixed. But now I see some other files are owned by root: .gitconfig, .pearrc, .viminfo. Can I safely change them to be owned by my normal user instead of root? I'm scratching my head trying to work out if there is a downside or a security consequence. Losing seven weeks' history is actually quite painful, because I lean on it a lot (e.g. to remind myself how I last did an archive). Would it be reasonable to set up a cron job to email me if it finds any files in my home directory owned by anyone but me? Rephrased: is there ever a good reason for root to own a file in my home directory?
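
    For the cron idea, a minimal sketch (assuming a working local mail setup; the address is a placeholder):

        # list anything under $HOME not owned by the login user
        find "$HOME" ! -user "$LOGNAME" -print

        # crontab entry: mail the list every morning (note: mail is sent even when the list is empty)
        0 8 * * * find "$HOME" ! -user "$LOGNAME" | mail -s "foreign-owned files in HOME" you@example.com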

    Read the article

  • How can I convert audio files to this format?

    - by jeffamaphone
    I have a bunch of audio files that are named .wav, but it seems not all .wavs are created equal. For example:

        $ file *
        file1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz
        file2.wav: Audio file with ID3 version 2.2.0, contains: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo
        file3.wav: Claris clip art?
        file4.wav: Audio file with ID3 version 2.2.0, contains: MPEG ADTS, layer III, v1, 160 kbps, 44.1 kHz, JntStereo

    And for good measure, a non-wav:

        file5.m4a: ISO Media, MPEG v4 system, iTunes AAC-LC

    I would like to convert all of these files to the format of file1.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, stereo 44100 Hz. What is the proper set of arguments to pass to afconvert to make that happen?
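
    If memory serves, afconvert selects the container with -f and the data format with -d; treat the exact flag values below as an assumption and check afconvert -hf for the supported names. A sketch:

        # convert everything to 16-bit little-endian PCM, stereo, 44.1 kHz
        for f in *.wav *.m4a; do
            afconvert -f WAVE -d LEI16@44100 -c 2 "$f" "${f%.*}-pcm.wav"
        done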

    Read the article

  • How can I restore my system from WIM files?

    - by Brian Henk
    I installed another OS on my netbook and decided I want to revert back to Windows 7 Starter. I was careful to keep the recovery partition, but even when I manage to boot to it, the system just restarts a few seconds after selecting "restore." I grabbed all the files from the recovery partition onto a flash drive. I also have been able to use this drive to boot a Windows 7 install, but it was unable to find the recovery partition. These WIM files seem to be the key to installing Windows again. How can I use them?
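
    A sketch of applying the WIM manually, assuming you can reach a WinPE prompt (Shift+F10 from the Windows 7 install disc) and have imagex.exe from the Windows AIK on the flash drive; the WIM path and image index are illustrative:

        rem apply image index 1 from the recovery WIM onto the (formatted) target partition
        imagex /apply E:\install.wim 1 C:\
        rem then recreate the boot files so the machine can boot the applied image
        C:\Windows\System32\bcdboot C:\Windows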

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, the application will crash. Moving the file or removing it wholesale with rm will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nixes as well) is smart enough not to delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, an application recently crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick, because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure which commands would need to be aliased. I imagine something like truss/strace, except that before each action it checks a filter to decide whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far (see the sketch below):

      • Alias cp to GNU cp (not the default, since I'm on Solaris) and use the --remove-destination option.
      • Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
      • set -o noclobber in ~/.bashrc, so that I/O redirection won't overwrite files.
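
    A sketch combining the ideas above (GNU cp assumed for --remove-destination; the paths are illustrative):

        set -o noclobber                  # redirections now fail instead of truncating
        dst=/usr/lib/libfoo.so            # library being reinstalled (illustrative)
        mkdir -p /var/tmp/deleted
        # move the old file aside so its inode survives for any process holding it open
        [ -e "$dst" ] && mv "$dst" "/var/tmp/deleted/${dst##*/}.$$"
        cp newlib.so "$dst"
        # GNU cp can instead unlink the target first, which also preserves the old inode:
        # cp --remove-destination newlib.so "$dst"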

    Read the article

  • Excel: exporting/importing different columns to/from different CSV files

    - by Sisyphus
    Is there a way to batch export different columns to different CSV files in Excel on OS X? I'm thinking something along the lines of Automator, AppleScript, or bash. I've played around with Automator and so far no luck. The best I have accomplished is exporting the whole sheet and then using sed to strip out what I don't need, but this is terribly inefficient. Also, is there a method to batch import multiple CSV files into columns? Thanks in advance, and sorry I didn't tag Excel correctly; it wouldn't allow me to create the excel:mac tag.
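
    If going through a whole-sheet CSV export is acceptable, a sketch that splits each column into its own file (note this naive split breaks on quoted fields that contain commas):

        # one output file per column
        awk -F, '{ for (i = 1; i <= NF; i++) print $i > ("column" i ".csv") }' sheet.csv

    The reverse direction is just paste: paste -d, column1.csv column2.csv > sheet.csv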

    Read the article

  • 7zip: Add files to new folder in archive via command line?

    - by cschol
    I am using 7zip for compressing a bunch of files. The files are in a directory structure, like this:

        MyDir\File1
        MyDir\File2
        MyDir\File3
        MyDir\MoreFiles\File4
        MyDir\MoreFiles\File5

    I want to create a 7z file with the following structure via command line:

        ZippedDir\File1
        ZippedDir\File2
        ZippedDir\File3
        ZippedDir\MoreFiles\File4
        ZippedDir\MoreFiles\File5

    Basically, I want to zip the content of MyDir\ into a new folder called ZippedDir\. I know I could copy the content into a directory called ZippedDir\ and then zip this new directory. However, I was wondering if there was a way to avoid this extra copy step and directly zip the content, if possible, via command line.
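
    One way to skip the copy on Windows is an NTFS junction, so 7-Zip sees the same content under the desired name. This needs Vista or later for mklink, and I have not verified that every 7-Zip version traverses junctions, so treat this as a sketch:

        rem a junction is a second name for the same directory; nothing is copied
        mklink /J ZippedDir MyDir
        7z a archive.7z ZippedDir
        rem removing the junction leaves MyDir untouched
        rmdir ZippedDir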

    Read the article

  • Estimating compressed file size using a list parameter

    - by Sai
    I am currently compressing a list of files from a directory with:

        tar -cvjf test_1.tar.gz -T test_1.lst --no-recursion

    The above command compresses only those files mentioned in the list. I am doing this because the list is generated to fit a DVD. However, the actual compressed size comes out below the estimate, leaving abundant space on the DVD. This is something like a knapsack problem: I would like to estimate the compressed file size and add some more files to the list. I found that it is possible to estimate the compressed size with:

        tar -cjf - Folder/ | wc -c

    but this form does not take a list parameter. Is there a way to estimate the compressed file size from a list? I am also looking into options like Perl scripts. Edit: I should provide more information, since I have been doing a lot of web searching. I came across a Perl script (Link) that sort of emulates the knapsack algorithm. The problem with that script is that it splits the files in their original, uncompressed state; when I compress the files after splitting them, there is room left over for more files, which I consider inefficient. There are two ways I could resolve the inefficiency:

      a) Compress the individual files into a directory using a script. The compressed files would give a better size estimate, which I could then apply when packing the uncompressed ones.
      b) Check whether the compressed file's size is less than the required size; if so, keep adding files until the requirement is met. However, adding new files to an already compressed archive is an optimization problem in itself.
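
    For what it's worth, tar accepts -T together with writing the archive to stdout, so the two commands above combine into a size estimate for exactly the listed files:

        tar -cjf - -T test_1.lst --no-recursion | wc -c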

    Read the article

  • How to securely delete files stored on an SSD?

    - by Chris Neuroth
    From a (very long, but definitely worth reading) article on SSDs: When you delete a file in your OS, there is no reaction from either a hard drive or SSD. It isn’t until you overwrite the sector (on a hard drive) or page (on a SSD) that you actually lose the data. File recovery programs use this property to their advantage and that’s how they help you recover deleted files. The key distinction between HDDs and SSDs however is what happens when you overwrite a file. While a HDD can simply write the new data to the same sector, a SSD will allocate a new (or previously used) page for the overwritten data. The page that contains the now invalid data will simply be marked as invalid and at some point it’ll get erased. So, what would be the best way to securely erase files stored on a SSD? Overwriting with random data as we are used to from hard disks (e.g. using the "shred" utility) won't work unless you overwrite the WHOLE drive...
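
    For whole-drive sanitization, the usual answer is the drive's built-in ATA Secure Erase, which tells the controller to wipe all cells, including remapped ones. A sketch with hdparm ('p' is a throwaway password and /dev/sdX is a placeholder; triple-check the device name, and the drive must not be in the "frozen" state):

        hdparm -I /dev/sdX | grep -i frozen       # must report "not frozen"
        hdparm --user-master u --security-set-pass p /dev/sdX
        hdparm --user-master u --security-erase p /dev/sdX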

    Read the article

  • What is the best free service to host images and mp3 files?

    - by Edward Tanguay
    I am making an educational social software silverlight application. I would like users to be able to point the application to a URL with text, images, and audio files which they have created. Many users will not have their own website to do this, so we are looking for a free service they can use to upload, and manage their own text/image/audio content. What is the best free service for non-technical users to upload and make available text, images and audio? For instance, sites.google.com allows you to upload pictures and access them via http so that would work, but that is more about making a website. For this purpose we just need the ability to upload files, without the website creation tools.

    Read the article

  • What config files need to be transferred when migrating Apache vhosts from an old SUSE server to a new SUSE server?

    - by jarus
    I have an old server running SUSE that hosts numerous websites under the same IP. I am trying to migrate the websites and all content from the old SUSE server to a new server running openSUSE 12.1. So far I have transferred /srv/www/vhosts, /etc/apache2/vhosts.d, /etc/apache2/httpd.conf, /etc/apache2/listen.conf, and /etc/apache2/default-server.conf, and I have transferred all the database files as well. I am trying to replace the old server with the new server; I tried changing the new server's IP address to the old server's, but it's not working. What files do I need to transfer, and what else do I need to do, to get the new server hosting the websites in place of the old one? Any help will be greatly appreciated.
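
    A sketch of the transfer plus a sanity check before switching the IP over (rsync over SSH assumed; 'newserver' is a placeholder, and blindly overwriting the new distro's whole /etc/apache2 can clobber version-specific defaults, hence syncing only vhosts.d here):

        rsync -a /srv/www/vhosts/ newserver:/srv/www/vhosts/
        rsync -a /etc/apache2/vhosts.d/ newserver:/etc/apache2/vhosts.d/
        # on the new server: verify the merged config parses, then restart
        rcapache2 configtest && rcapache2 restart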

    Read the article

  • How do you change the "scan this dir for additional ini files" path?

    - by amvx
    I managed to get the custom INI to load, but PHP is still loading other .ini files from the default location. I created an FCGI wrapper that passed the INI value as a parameter, and that worked. Now these other INIs just need to be loaded from the same directory as my custom INI; the problem is that the other .ini files are overriding the settings in my custom php.ini. I realize now that the php.fcgi binary was compiled with a custom scan-path parameter, so that's a problem. I might have to recompile it using a different location, or none at all. I'd hate to have to compile an FCGI binary for each domain.
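
    One knob that may avoid a recompile: PHP also reads the PHP_INI_SCAN_DIR environment variable at startup, and an empty value disables the compiled-in scan directory. A sketch of a per-domain wrapper (paths are illustrative):

        #!/bin/sh
        export PHPRC=/var/www/example.com/etc    # directory holding this vhost's php.ini
        export PHP_INI_SCAN_DIR=                 # empty: skip the compiled-in .ini scan directory
        exec /usr/bin/php-cgi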

    Read the article

  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now the filesystem cannot be mounted:

          pool: tank
         state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
           see: http://www.sun.com/msg/ZFS-8000-3C
          scan: none requested
        config:

                NAME       STATE     READ WRITE CKSUM
                tank       UNAVAIL      0     0     0  insufficient replicas
                  c10t0d0  ONLINE       0     0     0
                  c8t0d0   UNAVAIL      0     0     0  cannot open
                  c8t1d0   ONLINE       0     0     0
                  c10t1d0  ONLINE       0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?

    Read the article

  • Can I lose files when changing security on an XP drive within Windows 7?

    - by Will
    Hard to come up with a title for this one, sheesh. Have a friend whose computer went down. He asked me to get all his data off his drive. His old computer was running XP. So, I've plugged it into my Windows 7 computer. When I attempt to open up his Documents and Settings folder, I get prompted to elevate in order to "permanently get access to this folder." If I do this, will I be able to access the files in this directory, or will all the current files be lost? I may be overly paranoid about this, but I can't find any information about exactly what will happen when I do this. TIA.
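
    For what it's worth, elevating here only rewrites the security descriptors (owner and ACLs), not the file contents, so nothing is deleted. The command-line equivalent from an elevated prompt (drive letter illustrative) would be:

        takeown /f "E:\Documents and Settings" /r /d y
        icacls "E:\Documents and Settings" /grant %USERNAME%:F /t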

    Read the article

  • How can I automatically move files based on their names?

    - by Pasha
    I have 13 folders containing scanned photographs. Some photographs have been renamed to the date on which they were taken, resulting in a YYYY.MM.DD.tif name, or potentially YYYY.MM.DD (###).tif where ### is just a number. Others are simply named IMG_###.tif. I would like to move the files with the YYYY.MM.DD name into a YYYY\MM\DD folder structure. While the files are being moved, I would also like to append the original folder name to the end of the file name, so that a file 01\2012.06.26 (1).tif ends up as 2012\06\26\2012.06.26 (1) - 01.tif. Is there a Windows tool that can help me with this, or do I need to resort to writing a custom app?
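
    If a Unix-style shell is available (Cygwin or Git Bash, say), a sketch of the move-and-rename, assuming the 13 scan folders are the top-level directories and forward slashes stand in for backslashes:

        for f in */????.??.??*.tif; do
            dir=${f%%/*}                 # originating folder, e.g. 01
            name=${f#*/}                 # e.g. 2012.06.26 (1).tif
            y=${name:0:4} m=${name:5:2} d=${name:8:2}
            mkdir -p "$y/$m/$d"
            mv "$f" "$y/$m/$d/${name%.tif} - $dir.tif"
        done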

    Read the article

  • List/remove files whose filenames contain a date that is "more than a month ago"?

    - by Martin Tóth
    I store some data in files which follow this naming convention:

        /interesting/data/filename-YYYY-MM-DD-HH-MM

    How do I look for the ones whose date in the file name is older than now - 1 month, and delete them? Files may have changed since they were created, so searching by last modification date is no good. What I'm doing now is filtering them in Python:

        prefix = '/interesting/data/filename-'

        import commands
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()

        from datetime import datetime, timedelta
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)
        month = datetime.now() - timedelta(days=30)
        to_delete = filter(lambda item: item['date'] < month, all_files)

        import os
        for item in to_delete:
            os.remove(item['name'])   # note: passing the dicts straight to os.remove would fail

    Is there a (one-liner) bash solution for this?
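
    Because the name format sorts lexicographically, a bash answer can compare plain strings (GNU date assumed for computing the cutoff):

        cutoff=$(date -d '30 days ago' +%Y-%m-%d-%H-%M)
        for f in /interesting/data/filename-*; do
            [[ "${f##*filename-}" < "$cutoff" ]] && rm -- "$f"
        done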

    Read the article

  • Is there a compression method for compressing a group of very similar files, without archiving them?

    - by awiebe
    I want to compress a large number of files that have near-identical headers (and some shared data), but I do not wish to archive them, nor do I wish to zip them individually, because the compression ratio would be much higher if substitutions of similar blocks could be done using a single shared table. Does a compression method that does this already exist, or should I implement it myself? Note: don't say "disk space is cheap", because I may want to use this on an embedded system.
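
    For what it's worth, zstd later grew exactly this feature: train a shared dictionary on the files, then compress each file individually against it, so similar blocks are encoded via the common table without archiving anything. A sketch:

        zstd --train files/* -o shared.dict       # build a dictionary from representative files
        zstd -D shared.dict file1 file2 file3     # compress each file separately, sharing the dictionary
        zstd -D shared.dict -d file1.zst          # the same dictionary is required to decompress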

    Read the article

  • Ubuntu: how do I FTP files to the folder /var/www?

    - by jc.yin
    I'm new to Linux, and I've set up a web server with Ubuntu Desktop edition so I can practice with the GUI a bit before transitioning to Ubuntu Server. I've already set up a LAMP stack as well as FTP. Now I just need to know how to transfer my web files to the /var/www folder in Ubuntu. Previously I worked on Mac OS, where there was a central server for all the web files that I could FTP to. Now, after managing to connect via FTP to the Ubuntu server, I see all the folders such as Desktop, Downloads, and Documents, but no web folder. Can anyone help me understand how to FTP to the /var/www folder in Ubuntu? Thanks
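
    A common arrangement is to give your user write access through Apache's group and drop a symlink in your home directory so the folder shows up in the FTP listing (a sketch; www-data is Ubuntu's Apache group, and whether the FTP server follows symlinks depends on its configuration):

        sudo adduser "$USER" www-data     # join the web group (log out and back in to take effect)
        sudo chgrp -R www-data /var/www
        sudo chmod -R g+w /var/www
        ln -s /var/www ~/www              # reachable from the FTP session's home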

    Read the article

  • Seeking a solution to automatically copy files from a CD-ROM disc to the USB drive once it's connected

    - by Ray Nathan
    I plan to distribute a free CD that automatically copies files to a connected USB device; this process will run on the computers of the users who obtain the CD. The CD will contain an autorun.inf file that instructs the computer to copy a set of files located on the CD to a specific directory on the connected USB device. The USB drive letter is not the same on all systems, so under Windows XP the script has to discover the drive letter of the USB device before the copy operation begins. What would be the best way to write a short batch file or script that I can place on the CD to do this? Also, please note that it is NOT feasible to include a batch file on the USB devices to sync this operation, for the reason explained at the beginning of this paragraph. :) Thank you all
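
    A sketch of a batch file for the CD (WMIC ships with XP; DriveType 2 means "removable disk"; WMIC's output formatting is quirky, so treat this as a starting point, and "payload"/"destination" are illustrative names):

        @echo off
        setlocal
        set "USB="
        rem take the first removable drive WMIC reports
        for /f "skip=1" %%d in ('wmic logicaldisk where "DriveType=2" get DeviceID') do (
            if not defined USB set "USB=%%d"
        )
        if not defined USB (echo No USB drive found & exit /b 1)
        set "USB=%USB:~0,2%"
        rem %~dp0 is the folder this script runs from (the CD)
        xcopy /e /y /i "%~dp0payload" "%USB%\destination\"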

    Read the article

  • How do I recover files from a corrupt VDI file?

    - by Eric P
    Is it possible to repair a corrupt VDI file? The OS on the VDI (XP) doesn't boot at all; it just hangs at a black screen. I was getting file errors on its last boot, but now it's not working at all. A sector viewer shows 'Invalid partition table / Error loading operating system / Missing operating system'. I tried mounting the file from the host OS, but it just says the drive isn't formatted. I don't need to be able to run the VDI, but I do need some files that are on it. Is there any way to recover files from the corrupt VDI file?
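
    A sketch of one recovery path (qemu-img, testdisk, and photorec assumed installed; VBoxManage clonehd with --format RAW can do the conversion as well):

        # flatten the VDI into a raw disk image
        qemu-img convert -f vdi -O raw broken.vdi broken.raw
        # hunt for the lost partition table / filesystem
        testdisk broken.raw
        # or carve recognizable file types straight out of the image
        photorec broken.raw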

    Read the article

  • How can I evaluate the best choice of archive format for compressing files?

    - by Mehrdad
    In general, I've observed the following:

      • Linux-y files or tools use bzip2 or gzip for distributing archives
      • Windows-y files or tools use ZIP for distributing archives
      • Many people use 7-Zip for creating and distributing their own archives

    Questions:

      • What are the advantages and disadvantages of these formats, all of which appear to be open formats?
      • When/why should I choose one (say, 7-Zip) over another (say, ZIP)?
      • Why does the trend above appear to hold, even though all of these are portable formats? Are there any particular advantages to using a particular archive format on a particular platform?
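
    The honest way to choose is to measure on your own data. A rough sketch comparing the stream compressors at maximum level (sample.tar is a placeholder):

        f=sample.tar
        for c in gzip bzip2 xz; do
            t0=$SECONDS
            "$c" -9 -c "$f" > "$f.$c"
            echo "$c: $((SECONDS - t0))s, $(wc -c < "$f.$c") bytes"
        done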

    Read the article

  • How can I audit a Linux filesystem for files which have been changed or added within a specific time period?

    - by Bcos
    We are a website design/hosting company running several sites on a Linux server using Joomla 1.5.14, and recently someone was able to exploit a vulnerability in the RW Cards component to write arbitrary files and modify existing files on our filesystem, enabling them to do some nasty things to our customers' sites. We have removed the vulnerable modules from all sites but are still seeing some problems. We suspect that they still have some scripts installed, and we need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this?
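
    A starting point with find (an attacker can reset mtime with touch, but ctime is much harder to forge without root, so checking both is worthwhile):

        # files modified, or whose metadata changed, within the last 10 days
        find /var/www \( -mtime -10 -o -ctime -10 \) -type f -ls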

    Read the article

  • How do I print TIFF files using Microsoft Office Document Imaging?

    - by Think Floyd
    OS: Vista and Windows 7. I have Microsoft Office Document Imaging installed, and the .tif and .tiff file association is set to Microsoft Office Document Imaging. When I open a TIFF file, it opens in Microsoft Office Document Imaging; good so far. However, when I right-click on a TIFF file and invoke Print, I see a "Print Pictures" dialog ("How do you want to print your pictures?"). I have some applications installed on my machine that print incoming TIFF files on the printer; they work fine on XP, but on Vista and Windows 7 I get this "Print Pictures" prompt requiring user intervention (i.e., a click on the Print button). How do I get rid of the "Print Pictures" prompt?

    Read the article
