Search Results

Search found 39784 results on 1592 pages for 'ignore files'.


  • bash: listing files in date order, with spaces in filenames

    - by Jason Judge
    I am starting with a file containing a list of hundreds of files (full paths) in random order. I would like to list the details of the ten latest files in that list. This is my naive attempt:

        ls -las -t `cat list-of-files.txt` | head -10

    That works so long as none of the filenames contain spaces, but fails if they do, as those names are split up at the spaces and treated as separate files. I have tried quoting the filenames in the original list-of-files file, but the command substitution still splits them up at the spaces. The only way I can think of doing this is to ls each file individually (using xargs perhaps) and create an intermediate file with the file listings, with the date in a sortable format as the first field on each line, then sort that intermediate file. However, that feels a bit cumbersome and inefficient (hundreds of ls commands rather than one or two). But that may be the only way to do it? Is there any way to pass "ls" a list of files to process, where those files could contain spaces? It seems like it should be simple, but I'm stumped.
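
    A sketch that may help (untested; it assumes list-of-files.txt holds one filename per line): convert the newlines to NUL separators and let xargs -0 hand the whole list to ls, so spaces survive intact:

        tr '\n' '\0' < list-of-files.txt | xargs -0 ls -las -t | head -10

    The global sort only holds if xargs fits every name into a single ls invocation; for a few hundred paths that is normally the case.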

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" for concatenating mp3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line, and I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

    1. Create a temporary folder.
    2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know. I didn't know "raw" files existed until recently. I could use some suggestions on this).
    3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
    4. Delete the contents of the temporary folder, with the temporary raw files inside.
    5. Convert the final file from "raw" to mp3 and enjoy the finished result.

    I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" mp3s. The only other option I could think of would be a script that performs a similar process upon files being added to the system, leaving only the files with the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join together in sequence to create a new audio file from the concatenated files.
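
    A minimal sketch of that pipeline, assuming lame and sox are installed and using WAV as the "raw" intermediate (the input file names are placeholders):

        mkdir -p /tmp/mp3work
        for f in input1.mp3 input2.mp3; do
            # decode each mp3 to WAV; resample the WAVs first if the
            # source sample rates differ (sox can do that too)
            lame --decode "$f" "/tmp/mp3work/${f%.mp3}.wav"
        done
        # sox concatenates its inputs in the order given
        sox /tmp/mp3work/input1.wav /tmp/mp3work/input2.wav /tmp/mp3work/merged.wav
        lame -V2 /tmp/mp3work/merged.wav merged.mp3   # re-encode the joined audio
        rm -r /tmp/mp3work

    Decoding to a common intermediate sidesteps the problem of mixed bitrates and sample rates in the source mp3s, at the cost of one re-encode.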

    Read the article

  • rm failing inside cron script

    - by Nicholas
    I have a cron job calling a bash script which runs fine, except for one line inside it that is supposed to remove all files in a directory. The result of this line is always 'no such file or directory', even though I have verified (many times) that there are files in that directory. The line in question is simply:

        rm /dir1/dir2/dir3/*

    The script works fine when run manually in the terminal, so it must be something about how the cron is run. I've tried giving 'dir3' and all the files inside it every permission possible, so it shouldn't be a permission problem. (The directory and files are also owned by the user.) I've tried specifying 'SHELL=/bin/bash' inside 'crontab'. There is no sticky bit set and there is no alias on the rm command. Interestingly, changing the 'rm' command to 'ls' gives the same negative result (unless you remove the trailing '*', and then it works). What am I missing here?
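
    For comparison, a crontab sketch that sidesteps the usual suspects at once (the schedule is arbitrary and the path is the placeholder from the question); find needs no shell glob expansion at all:

        SHELL=/bin/bash
        PATH=/usr/bin:/bin
        # delete regular files only; no globbing involved
        0 3 * * * /usr/bin/find /dir1/dir2/dir3 -maxdepth 1 -type f -delete

    If this variant works where the glob fails, the problem is in how cron's shell expands the '*', not in the directory itself.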

    Read the article

  • Where are essential Windows files located?

    - by Dorothy
    I am using Vista, but I would like the answer for XP, Vista and Windows 7. I am writing a program in which I want to count the important or essential files of a Windows PC. It looks like the essential files would be located somewhere in C:\Windows, and after some research I found that some essential files are located in C:\Windows\winsxs. What and where are the essential files for a Windows PC? Is there a folder or set of folders that contains the essential files? Are all the files in C:\Windows\winsxs essential? Essential definition: absolutely necessary; extremely important.

    Read the article

  • Windows Server 2003 R2 Standard: Locks MS Office files, but not Adobe .AI and .PSD files?

    - by Bruce Garlock
    We have some shares set up on a Windows 2003 R2 server, and the MS Office files people save behave properly: the first person to open a file gets read/write, and the second person to open the file while the first person still has it open gets a read-only version. This is not true for graphics files like Adobe Illustrator .AI files and Photoshop .PSD files. Anyone who opens these files has full read/write, even if someone else is already working on the file! This has led to numerous file-corruption issues, as well as other lost work, since the file always keeps only the last changes saved to it. How do we get Windows to properly lock these files, so that when someone is working on a file and someone else opens it, the second person gets read-only access? Many thanks, Bruce

    Read the article

  • All files trying to start in Notepad

    - by Jormal
    This question has been asked before, but the solutions given were worthless to me because ALL files are trying to open in Notepad. I mistakenly associated all exe files with Notepad and everything is trying to open within Notepad now. That includes regedit, so suggesting I use regedit to doctor files does not work (a cmd window will not open from Winkey + R - Run). It also includes any program I download to fix the issue. I also cannot right-click and choose Open With, because that is not an offered option when right-clicking on the majority of files, at least none of the exe files I want to start. Yes, I tried it on the program files, not the shortcuts. I also cannot use System Restore because it, too, tries to open in Notepad. I've been banging my head uselessly on this for hours. Could someone help me out?
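
    If any command prompt can be reached at all (the Safe Mode with Command Prompt option on the F8 boot menu usually survives this), two built-in commands normally restore the association (a sketch, to be run from that prompt):

        assoc .exe=exefile
        ftype exefile="%1" %*

    The first maps the .exe extension back to the exefile type; the second makes that type launch the program itself with its arguments.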

    Read the article

  • Finding out whether files are added, changed or deleted on a FTP server

    - by futureelite7
    I've recently been given the task of migrating about 200GB of data from one dedicated server to another. As this will take a week or more, I've been taking a snapshot of the current files on the FTP server using wget's mirror feature. However, since other users will probably be uploading and changing things in the meantime, the snapshot I have made will not include the most recent changes. Since I only have FTP access to this server, I'm planning to write a script that will recursively do an FTP stat on all files in the FTP folder and compare the directory listing against the snapshot I have locally. If there are differences in the number of files, then I know files have been added or deleted. If the modification dates have changed, then I know the files have been changed and should redownload those files specifically. Am I missing anything in my approach, or are there any possible improvements to it?
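
    A sketch of that script using Python's standard ftplib (host, credentials and start directory are placeholders; the directory-detection trick is a heuristic, since plain FTP has no portable "is this a directory" test):

        from ftplib import FTP, error_perm

        def walk(ftp, path, listing):
            """Record a (path -> MDTM timestamp) entry for every file under path."""
            for name in ftp.nlst(path):
                if name in (path, '.', '..'):
                    continue
                try:
                    resp = ftp.voidcmd('MDTM ' + name)   # fails on directories
                    listing[name] = resp[4:].strip()     # "213 YYYYMMDDHHMMSS"
                except error_perm:
                    walk(ftp, name, listing)             # treat failures as directories

        ftp = FTP('ftp.example.com')
        ftp.login('user', 'password')
        listing = {}
        walk(ftp, '/data', listing)
        for path, mtime in sorted(listing.items()):
            print(mtime, path)

    Diffing two runs of this output (or one run against the wget snapshot) shows additions, deletions and date changes in a single pass.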

    Read the article

  • Powershell overruling Perl binmode?

    - by hippietrail
    I have a Perl script which creates a binary file while scanning a very large text file. It outputs to STDOUT, which I redirect on the command line to a file. To optimize it I'm making changes and then seeing how long it takes to run. On Linux I use the "time" command for this. On Windows, the best way to time a program seemed to be PowerShell's "measure-command". This seemed to work fine, but I noticed the generated files were larger. On examination I found that the files generated from within PowerShell begin with a BOM and contain CRLF pairs! My Perl script has a "binmode STDOUT" directive and does work correctly in a normal DOS box. Is this a bug or misfeature in PowerShell or measure-command? Has it affected others creating binary files by means other than Perl? Googling hasn't turned anything up so far. I'm using Perl 5.12, PowerShell v1.0 and Windows XP.
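
    One workaround sketch (script and file names are placeholders): PowerShell's > re-encodes whatever flows through its pipeline as text, so keep the redirection inside cmd.exe, where it stays a raw byte stream, and time that instead:

        Measure-Command { cmd /c "perl script.pl > output.bin" }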

    Read the article

  • Removing DS_Store files and variants?

    - by Ron Gejman
    Hi, I am running an Ubuntu 10.04.1 LTS server. Frequently I open up files over AFP from my Mac. Inevitably this creates .DS_Store files on the server (although for some reason they are named :2eDS_Store). However, it also creates variants on DS_Store files. These variants are often named similarly to other files in that directory. E.g.:

        ~$ ls
        total 60K
        -rw-r--r-- 1 tarakhovsky  16K 2010-11-30 18:28 :2eDS_Store
        drwx--S--- 4 tarakhovsky 4.0K 2010-11-08 13:58 :2eTemporaryItems/
        lrwxrwxrwx 1 tarakhovsky   15 2010-10-19 17:44 bigdisk -> /media/bigdisk//
        ...
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-03 18:24 Temporary Items/
        drwxr-xr-x 3 tarakhovsky 4.0K 2010-11-30 01:34 tmp/
        ...

    I've disabled creation of DS_Store files using:

        defaults write com.apple.desktopservices DSDontWriteNetworkStores true

    so hopefully this won't continue to occur, but I really want to get rid of all of the existing variants of DS_Store files already on the server. Any ideas as to why these variants are being created and how I can get rid of them all?
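
    A cleanup sketch (the path is a placeholder; :2e appears to be how netatalk escapes a leading dot, so both spellings are matched). Keep -print until the list looks right, then switch it to -delete:

        find /path/to/share \( -name '.DS_Store' -o -name ':2eDS_Store' \) -type f -print

    If the other :2e* entries turn out to be Finder artifacts too, the pattern can be widened to ':2e*', but check the output first: some of them are directories with real content inside.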

    Read the article

  • Could I centralize batch files more efficiently?

    - by PeanutsMonkey
    I am new to the world of batch scripting, so please forgive what may appear to be basic questions. I am learning as I get assigned different jobs, and I am a huge proponent of automation where possible. I have several batch files that perform several tasks. Each of these files had its paths hard-coded (e.g. c:\temp, d:\data, etc.) in the batch file. Initially I moved these to a text file I could read from a batch file, e.g.:

        for /f "tokens=1,2 delims==" %%R in (config.txt) do (
            if %%R==bdata set bdata=%%S
            if %%R==cdata set cdata=%%S
        )

    The config.txt file contains these values:

        bdata=c:\temp
        cdata=d:\data

    I realized that each time I needed to create a new variable, I would have to update both the config.txt file and the config.bat file. I decided I would move all the values into just the config.bat file, as follows:

        set bdata=c:\temp
        set cdata=d:\data

    I then updated each of the existing batch files to use the variables rather than the hard-coded paths. I also added the following lines of code to each batch file except config.bat. (The only additional line added to config.bat is @echo off.)

        @echo off
        setlocal enableextensions enabledelayedexpansion
        call config.bat

    I then have another batch file, start.bat, that centralizes calling all the batch files in sequence. The reason I am using start /wait is that there have been instances where delete.bat runs before compress.bat has had an opportunity to finish.

        start /wait compress.bat
        start /wait validate.bat
        start /wait delete.bat

    Questions:

    1. Is this the best way to centralize values and, if not, what is a better way?
    2. Do I need to specify setlocal enableextensions enabledelayedexpansion in all the existing batch files?
    3. Do all the batch files have to have @echo off, or is it sufficient for just the config.bat file?
    4. Is start /wait the best way to call multiple files? Can I pass values from one batch file to another using that command?
    5. All the batch files have different functions (e.g. move, delete, etc.), but they all use %%a or %%b. Is this okay? For example, the validate.bat file has the code

        for %%a in (%bdata%\*.*) do if "%%~xa" == "" move /Y "%bdata%\%%~xa" "%bdata%\%done%"

    and the delete.bat file has the code

        for %%a in (%bdata%\*.*) do if "%%~xa" == ".txt" del "%%a"
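
    On the sequencing question, a sketch worth considering (same file names as above): call runs each script to completion before moving to the next line, and variables set in config.bat stay visible in every script called afterwards, which makes the per-script "call config.bat" redundant:

        @echo off
        setlocal enableextensions enabledelayedexpansion
        call config.bat
        call compress.bat
        call validate.bat
        call delete.bat

    Values can also be passed explicitly as arguments, e.g. call validate.bat "%bdata%", read inside the callee as %1.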

    Read the article

  • rsync -c -i flags identical files as different

    - by Scott
    My goal: given a list of files on a local server, show any differences relative to the files with the same absolute path on a remote server; e.g. compare the local /etc/init.d/apache to the same file on the remote server. "Difference" for me means a different checksum. I don't care about file modification times. I also do not want to sync the files (yet); only show the diffs. I have rsync 3.0.6 on both local and remote servers, which should be able to do what I want. However, it is claiming that local and remote files, even with identical checksums, are still different. Here's the command line:

        $ rsync --dry-run -avi --checksum --files-from=/home/me/test.txt --rsync-path="cd / && rsync" / me@remote:/

    where:

    - "me" = my username; "remote" = remote server hostname
    - the current working directory is '/'
    - test.txt contains one line reading "/etc/init.d/apache"
    - OS: Linux 2.6.9

    Running cksum on /etc/init.d/apache on both servers yields the same result. The files are the same. However, the rsync output is:

        me@remote's password:
        building file list ... done
        .d..t...... etc/
        cd+++++++++ etc/init.d/
        <f+++++++++ etc/init.d/apache

        sent 93 bytes  received 21 bytes  20.73 bytes/sec
        total size is 2374  speedup is 20.82 (DRY RUN)

    The output codes (see http://www.samba.org/ftp/rsync/rsync.html) mean that rsync thinks:

    - /etc is identical except for mod time
    - /etc/init.d needs to be changed
    - /etc/init.d/apache will be sent to the remote server

    I don't understand how, with the --checksum option and the files having identical checksums, rsync can think they're different. (I've tried with other files having identical mod times, and those files are not flagged as different.) I did run this in /, and made sure (AFAIK) that it's run remotely in /, so even relative pathnames will still be correct. I ran rsync with -avvvi for more debug info, but saw nothing remarkable. I'm wondering: is rsync still looking at file mod times, even with --checksum? Am I somehow not setting up the path(s) right? What am I doing wrong?
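
    Worth noting, for what it may explain: in rsync's itemize output a run of plus signs marks a newly created item, i.e. one rsync believes does not exist on the receiver at the path it is checking. So <f+++++++++ points at a path mismatch (the remote side resolving a different location) rather than a failed checksum comparison. A quick cross-check independent of rsync (a sketch; assumes md5sum is present on both ends):

        md5sum /etc/init.d/apache
        ssh me@remote md5sum /etc/init.d/apache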

    Read the article

  • jZip isn't integrating with Windows 7 shell

    - by Aayush
    I installed jZip on my new Windows 7 64-bit OS, but it doesn't seem to integrate with the shell. It doesn't appear in the right-click menu for any file (not even zip files). To zip a file I have to open jZip, go to File > New and browse for the files, and to extract compressed files I have to open the file in jZip and then click Extract. There is no right-click menu integration whatsoever, although there used to be when I had jZip on the Windows 7 RC. I reinstalled, checking the settings in case I had made a mistake, but I did check shell integration during installation. I don't know what is wrong. Does anyone know how I should solve this? Help..? Thanks!

    Read the article

  • Count total file size of many files in Windows

    - by user105249
    When I want to burn a CD-R with lots of files, I have to make sure the total file size in my folder doesn't exceed the capacity of the disc (680MB). In Windows, are there ways to check the total file size of a bunch of files? I know of two:

    1. Put them in a folder, right-click, and check the properties. But this is an annoying trial-and-error kind of way: either there are too many files in the folder, or too few.
    2. Watch the file size go up as I keep selecting more files.

    No. 2 is my favorite way to do it. Here's my question: for some reason Windows (I still use XP) only shows the total file size of 100 selected files. When you select more than 100 files, no file size information appears any more. Is there a way, a trick, an app, to work around this problem?
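
    A command-line workaround sketch (the path is a placeholder): dir totals everything recursively, with no 100-file limit, and prints the byte count in its closing "Total Files Listed" summary:

        dir /s /a-d "C:\path\to\burn-folder"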

    Read the article

  • make Quanta Plus the default editor for PHP files in Ubuntu

    - by diEcho
    In Ubuntu, how can I make Quanta Plus the default editor for PHP files? In Windows we just use the Open With context menu the first time for any new type of file and check "use this application for these types of files"; after that, the files always open in the chosen application. I want to know if Ubuntu has a similar feature. Also, if I open 2 PHP files, two instances of Quanta Plus open up. How can I get the second PHP file to open in another tab instead?
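
    The GUI route is much like Windows: right-click a .php file, choose Properties, and set the default application on the Open With tab. A command-line sketch of the same thing (the .desktop file name is a guess; check /usr/share/applications for the real one):

        xdg-mime default quanta.desktop application/x-php
        xdg-mime query default application/x-php   # confirm the change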

    Read the article

  • tar - exclude certain files

    - by Alan
    I wish to tar all files in a directory and its subdirectories that do NOT end in .jpg, .bmp, .gif, or .png. So, given the following folders and files:

        foo/file.txt
        foo/file.gif
        foo/bar/file
        foo/bar/image.jpg

    I want to tar only the files file.txt and file; file.gif and image.jpg should be ignored. I would also like to maintain the folder structure. My first thought was to pipe the results of the find command, in conjunction with grep -v ".jpg|.gif|.bmp|.png", to a text file, and then use tar's include argument to feed it that list of files. However, the results of the grepped find command also contain directories (in the example above, "foo" and "foo/bar"), and when a directory is fed to tar, it includes all files in that directory, so I would end up with a tar file containing all of the files, which is not what I want. Is there any way to prevent find from outputting directories? Is there a far easier way to approach this?
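
    Two sketches that should cover it (GNU find and tar assumed). find's -type f is the direct answer to the directory question, and -print0 with --null keeps odd filenames safe:

        find foo -type f ! \( -iname '*.jpg' -o -iname '*.gif' -o -iname '*.bmp' -o -iname '*.png' \) -print0 \
            | tar -cf archive.tar --null -T -

    Or skip find entirely and let tar do the filtering:

        tar -cf archive.tar --exclude='*.jpg' --exclude='*.gif' --exclude='*.bmp' --exclude='*.png' foo

    Both keep the folder structure, since the stored paths are relative to the current directory.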

    Read the article

  • Eclipse: Organising Files

    - by someguy
    I want to import a project that I'm planning to build upon. The problem is that it is very messy, with source files, class files and libraries under one directory. How would I organise these files using Eclipse? I know you can change the source folder and output folder, but when I change the source folder, the files that I want inside it do not physically move to that folder. The output folder is fine, though. I would also like a separate folder for libraries, but I'm not sure how to go about this. Here's how I would like it:

    - src: this folder will contain source files.
    - bin: this folder will contain binary (class) files.
    - lib: this folder will contain external libraries.

    Read the article

  • Apache Custom Log Format

    - by Shishant
    Hello, I am trying to write a reward system wherein users will be given reward points if they download files completely, so what should my log format be? This is what I understand after searching a lot (it's my first time; I haven't done custom logs before). First of all, which file should I edit for custom logs? That is the thing I can't find. I am using an Ubuntu server with the default Apache, PHP 5 and MySQL installation.

        # I use these commands and they work fine
        nano /etc/apache2/apache2.conf
        /etc/init.d/apache2 restart

    I think this is what I need for my purpose:

        LogLevel notice
        LogFormat "%f %u %x %o" rewards
        CustomLog /var/www/logs/rewards_log rewards

    Is this configuration as it should be, or is there something missing? And is there any particular location where I need to add it? One more thing: is %o the file size that was sent, and is it possible to log only files from a particular directory, or only files with a size of more than 10MB? Thank you.
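
    A sketch of one way to wire this up (the directory and log path are placeholders; %O needs mod_logio, which "a2enmod logio" enables if it isn't already). %O counts the bytes actually written to the client, so comparing it to the file's size on disk is a reasonable "download completed" test, and SetEnvIf restricts the log to one directory:

        LogFormat "%f %u %t %>s %O" rewards
        SetEnvIf Request_URI "^/downloads/" log_reward
        CustomLog /var/www/logs/rewards_log rewards env=log_reward

    Size thresholds like "only files over 10MB" are easier to apply afterwards, when the log is parsed, than inside the Apache config.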

    Read the article

  • Deleting files using .NET that were migrated from win2k3

    - by Andrew Duncan
    We recently migrated an ASP.NET website from Windows 2003 to Windows 2008 R2 by zipping up all the files and extracting them to the new site. Since migrating, the web application is still able to upload and delete files (new ones); however, it's unable to delete files that were copied from the original Win2k3 app. We're guessing it's a permissions problem, because the error is:

        Access to the path 'E:.......PATH.....' is denied.

    We've been trying to match the permissions of a newly uploaded file to those of the migrated files. Newly uploaded files seem to get the APP POOL user in their permissions and as the OWNER. However, the original files don't have this. Any help anyone can give would be fantastic. Thanks,
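
    If that is the cause, a repair sketch for an elevated prompt on the new server (the path and app pool name are placeholders for yours):

        rem take ownership first if the old SIDs still block access:
        rem takeown /F "E:\path\to\files" /R
        icacls "E:\path\to\files" /grant "IIS APPPOOL\YourAppPool":(OI)(CI)M /T

    (OI)(CI) makes the grant inherit to files and subfolders, M is Modify, and /T applies it to the existing tree.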

    Read the article

  • Linux Log Viewer with Web interface

    - by user180039
    I have been asked at work to find a solution to one of our problems. We have several logs that customers need access to. Because we don't want to give them direct access to the folder/share, we are looking to implement a simple web-based solution that permits customers to log in, see a list of the files they have permission to view, and download them. It would need to support setting permissions such that User01 can see file01 and file03 while User02 can see file04 and file06. Optimally, all the files would be under the same folder, so permissions are based on files rather than on folders. Has anyone got any ideas? Many thanks

    Read the article

  • Extract duplicity difftar files manually

    - by isnogud
    I have a duplicity backup which I am not able to recover with duplicity. Calling duplicity file:///path/to/backups /path/to/dir returns "Local and Remote metadata are synchronized, no sync needed." but /path/to/dir is empty. I decrypted all the backup volumes and I'm able to view and extract the files from the various difftar files. My only problem is that some files are partitioned and saved in folders named after the files. Can anyone give me a simple script, or at least a hint, on how to untar these difftar files so I get the actual files instead of the partitioned ones?
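
    A reassembly sketch (untested; the multivol_snapshot directory name is taken from duplicity's difftar layout, so verify it against your archive). Each split file becomes a directory of numerically named parts, which just need concatenating back in order:

        # run inside the directory where the difftar volumes were extracted
        find multivol_snapshot -type d | while read -r d; do
            [ -f "$d/1" ] || continue              # only leaf dirs holding parts
            out="restored/${d#multivol_snapshot/}"
            mkdir -p "$(dirname "$out")"
            ls "$d" | sort -n | while read -r part; do
                cat "$d/$part"
            done > "$out"
        done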

    Read the article

  • Ways to deduplicate files

    - by User1
    I want to simply back up and archive the files on several machines. Unfortunately, among them are some large files that are the same file but stored differently on different machines. For instance, there may be a few hundred photos that were copied from one computer to the other as an ad-hoc backup. Now that I want to make a common repository of files, I don't want several copies of the same photo. If I copy all of these files to a single directory, is there a tool that can go through, recognize duplicate files, and give me a list, or even delete one of the duplicates?
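
    fdupes does exactly this (a sketch; the package is named fdupes on most distributions, and the path is a placeholder):

        fdupes -r /path/to/repository     # list each set of duplicate files
        fdupes -rdN /path/to/repository   # keep the first file in each set, delete the rest (destructive!)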

    Read the article

  • Source File not updating Destination Files in Excel

    - by user127105
    I have one source file that holds all my input costs. I then have 30 to 40 destination files (costing sheets) that use links to data in this source file for their various formulae. I was sure when I started this system that any changes I made to the source file, including the insertion of new rows and columns, were picked up automatically by the destination files, so that the formulae always pulled the correct input costs. Now, all of a sudden, if my destination files are closed and I change the structure of the source file by adding rows, the destination files go haywire. They pick up changes to their linked cells, but don't pick up changes to the source sheet that have shifted those cells' relative positions in the sheet. Do I really need to open all 40 destination files at the same time I alter the source file structure? Further info: all the destination files are protected, and I am working on Dropbox.

    Read the article

  • Migrate 3 terabytes of files to a new server windows 2003

    - by smackaysmith
    We have a new file server to handle the obscene number of files generated by the company (PDFs, XLSs, DOCs and JPGs). The files being moved to the new server total about 3TB. The problem is we can't take the company down for days to move the files. The other problem is that the applications creating all these files have to reference previous files, so we can't simply point them to the new server. There also isn't an option to have the applications create files on the new server but reference the old server for existing files. The servers are x64 Win2003 R2, and both are on the same subnet. DFS doesn't work. Is there an application that can handle this amount of data, copy the files over, throttle bandwidth, and do a 'merge'? By merge I mean constantly copying over newly created files until the two servers are in sync.
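
    robocopy may fit (a sketch; it ships with the Windows Server 2003 Resource Kit, and the share names are placeholders). Run it repeatedly while the old server stays live; each pass copies only what has changed, /IPG throttles bandwidth, and a final pass during a short cutover window catches the stragglers:

        robocopy \\oldserver\share \\newserver\share /E /COPYALL /Z /R:2 /W:5 /IPG:50 /LOG+:C:\robocopy.log

    Swapping /E for /MIR on the final pass also mirrors deletions, but be careful: /MIR removes files from the destination that no longer exist on the source.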

    Read the article

  • Extracting a .zip file into Program Files (x86)

    - by Evan
    I just got a 64-bit Vista system after being on Windows XP. I'm trying to get all my useful programs up to date, and I've recently had a problem extracting files into the 32-bit program files directory (Program Files (x86)). I'm using 7-Zip to extract the eclipse-SDK-3.5-win32.zip archive into C:\Program Files (x86). Unfortunately, every time I've tried to do this, 7-Zip reports "can not open output file C:\Program Files (x86)\eclipse\...". I've been able to extract it to C:\ and then move it, so I'm assuming there's some protection on the Program Files directory that is causing the problem. Any suggestions?

    Read the article
