Search Results

Search found 46908 results on 1877 pages for 'managing files and folder'.


  • Any way to know what files were in a broken ZFS pool?

    - by Erik Tjernlund
    I have a large ZFS pool of 4 combined drives. Now the filesystem cannot be mounted:

        pool: tank
        state: UNAVAIL
        status: One or more devices could not be opened. There are insufficient
                replicas for the pool to continue functioning.
        action: Attach the missing device and online it using 'zpool online'.
        see: http://www.sun.com/msg/ZFS-8000-3C
        scan: none requested
        config:
            NAME      STATE    READ WRITE CKSUM
            tank      UNAVAIL     0     0     0  insufficient replicas
              c10t0d0 ONLINE      0     0     0
              c8t0d0  UNAVAIL     0     0     0  cannot open
              c8t1d0  ONLINE      0     0     0
              c10t1d0 ONLINE      0     0     0

    Probably a broken drive (c8t0d0). I'm not overly concerned by the loss of the data, but I'd love to know exactly which files were in that pool. Is there any way to get a listing of what files were there?
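    A speculative sketch, not from the original thread: zdb can sometimes read the metadata of a pool that will not import, via its -e (exported/foreign pool) mode. With a whole top-level vdev missing this may well fail, since pool metadata is striped across vdevs, but it costs nothing to try:

        zdb -e -d tank       # list the datasets zdb can still reach; may fail with a missing top-level vdev
        zdb -e -dddd tank    # dump per-object detail, which includes file paths, if the metadata is readable

    If c8t0d0 is merely unreadable rather than dead, reattaching it (or a dd-rescued copy of it) is far more likely to get a file listing back.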

    Read the article

  • Cannot move folder to a subdirectory of itself - What's going on?

    - by calumbrodie
    OK, I am trying to do the following very simple command and it is failing as follows:

        mv '/home/admin/Downloads/folder1' '/home/admin/MyLibrary/MyVideos/TV/folder1/'
        mv: cannot move `/home/admin/Downloads/folder1' to a subdirectory of itself, `/home/admin/MyLibrary/MyVideos/TV/folder1/'

    The destination is NOT a subfolder of the source, so why is it giving me this error? The Linux version is a custom version of Red Hat on a NAS box. Thanks
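    A hedged observation rather than a certain diagnosis: because the destination ends in folder1/, mv is being asked to move folder1 inside an existing folder1, and on some NAS setups that destination is a symlink or bind mount that resolves back to the source, which triggers exactly this error. Naming only the parent directory sidesteps the repetition:

        mv '/home/admin/Downloads/folder1' '/home/admin/MyLibrary/MyVideos/TV/'

    If the error persists even in this form, running readlink -f on both paths is a quick way to check whether they really resolve to the same directory.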

    Read the article

  • Why does cpio say "WARNING! These file names were not selected" when copying a large number of files?

    - by mmm bacon
    For over 10 years, I've been using this strategy to copy a large number of files between UNIX filesystems:

        cd source_directory
        find . -depth -print | cpio -pdm /path/to/destination_directory

    It works like a champ. However, I'm now getting this error from cpio:

        cpio: WARNING! These file names were not selected: (long list of files here...)

    The source directory is on OS X 10.5, and the destination directory is an NFS filesystem from an OpenSolaris server. Copying over NFS has never been a problem in the past. There's nothing strange about the filenames, meaning there aren't special characters or anything like that. Any ideas?
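    If the goal is simply to get the files across while the cpio warning is investigated, rsync is a well-worn alternative for this pass-through style of copy; a minimal sketch:

        cd source_directory
        rsync -a ./ /path/to/destination_directory/   # -a preserves permissions, times, symlinks

    The trailing slash on the source copies the directory's contents rather than the directory itself, matching what the find | cpio pipeline did.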

    Read the article

  • Copy-paste speed very slow for a large number of tiny files on Windows but not on Linux

    - by Arno2501
    I've got a folder which contains 15,000 tiny images (around 400 bytes each). If I copy-paste this folder on my laptop (Windows 7, latest-gen i7, superfast SSD) it takes about 30 seconds (yes, for 7 megs!). The average transfer rate is 400 KBytes/second, which is so slow; my usual transfer rate is more like hundreds of MBytes per second! I get the same problem on my servers (Windows 2003, 2008/R2) and on every Windows box that I could get my hands on. On the other hand, if I do the same on a Linux box (Debian-based, ext3 FS, running on the same SAN as all the Windows servers I've tested), it's nearly instantaneous! I'm sure the size/number of the files may stress one filesystem more than another, but such differences!? Why is it so slow on the Windows boxes (more than 30 sec for 7 MB) and so fast on the Linux ones (a second or so)? (I mean, this was a true copy, not a hardlink that I'd created.) Is this normal behaviour or something unusual?

    Read the article

  • Symantec NetBackup restore - Incremental backup

    - by w0051977
    We are using NetBackup as a corporate solution. Incremental backups are taken daily during the week and then a full backup is done at the weekend (Saturday). My colleague has restored a folder to how it stood at 14:00 on a Tuesday. The problem is that the restore pulls in files from the weekend full backup even if they no longer existed at the point in time being restored. For example, the folder we are restoring should look like this (this is how it looked on Tuesday at 14:00):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt

    The folder looked like this at the weekend when the full backup ran (Test3.txt still existed at that point):

        Folder1 (folder name)
            Test.txt
            Test1.txt
            Test2.txt
            Test3.txt

    The actual restored folder looks like the weekend version above, including Test3.txt, which should not have been restored because it did not exist at the point in time of the restore. Is there a setting somewhere that we are missing? The folder in question is 200GB; the example above is for simplification. I realise this is a basic question.

    Read the article

  • What's the fastest way to store/access large files?

    - by philfreo
    I do a lot of video editing on my Mac and need a way to store very large (30 GB) files, and don't have room on my HD. A USB/Firewire external hard drive would work, but it seems way too slow for consistently working with such large files. I've also considered buying another computer, with a large hard drive, and putting it on the same network with a shared folder. What's the fastest / most efficient way to do this? Please consider USB 2.0 speeds, hard drive read times, ethernet speeds, etc. Are there other options I should consider?

    Read the article

  • List/remove files whose names contain a date stamp more than a month old?

    - by Martin Tóth
    I store some data in files which follow this naming convention:

        /interesting/data/filename-YYYY-MM-DD-HH-MM

    How do I look for the ones whose date in the file name is older than now minus one month, and delete them? Files may have changed since they were created, so searching by last modification date is no good. What I'm doing now is filtering them in Python:

        import commands
        import os
        from datetime import datetime, timedelta

        prefix = '/interesting/data/filename-'
        names = commands.getoutput('ls {0}*'.format(prefix)).splitlines()
        all_files = map(lambda name: {
            'name': name,
            'date': datetime.strptime(name, '{0}%Y-%m-%d-%H-%M'.format(prefix))
        }, names)
        month = datetime.now() - timedelta(days=30)
        to_delete = filter(lambda item: item['date'] < month, all_files)
        for item in to_delete:
            os.remove(item['name'])   # remove by name; passing the dicts themselves to os.remove would fail

    Is there a (one-liner) bash solution for this?
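    A bash sketch, assuming GNU date (on OS X/BSD the equivalent is date -v-30d): since YYYY-MM-DD-HH-MM sorts lexicographically, a plain string comparison against a cutoff stamp is enough:

        cutoff=$(date -d '30 days ago' +%Y-%m-%d-%H-%M)      # GNU date syntax
        for f in /interesting/data/filename-*; do
            [[ "${f##*/filename-}" < "$cutoff" ]] && rm -- "$f"
        done

    Collapsed onto one line this is the one-liner asked for; the loop form is shown only for readability.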

    Read the article

  • Seeking a solution to automatically copy files from a CD-ROM to a USB drive once it's connected

    - by Ray Nathan
    I plan to distribute a free CD that automatically copies files to a connected USB device. This process will be done on the computers of the users that obtain the CD. The CD will contain an autorun.inf file that will instruct the computer to copy a set of files located on the CD to a specific directory on the connected USB device. The USB drive letter is not the same on all systems, so Windows XP should automatically work out the drive letter of the USB device before the copy operation begins. What would be the best way of creating a short batch file or script that I can place on the CD to execute this process? Also, please note that it is NOT feasible or recommended to include a batch file on the USB devices to sync this operation, for the reason explained at the beginning of this paragraph. :) Thank You All

    Read the article

  • How do I recover files from a corrupt VDI file?

    - by Eric P
    Is it possible to repair a corrupt VDI file? The OS on the VDI (XP) doesn't boot at all; it just hangs at a black screen. I was getting file errors on its last boot, but now it's not working at all. A sector viewer shows 'Invalid partition table / Error loading operating system / Missing operating system'. I tried mounting the file from the host OS, but it just says that the drive isn't formatted. I don't need to be able to run the VDI, but I do need some files that are on it. Is there any way to recover files from the corrupt VDI file?
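    One avenue worth sketching (hedged, since the damage may go beyond the partition table): convert the VDI to a raw disk image with VirtualBox's own tooling, then point a recovery tool at the flat image:

        VBoxManage clonehd corrupt.vdi recovered.img --format RAW
        # then scan recovered.img with testdisk (partition repair) or photorec (file carving)

    testdisk can often rebuild a lost partition table; photorec ignores filesystem state entirely and carves recognisable file types out of the raw bytes.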

    Read the article

  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a new machine)

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet, so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and the new Windows 7 (64-bit) machine. So that I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have:

        - put both computers into the same workgroup
        - put the Windows 7 machine into "work network" mode so it can see the XP machine in the workgroup
        - shared the XP disk as read only

    But when I try to access a lot of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine. The XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on, so it seems that even if I create an admin account (with password) and connect with that account, it still comes in as "guest" on the XP machine. Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but is a pain as I need to share each user's folder with a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.

    Read the article

  • How can I audit a Linux filesystem for files which have been changed or added within a specific time period?

    - by Bcos
    We are a website design/hosting company running several sites on a Linux server using Joomla 1.5.14, and recently someone was able to exploit a vulnerability in the RW Cards component to write arbitrary files to, and modify existing files on, our filesystem, enabling them to do some nasty things to our customers' sites. We have removed the vulnerable modules from all sites but are still seeing some problems. We suspect that they still have some scripts installed and need a way to audit anything that has been changed or added in the last 10 days. Is there a command or script we can run to do this?
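    A starting point, sketched with plain find (note that an attacker can set a file's mtime, whereas ctime is updated by the kernel and is harder to forge):

        # files whose metadata changed in the last 10 days; -xdev stays on one filesystem
        find / -xdev -type f -ctime -10
        # the same check against modification time, if you trust it
        find / -xdev -type f -mtime -10

    Comparing the output against a known-good backup, or against the package manager's file manifests, separates legitimate changes from planted scripts.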

    Read the article

  • How can I evaluate the best choice of archive format for compressing files?

    - by Mehrdad
    In general, I've observed the following:

        - Linux-y files or tools use bzip2 or gzip for distributing archives
        - Windows-y files or tools use ZIP for distributing archives
        - Many people use 7-Zip for creating and distributing their own archives

    Questions:

        - What are the advantages and disadvantages of these formats, all of which appear to be open formats?
        - When/why should I choose one (say, 7-Zip) over another (say, ZIP)?
        - Why does the trend above appear to hold, even though all of these are portable formats?
        - Are there any particular advantages to using a particular archive format on a particular platform?

    Read the article

  • How do I print TIFF files using Microsoft Office Document Imaging?

    - by Think Floyd
    OS: Vista and Windows 7. I have Microsoft Office Document Imaging installed, and the .tif and .tiff file associations are set to Microsoft Office Document Imaging. When I open a TIFF file, it opens in Microsoft Office Document Imaging. Good so far. However, when I right-click on a TIFF file and invoke print, I see a "Print Pictures" dialog ("How do you want to print your pictures?"). I have some applications installed on my machine that print incoming TIFF files on the printer. They work fine on XP; however, on Vista and Windows 7 I get this "Print Pictures" prompt requiring user intervention (i.e. a click on the Print button). How do I get rid of this "Print Pictures" prompt?

    Read the article

  • How do I set the umask for files and directories created from the GUI in MacOS X Lion (10.7)?

    - by Avry
    I've set my umask in my .bashrc file to 007. Any files created on the command line after loading my .bashrc respect this setting. I want to be able to set the umask to 007 for any files created using non-command-line apps as well. This document talks about setting the umask via launchd, and it kind of works: if I follow those directions I can change the default permissions on a GUI-created file from rw-r--r-- to rw-rw----, but the directories still are not group-writable (i.e. I want them to be rwxrwx--- but they are rwxr-x--- instead). The analogue on Linux would be /etc/login.defs as the place to set the umask. What do I change in order for the umask to be set properly (i.e. the way I want it)?
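    A point worth keeping in mind when testing, shown here as a terminal sketch: a umask can only clear permission bits from the mode a program requests, never add them, so a directory created with a requested mode of 0755 stays rwxr-x--- under umask 007:

        umask 007
        mkdir demo          # mkdir requests 0777; 0777 & ~007 = 0770 (rwxrwx---)
        touch demo/file     # touch requests 0666; 0666 & ~007 = 0660 (rw-rw----)
        ls -ld demo demo/file

    If a GUI app asks for 0755 directories explicitly, no umask setting will make them group-writable; that has to be fixed after the fact (e.g. with chmod g+w).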

    Read the article

  • What is the best filesystem for storing thousands of files in one dictionary-like id-blob structure?

    - by Ivan
    What filesystem best suits my needs?

        - Thousands or even millions of files in one directory.
        - Good reliability (ext4/NTFS level or close, including fault tolerance) and access speed.
        - No directories actually needed, and no descriptive names; a dictionary-like structure of id-blob pairs is all I need.
        - No links, attributes, or access-control features needed.

    The purpose is a file storage where all the metadata (data describing all the facts about what the file actually contains and who can access it) is stored in a MySQL database. As far as I know, common filesystems like NTFS and ext3/4 can go dead slow if too many files are placed in one directory; that's why I ask.
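    Whatever filesystem is chosen, a common way to sidestep the huge-directory problem entirely is to fan the ids out across a couple of short nested levels, the way git stores its objects; a minimal bash sketch (the layout and names are illustrative, not prescribed):

        id="3fa4b2c9d8e1"                   # hypothetical content id
        dir="store/${id:0:2}/${id:2:2}"     # e.g. store/3f/a4
        mkdir -p "$dir" && cp blob "$dir/$id"

    With two two-hex-digit levels, each directory ends up holding roughly total/65536 files, a size that ext3/4 and NTFS both handle comfortably.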

    Read the article

  • Why does the command forfiles list files that are not a day old despite being told otherwise?

    - by PeanutsMonkey
    The command I am executing is:

        forfiles -p"C:\testdata" -m*.* -d-1 -c"cmd /c echo @PATH\@FILE"

    I have specified that I wish to list only files that are at least a day old, yet when I execute the statement it returns a list of files that were created today. Why is that? Am I doing something wrong? Would it be better to specify a time period as opposed to a date, e.g. 24 hours? The version of forfiles I have reports itself as: FORFILES v 1.1 - [email protected] - 4/98. The batch file is being run on Windows XP.

    Read the article

  • How do I catalog files on several external hard drives that I want to store off-line? (OS X)

    - by raudi
    My partner, an artist, has more than 10 external hard disks, both USB and FireWire, and every 2-3 months a new one has to be added (she works with videos and pictures). Currently it's 10TB and growing, so too much for an affordable NAS. Right now the files are not indexed and, I think, cannot be searched with Spotlight, because not all drives can be connected at the same time. So if she wants to search for a file, she has to guess which disk or disks (based mostly on the date) and then search several drives. Now I'm looking for a solution to index/catalog the drives, something like GentibusCD, Cathy, or Disclib (all of these are unfortunately Windows-only). Is there any software for OS X that will catalog all the hard drives, so she can search the catalog, find the files, and get the ID of the disk / disk name that has the content? Preferably something with a GUI so my partner can also use it easily, and preferably with thumbnails for pictures/videos. (But even an equivalent of "tree /F /A" would be better than nothing.)
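    In a pinch, the "tree /F /A" equivalent can be had with stock OS X tools: snapshot each drive's file list into a text file while it is mounted, then search the catalogs with the drives unplugged. A minimal sketch (drive and search names are illustrative):

        mkdir -p ~/catalogs
        find "/Volumes/Archive01" > ~/catalogs/Archive01.txt   # repeat once per drive
        grep -i "projectname" ~/catalogs/*.txt                 # later: which drive holds it?

    No thumbnails, but each match is prefixed with the catalog file name, which identifies the drive.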

    Read the article

  • How to Merge Data From Multiple Excel Files into a Single Excel File or Access Database?

    - by lalabeans
    I have a few dozen Excel files which are all of the same format (i.e. 4 worksheets per Excel file). I need to combine all the files into one master file, which must have just 2 of the 4 worksheets. The corresponding worksheets in each Excel file are named exactly the same, as are the column headers. While each file is structured the same, the information within sheets 1 and 2 (for example) is different. So it can't be combined into one file with everything in one sheet! I've never used VBA before and I'm wondering where I might start this task!

    Read the article

  • How can I share files from my Windows 7 machine to my friend's Ubuntu machine?

    - by ProfKaos
    I run a Windows 7 Pro SP1 laptop as my home machine, and my housemate runs an Ubuntu 12.04.1.05 desktop. We share a WLAN. I would like to make certain locations and files available for him to read and maybe write. How can I go about this? Bear in mind I have very little recent experience with modern Linux, and Ubuntu in particular. My first idea is to share a Windows folder with my Ubuntu VM under VMware Player; then his Ubuntu machine can connect to my Ubuntu VM, and the two can use whatever magic Ubuntu uses to achieve file sharing. This requires my Ubuntu VM to be always running, though, and that may not always be possible. I have also heard that Samba may have a feature to help here, but I know nothing about that. How can I share my Windows files with my mate's Ubuntu machine, preferably with a 1-to-1 connection, i.e. rather not using shim VMs?
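    The direct route needs no VM: Windows 7's built-in file sharing speaks the same SMB/CIFS protocol that Ubuntu understands via Samba's client side. After sharing a folder in Windows, a sketch of mounting it on the Ubuntu box (the share and host names here are hypothetical):

        sudo apt-get install cifs-utils
        sudo mkdir -p /mnt/win7share
        sudo mount -t cifs //WIN7-LAPTOP/shared /mnt/win7share -o username=winuser,uid=$(id -u)

    For casual browsing, the Ubuntu file manager can also open smb://WIN7-LAPTOP/shared directly without mounting anything.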

    Read the article

  • How to rename a file inside a folder using a shell command?

    - by Leonid Shevtsov
    I have a file at some/long/path/to/file/myfiel.txt. I want to rename it to some/long/path/to/file/myfile.txt. Currently I do it with mv some/long/path/to/file/myfiel.txt some/long/path/to/file/myfile.txt, but typing the path twice isn't terribly efficient (even with tab completion). How can I do this faster? (I think I could write a function to change only the filename segment, but that's plan B.)
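    Assuming the shell is bash (or another shell with brace expansion), the path only needs typing once; the shell expands the braces into both full arguments before mv runs:

        mv some/long/path/to/file/{myfiel,myfile}.txt

    This expands to exactly the two-path mv above.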

    Read the article

  • Is there a way to create a cmd shortcut for a specific folder on W7 and/or W8?

    - by Hinstein
    Let's say I have 3 different folders that I want to access with cmd:

        C:\Users\Henok\Documents\Visual Studio 2012\Projects\TestApp1\Debug
        C:\Users\Henok\Documents\Visual Studio 2012\Projects\TestApp2\Debug
        C:\Users\Henok\Documents\Visual Studio 2012\Projects\TestApp3\Debug

    I wonder if there is a way to create 3 different cmd shortcuts to access those directories (folders) individually, without changing the default cmd directory location. Forgive me for my broken English, and thanks for your time.

    Read the article
