Search Results

Search found 40999 results on 1640 pages for 'duplicate files'.


  • Rename files and directories using substitution and variables

    - by rednectar
    I have found several similar questions that have solutions, except they don't involve variables. I have a particular pattern in a tree of files and directories - the pattern is the word TEMPLATE. I want a script file to rename all of the files and directories by replacing the word TEMPLATE with some other name that is contained in the variable ${newName}. If I knew that the value of ${newName} was, say, "Fred lives here", then the command

        find . -name '*TEMPLATE*' -exec bash -c 'mv "$0" "${0/TEMPLATE/Fred lives here}"' {} \;

    will do the job. However, if my script is:

        newName="Fred lives here"
        find . -name '*TEMPLATE*' -exec bash -c 'mv "$0" "${0/TEMPLATE/${newName}}"' {} \;

    then the word TEMPLATE is replaced by null rather than "Fred lives here". I need the "" around $0 because there are spaces in the path names, so I can't do something like:

        find . -name '*TEMPLATE*' -exec bash -c 'mv "$0" "${0/TEMPLATE/"${newName}"}"' {} \;

    Can anyone help me get this script to work so that all files and directories that contain the word TEMPLATE have TEMPLATE replaced by whatever the value of ${newName} is? For example, if newName="A different name" and I had a directory /foo/bar/some TEMPLATE directory/with files, then the directory would be renamed to /foo/bar/some A different name directory/with files, and a file called some TEMPLATE file would be renamed to some A different name file.
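
    The single quotes keep the parent shell from expanding ${newName} before the inner bash runs, and the inner bash has no variable of that name, which is why the replacement comes out empty; exporting the variable, or passing it to bash -c as an extra argument, are the usual workarounds. Not the bash one-liner the question asks for, but the same rename operation sketched in Python for illustration (hypothetical tree rooted at the current directory, replacement text in new_name):

        import os

        new_name = "Fred lives here"   # stands in for ${newName}

        # Walk bottom-up so entries are renamed before their parent directories.
        for root, dirs, files in os.walk(".", topdown=False):
            for entry in files + dirs:
                if "TEMPLATE" in entry:
                    src = os.path.join(root, entry)
                    dst = os.path.join(root, entry.replace("TEMPLATE", new_name))
                    os.rename(src, dst)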

  • Looping through a directory on the web and displaying its contents (files and other directories) via

    - by al jaffe
    In the same vein as http://stackoverflow.com/questions/2593399/process-a-set-of-files-from-a-source-directory-to-a-destination-directory-in-pyth, I'm wondering if it is possible to create a function that, when given a web directory, will list out the files in said directory. Something like...

        files[]
        for file in urllib.listdir(dir):
            if file.isdir:
                # handle this as directory
            else:
                # handle as file

    I assume I would need to use the urllib library, but there doesn't seem to be an easy way of doing this, that I've seen at least.
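
    There is no urllib.listdir; HTTP itself has no directory-listing call, so the usual workaround is to fetch the directory's auto-generated index page (if the server exposes one) and scrape its links, treating entries that end in "/" as subdirectories. A rough Python 3 sketch under that assumption, with example.com standing in for the real server:

        from html.parser import HTMLParser
        from urllib.request import urlopen

        class LinkCollector(HTMLParser):
            """Collect href targets from an auto-index page."""
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    self.links.extend(v for k, v in attrs if k == "href")

        def listdir(url):
            parser = LinkCollector()
            parser.feed(urlopen(url).read().decode("utf-8", "replace"))
            for href in parser.links:
                if href.startswith(("?", "/")):   # skip sort links, parent dir, etc.
                    continue
                yield href, href.endswith("/")    # (name, is_directory)

        for name, is_dir in listdir("http://example.com/pub/"):
            print(("dir  " if is_dir else "file ") + name)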

  • Error when trying to access Shared files from iMac via smb

    - by SatheeshJM
    I used to access all my Windows XP shared files on my Mac using Finder -- Window -- Connect to server. Now all of a sudden, an error crops up when I try to connect. I get the error "There was a problem connecting to the server "192.168.1.*". The server may not exist or it is unavailable at this time. Check the server name or IP address, check your internet connection and then try again." How can I remove this error and access my shared files from my Mac? P.S. my network connection is fine.

  • Search Files (Preferably with index) on Windows 2000 Server

    - by ThinkBohemian
    I have many files on a Windows 2000 Server machine that is set up to act as a networked disk drive. Is there any way I can index the files and make that index available as a search to more people than just me? Bonus if the index can look inside of documents such as readme.txt. If there is no easy way to do this globally (for all users), is there a way I could generate and store an index locally on my computer? If this is the wrong place to ask this question, any advice on a community more suited?

  • Making archive from files with same names in different directories

    - by Tim
    Hi, I have some files with the same names but under different directories. For example: path1/filea, path1/fileb, path2/filea, path2/fileb, .... What is the best way to make these files into an archive? Under these directories, there are other files that I don't want to put into the archive. Off the top of my head, I think of using Bash, probably ar, tar and other commands, but am not sure how exactly to do it. Renaming the files seems to make the file names a little complicated, so I tend to keep the directory structure inside the archive. Or I might be wrong. Other ideas are welcome! Thanks and regards!
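
    Since tar takes an explicit list of paths and stores each one under its relative name, listing just the wanted files keeps the directory structure and leaves everything else out. A minimal sketch of the same idea with Python's tarfile module, assuming the example paths from the question and an archive name of archive.tar.gz:

        import tarfile

        # Only these files go into the archive; other files in the same
        # directories are simply never added.
        wanted = ["path1/filea", "path1/fileb", "path2/filea", "path2/fileb"]

        with tarfile.open("archive.tar.gz", "w:gz") as tar:
            for path in wanted:
                tar.add(path)   # stored under its original relative path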

  • IE8 Unable to download files

    - by jetgunner
    I recently installed Windows 7. I can browse to any webpage using IE8, but if I click on any links to download files, I receive the following error: Unable to download [filename] from [website]. Unable to open this Internet site. The requested site is either unavailable or cannot be found. Please try again later. I can download files perfectly fine using firefox, it's just IE that is having issues. There are no messages in the windows event log. I have no add-ins installed and have made no security changes as this is a fresh install. Any ideas?

  • Haskell lazy I/O and closing files

    - by Jesse
    I've written a small Haskell program to print the MD5 checksums of all files in the current directory (searched recursively). Basically a Haskell version of md5deep. All is fine and dandy except if the current directory has a very large number of files, in which case I get an error like:

        <program>: <currentFile>: openBinaryFile: resource exhausted (Too many open files)

    It seems Haskell's laziness is causing it not to close files, even after its corresponding line of output has been completed. The relevant code is below. The function of interest is getList.

        import qualified Data.ByteString.Lazy as BS

        main :: IO ()
        main = putStr . unlines =<< getList "."

        getList :: FilePath -> IO [String]
        getList p =
            let getFileLine path = liftM (\c -> (hex $ hash $ BS.unpack c) ++ " " ++ path) (BS.readFile path)
            in mapM getFileLine =<< getRecursiveContents p

        hex :: [Word8] -> String
        hex = concatMap (\x -> printf "%0.2x" (toInteger x))

        getRecursiveContents :: FilePath -> IO [FilePath]
        -- ^ Just gets the paths to all the files in the given directory.

    Are there any ideas on how I could solve this problem? The entire program is available here: http://haskell.pastebin.com/PAZm0Dcb
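
    The lazy BS.readFile opens each handle right away but only reads and closes it when the contents are demanded, so mapM over every path opens them all before any output forces them. Not a Haskell answer, but the behaviour being asked for, hash one file and close it before the next one is opened, looks like this in a Python sketch (hashing comes from hashlib rather than the question's helpers):

        import hashlib
        import os

        def md5_lines(top="."):
            # One file is open at a time: it is read in chunks and closed
            # before the next path is visited, so handles never pile up.
            for root, _dirs, files in os.walk(top):
                for name in files:
                    path = os.path.join(root, name)
                    digest = hashlib.md5()
                    with open(path, "rb") as fh:
                        for chunk in iter(lambda: fh.read(1 << 16), b""):
                            digest.update(chunk)
                    yield digest.hexdigest() + " " + path

        for line in md5_lines("."):
            print(line)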

  • Ignoring generated files when using "Treat warnings as errors"

    - by krystan honour
    We have started a new project but also have this problem for an existing project. The problem is that when we compile with a warning level of 4 we also want to switch on 'Treat all warnings as errors'. We are unable to do this at the moment because generated files (in particular reference.cs files) are missing things like XML comments, and this generates a warning. We do not want to suppress the XML comment warnings across all files, only for specific types of files (namely generated code). I have thought of a way this could be achieved but am not sure if it is the best way to do this or indeed where to start :) My thinking is that we need to do something with T4 templates for the code that is generated such that it does fill in XML documentation for generated code. Does anyone have any ideas? Currently I'm at well over 2k warnings (it's a big project) :(

  • Cannot chown my own files from NFS

    - by valpa
    We have an NFS server providing home directories for many accounts, which are provided by a NIS server. I have accounts A and B. In /home/A, I try to copy "cp -a /home/B/somedir ~/". Then I find that in /home/A/somedir, all files are owned by user A. If I then do "chown -R B:B somedir", I get an "Operation not permitted" error. I am user A, "cp -a" didn't preserve the original owner (B), and now I cannot chown my own files. Any suggestion? I worked around my own issue with "chmod 777 /home/A", "su - B" and "cp -a somedir /home/A/", then "su - A" and "chmod 755 /home/A". But it is not a good solution.

  • How to copy protected files when an Administrator in Vista (easily)

    - by earlz
    Hello, I have a hard drive I need to back up. On the hard drive are, of course, things like Documents and Settings, which is set to not allow other people to see inside someone's personal folders. I am an administrator, though, and I cannot figure out how to mark these files so that I am permitted to access them and copy them. When I double click on My Documents, a dialog pops up saying I must have permission to access this and gives me an option like OK or Cancel. I click OK and then it says I do not have permission to access these files. I'm an administrator on the system, so I don't understand why Vista is locking me out. How can I set up Vista so that it will let me copy every file, even ones I don't have permission to?

  • puppet agent doesn't retrieve files from master

    - by nicmon
    I have a very basic question regarding Puppet 3.0.1 configuration. I set up a puppet master server (CentOS) with 2 agents (CentOS and Windows 7); all 3 can ping and access each other. There is no error at all. I have copied a file to /etc/puppet/files/test2.txt. My site.pp (/etc/puppet/manifests) contains these lines:

        node default {
          include test
          file { "/tmp/testmaster.txt":
            owner  => root,
            group  => root,
            mode   => 644,
            source => "puppet:///files/test2.txt"
          }
        }

    but no file gets created on the agent servers under /tmp/ once I run "puppet agent --test". Here is the output:

        [root@agent1 ~]# puppet agent --test
        Info: Retrieving plugin
        Info: Caching catalog for agent1.mydomain.com
        Info: Applying configuration version '1354267916'
        Finished catalog run in 0.02 seconds

    "puppet apply /etc/puppet/manifests/site.pp" creates the testmaster.txt under /tmp/ on the master.

  • Preventing logrotate's dateext from overwriting files

    - by Thirler
    I'm working with a system where I would like to use the dateext function of logrotate (or some other way) to add the date to a logfile when it is rotated. However, in this system it is important that no logging goes missing, and dateext will overwrite any existing files (which will happen if logrotate is called twice in a day). Is there a reliable way to prevent dateext from overwriting existing files and instead make another file? It is acceptable that either no rotation happens or a file is created with a less predictable name (the date with an extra number, or the time, or something).
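
    As a rough illustration of the naming scheme being asked for (not a logrotate feature), here is a Python sketch of a rename that keeps the date in the name but appends a counter instead of overwriting when the dated name already exists:

        import os
        from datetime import date

        def rotate(path):
            # e.g. messages -> messages-20240131, or messages-20240131.1 if an
            # earlier rotation the same day already produced messages-20240131.
            base = "%s-%s" % (path, date.today().strftime("%Y%m%d"))
            candidate, n = base, 0
            while os.path.exists(candidate):
                n += 1
                candidate = "%s.%d" % (base, n)
            os.rename(path, candidate)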

  • Django fails to find static files served by nginx

    - by Simon
    I know this is a really noobish question but I can't find any solution despite finding the problem trivial. I have a Django application deployed with gunicorn. The static files are served by the nginx server with the following URL: myserver.com/static/admin/css/base.css. However, my Django application keeps looking for the static files at myserver.com:8001/static/admin/css/base.css and is obviously failing (404). I don't know how to fix this. Is it a Django or an nginx problem? Here is my nginx configuration file:

        server {
            server_name myserver.com;
            access_log off;

            location /static/ {
                alias /home/myproject/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    Thanks for the help!
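
    Since the static URLs end up carrying the backend's :8001 port, the Django side is worth checking: with a relative STATIC_URL the browser requests static files from the same host and port that served the page, which is the nginx front end. A sketch of the relevant settings, with names assumed to match the nginx config above:

        # settings.py (sketch)
        STATIC_URL = "/static/"                   # relative: no host or port baked in
        STATIC_ROOT = "/home/myproject/static/"   # where collectstatic puts files,
                                                  # matching the nginx "alias" above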

  • Which open source repository or version control systems store files' original mtime, ctime and atime

    - by sampablokuper
    I want to create a personal digital archive. I want to be able to check digital files (some several years old, some recent, some not yet created) into that archive and have them preserved, along with their metadata such as ctime, atime and mtime. I want to be able to check these files out of that archive, modify their contents and commit the changes back to the archive, while keeping the earlier commits and their metadata intact. I want the archive to be very reliable and secure, and able to be backed up remotely. I want to be able to check files in and out of the archive from PCs running Linux, Mac OS X 10.5+ or Win XP+. I want to be able to check files in and out of the archive from PCs with RAM capacities lower than the size of the files. E.g. I want to be able to check in/out a 13GB file using a PC with 2GB RAM. I thought Subversion could do all this, but apparently it can't. (At least, it couldn't a couple of years ago and as far as I know it still can't; correct me if I'm wrong.) Is there a libre VCS or similar capable of all these things? Thanks for your help.
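
    For context on what preserving this metadata involves: most version control systems set a checked-out file's mtime to the checkout time, and ctime cannot be set from user space at all, so tools that keep original timestamps usually record them in a sidecar manifest that is itself versioned. A rough Python sketch of that bookkeeping (hypothetical manifest name):

        import json
        import os

        def snapshot_times(paths, manifest="times.json"):
            # Record atime/mtime per file before committing. ctime can be read
            # (os.stat) but not written back, so it is not captured here.
            data = {p: {"atime": os.stat(p).st_atime, "mtime": os.stat(p).st_mtime}
                    for p in paths}
            with open(manifest, "w") as fh:
                json.dump(data, fh, indent=2)

        def restore_times(manifest="times.json"):
            # Reapply the recorded timestamps after a checkout.
            with open(manifest) as fh:
                for path, t in json.load(fh).items():
                    os.utime(path, (t["atime"], t["mtime"]))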

  • RealPath returns an empty string

    - by Abs
    Hello all, I have the following which just loops through the files in a directory and echoes the file names. However, when I use realpath, it returns nothing. What am I doing wrong:

        if ($handle = opendir($font_path)) {
            while (false !== ($file = readdir($handle))) {
                if ($file != "." && $file != ".." && $file != "a.zip") {
                    echo $file.'<br />';  // i can see file names fine
                    echo realpath($file); // returns empty string?!
                }
            }
            closedir($handle);
        }

    Thanks all for any help on this. I am on a Windows machine, running PHP 5.3 and Apache 2.2.
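
    readdir() hands back bare file names, and realpath() resolves a bare name against the current working directory rather than against $font_path, returning false (which echoes as an empty string) when no such file exists there; joining the directory and the name first avoids that. The same pitfall and fix, sketched in Python with a hypothetical directory:

        import os

        font_path = "C:/Windows/Fonts"   # hypothetical directory, as in the question

        for name in os.listdir(font_path):
            if name == "a.zip":
                continue
            # A bare name would be resolved against the current working directory,
            # so join it with the directory it was actually read from first.
            print(os.path.realpath(os.path.join(font_path, name)))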

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters?

    I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        header
        CAGTcag
        TFgcACF
        """)

        for read in parse(example_file):
            print read

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance from the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks!

    Note, this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size.

    Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
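
    A minimal sketch of the approach named in the post-answer conclusion (slurp the file, strip the header and newlines with string operations, then slice), with the same size default as the example above:

        def parse_whole_file(fileobj, size=8):
            # Read everything at once, drop the header line and the newlines,
            # then yield every overlapping window of length `size`.
            fileobj.readline()                        # skip the FASTA header
            data = fileobj.read().replace("\n", "").upper()
            for i in range(len(data) - size + 1):
                yield data[i:i + size]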

  • Another "Trouble copying music files from HD to 16GB thumb drive"

    - by Ron
    I have a brand new HP Quad Core - 6GB RAM - running 64-bit Windows 7, and running Norton Internet 2010. I try copying music files from the HD to a 16 GB thumb drive and get a blue screen after a few files have transferred over to the thumb drive. The computer blinks once and boom - blue screen - and then it reboots. This is really aggravating. I stopped the anti-virus and pulled the other USB-attached devices - it still does the same thing. Any solutions out there?

  • CVS list of files only in working directories

    - by Joshua Berry
    Is it possible to get a list of files that are in the working directory tree, but not in the current branch/tag? I currently diff the working copy with another directory updated to the same module and tag/branch but without the local non-repo files. It works, but doesn't honor the .cvsignore files. I figure there must be an option using a variation of 'cvs diff'. Thanks in advance.

  • Wrong owner and group for files created under a samba shared directory

    - by agmao
    I am trying to make writing to a shared samba directory work. I got a very weird problem. Now the shared directory is writable from a client machine. But the files created under the samba share directory have weird owner and group names. I am writing to the shared directory as user mike under the client machine, but the file created always has user and group name as steve instead... Does anybody know why that would happen...? Another thing I just noticed is that on the samba server, the files have owner and user name as samba, which I created for samba clients. Thanks a lot

  • Using windows CopyFile function to copy all files with certain name format

    - by Ben313
    Hello! I am updating some C code that copies files with a certain name. Basically, I have a directory with a bunch of files named like so:

        AAAAA.1.XYZ
        AAAAA.2.ZYX
        AAAAA.3.YZX
        BBBBB.1.XYZ
        BBBBB.2.ZYX

    Now, in the old code, they just used a call to ShellExecute and used xcopy.exe. To get all the files starting with AAAAA, they just gave xcopy the name of the file as AAAAA.* and it knew to copy all of the files starting with AAAAA. Now, I'm trying to get it to copy without having to use the command line, and I am running into trouble. I was hoping CopyFile would be smart enough to handle AAAAA.* as the file to be copied, but it doesn't at all do what xcopy did. So, any ideas on how to do this without the external call to xcopy.exe?
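
    CopyFile expects literal source and destination paths and does not expand wildcards itself; the usual pattern is to enumerate the matches (FindFirstFile/FindNextFile in the Win32 API) and copy each one. The same expand-then-copy idea, sketched in Python with hypothetical directories:

        import glob
        import os
        import shutil

        src_dir = r"C:\data\in"    # hypothetical source directory
        dst_dir = r"C:\data\out"   # hypothetical destination directory

        # Expand the wildcard ourselves, then copy each match individually,
        # which is effectively what xcopy was doing behind the scenes.
        for path in glob.glob(os.path.join(src_dir, "AAAAA.*")):
            shutil.copy(path, os.path.join(dst_dir, os.path.basename(path)))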

  • Protect Files from Git

    - by Tanner
    I'm using Git with WindRiver to manage a project of mine. The code is being managed, however the project files (such as .cproject, .project, .wrmakefile, and .wrproject) are not. However, when I switch branches, Git deletes those files despite them being in .gitignore, thereby removing my ability to compile the code without having to revert commits or keep a backup. So, is there a way to say to Git - ignore these files and don't touch them no matter what?

  • how to find files in a given branch

    - by Haiyuan Zhang
    I noticed that when doing code review, people here in my company usually just give the branch in which their work is done, and nothing else. So I guess there must be an easy way to find out all the files that have a version in the given branch, which is the same thing as finding all the files that have been changed. Yes, I don't know the expected "easy way" to find files in a certain branch, so I need your help, and thanks in advance.

  • How to ignore the .classpath for Eclipse projects using Mercurial?

    - by Feanor
    I'm trying to share a repository between my Mac (laptop) and PC (desktop). There are some external dependencies for the project that are stored in different places on each machine, and noted in the .classpath file in the Eclipse project. When the project changes are shared, the dependencies break. I'm trying to figure out how to keep this from happening. I've tried using .hgignore with the following settings, among others, without success:

        syntax: glob
        *.classpath

    Based on this question, it appears that the .hgignore file will not allow Mercurial to ignore files that are also committed to the repository. Is there another way around this? Other ways to configure the project to make it work?
