Search Results

Search found 51448 results on 2058 pages for 'log files'.


  • Ignoring generated files when using "Treat warnings as errors"

    - by krystan honour
    We have started a new project, but we also have this problem on an existing project. The problem is that when we compile at warning level 4 we also want to switch on 'Treat all warnings as errors'. We are unable to do this at the moment because generated files (in particular reference.cs files) are missing things like XML comments, and this generates a warning. We do not want to suppress the XML comment warnings across all files, just for specific types of files (namely generated code). I have thought of a way this could be achieved but am not sure whether it is the best approach or indeed where to start :) My thinking is that we need to do something with T4 templates so that the generated code does get XML documentation filled in. Does anyone have any ideas? Currently I'm at well over 2k warnings (it's a big project) :(
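
    A hedged sketch of one common workaround (not verified against this project): rather than touching the generated files, keep CS1591 (missing XML comment) out of the treat-as-errors set at the project level, so handwritten files still show the warning but the build no longer fails on generated code:

        <!-- .csproj: escalate warnings to errors except CS1591,
             which the generated reference.cs files trip -->
        <PropertyGroup>
          <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
          <WarningsNotAsErrors>1591</WarningsNotAsErrors>
        </PropertyGroup>

    This is project-wide rather than per-file, so it trades some strictness for not having to regenerate anything.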

    Read the article

  • Making an archive from files with the same name in different directories

    - by Tim
    Hi, I have some files with the same names but under different directories, for example: path1/filea, path1/fileb, path2/filea, path2/fileb, ... What is the best way to make these files into an archive? Under those directories there are other files that I don't want in the archive. Off the top of my head I think of using Bash, probably ar, tar and other commands, but am not sure exactly how to do it. Renaming the files seems to make the file names a little complicated, so I tend to want to keep the directory structure inside the archive. Or I might be wrong. Other ideas are welcome! Thanks and regards!
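
    A minimal sketch (assuming GNU tar and bash; names taken from the example above) that archives only the wanted files while keeping the directory structure:

        # An explicit list keeps paths like path1/filea inside the archive:
        tar czf archive.tar.gz path1/filea path1/fileb path2/filea path2/fileb

        # Or select by name pattern when the list is long ('file[ab]' is
        # just an example pattern; adjust to taste):
        find path1 path2 -maxdepth 1 -name 'file[ab]' -print0 \
          | tar czf archive.tar.gz --null --files-from=-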

    Read the article

  • IE8 Unable to download files

    - by jetgunner
    I recently installed Windows 7. I can browse to any webpage using IE8, but if I click on any link to download a file, I receive the following error:

        Unable to download [filename] from [website].
        Unable to open this Internet site. The requested site is either
        unavailable or cannot be found. Please try again later.

    I can download files perfectly fine using Firefox; it's just IE that is having issues. There are no messages in the Windows event log. I have no add-ons installed and have made no security changes, as this is a fresh install. Any ideas?

    Read the article

  • Cannot chown my own files from NFS

    - by valpa
    We have an NFS server providing home directories for many accounts, which come from a NIS server. I have accounts A and B. As user A, I copy "cp -a /home/B/somedir ~/" and then find that in /home/A/somedir all files are owned by user A. If I then do "chown -R B:B somedir", I get an "Operation not permitted" error. As user A, "cp -a" didn't preserve the original owner (B), and now I cannot chown my own files. Any suggestions? I worked around it with "chmod 777 /home/A", "su - B", "cp -a somedir /home/A/", then "su - A" and "chmod 755 /home/A", but that is not a good solution.
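
    For background: giving a file away with chown is a root-only operation on most Unix systems, and NFS servers usually apply root_squash on top of that, so an unprivileged cp -a cannot keep B as the owner no matter what. A hedged sketch of a less invasive route (assumes root access on the NFS server itself, where root is not squashed):

        # On the NFS server, root can copy and preserve ownership:
        sudo cp -a /home/B/somedir /home/A/somedir
        # ...or fix up a copy that already exists:
        sudo chown -R B:B /home/A/somedir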

    Read the article

  • Haskell lazy I/O and closing files

    - by Jesse
    I've written a small Haskell program to print the MD5 checksums of all files in the current directory (searched recursively). Basically a Haskell version of md5deep. All is fine and dandy except if the current directory has a very large number of files, in which case I get an error like:

        <program>: <currentFile>: openBinaryFile: resource exhausted (Too many open files)

    It seems Haskell's laziness is causing it not to close files, even after its corresponding line of output has been completed. The relevant code is below. The function of interest is getList.

        import qualified Data.ByteString.Lazy as BS

        main :: IO ()
        main = putStr . unlines =<< getList "."

        getList :: FilePath -> IO [String]
        getList p =
            let getFileLine path =
                    liftM (\c -> (hex $ hash $ BS.unpack c) ++ " " ++ path)
                          (BS.readFile path)
            in mapM getFileLine =<< getRecursiveContents p

        hex :: [Word8] -> String
        hex = concatMap (\x -> printf "%0.2x" (toInteger x))

        getRecursiveContents :: FilePath -> IO [FilePath]
        -- ^ Just gets the paths to all the files in the given directory.

    Are there any ideas on how I could solve this problem? The entire program is available here: http://haskell.pastebin.com/PAZm0Dcb
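
    One commonly suggested fix (a sketch, assuming each file fits in memory; hex and hash are the same helpers as in the program above): read each file with the strict ByteString API, which closes the handle before returning instead of leaving that to lazy evaluation:

        import qualified Data.ByteString as SBS  -- strict, not lazy
        import Control.Monad (liftM)

        -- Strict variant of getFileLine: SBS.readFile consumes the whole
        -- file and closes its handle immediately, so the descriptor count
        -- stays bounded no matter how many files are listed.
        getFileLine :: FilePath -> IO String
        getFileLine path =
            liftM (\c -> hex (hash (SBS.unpack c)) ++ " " ++ path)
                  (SBS.readFile path)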

    Read the article

  • puppet agent doesn't retrieve files from master

    - by nicmon
    I have a very basic question regarding Puppet 3.0.1 configuration. I set up a puppet master server (CentOS) with 2 agents (CentOS and Windows 7); all 3 can ping and access each other, and there is no error at all. I have copied a file to /etc/puppet/files/test2.txt, and my site.pp (/etc/puppet/manifests) contains these lines:

        node default {
          include test
          file { "/tmp/testmaster.txt":
            owner  => root,
            group  => root,
            mode   => 644,
            source => "puppet:///files/test2.txt"
          }
        }

    but no file is created under /tmp/ on the agent servers when I run "puppet agent --test". Here is the output:

        [root@agent1 ~]# puppet agent --test
        Info: Retrieving plugin
        Info: Caching catalog for agent1.mydomain.com
        Info: Applying configuration version '1354267916'
        Finished catalog run in 0.02 seconds

    "puppet apply /etc/puppet/manifests/site.pp" does create testmaster.txt under /tmp/ on the master.
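
    One thing worth checking (a sketch, assuming the stock file layout): a puppet:///files/... URL refers to a custom "files" mount that has to be declared on the master in fileserver.conf, e.g.:

        # /etc/puppet/fileserver.conf
        [files]
          path /etc/puppet/files
          allow *

    Alternatively, files placed inside a module are served without extra configuration, via source => "puppet:///modules/<modulename>/test2.txt".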

    Read the article

  • How to copy protected files as an Administrator in Vista (easily)

    - by earlz
    Hello, I have a hard drive I need to back up. On the hard drive are, of course, things like Documents and Settings, which is set to not allow other people to see inside someone's personal folders. I am an administrator, though, and I cannot figure out how to mark these files so that I am permitted to access and copy them. When I double-click on My Documents, a dialog pops up saying I must have permission to access this, with options like OK or Cancel. I click OK and then it says I do not have permission to access these files. I'm an administrator on the system, so I don't understand why Vista is locking me out. How can I set up Vista so that it will let me copy every file, even ones I don't have permission for?
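
    A hedged sketch from an elevated command prompt (the path is an example; adjust to wherever the old drive is mounted): take ownership of the tree, then grant Administrators full control so it can be copied:

        rem Recursively take ownership, answering Yes to any prompts:
        takeown /f "D:\Documents and Settings" /r /d y
        rem Grant Administrators full control over the whole tree:
        icacls "D:\Documents and Settings" /grant Administrators:F /t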

    Read the article

  • Django fails to find static files served by nginx

    - by Simon
    I know this is a really noobish question, but I can't find any solution despite finding the problem trivial. I have a Django application deployed with gunicorn. The static files are served by the nginx server at URLs like myserver.com/static/admin/css/base.css. However, my Django application keeps looking for the static files at myserver.com:8001/static/admin/css/base.css and is obviously failing (404). I don't know how to fix this. Is it a Django or an nginx problem? Here is my nginx configuration file:

        server {
            server_name myserver.com;
            access_log off;

            location /static/ {
                alias /home/myproject/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    Thanks for the help!
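
    One likely culprit (a guess from the symptom, not verified against this setup): Django builds absolute URLs from the Host header it receives, and behind the proxy gunicorn only sees 127.0.0.1:8001 unless nginx forwards the original header. A minimal sketch of the proxy block:

        location / {
            proxy_pass http://127.0.0.1:8001;
            # Forward the original Host header so the app generates
            # myserver.com URLs rather than 127.0.0.1:8001 ones:
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }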

    Read the article

  • Another "Trouble copying music files from HD to 16GB thumb drive"

    - by Ron
    I have a brand new HP quad-core with 6GB RAM, running 64-bit Windows 7 and Norton Internet Security 2010. When I try copying music files from the HD to a 16GB thumb drive, I get a blue screen after a few files have transferred. The computer blinks once and boom: blue screen, then a reboot. This is really aggravating. I stopped the anti-virus and pulled the other USB-attached devices, and it still does the same thing. Any solutions out there?

    Read the article

  • Using the Windows CopyFile function to copy all files with a certain name format

    - by Ben313
    Hello! I am updating some C code that copies files with a certain name. Basically, I have a directory with a bunch of files named like so:

        AAAAA.1.XYZ
        AAAAA.2.ZYX
        AAAAA.3.YZX
        BBBBB.1.XYZ
        BBBBB.2.ZYX

    In the old code, they just used a call to ShellExecute and used xcopy.exe. To get all the files starting with AAAAA, they gave xcopy the name AAAAA.* and it knew to copy all of the files starting with AAAAA. Now I'm trying to get it to copy without having to use the command line, and I am running into trouble. I was hoping CopyFile would be smart enough to handle AAAAA.* as the file to be copied, but it doesn't at all do what xcopy did. So, any ideas on how to do this without the external call to xcopy.exe?
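
    CopyFile takes literal paths only, but the usual pattern is to expand the wildcard yourself with FindFirstFile/FindNextFile and call CopyFile per match. A sketch (minimal error handling; ANSI build and the destination directory are assumptions):

        #include <windows.h>
        #include <stdio.h>

        /* Copy every file matching srcDir\pattern into dstDir. */
        void CopyMatchingFiles(const char *srcDir, const char *pattern,
                               const char *dstDir)
        {
            char search[MAX_PATH], src[MAX_PATH], dst[MAX_PATH];
            WIN32_FIND_DATAA fd;
            HANDLE h;

            _snprintf(search, MAX_PATH, "%s\\%s", srcDir, pattern);
            h = FindFirstFileA(search, &fd);
            if (h == INVALID_HANDLE_VALUE)
                return;  /* nothing matched (or bad path) */
            do {
                if (!(fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)) {
                    _snprintf(src, MAX_PATH, "%s\\%s", srcDir, fd.cFileName);
                    _snprintf(dst, MAX_PATH, "%s\\%s", dstDir, fd.cFileName);
                    CopyFileA(src, dst, FALSE);  /* FALSE = overwrite existing */
                }
            } while (FindNextFileA(h, &fd));
            FindClose(h);
        }

        /* Usage: CopyMatchingFiles("C:\\data", "AAAAA.*", "C:\\backup"); */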

    Read the article

  • Preventing logrotate's dateext from overwriting files

    - by Thirler
    I'm working with a system where I would like to use the dateext function of logrotate (or some other way) to add the date to a logfile's name when it is rotated. However, in this system it is important that no logging goes missing, and dateext will overwrite any existing file (which will happen if logrotate is called twice in one day). Is there a reliable way to prevent dateext from overwriting existing files and instead make another file? It is acceptable that either no rotation happens or a file is created with a less predictable name (the date with an extra number, or the time, or something).
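
    One possibility (a sketch; dateformat's %s specifier needs a reasonably recent logrotate): make the rotated name unique by including the epoch time, so a second rotation on the same day gets a fresh name instead of overwriting the first:

        /var/log/myapp/*.log {    # path is an example
            daily
            rotate 30
            dateext
            # %s = seconds since the epoch
            dateformat -%Y%m%d-%s
        }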

    Read the article

  • How to find files in a given branch

    - by Haiyuan Zhang
    I noticed that when doing code review, people here in my company usually just give the branch in which their work was done, and nothing else. So I guess there must be an easy way to find all the files that have a version on the given branch, which is the same as finding all the files that have been changed on it. I don't know the expected "easy way" to find files on a certain branch, so I need your help. Thanks in advance.
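
    If this is ClearCase (the "version on a branch" phrasing suggests it, though the question doesn't say), a hedged sketch of the usual query, with mybranch standing in for the real branch type:

        # List every element that has at least one version on branch 'mybranch':
        cleartool find . -type f -branch 'brtype(mybranch)' -print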

    Read the article

  • CVS list of files only in working directories

    - by Joshua Berry
    Is it possible to get a list of files that are in the working directory tree, but not in the current branch/tag? I currently diff the working copy with another directory updated to the same module and tag/branch but without the local non-repo files. It works, but doesn't honor the .cvsignore files. I figure there must be an option using a variation of 'cvs diff'. Thanks in advance.
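
    A lighter-weight option (a sketch; exact output may vary by CVS version) is a dry-run update, which prints files unknown to the repository with a leading "?" and does honor .cvsignore:

        # -q quiet, -n dry run: nothing is changed on disk or in the repo
        cvs -qn update | grep '^?'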

    Read the article

  • Which open source repository or version control systems store files' original mtime, ctime and atime

    - by sampablokuper
    I want to create a personal digital archive. I want to be able to check digital files (some several years old, some recent, some not yet created) into that archive and have them preserved, along with their metadata such as ctime, atime and mtime. I want to be able to check these files out of that archive, modify their contents and commit the changes back to the archive, while keeping the earlier commits and their metadata intact. I want the archive to be very reliable and secure, and able to be backed up remotely. I want to be able to check files in and out of the archive from PCs running Linux, Mac OS X 10.5+ or Win XP+. I want to be able to check files in and out of the archive from PCs with RAM capacities lower than the size of the files. E.g. I want to be able to check in/out a 13GB file using a PC with 2GB RAM. I thought Subversion could do all this, but apparently it can't. (At least, it couldn't a couple of years ago and as far as I know it still can't; correct me if I'm wrong.) Is there a libre VCS or similar capable of all these things? Thanks for your help.

    Read the article

  • How to ignore the .classpath for Eclipse projects using Mercurial?

    - by Feanor
    I'm trying to share a repository between my Mac (laptop) and PC (desktop). There are some external dependencies for the project that are stored in different places on each machine, and noted in the .classpath file in the Eclipse project. When the project changes are shared, the dependencies break. I'm trying to figure out how to keep this from happening. I've tried using .hgignore with the following settings, among others, without success:

        syntax: glob
        *.classpath

    Based on this question, it appears that the .hgignore file will not allow Mercurial to ignore files that are also committed to the repository. Is there another way around this? Other ways to configure the project to make it work?
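
    Since ignore patterns never apply to files that are already tracked, the usual route (a sketch with standard Mercurial commands) is to stop tracking .classpath while keeping the working copy:

        hg forget .classpath      # untrack it but leave the file on disk
        hg commit -m "stop tracking .classpath"
        # With *.classpath in .hgignore, each machine now keeps its own copy.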

    Read the article

  • Wrong owner and group for files created under a samba shared directory

    - by agmao
    I am trying to make writing to a shared samba directory work, and I've hit a very weird problem. The shared directory is now writable from a client machine, but the files created under it get the wrong owner and group. I am writing to the shared directory as user mike on the client machine, but the files created always have the owner and group steve instead... Does anybody know why that would happen? Another thing I just noticed: on the samba server itself, the files are owned by the user samba, which I created for samba clients. Thanks a lot.
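
    That symptom usually points at a force user / force group directive in the share definition, which makes every file be created as that account regardless of who connects. A hedged sketch of what to look for in smb.conf (share name and path are examples):

        [shared]
            path = /srv/samba/shared
            writable = yes
            ; if present, these override the connecting user:
            ; force user  = steve
            ; force group = steve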

    Read the article

  • Can't access my files in ASP.NET web site

    - by jumbojs
    I'm having a very difficult time. I am running Windows Server 2008 and have an AbleCommerce site using ASP.NET with C#. I'm writing an automated task that will FTP some XML files down into a local directory on our web server; the program then parses each XML file and saves its information to our database. The problem: once I save the files to our local directory, my program has no access to them. The NETWORK SERVICE user permissions aren't being inherited by the XML files, so my program can't do anything with them. I can manually change the permissions, but that wouldn't be automated and won't work. How can I get this to work? Help please, it's very frustrating.
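
    A hedged sketch (the drop directory is an example path): grant NETWORK SERVICE an inheritable ACE on the directory once, so files created in it afterwards pick up the permission automatically; /t also fixes the files already there:

        icacls "C:\inetpub\ftpdrop" /grant "NETWORK SERVICE":(OI)(CI)M /t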

    Read the article

  • Scalable (half-million files) version control system

    - by hashable
    We use SVN for our source-code revision control and are experimenting with using it for non-source-code files. We are working with a large set (300-500k) of short (1-4kB) text files that will be updated on a regular basis and need to be version controlled. We tried using SVN in flat-file mode and it is struggling to handle the first commit (500k files checked in), taking about 36 hours. On a daily basis, we need the system to handle 10k modified files per commit transaction in a short time (<5 min). My questions: Is SVN the right solution for my purpose? The initial speed seems too slow for practical use. If yes, is there a particular svn server implementation that is fast? (We are currently using the gnu/linux default svn server and command-line client.) If no, what are the best f/oss/commercial alternatives? Thanks

    Read the article

  • Efficient file buffering & scanning methods for large files in python

    - by eblume
    The description of the problem I am having is a bit complicated, and I will err on the side of providing more complete information. For the impatient, here is the briefest way I can summarize it: What is the fastest (least execution time) way to split a text file into ALL (overlapping) substrings of size N (bound N, e.g. 36) while throwing out newline characters? I am writing a module which parses files in the FASTA ascii-based genome format. These files comprise what is known as the 'hg18' human reference genome, which you can download from the UCSC genome browser (go slugs!) if you like. As you will notice, the genome files are composed of chr[1..22].fa and chr[XY].fa, as well as a set of other small files which are not used in this module. Several modules already exist for parsing FASTA files, such as BioPython's SeqIO. (Sorry, I'd post a link, but I don't have the points to do so yet.) Unfortunately, every module I've been able to find doesn't do the specific operation I am trying to do. My module needs to split the genome data ('CAGTACGTCAGACTATACGGAGCTA' could be a line, for instance) into every single overlapping N-length substring. Let me give an example using a very small file (the actual chromosome files are between 355 and 20 million characters long) and N=8:

        import cStringIO
        example_file = cStringIO.StringIO("""\
        >header
        CAGTcag
        TFgcACF
        """)
        for read in parse(example_file):
            print read

        CAGTCAGTF
        AGTCAGTFG
        GTCAGTFGC
        TCAGTFGCA
        CAGTFGCAC
        AGTFGCACF

    The function that I found had the absolute best performance of the methods I could think of is this:

        def parse(file):
            size = 8  # of course in my code this is a function argument
            file.readline()  # skip past the header
            buffer = ''
            for line in file:
                buffer += line.rstrip().upper()
                while len(buffer) >= size:
                    yield buffer[:size]
                    buffer = buffer[1:]

    This works, but unfortunately it still takes about 1.5 hours (see note below) to parse the human genome this way. Perhaps this is the very best I am going to see with this method (a complete code refactor might be in order, but I'd like to avoid it, as this approach has some very specific advantages in other areas of the code), but I thought I would turn this over to the community. Thanks! Note: this time includes a lot of extra calculation, such as computing the opposing strand read and doing hashtable lookups on a hash of approximately 5G in size. Post-answer conclusion: It turns out that using fileobj.read() and then manipulating the resulting string (string.replace(), etc.) took relatively little time and memory compared to the remainder of the program, and so I used that approach. Thanks everyone!
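
    For reference, a minimal sketch of the read()-then-replace approach the conclusion describes (assumes the file fits in memory and that the first line is the FASTA header; Python 2, to match the code above):

        def parse_whole(f, size=8):
            f.readline()  # skip the FASTA header line
            # One big string with the newlines stripped out:
            data = f.read().replace('\n', '').upper()
            # Yield every overlapping window of length 'size':
            for i in xrange(len(data) - size + 1):
                yield data[i:i + size]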

    Read the article

  • Mysterious xyz.event files appearing

    - by Pekka
    I am getting mysterious .event files - always empty, created by me a few weeks ago - in several local project directories. They are all Subversion checkouts. They are always named after the directory they reside in, so a directory named pagination will contain a pagination.event file. Does anybody know what this is? Possibly important information: I am working on a Windows 7 workstation; I use NuSphere's PHP IDE (no updates recently); I use TortoiseSVN for version control; and I recently set up a Windows 7 backup job that ran once, I can't remember exactly when. The .event files seem to turn up only in repositories, and there is no external access to those repositories.

    Read the article

  • Protect Files from Git

    - by Tanner
    I'm using Git with WindRiver to manage a project of mine. The code is being managed, but the project files (such as .cproject, .project, .wrmakefile, and .wrproject) are not. However, when I switch branches, Git deletes those files despite them being in .gitignore, thereby removing my ability to compile the code without reverting commits or keeping a backup. So, is there a way to tell Git: ignore these files and don't touch them no matter what?
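
    .gitignore only affects untracked files; once a file has been committed on any branch, Git manages (and will delete or replace) it on checkout. A hedged sketch of the usual untrack-but-keep route:

        # Stop tracking the project files but leave them on disk:
        git rm --cached .cproject .project .wrmakefile .wrproject
        git commit -m "untrack generated project files"
        # Repeat (or merge this commit) on every branch that still tracks
        # them, otherwise switching to such a branch touches them again.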

    Read the article

  • RealPath returns an empty string

    - by Abs
    Hello all, I have the following code, which just loops through the files in a directory and echoes the file names. However, when I use realpath, it returns nothing. What am I doing wrong?

        if ($handle = opendir($font_path)) {
            while (false !== ($file = readdir($handle))) {
                if ($file != "." && $file != ".." && $file != "a.zip") {
                    echo $file.'<br />';   // I can see the file names fine
                    echo realpath($file);  // returns an empty string?!
                }
            }
            closedir($handle);
        }

    Thanks all for any help on this. I am on a Windows machine, running PHP 5.3 and Apache 2.2.
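
    A probable explanation: realpath() resolves names relative to the current working directory, not the directory being read, so entries that don't exist under the CWD come back as false (which echoes as an empty string). A sketch of the usual fix:

        // Prefix each entry with the directory it came from before resolving:
        echo realpath($font_path . DIRECTORY_SEPARATOR . $file);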

    Read the article

  • Combine and compress script files in asp.net mvc

    - by victor_foster
    I am working in Visual Studio 2008, IIS7, and ASP.NET MVC. I would like to know the best way to combine all of my JavaScript files into one file to reduce the number of HTTP requests to the server. I have seen many articles on this subject but I'm not sure which one to look at first (many of them are over a year old). Here are the things I would like to do: combine my JavaScript and CSS files; safely compress my JavaScript files when I publish, but keep them uncompressed while I am debugging; and cache my CSS and JavaScript files while still allowing them to be refreshed with a hard refresh when they are updated, without having to rename them.

    Read the article
