Search Results

Search found 40479 results on 1620 pages for 'binary files'.

  • Is there an encrypted write-only file system for Linux?

    - by Grumbel
    I am searching for an encrypted filesystem for Linux that can be mounted in a write-only mode. By that I mean you should be able to mount it without supplying a password and still be able to write/append files, but you should be able neither to read the files you have written nor the files already on the filesystem. Access to the files should only be granted when the filesystem is mounted with the password. The purpose of this is to write log files or similar data that is only written, never modified, without having the files themselves exposed. File permissions don't help here, as I want the data to be inaccessible even when the system is fully compromised. Does such a thing exist on Linux? If not, what would be the best alternative for creating encrypted log files? My current workaround consists of simply piping the data through gpg --encrypt, which works but is very cumbersome: you can't easily get access to the filesystem as a whole, and you have to pipe each file through gpg --decrypt manually.
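
    For reference, a hedged sketch of that gpg workaround (it assumes a keypair whose secret half lives only on a separate, trusted machine; backup@example.com is a placeholder key ID). Public-key encryption is what makes it effectively write-only: the logging box can encrypt, but only the private key can decrypt:

        # on the logging box: written once, unreadable once written
        some_command 2>&1 | gpg --encrypt --recipient backup@example.com > /var/log/app.log.gpg

        # on the trusted machine holding the private key
        gpg --decrypt /var/log/app.log.gpg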

  • How can I reduce the CPU usage of Offline Files?

    - by Diego
    Whenever I have the Offline Files service running, there is a constant 25% CPU usage on svchost.exe (this is a quad core, so that means it's using up one full core). This, in turn, triples the power consumption and keeps the machine hot. I do have several GB synchronized (a music collection), but the files are not changing at all on either side. Am I misusing this feature? Is there anything I can configure to keep it quiet when there's nothing to do? Or should I forget about it and synchronize the big folders manually?

  • OS X Automator: filtering for an empty, blank or null extension

    - by Brian
    I have some data files, mostly Excel, Word and PDF, and most of them have no extension on them, i.e. they are missing the .doc or .xls. This data now needs to be used in a Windows environment. I have created Automator apps for each of the file types I want to add the extension to. The problem is that they also add the extension to files that already have one, so data.xls becomes data.xls.xls. I would like to find a way to add the extension only to the files without one. How do I tell the Finder filter that I only want it to return files without extensions? I see how to add a line to filter by a given extension, but I don't know how to ask for only blank or null extensions, i.e. files without any extension at all. Thanks
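
    For what it's worth, a hedged shell equivalent of the intended filter, runnable in Terminal from inside that folder (it assumes every extensionless file there should become .xls, which is only true for the Excel subset; adjust per file type):

        for f in *; do
            case "$f" in
                *.*) ;;                   # already has an extension: leave it alone
                *)   mv "$f" "$f.xls" ;;  # no dot anywhere in the name: append one
            esac
        done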

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stored collections of files/blobs using a hash to determine the path and filename. I believe the intended outcome is a situation where the path never gets too deep and the folders never get too full; too many files (or folders) in a folder makes for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: Thanks Mat for the answer. Does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
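
    As a hedged illustration of the technique in question (Git's loose-object store does the same thing, using the first two hex characters of an object's SHA-1 as a directory name):

        # derive a shallow, evenly distributed path from a hash of the name
        name="report.pdf"
        hash=$(printf '%s' "$name" | sha1sum | cut -c1-40)
        dir="${hash:0:2}/${hash:2:2}"   # e.g. "3a/7f": two levels, at most 256 entries each
        mkdir -p "$dir" && cp "$name" "$dir/"

    Because hash output is close to uniform, files spread evenly across the tree, so no single directory accumulates enough entries to slow lookups down.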

  • Script to gather all the files ending in .log and create a tar.gz file.

    - by Oscar Reyes
    I'm currently using this script line to find all the log files in a given directory structure and copy them to another directory where I can easily compress them:

        find . -name "*.log" -exec cp \{\} /tmp/allLogs/ \;

    The problem I have is that the directory/subdirectory information gets lost, because I'm copying only the file. For instance, I have:

        ./product/install/install.log
        ./product/execution/daily.log
        ./other/conf/blah.log

    and I end up with:

        /tmp/allLogs/install.log
        /tmp/allLogs/daily.log
        /tmp/allLogs/blah.log

    whereas I would like to have:

        /tmp/allLogs/product/install/install.log
        /tmp/allLogs/product/execution/daily.log
        /tmp/allLogs/other/conf/blah.log
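
    A hedged sketch of two possible fixes, using the paths from the question (GNU tools assumed):

        # GNU cp can recreate the source path under the target directory:
        find . -name "*.log" -exec cp --parents {} /tmp/allLogs/ \;

        # or skip the copy step and build the tar.gz directly, keeping paths
        # (-print0/--null guard against whitespace in filenames):
        find . -name "*.log" -print0 | tar czf /tmp/allLogs.tar.gz --null -T -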

  • How to monitor the size of files in a Windows folder?

    - by zladuric
    What are some good ways to automatically monitor the size of files in a directory and send a warning email if they get close to a certain limit on a Windows server? I have a Progress DB installation to keep in check, and last week we hit some problems: the size of an extent hit 2GB, and Progress won't work past that, so we needed to open a new extent. I'm coming from a Linux environment, so I don't know what the usual way to monitor this in a Windows environment is (or the usual monitoring tools, for that matter). I'd prefer a generic solution, as I have a mixed environment (Windows 2000, Windows Server 2003, Windows Server 2008 R2). Thanks in advance for any usable alternative answers.

  • Schedule a batch file with parameters containing spaces

    - by Danilo Brambilla
    Hi, I need to schedule a task on Windows Server 2003 that executes this script, which deletes files older than n days in the specified folder. The script takes 3 parameters: %1 is the path to the folder where files need to be deleted, %2 the file names (e.g. *.log), and %3 the number of days:

        @echo off
        forfiles -p %1 -s -m %2 -d -%3 -c "cmd /c del /q @path"

    The script works fine if the first parameter has no spaces inside. This is an example of parameters that work:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" N:\FOLDER\FOLDER *.zip 60

    This is an example that does not work:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" N:\Program Files\LOG *.zip 60

    This does not work either:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" "N:\Program Files\LOG" *.zip 60

    I think it is a quoting problem, but I can't figure out the solution. I'd prefer not to hard-code values into the script, if possible. Thank you all for your help.

  • How do I sort files recursively by size and count how many files and directories were processed?

    - by user599395
    Hello! I'm a beginner in bash programming. I want to display the head -n $1 results of sorting the files in /etc/* by size. The problem is that at the end of the search, I must know how many directories and files were processed. I have composed the following code:

        #!/bash/bin
        let countF=0;
        let countD=0;
        for file in $(du -sk /etc/* | sort +0n | head $1); do
            if [ -f "file" ]
            then
                echo $file;
                let countF=countF+1;
            else if [ -d "file" ]
            then
                let countD=countD+1;
            fi
        done
        echo $countF
        echo $countD

    I get errors when I execute it. Also, how do I use find together with du, since I must search recursively?
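
    A hedged sketch of a corrected version, assuming the goal is the $1 largest entries under /etc and that the paths contain no whitespace. The original's main problems: the shebang should be /bin/bash, the tests need "$file" rather than the literal string "file", else if should be elif (or needs its own fi), and du's size column has to be stripped off before the path can be tested:

        #!/bin/bash
        countF=0
        countD=0
        # du prints "size<TAB>path"; sort numerically (largest first),
        # keep $1 lines, then cut away the size column
        for file in $(du -sk /etc/* | sort -rn | head -n "$1" | cut -f2-); do
            if [ -f "$file" ]; then
                echo "$file"
                countF=$((countF + 1))
            elif [ -d "$file" ]; then
                countD=$((countD + 1))
            fi
        done
        echo "files: $countF"
        echo "directories: $countD"

    For a fully recursive walk rather than just the top level of /etc, du -ak /etc reports every file and directory and could feed the same pipeline.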

  • How can I check the location of perl and CPAN files?

    - by Rob
    I constantly have to set up new servers for an employer of mine, all for one exact purpose of his, and as such they all have to be set up identically. So I've created a script in PHP that I run from my own box to automatically send over all the relevant files, compile everything, run updates, and everything else. However, for some reason these brand new servers come with perl preinstalled, which is fine, except that they have it installed in different locations. This makes it a pain to copy over Config.pm for CPAN without finding the location manually each time. Is there perhaps some command I'm unaware of that will hunt down the precise location? If it helps, the servers usually run CentOS 5.
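
    A hedged sketch of commands that should report where perl and its CPAN files live on each box:

        which perl              # path of the perl binary on $PATH
        perl -V:installprivlib  # root of the core library tree
        perldoc -l CPAN         # full path of the installed CPAN.pm

    CPAN's Config.pm normally sits in the CPAN/ subdirectory of that library tree, so the last two commands should pin down where to copy it.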

  • Why do my browsers display XML files as blank pages?

    - by n1313
    Every time I open an XML file, all I get is a blank page instead of the tag tree. The file itself is correct and loads okay; I can see it via View Source or in Firebug. I've tried turning off all my addons and running Firefox in safe mode, but the problem was not solved. I'm guessing that I've messed up my configuration somehow and Firefox now tries to render XML files as HTML ones. I've tried googling, but with no success. Help, please? UPD: example file: http://lj.lain.ru/3/1273657698603.sample.xml Also, I've noticed that somehow all of the browsers on the machine are now acting the same, so I'm changing the question accordingly.

  • Using Finch for the first time: how do I play mp3, ogg or other formats (wav files are too big)?

    - by Allisone
    My *.wav files work as expected, but wav files are too big, so I want to play *.mp3 or *.ogg instead, and that doesn't work. I use these lines of code, found in the Finch demo project:

        engine = [[Finch alloc] init];
        sitar = [[Sound alloc] initWithFile:RSRC(@"sitar.wav")];
        [sitar play];

    So I only change sitar.wav to my .mp3 filename. Note 1: It needn't be mp3 or ogg; any file format not as huge as wav would be fine, but which? Note 2: I didn't know how to play sound at all, so I searched and found Finch here on Stack Overflow. It looks easy, so I would like to use it, but if you know some other easy way to play sound files (ambient + effect sounds with a compressed codec) I would also switch to that other technique.
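
    If the goal is simply smaller files, one common route for OpenAL-based engines such as Finch (a hedged suggestion; check Finch's docs for which formats it actually decodes) is to recompress the wav to IMA4 in a CAF container with the afconvert tool that ships with OS X:

        # roughly 4:1 smaller than PCM wav, and iPhone-friendly
        afconvert -f caff -d ima4 sitar.wav sitar.caf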

  • BASH: How to remove all files except those named in a manifest?

    - by brice
    I have a manifest file which is just a list of newline-separated filenames. How can I remove all files that are not named in the manifest from a folder? I've tried to build a find ./ ! -name "filename" command dynamically:

        command="find ./ ! -name \"MANIFEST\" "
        for line in `cat MANIFEST`; do
            command=${command}"! -name \"${line}\" "
        done
        command=${command} -exec echo {} \;
        $command

    But the files remain. [Note: I know this uses echo. I want to check what my command does before using it.]
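
    A hedged sketch of a variant that sidesteps the underlying problem: quotes embedded in $command are never re-parsed when the variable expands, so the dynamically built -name patterns don't match anything. A bash array keeps each argument intact instead:

        #!/bin/bash
        args=()
        while IFS= read -r line; do
            args+=( ! -name "$line" )   # one clean "! -name <file>" pair per manifest entry
        done < MANIFEST
        find . -type f ! -name MANIFEST "${args[@]}" -exec echo {} \;
        # swap echo for rm once the listing looks right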

  • PHP Included files writing their own content from Importer values ...

    - by Adrian
    Hello, I have an index.php file that includes several external files:

        content/templates/id1/template.php
        content/templates/id2/template.php
        content/templates/id3/template.php
        ...

    All these files are loaded dynamically into index.php (it reads all folders inside the "templates" directory and then includes every "template.php" file). I want template.php to have the same code in all the id1, id2, id3 folders, BUT to load values from index.php depending on which folder it resides in. How can I do that? Thank You!

  • How do I access files inside a Wubi virtual ext4 Ubuntu partition from within Windows?

    - by aalaap
    I just installed Ubuntu 10.04 using Wubi on a PC that has Windows XP and Windows 7 installed. I was working in it for a while and everything is just fine. However, when I booted back into Windows 7, I couldn't figure out a way to access the files I had created or downloaded into the Ubuntu partition. They're in a virtual disk called root.disk in C:\ubuntu\disks. Is there a way I can mount this virtual disk in Windows, or at least browse its contents and extract what I need?

  • Why am I getting "too many include files : depth = 1024"?

    - by BeeBand
    I'm using Visual Studio 2008 Express edition, and keep getting the following error: "Cascadedisplay.h(4) : fatal error C1014: too many include files : depth = 1024". Obviously I'm doing something very wrong with include files, but I just can't see what. Basically, I have an interface class, StackDisplay, from which I want to derive CascadeDisplay in another file:

        #if !defined __BASE_STACK_DISPLAY_H__
        #define __BASE_STACK_DISPAY_H__

        #include <boost\shared_ptr.hpp>
        #include "CascadeDisplay.h"

        namespace Sol {
            class StackDisplay {
            public:
                virtual ~StackDisplay();
                static boost::shared_ptr<StackDisplay>
                make_cascade_display(boost::shared_ptr<int> csptr) {
                    return boost::shared_ptr<StackDisplay>(new CascadeDisplay(csptr));
                }
            };
        }

        #endif

    and then in CascadeDisplay.h:

        #if !defined __CASCADE_DISPLAY_H__
        #define __CASCADE_DISPAY_H__

        #include "StackDisplay.h"
        #include <boost\shared_ptr.hpp>

        namespace Sol {
            class CascadeDisplay: public StackDisplay {
            public:
                CascadeDisplay(boost::shared_ptr<int> csptr){};
            };
        }

        #endif

    So what's up with that?

  • How do I enable automatic reloading of view files in development mode in JRuby on Rails?

    - by thekingoftruth
    I am developing an app in JRuby on Rails. For some reason, when I edit the view files, the development JRuby Mongrel server doesn't reload them. The perplexing thing is that after I edit the controller files, the server reloads them just fine on the next request. This would be annoying even when using MRI Ruby; however, starting up JRuby Mongrel after every view edit is much slower, and much more annoying. (Note that once it starts up it's quite fast; the only issue is startup, since the JVM has to load up every time I start JRuby Mongrel.) I'm running JRuby 1.5.0, Rails 2.3.5, and Java 6.

  • How to ensure I can replace files in a directory?

    - by chaiguy
    I want to completely replace one directory on the file system with another directory from a temp location. The tricky part is that the files in the folder to be replaced could be in use at any time, causing the replace operation to fail. I need to somehow wait for an exclusive lock on the directory so that I can delete all of its contents without failing, and then move the other directory in to replace it. To make matters potentially more difficult, the process that is likely to be using the files is my own (via a Lucene.net library, and out of my hands), so it can't be a process-level lock; it has to be an object-level lock. Any thoughts on how I might do this? Or should I just keep re-attempting until it succeeds? I guess that's always an option.

  • How to handle splitting a file under source control?

    - by sharptooth
    I have a .cpp file and a .h file containing a class: Class.cpp contains the implementation and Class.h contains the definition. The class has become overcomplicated, so I want to separate some code and move it into a new class. So I create NewClass.cpp and NewClass.h and move the code there. How do I handle this when the files are under SVN? I could simply "svn add" the two new files, but then they would appear as new and have no history. I could instead "svn copy" and rename the two initial files, then edit the two old files and the two new files; that way the two new files would share history with the old ones. Which approach is better from the point of view of version control? Should the new files share history with the old files, or should they appear as new?
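
    A hedged sketch of the copy-with-history route, using the filenames from the question (svn copy records ancestry, so svn log on the new files reaches back past the split):

        svn copy Class.cpp NewClass.cpp
        svn copy Class.h   NewClass.h
        # now trim each file: remove the moved code from Class.*,
        # and everything except the moved code from NewClass.*
        svn commit -m "Split NewClass out of Class, preserving history"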

  • Can gedit on a Mac be used to edit files over ssh?

    - by Dave
    I use a Linux machine at work and a Mac at home. I can ssh from my machine at home to my work machine, but the only editor I then have access to on the command line is vi, which I don't like. Is there a way to use gedit on my Mac to edit files remotely over an ssh connection? The page below says that it can be done, but I think it assumes that you are using gedit on Ubuntu. On my Mac (OS 10.5.8) I don't have the "bookmark" option when I click "connect to server". http://thecodecentral.com/2010/04/02/use-gedit-as-remote-file-editor-via-ftp-and-ssh-ubuntu/comment-page-1#comment-50558
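
    A hedged alternative that works with any local editor: mount the remote directory over ssh with sshfs (which on a Mac needs MacFUSE installed; user@work-machine and the paths below are placeholders):

        mkdir -p ~/work-mnt
        sshfs user@work-machine:/home/user ~/work-mnt
        gedit ~/work-mnt/path/to/file.txt
        umount ~/work-mnt     # when finished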

  • Symlink across local volumes in webroot?

    - by geerlingguy
    I am looking for a good short-term solution to storage space concerns on my website. Currently, I have all uploaded files (flash video, images, etc.) inside the 'files' directory in my web root (/home/account/public_html/files). That directory is located on my high-speed main hard drive (a 15k rpm SCSI drive). I have another drive with much more capacity, but spinning at 10k rpm (so still fast, but not as good for random reads/writes as the main drive). That entire drive is mounted at /backup; right now I'm just using it as a backup volume. I would like to create a symlink from my /home/account/public_html/files folder to /backup/files, and have all files reside on the second drive. However, if someone accesses a file at http://www.example.com/files/filename.jpg, would it still work if I symlinked to the second drive? (Basically, would Apache/PHP automatically know to follow the symlink for that directory?)
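
    For reference, a hedged sketch of the move-and-link step with the paths from the question. Apache follows symlinks as long as Options FollowSymLinks is in effect for the directory, which it commonly is by default, and PHP simply sees the resolved path:

        mv /home/account/public_html/files /backup/files
        ln -s /backup/files /home/account/public_html/files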

  • What is the default value for "Empty Temporary Internet Files when browser is closed" in IE8?

    - by schellack
    We have four different machines that all have "Empty Temporary Internet Files when browser is closed" set to true (checked) in IE8's Internet Options (located under the Security section of the Advanced tab). No one remembers checking that checkbox to turn the setting on. What is the default value supposed to be? I'm specifically interested in Windows 7 and Windows XP. I have run rsop.msc on one of the corporate machines (3 of the 4 are members of a corporate network/domain) and see this under User Configuration, which makes the current scenario seem even stranger: the Local Group Policy Editor (gpedit.msc) also shows the "Configure Delete Browsing History on exit" setting as Not configured (under Computer Configuration\Administrative Templates\Windows Components\Internet Explorer\Delete Browsing History).

  • How could I portably split large backup files over multiple discs?

    - by sourcejedi
    Context: I make backups/archives, primarily of photos. I'm experimenting with Bup, which is designed for backup to hard disk. Basically it creates Git repos which include packfiles of up to 1GB. But I still need last-ditch backups to keep offline and move offsite (and keeping them on read-only media is good too!). What are the options for archiving and splitting large files over several discs like CDs (and reading them back!)? I'd prefer methods which:

        - will stay readable in future;
        - are portable, e.g. to Windows;
        - have known simple implementations, so I could re-implement them myself if necessary (using Bup packs will stretch my robustness budget, so I want to be confident about how the other parts of the system would behave).

    I heard split archives are possible with both ZIP and 7-Zip. Is that right?
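
    A hedged sketch using nothing but ubiquitous tools (sizes and names below are placeholders; split and cat are trivial to re-implement, which suits the robustness requirement). And yes, for the record, Info-ZIP's zip -s switch and 7-Zip's -v switch produce split archives too:

        # pack and split into CD-sized chunks
        tar czf - photos/ | split -b 650M - backup.tar.gz.part-
        # burn backup.tar.gz.part-aa, -ab, ... to separate discs; to restore:
        cat backup.tar.gz.part-* | tar xzf -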
