Search Results

Search found 49518 results on 1981 pages for 'configuration files'.

Page 584/1981

  • eCryptFS: How to mount a backup of an encrypted home dir?

    - by Boldewyn
    I use eCryptFS to encrypt the home directory of my laptop. My backup script copies the encrypted files to a server (together with everything else in /home/.ecryptfs). How can I mount the encrypted files of the backup? I'd like to verify that I can do that and that everything is in place. My naive try with mount -t ecryptfs /backup/home/.ecryptfs/boldewyn /mnt/test didn't work; eCryptFS wanted to create a new partition.
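
    A minimal sketch of one way this is often attempted (my assumptions, not from the original post: the ecryptfs-utils package is installed on the machine holding the backup, and the backup preserved the .ecryptfs/<user>/.Private layout):

        # Point the recovery helper at the backed-up .Private directory; it
        # prompts for the login/mount passphrase instead of creating new keys,
        # and mounts a read-only decrypted view under /tmp/ecryptfs.XXXXXX.
        sudo ecryptfs-recover-private /backup/home/.ecryptfs/boldewyn/.Private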

    Read the article

  • Problem with unpacking zip archive with UTF-8 file names in OS X if zip was made in Windows

    - by Andrei
    I have packed my files in Windows 7 using Total Commander, asking it to use UTF-8 for file names. Then I tried to unpack the files in OS X, but the Cyrillic names were garbled. I have tried most programs -- none has helped me, so I had to use Parallels with Windows and Total Commander to get what I want. Is there any other way to do it? Is this a fault of Total Commander, or do I need to adjust OS X settings?
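
    One hedged workaround sketch (an assumption on my part, not something the poster tried): The Unarchiver's command-line tool unar, installable via Homebrew or MacPorts, lets you force the filename encoding when the archive doesn't declare it:

        # force the filename encoding the archive was written with
        # (try utf-8 first; cp866 or windows-1251 if names still look wrong)
        unar -e utf-8 archive.zip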

    Read the article

  • Does gunzip work in memory or does it write to disk?

    - by Ryan Detzel
    We have our log files gzipped to save space. Normally we keep them compressed and just do gunzip -c file.gz | grep 'test' to find important information, but we're wondering if it would be quicker to keep the files uncompressed and grep them directly: cat file | grep 'test'. There has been some discussion about how gzip works: if it decompresses entirely in memory, the first approach should be faster, but if it writes to disk first, the second should be. Does anyone know how gzip uncompresses data?
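
    For what it's worth, gunzip -c decompresses as a stream to stdout and does not write a temporary file, so a quick timing test settles the question empirically (file names here are placeholders):

        # compressed: pay CPU for decompression, read less from disk
        time gunzip -c file.gz | grep -c 'test'
        # uncompressed: no decompression, but more disk I/O
        time grep -c 'test' file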

    Read the article

  • How to convert a CHM file into a single HTML file?

    - by ruslan
    I have tried many different CHM-to-HTML utilities, but I am having a difficult time finding one that is able to produce a single HTML file. I can decompile a CHM file using hh.exe, but I don't know how to easily merge the resulting files into a single HTML file, all while preserving the correct order of pages. Is there a free tool which can do this? If not, how can I merge the HTML files in order?
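
    One possible route, offered as a sketch rather than a recommendation (it assumes Calibre's command-line tools are installed, which the original question does not mention): Calibre's HTMLZ output is a zip containing a single merged index.html in reading order.

        # convert the CHM to HTMLZ, then unpack it to get one index.html
        ebook-convert book.chm book.htmlz
        unzip book.htmlz -d book_html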

    Read the article

  • Disabling FileSystemWatcher for specific updates?

    - by chaiguy
    Does anyone have any ideas how I can reliably disable a FileSystemWatcher object when my application makes changes to the files in the directory, so that I am only watching for external changes to the directory? I've tried setting EnableRaisingEvents to false immediately before performing a write and setting it back to true immediately after, but it seems this method is not reliable, and occasionally I still get the event firing. The only other thing I can think of is to wait a small amount of time after performing the write to let the OS finish up the modification of the directory before re-enabling the FSW, but that seems hackish and I don't like it. To add to the problem, the directory consists of potentially many files, the identities of which are beyond my knowledge and control, so I can't just wait for the event to fire for a specific file and then ignore it. There could be any number of FSW events firing after a single modification (because of the potentially many files getting updated).

    Read the article

  • Project organization in perforce

    - by Chupa
    Hello. I created several web applications that use the same static files (css, js, images). When I used svn for version control, I used an external repository (svn:externals) to add the files to the current project. For example:

        Project_1
            Webapp
                Static   (external, pointing to the static files' repository)
        Project_2
            Webapp
                Static   (external, pointing to the static files' repository)

    I could easily reference these files in the web pages with a link like /static/... But now our company has moved to Perforce. How can I support the current structure? We also use Maven; I considered packing these files as a jar and using it as a dependency, but then my editor (IDEA) does not see that this dependency contains JS scripts and stylesheets, and I need to repackage and redeploy the jar file whenever I make minor changes. How do I use Maven correctly here?

    Read the article

  • How to pause an m4b file

    - by Phenom
    When I play m4b files on my computer, they open with iTunes. I can stop the file, but I cannot resume the file from within iTunes. In order to pick up where I left off, I have to open the file again. How can I resume where I left off from within iTunes? Is there another program that will play m4b files and resume from where you left off?

    Read the article

  • missing user name in apache log file

    - by nani
    Hi everybody. We have a Dokeos application using Apache as the web server. When accessing Dokeos, users have to log in with an ID and password, but I don't see this ID information in the Apache web server log files; the "user name" is not getting into the logs. Thanks.
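
    For reference, Apache's %u format specifier logs the authenticated remote user, but only for HTTP Basic/Digest authentication; a Dokeos login handled inside PHP never reaches it, so that usually has to be logged by the application itself. A hedged sketch, with Debian-style paths assumed:

        # add a log format that includes the HTTP-authenticated user (%u)
        echo 'LogFormat "%h %l %u %t \"%r\" %>s %b" common_with_user' >> /etc/apache2/conf.d/userlog.conf
        echo 'CustomLog /var/log/apache2/access_user.log common_with_user' >> /etc/apache2/conf.d/userlog.conf
        apachectl graceful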

    Read the article

  • set JAVA_HOME in windows but "ant build" still fails

    - by patrickinmpls
    I set JAVA_HOME in the Windows environment variables, and echo %JAVA_HOME% prints C:\Program Files (x86)\Java\jdk1.6.0_20. But when I try to run ant build, I get: Perhaps JAVA_HOME does not point to the JDK. It is currently set to "C:\Program Files\Java\jre6". I think the JAVASOFT registry key is interfering with my environment variable, but I'm not sure how to fix this.

    Read the article

  • Get CruiseControl to talk to github with the correct public key.

    - by Danny Lister
    Hi all, has anybody installed Git and CruiseControl.NET and got CruiseControl to pull from GitHub on a Windows 2003 server? I keep getting public key errors (access denied), which is good, I suppose, as it confirms Git is talking to GitHub. What is not good is that I do not know where to install the RSA keys so they will be picked up by the running process (git in the context of CC.NET). Any help would save me a lot of hair! I have tried installing the keys into C:\Program Files\Git\.ssh, since running Git Bash and cd ~ takes me to C:\Program Files\Git. The current error from CC.NET is: Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: Permission denied (publickey). fatal: The remote end hung up unexpectedly. Process command: C:\Program Files\Git\bin\git.exe fetch origin. Thanks in advance.
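
    A hedged sketch of the usual diagnosis (the paths and the idea of checking the service account's HOME are my assumptions, not from the post): msysgit's ssh looks for keys under $HOME/.ssh of the account the process runs as, not under the Git install directory, so the key pair has to go wherever HOME resolves for the CC.NET service account.

        # run from Git Bash as the same Windows account the CC.NET service uses
        echo "$HOME"                      # where ssh will look for .ssh/id_rsa
        mkdir -p "$HOME/.ssh"
        cp /c/keys/id_rsa /c/keys/id_rsa.pub "$HOME/.ssh/"
        ssh -T git@github.com             # expect a "successfully authenticated" reply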

    Read the article

  • Read HTTP header info in ASIHTTPRequest asynchronous mode

    - by user262325
    Hello everyone. One of my projects downloads several large files using ASIHTTPRequest in asynchronous mode. I want to read the returned HTTP header info to get the size of the files. I know [request responseHeaders] (in the requestFinished: delegate method) can do that, but I tested it and found that requestFinished: is only triggered once the download of a whole file has completed. I would like to access [request responseHeaders] before ASIHTTPRequest starts downloading the file body (i.e. as soon as it has received the response headers), but I cannot find an event that is triggered at that point. Any comments are welcome. Thanks, interdev

    Read the article

  • Windows Server 2008 Software Raid 5 - Data integrity issues

    - by Fopedush
    I've got a server running Windows Server 2008 R2, with a (Windows native) software RAID-5 array. The array consists of 7x 1TB Western Digital RE3 and RE4 drives. I have offline backups of this array.

    The problem is this: I noticed a few days ago, after copying a large file to the disk, that there was an integrity issue with that file - it was a ~12GB file that I had downloaded via uTorrent. After moving it to the RAID array, I used uTorrent to relocate the download location and performed a re-check so I could seed it from that location. The recheck found that only 6308/6310 chunks of the copied file were intact.

    My next step was to write a quick PowerShell script that would copy files to the array while performing a SHA1 hash of the original and resultant files and comparing them. Smaller files (100-1000MB) copied over just fine. When I started copying larger data (~15GB), I found that the hash check failed about 2/3rds of the time. The corrupt files had very, very small inconsistencies - less than .01%. I further eliminated the possibility of networking or client issues by placing this large file on the C:\ of the server and copying it repeatedly from there to the array, seeing similar results. Copying the data via Explorer, PowerShell, or the standard Windows command prompt yields the same results. None of the copies fail or report any problems. The RAID array itself is listed as healthy in Disk Management.

    After a few experiments, I shut down the server and ran memtest overnight. No errors were detected. A basic run of chkdsk found no problems, but I did not use the /R flag, as I was unsure how that might affect a software RAID-5 volume. I next ran Crystal Disk Info to check the SMART data on the drives, but found that CDI only detected 5 out of 7 of the disks in the array. I have no idea why. Nevertheless, CDI shows the following "caution" flags on a single one of the drives:

        05  199  199  140  000000000001  Reallocated Sectors Count
        C5  200  200  __0  000000000001  Current Pending Sector Count

    Which is a little bit alarming, but I don't really know what to do with the information. I hardly feel like one reallocated sector could be causing this.

    At this point, I'm looking for some guidance on what to do next. I need to determine the cause of this issue, but I'm hesitant to run chkdsk /R or any bootable disk health checkers because I'm afraid they might break the array. I've considered triggering a re-sync of the array, but I'm not actually sure how to do that without doing something silly like manually dropping a disk and then restoring it. Any advice that could help me ferret out the precise cause of this issue would be greatly appreciated.
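
    For completeness, the copy-and-verify loop described above looks roughly like this as a generic sketch (the poster's version was a PowerShell script; the paths here are placeholders):

        src=/c/testdata/large.bin       # file on the known-good C: volume
        dst=/r/array/large.bin          # same file copied onto the RAID-5 volume
        cp "$src" "$dst"
        a=$(sha1sum "$src" | cut -d' ' -f1)
        b=$(sha1sum "$dst" | cut -d' ' -f1)
        [ "$a" = "$b" ] && echo "hashes match" || echo "MISMATCH"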

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7 32-bit) to help me with this process: I have my documents, music, video clips, movies and pictures on my hard disk. These will not be scattered around the system, but will be inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, and new files should be copied, etc. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    • I do NOT need multiple versions or historic versions. I don't need the previous versions of my files. I only want the latest copy to be present in my "backup".
    • Incremental backup makes sense. If files were not touched since the last backup, they need not be copied.
    • The size of my folder will run into GBs, and in a year or two will go into TBs. But I will make sure the size of the external HDD is equal to or bigger than my source folder.
    • I do not want it to run automatically, because when I accidentally delete a file in my source, it will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and I could write this software myself, but I am a little constrained by time at the moment and I want to know if there is an existing program that can do this.

    Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because:

    • I will have bigger things to worry about than my holiday memories.
    • I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world.
    • I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete or teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: Is there a file/folder syncing software that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
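
    Not an endorsement of any particular product, but as a sketch of the mirroring behaviour described above, a manually run rsync-style mirror (available on Windows via Cygwin or similar, which is an assumption here) expresses it directly:

        # preview what would be copied and deleted, then mirror for real;
        # --delete removes files from the backup that no longer exist in the source
        rsync -av --delete --dry-run /cygdrive/c/Senthil/ /cygdrive/e/Senthil-backup/
        rsync -av --delete           /cygdrive/c/Senthil/ /cygdrive/e/Senthil-backup/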

    Read the article

  • Secure, efficient, version-preserving, filename-hiding backup implemented in this way?

    - by barrycarter
    I tried writing a "perfect" backup program (below), but ran into problems (also below). Is there an efficient/working version of this?

    Assumptions: you're backing up from 'local', which you own and which has limited disk space, to 'remote', which has infinite disk space and belongs to someone else, so you need encryption. Network bandwidth is finite.

    'local' keeps a db of backed-up files with this data for each file:

    • filename, including full path
    • file's last modified time (mtime)
    • sha1sum of file's unencrypted contents
    • sha1sum of file's encrypted contents

    Given a list of files to back up (some perhaps already backed up), the program runs 'find' and gets the full path/mtime for each file (this is fairly efficient; conversely, computing the sha1sum of each file would NOT be efficient). The program discards files whose filename and mtime are in the 'local' db.

    The program now computes the sha1sum of the unencrypted contents of each remaining file. If the sha1sum matches one in the 'local' db, we create a special entry in the 'local' db that points this file/mtime to the file/mtime of the existing entry. Effectively, we're saying "we have a backup of this file's contents, but under another filename, so no need to back it up again".

    For each remaining file, we encrypt the file, take the sha1sum of the encrypted file's contents, and rsync the file to a path derived from its sha1sum. Example: if the file's encrypted sha1sum was da39a3ee5e6b4b0d3255bfef95601890afd80709, we'd rsync it to /some/path/da/39/a3/da39a3ee5e6b4b0d3255bfef95601890afd80709 on 'remote'. Once the step above succeeds, we add the file to the 'local' db.

    Note that we efficiently avoid computing sha1sums and encrypting unless absolutely necessary. Note: I don't specify the encryption method; this would be the user's choice.

    The problems:

    • We must encrypt and back up the 'local' db regularly. However, the 'local' db grows quickly and rsync'ing encrypted files is inefficient, since a small change in the 'local' db means a big change in the encrypted version of the 'local' db.
    • We create a file on 'remote' for each file on 'local', which is ugly and excessive.
    • We query the 'local' db frequently. Even with indexes, these queries are slow, since we're often making one query for each file. It would be nice to speed this up by batching queries or something.
    • Probably other problems that I've now forgotten.
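
    A minimal sketch of the per-file step described above, under stated assumptions: gpg stands in for the unspecified, user-chosen encryption; sha1sum and rsync-over-ssh are available; and the 'remote' host, target path and passphrase file are placeholders.

        f="some/local/file"
        enc=$(mktemp)
        # symmetric encryption as a stand-in; GnuPG 2.1+ may also need
        # --pinentry-mode loopback for --passphrase-file to be honored
        gpg --batch --symmetric --passphrase-file ~/.backup-pass -o "$enc" "$f"
        sum=$(sha1sum "$enc" | cut -c1-40)
        dir="${sum:0:2}/${sum:2:2}/${sum:4:2}"
        ssh backup@remote "mkdir -p /some/path/$dir"
        rsync -a "$enc" "backup@remote:/some/path/$dir/$sum"
        rm -f "$enc"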

    Read the article

  • Constraint Validation

    - by tanuja
    I am using javax.validation.Validator and relevant classes for annotation-based validation.

        Configuration<?> configuration = Validation.byDefaultProvider().configure();
        ValidatorFactory factory = configuration.buildValidatorFactory();
        Validator validator = factory.getValidator();
        Set<ConstraintViolation<ValidatableObject>> constraintViolations = validator.validate(o);
        for (ConstraintViolation<ValidatableObject> value : constraintViolations) {
            List<Class<? extends ConstraintValidator<? extends Annotation, ?>>> list =
                    value.getConstraintDescriptor().getConstraintValidatorClasses();
        }

    I get a compilation error stating:

        Type mismatch: cannot convert from
        List<Class<? extends ConstraintValidator<capture#4-of ?, ?>>>
        to
        List<Class<? extends ConstraintValidator<? extends Annotation, ?>>>

    What am I missing?

    Read the article

  • Extract 100GB of music from iPod to hard disk

    - by user10826
    Hi, I have an iPod 5th gen with 110GB of music and a MacBook with iTunes. I would like to rip all music files from my iPod to my hard disk and then select only some of the files and add them to the iTunes library, which will sync with the iPod. I tried Expod and similar software, but they hang with more than 50GB. Do you know any other approach? Thanks

    Read the article

  • How to view shell commands used by eclipse "run configurations"

    - by gmale
    Given a "run configuration" in Eclipse, I want to print out the associated shell command that would be used to run it. For example: Right now, in Eclipse, if I click "play" it will run: mvn assembly:directory -Dmaven.test.skip=true I don't see that command, I just know that's what the IDE must run, at some point. However, some of the other run configurations are far more complex with long classpaths and virtual machine options and, frankly, sometimes I have no idea what the equivalent shell command would be (particularly when it comes to Flex). There must be some way to access the shell command that would be associated with a "Run Configuration" in Eclipse/Flex Builder. This information must be available, which leads me to believe someone has written a plugin to display it. Or maybe there's already an option built into Eclipse for accessing this. So is there a way to, essentially, convert an Eclipse run configuration into a shell command? (for context only: I'm asking because I'm writing a bash script that automates everything I do, during development--from populating the Database all the way to opening Firefox and clearing the cache before running the web app. So every command I run from the IDE needs to exist in the script. Some are tricky to figure out.)

    Read the article

  • "rsAccessDenied" error for SSRS 2008

    - by JackLocke
    Hi all, I have been trying to access the SSRS Web Service URL http://myServer:80/ReportServer (from Reporting Services Configuration Manager), but IE always shows an "rsAccessDenied" message saying that my account doesn't have the privileges required to view it. Here are my system specs: my laptop with Windows 7 x64 and SQL Server 2008 with SP1, using Mixed Mode Authentication, with my account having sysadmin privileges. This is what I have tried (of course restarting the service every time I make any change in configuration): I changed the service account from Reporting Services Configuration Manager to make it use my account, but nothing happened. I tried running IE as admin (Run as administrator), but still the same message. Then I read somewhere that I have to delete/recreate my encryption keys as well, so I tried that, but then it was asking me to enter an ID/PWD to access the server, and here I am totally blank because it was not accepting my account credentials! The weird thing is that I can see my existing reports if I follow the URL http://myServer:80/Reports, which I gather is solely used to view reports. I have read a post here about this kind of problem, but it seems the OP just left the forum after asking the question... Also, MSDN has these help pages: http://msdn.microsoft.com/en-us/library/ms156034.aspx and http://msdn.microsoft.com/en-us/library/bb630430.aspx, but neither worked out for me. I will really appreciate it if anyone can help me out. Jack

    Read the article

  • How to delete everything except .svn directories?

    - by Arek
    I have a quite complex directory tree. There are many subdirectories, and in those subdirectories, besides other files and directories, are ".svn" directories. Now, under Linux, I want to delete all files and directories except the .svn directories. I found many solutions for the opposite behaviour - deleting all .svn directories in the tree. Can somebody quote me the correct answer for deleting everything except .svn?
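
    One possible approach, as a sketch (not quoted from any answer): delete only the regular files that are not inside a .svn directory, and leave the directory skeleton alone, since each directory has to survive to keep the .svn it contains.

        # remove every regular file outside of .svn directories;
        # run with -print instead of -delete first to review what would go
        find . -type f -not -path '*/.svn/*' -delete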

    Read the article

  • amavisd Net server pid file already exists after system crash and startup

    - by Simiyu
    Hi all, whenever I have an unclean shutdown, which is usually due to a power failure, I get problems with amavis starting up. The error "amavisd: Net::Server: pid_file already exists for running process" appears when I start it in debug mode, so I always have to delete the amavisd.pid and amavisd.lock files manually before it will start. Is there a way I can stop this from happening, or a way to delete the files during reboot in the case of an unclean shutdown? Thanks
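
    As a stopgap sketch (the file locations below are assumptions; the real ones are whatever $pid_file and $lock_file are set to in amavisd.conf), the stale files can be cleared at boot before amavis starts, e.g. from an init script or rc.local:

        # remove leftovers from an unclean shutdown before amavisd is started
        rm -f /var/run/amavis/amavisd.pid /var/run/amavis/amavisd.lock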

    Read the article

  • Quarter-turn a PDF document

    - by Rogier
    We have created thousands of PDF files that are printed as labels on a special label printer. Printing these labels is OK, but some of the label paper is quarter-turned and the PDFs are printed incorrectly. It is possible to rotate the page before printing, but is it possible to rotate a PDF file and save it again as a PDF file? And since there are thousands of PDF files, is it also possible to do this in a batch program?
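
    A batch sketch using pdftk, which is one tool that can do this (its presence, the directory and the rotation direction are assumptions; swap "east" for "west" for the other quarter turn):

        # rotate every page of every PDF a quarter turn clockwise and save a copy
        for f in /labels/*.pdf; do
            pdftk "$f" cat 1-endeast output "${f%.pdf}-rotated.pdf"
        done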

    Read the article

  • What does this httpd directive do?

    - by alsciende
    Hello, I stumbled upon an httpd.conf directive that I can't manage to understand:

        <Files ~ "^\.ht">
            Order allow,deny
            Deny from all
            Satisfy All
        </Files>

    According to the docs, I would say that Satisfy doesn't have any effect since there is no Allow. Am I wrong? What do you think this directive does?

    Read the article

  • CD Burned in XP isn't readable in Vista

    - by RickMeasham
    I burned a CD on XP using the built-in burning software and I can read the CD on that machine, but when I insert it into my Vista machine, I can't read the files. It shows the correct volume label, and the correct 'free space', but I can't access the actual files. Am I missing something obvious? (Both systems are fully up-to-date)

    Read the article

  • Where can I get precompiled mod_perl, mod_python for Apache on Win64?

    - by Soumya92
    I have managed to set up pure 64-bit Apache, PHP, MySQL, and 64-bit distributions of Perl and Python. However, I cannot get Apache to automatically parse .pl files with Perl and .py files with Python. Looking around points to mod_perl and mod_python for Apache, which unfortunately fail to build. Are there any precompiled mod_perl and mod_python builds for Win64? Or is there any other way of getting .pl and .py files to work with Apache?

    Read the article

  • Implementing HTTP live streaming video from my webserver to iPhone. Will I get rejected for bandwidth?

    - by yujean
    Apache webserver setup: I added

        AddType application/x-mpegURL .m3u8
        AddType video/MP2T .ts

    to the "httpd.conf" file.

    Movie file preparation: I have 3 movie files (9MB - 25MB each). I used QuickTime to convert the movies into iPhone format, then used mediafilesegmenter to convert each .m4v into 10-second segments of .ts files with an accompanying .m3u8 file, and placed these in a folder on the webserver.

    iPhone app implementation: I created a UIWebView whose URL points to http://71.191.59.68/~yujean/stream.html. The simulator accesses the site and streams the movie files just fine.

    Question: Will I still get rejected by Apple for bandwidth issues over the 3G and/or EDGE network? Do I need to somehow check which network the end-user is on first, and then provide a different movie accordingly? If so, how do I do that ...? Thank you in advance, Eugene
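
    On the second point: HTTP Live Streaming's own answer to varying network conditions is a variant (master) playlist that lists the same content at several bitrates and lets the client switch automatically, and Apple's review guidance at the time generally expected a low-bitrate variant for cellular delivery. A hedged sketch of what such a stream.m3u8 might contain (file names and bandwidth figures are made up for illustration):

        #EXTM3U
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=150000
        low/prog_index.m3u8
        #EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=600000
        high/prog_index.m3u8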

    Read the article
