Search Results

Search found 14439 results on 578 pages for 'folder customization'.

Page 409/578 | < Previous Page | 405 406 407 408 409 410 411 412 413 414 415 416  | Next Page >

  • Incremental backup and sync software

    - by martjno
    I need free software for Windows (with a GUI or command line) that does incremental backups: copying all files, and storing changed or deleted files in a directory named after the last change date (or with a progressive number). To be more precise: D:\ is my data drive, E:\ is my backup drive. If I want to back up all my data from D:\: E:\d_lastbackup\ will contain a plain copy of all the files and folder contents (no compression or archiving, same file attributes) of D:\. E:\d_20090822\ will contain all files (with their full paths) that were changed or deleted since the previous backup. E:\d_20090820\ will contain the same for the backup before that, and so on... I had software that worked perfectly with an old Maxtor USB hard disk, but it only works on that device. Any suggestions?
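
    One approach that fits this scheme, assuming a Windows port of rsync (Cygwin or cwRsync, for example) is acceptable, is rsync's --backup-dir option: the target stays a plain mirror, while anything changed or deleted since the previous run is moved into a dated directory. The /cygdrive paths below are purely illustrative:

        rsync -a --delete --backup \
              --backup-dir=/cygdrive/e/d_$(date +%Y%m%d) \
              /cygdrive/d/ /cygdrive/e/d_lastbackup/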

    Read the article

  • Creating self-signed SSL certificate - Access denied?

    - by Shaul
    I'm trying to create a Self-Signed Certificate in IIS 7 (Win7 Ultimate x64), and getting the following error: [screenshot not included] I found this question on SF, which says I should set permissions on the C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys folder to allow rights - but that's also not working. Firstly, note that "Everyone" has "Full Control" rights: [screenshot not included] And when I try to delete and recreate rights, look what comes up: [screenshot not included] I am logged in as a user with admin privileges, and I've even tried running Explorer with admin rights... nothing seems to help. What do I do to get this right?

    Read the article

  • How to determine if a file has been backed up?

    - by Console
    I'm trying to consolidate old drives onto new ones of larger capacity. Sometimes files have been renamed but are otherwise identical. Sometimes an old directory has just a few more files in it than a newer directory with the same name. Sometimes a file has the same name but the size differs. So I often find myself asking the question: are there any files on this old drive or directory that I haven't already copied to the new drive? I just want to know that I have the files; I don't want to sync anything automatically (syncing tools tend to just sync, creating duplicate folder structures and other problems, so I prefer to do it by hand). Basically, if an old drive has a file called "foo.bar" ten directories deep, and my new big drive has an identical file called "oldstuff.zip" in the root, I just want a "yes, you have it" or "no, unique files exist". Is there a free tool, a script or a quick and easy method (Mac/Unix or Windows) to get the answer?
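
    A rough way to get that yes/no answer by content alone (renames and locations ignored), sketched for the Mac/Unix case with placeholder paths, is to compare checksum lists of the two trees; on a stock Mac, md5 -q replaces md5sum:

        find /old/drive -type f -exec md5sum {} + | awk '{print $1}' | sort -u > old.sums
        find /new/drive -type f -exec md5sum {} + | awk '{print $1}' | sort -u > new.sums
        comm -23 old.sums new.sums    # any output = files that exist only on the old drive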

    Read the article

  • What do I do with .chk files in FOUND.000 folders?

    - by Svish
    I just discovered a FOUND.000 folder on my USB drive, I guess from running chkdsk once. It has three files in it: FILE0000.CHK, FILE0001.CHK and FILE0002.CHK. What exactly can I do with these files? What are my options? Are they lost files? Are they garbage? Can I use them for anything at all? Or will I just have to delete them?

    Read the article

  • Any way I can trick Carbonite into backing up an external hard drive?

    - by Brian
    I use Carbonite to back up my PC (Windows XP). We were running low on disk space on our home PC (down to 15 GB), so I went out and purchased an external hard drive. However, Carbonite will not back it up; I just want the external drive to be extra disk space. From their FAQ: The current version of Carbonite backs up only the files that reside on permanent hard drives on your computer. It will not back up network drives, external drives, and NAS (network accessed storage) drives. If there are files on a remote drive that you wish to include in your Carbonite backup, you should copy the files to a folder on your local hard drive. If the files are on a shared network drive, you could install Carbonite on the computer on which the network shared drive physically exists, and back the files up directly from that computer. Check back soon for a Carbonite service plan that will allow you to back up your external drives.

    Read the article

  • Importing an existing project into Git

    - by Andy
    Background
    During the course of developing our site (ASP.NET), we discovered that our existing source control (SourceGear Vault) wasn't working for us, so we decided to migrate to Git. The transition has been less than smooth, though. Our site is broken up into three environments: DEV, QA, and PROD. For the most part, DEV and the source control repo have been in sync with each other. There is one branch in the repo; if a page was going to be moved up to QA, the file was moved manually, and the same went for anything ready for PROD. So, our current QA and PROD environments do not correspond to any particular commit in the master branch. Clarification: the QA and PROD branches are not currently, nor have they ever been, in source control.
    The Question
    How do I move QA and PROD into Git? Should I forget about the history we've maintained up to this point and start over with a new repo? I could start with everything on PROD, then make a branch and pull in everything from QA, and then make another branch off of that with DEV. That way not only will the branches reflect the differences in the environments, they'll be in the right order chronologically, with the newest commits in the DEV branch.
    What I've tried so far
    I thought about creating a QA branch off of the current master and using robocopy to make the working folder look like the current QA environment. This doesn't work because the new commit from QA will remove new files from DEV, and that will remove them when we merge up. I suspect there will be similar problems if I started QA at an earlier (though not exact) commit from DEV.
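
    A minimal sketch of the PROD-first idea described in the question, with placeholder paths and commit messages; each environment ends up as a branch layered on the one below it, with the newest work on the DEV branch:

        mkdir site-repo && cd site-repo && git init
        cp -a /path/to/PROD/. .
        git add -A && git commit -m "PROD baseline"

        git checkout -b qa
        # if files were deleted between environments, empty the working tree (keep .git) before each copy
        cp -a /path/to/QA/. .
        git add -A && git commit -m "QA on top of PROD"

        git checkout -b dev
        cp -a /path/to/DEV/. .
        git add -A && git commit -m "DEV on top of QA"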

    Read the article

  • show differences between file and file in (compressed) tar archive

    - by Kyss Tao
    Say I have unpacked a gz-compressed tar file and do not remember what changes I made to the unpacked files, or I archived a folder a while ago and want to know what has changed in the files since. I can use tar -zd to get an overview. Then, say it shows me that file foo has changed. How can I see the changes in this file, i.e. the difference between the file on my file system and the (older) file in the archive (ideally in vimdiff, but diff output would be fine too)?
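
    One possible way, assuming GNU tar and bash (the archive name and path are placeholders), is to extract just that one file to stdout with -O and compare it with the copy on disk; the path must match exactly how it is stored in the archive (check with tar -ztf):

        diff    <(tar -xzOf archive.tar.gz path/to/foo) path/to/foo
        vimdiff <(tar -xzOf archive.tar.gz path/to/foo) path/to/foo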

    Read the article

  • Giving a permission to write and read from /var/www

    - by mako
    I need that directory, as I want to put my sites there so that Apache can run them. It is my virtual directory path, and I am new to Linux. I just want to read and write in that directory. How do I enable creating/saving/reading files and folders in that directory? What command do I give? I tried a few, but I think I need to be a superuser to make the folder writable and readable. Note that I don't care about security.
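
    Two possibilities, sketched on the assumption of a Debian/Ubuntu-style Apache and with the username mako used only as an example; since security is explicitly not a concern, the simplest is to take ownership of the directory:

        sudo chown -R mako:mako /var/www          # make the directory yours
        # or, keep root ownership but grant group write access and join the group:
        sudo chgrp -R www-data /var/www
        sudo chmod -R g+rwX /var/www
        sudo usermod -aG www-data mako            # log out and back in for this to apply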

    Read the article

  • How to avoid copying corrupted files with rsync

    - by Roberto Aloi
    I have an HDD with plenty of files, some of which are unfortunately corrupted. I'm now trying to copy the good files onto a new HDD. I'm using: rsync -azP SRC TGT When rsync comes to one of the corrupted files, I can see a message in the console: rsync: read errors mapping XXX: Input/output error (5) In the target folder, I still see the corrupted file, which I'm not able to open and which I have to delete manually. Is there any option to tell rsync not to copy files after an I/O error?
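
    rsync may not have a flag for exactly this, so one hedged workaround is to pre-filter the source: build a list of files that can actually be read end-to-end and hand it to rsync's --files-from. It reads every file twice, so it is slow; SRC and TGT are placeholders as in the question:

        cd SRC
        find . -type f -print0 | while IFS= read -r -d '' f; do
            cat -- "$f" > /dev/null 2>&1 && printf '%s\n' "$f"   # keep only fully readable files
        done > /tmp/readable.txt
        rsync -azP --files-from=/tmp/readable.txt . TGT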

    Read the article

  • What's the fastest way to store/access large files?

    - by philfreo
    I do a lot of video editing on my Mac and need a way to store very large (30 GB) files, and don't have room on my HD. A USB/Firewire external hard drive would work, but it seems way too slow for consistently working with such large files. I've also considered buying another computer, with a large hard drive, and putting it on the same network with a shared folder. What's the fastest / most efficient way to do this? Please consider USB 2.0 speeds, hard drive read times, ethernet speeds, etc. Are there other options I should consider?

    Read the article

  • NFS Issues in Gnome

    - by Alex
    I mount an NFSv4 export via /etc/fstab and mount(8), and use the shared folder in Nautilus. There are two issues: when I copy a large file (around 4 GB) to the NFS server, the progress bar rapidly goes to 2 GB and then basically stops moving, but the copy is still in progress - it is just not displayed well; and when I disconnect from the network without unmounting the NFS share, Nautilus freezes. How can I work around that? /etc/exports on the server:

        /export/share 192.168.0.0/24(rw,sync,insecure,no_subtree_check,anonuid=1000,anongid=1000)

    /etc/fstab on the client:

        server:/share /mnt nfs4 soft,tcp

    Read the article

  • Debian hangs on startup at "starting the winbind daemon: winbind"

    - by Bajingan Keparat
    I took a copy of a VM running Debian, just so that I could play around with it. I spun up the copy, but didn't give it any network connection to avoid conflict with the original one. However, when I turn the VM on, it seems to freeze after these startup messages:

        Starting Samba daemons: nmbd smbd
        Starting PostgreSQL 8.4 database server: main
        Starting the Winbind daemon: winbind

    How do I fix this? I never get to the login prompt. This VM does have a mount point that connects to a Windows shared folder.
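
    If the clone is meant to stay offline, one possible way past the hang (a guess - both winbind and the Windows-share mount expect the network to be there) is to boot into recovery/single-user mode, stop winbind from starting at boot, and take the network mount out of the boot path:

        update-rc.d winbind disable      # keep winbind from starting on this offline clone
        # then edit /etc/fstab and comment out (or add "noauto," to) the Windows share entry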

    Read the article

  • Install Navicat for MySQL on Linux Ubuntu 12.04

    - by DanielAttard
    How do I install Navicat on Ubuntu Desktop 12.04? I have just configured a new Ubuntu 12.04 machine. Because I'm not so familiar with the command line, I prefer to use Ubuntu Desktop so that I can have a GUI. Now I need to install a front-end to connect to MySQL. I'm used to Navicat on the Mac, so I was hoping to install Navicat on the Ubuntu machine. I downloaded the Linux copy of Navicat for MySQL from here: http://www.navicat.com/download/navicat-for-mysql The problem I am having is that I don't know how to install the program after it has been downloaded. There is a navicat.exe file in the navicat folder, but that seems to be for a DOS/Windows environment. I just can't figure out how to install Navicat onto Ubuntu Desktop 12.04. Anyone have any ideas? Thanks.
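
    For what it's worth, the Linux build of Navicat has typically been distributed as a tarball that needs no installation step: it ships a start_navicat launcher script which runs the bundled navicat.exe through its own copy of Wine. A sketch, assuming the download here follows that layout (archive name is illustrative):

        tar xzvf navicat*_en.tar.gz      # unpack the downloaded archive
        cd navicat*_en
        ./start_navicat                  # launcher script; runs navicat.exe via the bundled Wine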

    Read the article

  • Worker processes not starting in IIS 7.5. What should I check?

    - by locster
    I have a Windows 7 machine (Windows version 6.1.7601 SP1 Build 7601) with IIS installed. At some point the installation appears to have become 'corrupted' in some way, as any requests are now met with the message: Service Unavailable - HTTP Error 503. The service is unavailable. In IIS Manager, IIS is started and the app pool I am using reports itself as 'Started', yet there is no w3wp.exe process listed in the process list in Task Manager (I am a local admin and have clicked the 'Show processes from all users' button). I have enabled logging for the web site (at the default location of %SystemDrive%\inetpub\logs\LogFiles), but this folder is empty. I am assuming that this log output is written by w3wp.exe as it handles requests (no w3wp.exe, no log file?). Presumably there is another layer of request handling that is responsible for starting the worker processes. Does this layer have log files I can check, and/or can I uninstall/reinstall that layer? Thanks.

    Read the article

  • Apache virtual host documentroot in other folders

    - by giuseppe
    I am trying to set up a couple of VirtualHosts in my Apache, but I would like to put the DocumentRoot of these virtual hosts in folders outside the basic www folder. However, I always get "Permission denied". My httpd.conf follows:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /home/giuseppe/www
            ServerName www.example.com/www
            ErrorLog logs/host.www.projects-error_log
            CustomLog logs/dummy-host.example.com-access_log common
            <Directory "/home/giuseppe/www">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /home/developper
            ServerName www.example.com
            ErrorLog logs/host.developper-error_log
            CustomLog logs/dummy-host.example.com-access_log common
        </VirtualHost>
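
    Two things worth checking in a setup like this (a hedged guess, since the config itself looks plausible): the second vhost has no <Directory> block granting access to /home/developper, and Apache must be able to traverse every directory on the path to a DocumentRoot under /home, which default home-directory permissions often forbid; on systems with SELinux enforcing, the content also needs the right context:

        # let the web server traverse the home directories and reach the DocumentRoots
        chmod o+x /home/giuseppe /home/developper
        ls -ld /home/giuseppe /home/giuseppe/www /home/developper   # check the "x" bit for others

        # only if SELinux is enforcing (e.g. CentOS/Fedora):
        chcon -R -t httpd_sys_content_t /home/giuseppe/www /home/developper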

    Read the article

  • Postfix Vacation.pl with local users

    - by Simiyu
    Hi, I am trying to set up the vacation.pl script on a mail server which has local users only (since there are only 10 users). I have installed the SquirrelMail plugin and the autorespond option is available to the users, but when an email is sent to their addresses, no auto-reply is sent to the sender. There are also no logs in the /var/log/vacation folder which I created, nor in the normal log files. Most of the examples online refer to virtual users; can it work with local users, and if so, how? Regards, Arthur

    Read the article

  • wildcard in httpd conf file?

    - by Joe
    Here is an example httpd config I'm currently using:

        <VirtualHost 123.123.123.123:80>
            ServerName mysite.com
            ServerAlias www.mysite.com
            DocumentRoot /home/folder
        </VirtualHost>

    I'm wondering, is it possible to have a wildcard for the ServerName and ServerAlias directives? The reason for asking is that I have some software that is shared among multiple URLs, all controlled in a CMS, and it's kind of a pain to add new domains via SSH every time. And before someone points out a security hole, the software does check the current URL before serving any web pages :)
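
    For what it's worth, ServerAlias does accept wildcards, and the first vhost defined for an address also acts as the default for any Host header that nothing else matches; a sketch using the example domain from above:

        <VirtualHost 123.123.123.123:80>
            ServerName mysite.com
            ServerAlias www.mysite.com *.mysite.com
            DocumentRoot /home/folder
        </VirtualHost>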

    Read the article

  • Modifying the install environment for RH-like installations

    - by javanix
    I am trying to modify the basic installation environment (i.e., what Anaconda runs in) for a customized CentOS distribution. For the first try, I would just like to modify a few of the splash images. My initial attempt entailed:

        1) Mount images/install.img to a directory ~/img/
        2) Copy all files from ~/img/ to ~/tmpimg/
        3) Modify the splash images
        4) mkisofs -o ~/final/install.img
        5) cp ~/final/install.img back to my ~/cdroot/ folder and remake the ISO

    However, the .img generated in step 4 doesn't even come close to matching the file size of the original install.img (meaning that install.img must be created in some other fashion, using compression), and it fails when I boot my ISO. What settings should I be using to make the install.img file? Is there some other technique for modifying CentOS install environments?
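
    A hedged suggestion: on CentOS 5/6-era media, images/install.img is a squashfs filesystem rather than an ISO9660 image, which would explain both the size mismatch and the boot failure. If that is the case here, repack with mksquashfs instead of mkisofs:

        unsquashfs -d ~/tmpimg install.img              # unpack the original image
        # ...edit the splash images under ~/tmpimg...
        mksquashfs ~/tmpimg ~/final/install.img -all-root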

    Read the article

  • Bash script getting automatically deleted from Ubuntu 12.04 Server?

    - by Kris Anderson
    I'm running a bash script on an Ubuntu 12.04 server through cron. The script works fine for a few weeks (it runs daily backups of websites and MySQL databases and copies them to Amazon S3). However, twice now I've noticed that the backups stopped happening. Both times, the backup script (backupscript.sh) located in my home folder was no longer there. No one else has access to this server, so nothing was manually changed on the server and no one deleted the file by mistake. The cron job (in /etc/crontab) still references the script, but the script itself has disappeared. What could cause this to happen? Does Ubuntu delete the script if it runs into some sort of error?

    Read the article

  • Deploying a Windows 7 image, which way is the fastest?

    - by MatF
    I captured an image of a basic Windows 7 installation with some modifications using imagex. Before the image was captured, I ran sysprep, generalizing and selecting to enter OOBE when it's done. Which way would be the fastest to deploy that image again: using imagex /apply, or naming the image install.wim, putting it in the sources folder of a normal installation (on a bootable USB device), and running a normal setup afterwards? Currently I have only tried the second approach. However, I just found out about the imagex way and wondered if it would be faster. Or are there even more methods that would be better?

    Read the article

  • Why can't I run any Android NDK commands?

    - by TheBuzzSaw
    I had been running Mint 12 before, and everything was working there. I switched to Ubuntu 12.04, and now I am very frustrated. When I run ndk-build, I get:

        /home/buzz/ndk/prebuilt/linux-x86/bin/make: not found

    So, I changed to that folder directly. When I type in ./make, I get:

        bash: ./make: No such file or directory

    Typing ls clearly shows the file is right where I am! I did some hacking around (pointing to external tools) to get past each error (just to experiment), and I ran into this:

        /home/buzz/ndk/toolchains/arm-linux-androideabi-4.6/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc: Command not found

    Why? Why can none of these files be found? As I said above, this was all working just fine in another distro. What changed? What's extra frustrating is that if I push TAB to auto-complete, it works, so the file is clearly there (and clearly marked with execute permissions). So, why can't it be found?
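
    "No such file or directory" for a binary that ls can clearly see is the classic symptom of a 32-bit executable on a 64-bit system that is missing the 32-bit loader and libraries, and the NDK's prebuilt tools are 32-bit. If that is the cause, on 64-bit Ubuntu 12.04:

        sudo apt-get install ia32-libs    # pulls in the 32-bit runtime the NDK's prebuilt binaries need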

    Read the article

  • How do I securely share / allow access to a drive?

    - by sleske
    To simplify backing up a laptop (Windows Vista), I'm planning on sharing its C: drive (with password protection) and using that to back it up from another computer. What are the security implications of this? If I share C: with a reasonable password, how big is the risk of compromise if the system is e.g. inadvertently used on a public WLAN or similar? Background: I'm planning to use Areca Backup to back up two systems (Windows XP and Vista). My current plan is to install Areca on the XP box, and share the Vista system's C: as a shared folder, so the XP system can read it. Then I can set up the drive as a network drive and have Areca read it like a local drive. Of course, if you can think of a more elegant way of doing this, I'm open to suggestions.

    Read the article

  • LAMP: How do I set up http://myservername.com/~user access?

    - by Travesty3
    Been trying to Google this, but I can't figure out good search terms to find any info about what I need, since I don't really know what it's called. I'm pretty much being thrown to the wolves to figure out how to set up a LAMP server. We had someone who knew how to do it, he set one up and then quit. It was set up so that when I went to "http://{myservername}.com/~travis" it showed the contents of my /home/travis/public_html folder. This worked fine, then we lost power and the server restarted (I know, battery backup, but this is a dev server in a dev building so it's OK). Now, the browser can't find that URL. I also need to know how to set this up on a new server, so instead of wasting time diagnosing this problem (probably just something dumb I did messing with settings or something), I really need to know how to set this up from scratch. Thanks for taking the time to read this and (hopefully) answer!
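
    For reference, this per-user public_html behaviour comes from Apache's mod_userdir; on a stock Debian/Ubuntu Apache it is enabled like this (a sketch, assuming that layout, with the username from the question):

        sudo a2enmod userdir                            # serves ~/public_html as http://server/~user
        sudo service apache2 restart
        ls -ld /home/travis /home/travis/public_html    # both must be traversable/readable by the Apache user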

    Read the article

  • How can I share files from my Windows 7 machine to my friend's Ubuntu machine?

    - by ProfKaos
    I run a Windows 7 Pro SP1 laptop as my home machine, and my housemate runs an Ubuntu 12.04.1.05 desktop. We share a WLAN. I would like to make certain locations and files available for him to read and maybe write. How can I go about this? Bear in mind I have very little recent experience with modern Linux, and Ubuntu in particular. My first idea is to share a Windows folder with my Ubuntu VM under VMware Player; then his Ubuntu machine can connect to my Ubuntu VM, and the two can use whatever magic Ubuntu uses to achieve file sharing. This requires my Ubuntu VM to be always running, though, and that may not always be possible. I have also heard that Samba may have a feature to help here, but I know nothing about that. How can I share my Windows files with my mate's Ubuntu machine, preferably with a 1-to-1 connection, i.e. preferably not using shim VMs?
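
    Windows file sharing and Ubuntu both speak SMB/CIFS, so no VM is needed: share the folders from Windows 7 as usual, and the Ubuntu machine can browse them from its file manager or mount them directly. A sketch of the mount route, with the hostname, share name and user name made up purely for illustration:

        # on the Ubuntu 12.04 machine
        sudo apt-get install cifs-utils
        sudo mkdir -p /mnt/win7share
        sudo mount -t cifs //WIN7-LAPTOP/Shared /mnt/win7share \
             -o username=profkaos,uid=$(id -u),gid=$(id -g)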

    Read the article

  • Using unsigned drivers in Windows 8

    - by T. Fabre
    I just migrated from Windows 7 x64 to 8, but I can't get my VPN software to run anymore: the SafeNet IKE service (installed by SafeNet SoftRemoteLT GA, used by my VPN provider) cannot start anymore. I found that by default unsigned drivers are disabled on Win8, and that is what is blocking the driver. The System event log tells me that the driver (apparently C:\WINDOWS\SysWow64\Drivers\IPSECDRV.sys) was blocked when I try to manually start the service (SafeNet IKE Service). I get the same messages for another driver, crypto.sys, found in the same folder. I tried using bcdedit to enable unsigned drivers:

        bcdedit /set loadoptions DDISABLE_INTEGRITY_CHECKS
        bcdedit /set testsigning ON

    After a reboot, same error. I tried booting into Win 8's test mode - same issue. Applying the code signing policy (Enabled, Ignore) does not help either. Running gpresult does show that the policy is applied. Any help welcome.

    Read the article
