Search Results

Search found 54055 results on 2163 pages for 'multiple files'.

  • WinSCP putting multiple files on sftp site

    - by NewToWinSCP
    WinSCP 5.2. I want to put multiple files with the extension .pgp on an SFTP site. When I tested my original command line (see below), it only placed the first .pgp file in alphabetical order (D:\a.csv.pgp) on the SFTP site. I tried specifying *.PGP and *.pgp without any change in behaviour - only one file (D:\a.csv.pgp) was copied each time. I only got all the files across by specifying a separate put command for each .pgp file. Any ideas on how to put all *.pgp files on the SFTP site?

    Original command line - does not work:

        d:\winscp\winscp /command "option echo off" "option batch on" "option confirm off" "open sftp" "put D:\*.pgp" "close" "exit"

    Works:

        d:\winscp\winscp /command "option echo off" "option batch on" "option confirm off" "open sftp" "put D:\a.csv.pgp" "put D:\b.csv.pgp" "put D:\c.csv.pgp" "put D:\d.csv.pgp" "put D:\e.csv.pgp" "put D:\f.csv.pgp" "put D:\g.csv.pgp" "put D:\h.csv.pgp" "put D:\i.csv.pgp" "close" "exit"
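
    Since the wildcard put is the part that misbehaves here, one workaround is to let the Windows shell expand the wildcard and generate an explicit put per file, then hand the result to WinSCP's /script switch. A sketch in batch, assuming the same stored session named sftp; the script file path is illustrative:

        @echo off
        set SCRIPT=%TEMP%\winscp_puts.txt
        rem Build a WinSCP script with one put per .pgp file
        echo option batch on> "%SCRIPT%"
        echo option confirm off>> "%SCRIPT%"
        echo open sftp>> "%SCRIPT%"
        for %%F in (D:\*.pgp) do echo put "%%F">> "%SCRIPT%"
        echo close>> "%SCRIPT%"
        echo exit>> "%SCRIPT%"
        d:\winscp\winscp /script="%SCRIPT%"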

  • PLESK Original php.ini files needed

    - by Saif Bechan
    Hello, I am using Plesk 9.2.3 with CentOS 5. I recently tried to change some things in the php.ini files, but I ended up clearing both files. Stupid mistake! Is there a possibility I can get the content of both files back? I am talking about the files:

        /etc/php.ini
        /usr/local/psa/admin/conf/php.ini

    or was it:

        /etc/php.ini
        /usr/local/psa/admin/conf/.php.ini

    When I browse to my Plesk panel I get the error message:

        Warning: Unknown: failed to open stream: No such file or directory in Unknown on line 0
        Fatal error: Unknown: Failed opening required 'auth.php3' (include_path='.:') in Unknown on line 0

    That's as much information as I can get; I hope someone can make some sense of all this. Regards

    EDIT: I ended up just reinstalling Plesk, which was fairly easy - something like yum install psa-plesk. There was a guide I followed, and everything worked out OK.
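
    For the /etc/php.ini half specifically, the RPM that owns the file can usually supply a pristine copy, so a full reinstall isn't always needed. A sketch, assuming the file belongs to a stock CentOS package (the package name is a guess; rpm -qf tells you the real one):

        rpm -qf /etc/php.ini                 # find the owning package, e.g. php-common
        yum reinstall php-common             # put the packaged php.ini back
        # or pull the file out of a downloaded RPM without reinstalling anything:
        rpm2cpio php-common-*.rpm | cpio -idv ./etc/php.ini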

  • How to set the default file permissions on ALL newly created files in linux

    - by eviljack
    My question is similar to this: http://stackoverflow.com/questions/228534/linux-default-file-permission but there is no scp/ftp client involved, and that question looks abandoned. Simply put: I want to be able to decree, at some global level, that newly created files are never world-writable (0775 at most). I tried putting umask 02 in /etc/profile and then in my .bash_profile, but it only works for scripts or new files that I create in a shell. It doesn't work for files that another binary creates. Is there any way to have this apply to all new files that are created, whatever creates them?
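
    Two mechanisms reach further than shell profiles, though neither is truly universal: pam_umask applies to anything that starts a PAM session (logins, cron, su), and a default POSIX ACL pins down a particular directory tree regardless of the creating process. A sketch of both; the PAM file path varies by distribution, and /srv/shared is illustrative:

        # /etc/pam.d/common-session (Debian/Ubuntu) or system-auth (RHEL-style):
        session optional pam_umask.so umask=0002

        # Default ACL on a tree: files created under it lose the world-write bit
        # no matter what umask the creating process runs with:
        setfacl -R -d -m u::rwx,g::rwx,o::rx /srv/shared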

  • Windows Media Player - Display JPEG cover art for FLAC files

    - by pelms
    I got WMP 12 (on Windows 7 RC1) to play FLAC files by installing the Ogg Vorbis/FLAC DirectShow filters, but WMP does not display the embedded cover artwork. After experimenting, I found that it will display the cover art if it is embedded in PNG format, but up to now I've used JPEG for all my FLAC files. Rather than re-tagging all my files, does anyone know a way to get WMP 12 to display JPEG embedded cover art?
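
    If no player-side fix turns up, the re-tagging fallback can at least be scripted rather than done by hand. A sketch using metaflac (from the flac package) and ImageMagick's convert, assuming one embedded picture per file and using cover.jpg/cover.png as scratch names:

        for f in *.flac; do
          metaflac --export-picture-to=cover.jpg "$f"   # pull out the embedded JPEG
          convert cover.jpg cover.png                   # JPEG -> PNG
          metaflac --remove --block-type=PICTURE "$f"   # drop the old art block
          metaflac --import-picture-from=cover.png "$f" # embed the PNG instead
        done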

  • Make vhdresizer work on XP Mode VHD files

    - by A_M
    I'm trying to shrink a Windows 7 XP Mode VHD file with VhdResizer with little success. When I select my VHD file, it says "VhdExpand only supports fixed and dynamic VHD files". My XP Mode VHDs are dynamic files. Does anyone have any idea why it is failing? Failing that, does anyone have a process that I can use to shrink my XP mode VHD files? Thanks.

  • VRF Internet Gateway Multiple External IP's 1 Internal IP to AWS

    - by user223903
    Trying to set up VRF for the first time and it's not working for me, even though I keep reading everything online. (IPs are different to real life.) I have an Internet connection and can ping my router at 195.45.73.22 in the current setup below. I have a block of IP addresses, 195.45.121.0/27. I want to set up multiple VPNs to AWS, so I need multiple external IPs - hence the block. I have set up the 2nd and 3rd IP addresses but cannot ping them from outside. Any help would be appreciated. Bryan

        ip source-route
        !
        ip vrf Internet
         rd 1:1
         route-target export 1:1
         route-target import 1:1
        !
        ip vrf AWSSydney1
         rd 2:2
         route-target export 2:2
         route-target import 2:2
         route-target import 1:1
        !
        ip vrf AWSSydney2
         rd 3:3
         route-target export 3:3
         route-target import 3:3
         route-target import 1:1
        !
        ip cef
        no ip domain lookup
        no ipv6 cef
        multilink bundle-name authenticated
        !
        interface FastEthernet0/0
         description Vocus Internet
         no ip address
         speed 100
         full-duplex
        !
        interface FastEthernet0/0.1
         encapsulation dot1Q 1 native
         ip address 195.45.73.22 255.255.255.252
        !
        interface FastEthernet0/0.2
         encapsulation dot1Q 2
         ip vrf forwarding AWSSydney1
         ip address 195.45.121.1 255.255.255.224
        !
        interface FastEthernet0/0.3
         encapsulation dot1Q 3
         ip vrf forwarding AWSSydney2
         ip address 195.45.121.2 255.255.255.224
        !
        interface FastEthernet0/1
         description LAN_SIDE
         ip address 10.0.0.5 255.255.255.0
         speed 100
         full-duplex
         no mop enabled
        !
        ip forward-protocol nd
        ip route 0.0.0.0 0.0.0.0 195.45.73.21
        ip route vrf Internet 0.0.0.0 0.0.0.0 195.45.73.21
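
    Not an answer, but the checks that usually narrow this down: confirm the VRF/interface bindings and source pings from inside each VRF. Two hedged observations on the config as posted: route-target import/export only leaks routes between VRFs when MP-BGP is running, and neither AWSSydney1 nor AWSSydney2 has a default route of its own, so replies to outside hosts may have no path back. Standard IOS diagnostics:

        show ip vrf interfaces              ! which interfaces landed in which VRF
        show ip route vrf AWSSydney1        ! per-VRF routing table - did anything get imported?
        ping vrf AWSSydney1 195.45.121.1    ! source the ping from inside the VRF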

  • List files with last access date in linux

    - by kayaker243
    I'd like to clean up a server that my webmaster let turn into a mess. I know how to list all files not accessed within the last x days using find and -atime, but what I'm looking for is a listing of the last access date for files one level down in directory /foo:

        /foo/bar1.txt Dec 11, 2001
        /foo/bar2.txt Nov 12, 2008
        /foo/bar3.txt Jan 12, 2004

    For folders one level down in directory /foo, list the date of the most recently accessed file within the directory (no limit on depth for identifying the last access date):

        /foo/bar1/ Feb 13, 2012
        /foo/bar2/ Oct 11, 2008

    where /foo/bar1/ has files last accessed Jan 1, 1998 and Feb 13, 2012, and /foo/bar2/ has 30 files, the most recent of which was accessed Oct 11, 2008. This question is similar to http://stackoverflow.com/questions/5566310/how-to-recursively-find-and-list-the-latest-modified-files-in-a-directory-with-s but rather than the modification date, the date of interest is the last accessed date.
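
    With GNU find this is close to a one-liner, since -printf can emit the access time directly. A sketch; note that atime is only trustworthy if the filesystem isn't mounted noatime/relatime:

        # Files one level down, with their last access date:
        find /foo -maxdepth 1 -type f -printf '%p\t%Ab %Ad, %AY\n'

        # Directories one level down: newest atime of any file beneath each:
        for d in /foo/*/; do
          newest=$(find "$d" -type f -printf '%A@ %Ab %Ad, %AY\n' | sort -n | tail -n1 | cut -d' ' -f2-)
          printf '%s\t%s\n' "$d" "$newest"
        done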

  • Windows XP: GNU find.exe moved my files and now they're lost

    - by user2001487
    I used a series of GNU find.exe commands to move media files to the current folder. Explorer shows that the folders and files still exist in their original location, but Properties reports zero size and zero files. I performed a chkdsk and that didn't help. Any ideas how to recover these files? Here are the commands from my .bat file:

        find.exe . -mindepth 1 -name "*.mkv" -type d -exec mv {} {}.tmp ;
        find.exe . -mindepth 2 -type f -exec mv {} . ;
        find.exe . -mindepth 1 -type d -exec rmdir {} ;

    http://i.stack.imgur.com/Eiq2y.jpg
    http://i.stack.imgur.com/E1wSN.jpg
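
    For what it's worth, the second command (-exec mv {} .) moves every file into whatever directory was current when the .bat ran, so before reaching for recovery tools it may be worth hunting for the files by name from each drive root. A sketch, again with GNU find.exe:

        find.exe C:/ -type f -name "*.mkv" 2>nul
        find.exe D:/ -type f -name "*.mkv" 2>nul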

  • Copy Office files from a Mac to a PC

    - by Martin
    A friend of mine recently dumped his Mac for a PC. He used Microsoft Office for Mac and has several hundred files (Word, Excel) which were copied over to the new PC using a USB disk. Microsoft Office is now unable to read any of these files. I suspect this is because of little endian vs. big endian. Is there any tool which can convert all these files automatically? Doing this by hand would take ages.

  • While in CMD shell, copying files from host OS to guest VM locks files (VMware Player/Workstation)

    - by Malcolm
    We're running the latest versions of VMware Player and Workstation for Windows. The following behavior is identical across both products. Problem: we open a CMD prompt in our guest OS (XP, Vista, Windows 7) and copy files from our host OS using the standard CMD shell copy command:

        copy z:\C$\testfiles

    The copy completes successfully, but from that point forward, all the files that were copied to our guest OS are now LOCKED on our host OS. This does not happen if we use Windows Explorer to copy files - it only happens when files are copied via the CMD shell. As mentioned at the start of this question, this behavior is reproducible in both VMware Player and VMware Workstation across multiple machines and multiple guest OSes. I've googled for a workaround, but without success. Any ideas appreciated. Malcolm
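
    One way to at least see (and, carefully, break) the host-side locks is Sysinternals' handle.exe, which lists which process holds a handle on a path. A diagnostic sketch, assuming handle.exe is on the PATH and the copied files live under C:\testfiles on the host:

        handle.exe C:\testfiles
        rem expect vmware-vmx.exe (or similar) in the output if the VM process
        rem is the one holding the files open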

  • Apache directory structure with multiple hosted languages.

    - by anomareh
    I just got a new work machine up and running and I'm trying to decide how to set everything up directory-wise. I've done some digging around and really haven't been able to find anything conclusive. I know it's a question with a variety of answers, but I'm hoping there's some sort of general guideline or best practice to go by. With that said, here are a few things specific to my situation. I will be doing actual development and testing on the same machine as the server. It is a single-user machine in the sense that I will be the only one working on it. There will be multiple hosted languages, specifically PHP and RoR, possibly expanding later. I'd like the setup to translate well to a production environment. With those 3 things in mind, there are a couple of things I've had in the back of my mind. Seeing as it's a single-user machine, I haven't been able to decide whether I should be working on things out of my home directory or whether they should be located outside of it. I feel that outside of a user directory would be better, as it would translate better to a production environment, but I'm also not sure if that will come with any permission annoyances or concerns, seeing as I'll be working on the same machine. Hosting multiple languages seems like it may be a bit quirky. With PHP, I've found you're generally just dumping the project somewhere in the document root, whereas with something like a Rails app you have the entire project and you only want the public directory in the document root. Thanks for any insight, opinion, or just personal preference from experience anyone can offer.
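
    For whatever it's worth, one common shape for this - names purely illustrative - keeps all projects under one tree outside any home directory and points each vhost at the right level (project root or public/, depending on the framework):

        /srv/www/
            myphpapp/            # PHP: DocumentRoot can be the project root (or a public/ subdir)
            myrailsapp/          # Rails: only public/ should ever be the DocumentRoot

        <VirtualHost *:80>
            ServerName php.dev.local
            DocumentRoot /srv/www/myphpapp
        </VirtualHost>

        <VirtualHost *:80>
            ServerName rails.dev.local
            DocumentRoot /srv/www/myrailsapp/public
        </VirtualHost>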

  • Sync files between 2 remote computers using Linux terminal

    - by Eddy
    I want to be able to synchronise files between 2 remote computers in both directions. Say, for example, that I want to synchronise my /home/Documents directory with <username>@example.com:/home/Documents. What's the easiest way to update the folders in both directions, so that new/updated files on my home computer get transferred to the remote computer, and new/updated files on the remote computer get transferred to my home computer?
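
    Two common approaches, sketched with illustrative paths: a pair of one-way rsyncs (simple, but deletions don't propagate), or unison, which is built for true two-way sync:

        # rsync both ways; -u skips files that are newer on the receiving side:
        rsync -avu ~/Documents/ user@example.com:/home/user/Documents/
        rsync -avu user@example.com:/home/user/Documents/ ~/Documents/

        # unison keeps state between runs and handles deletes and conflicts:
        unison ~/Documents ssh://user@example.com//home/user/Documents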

  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent DNS problems with a web server, where certain ISPs' DNS servers don't have my hostnames in cache and fail to look them up. At the same time, queries to OpenDNS for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different IPs, but all requesting similar DNS records:

        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied
        May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied
        May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied

    I've read of Dan Kaminsky's DNS cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my DNS server. There are thousands of records in my logs, all requesting "burningpianos", some for com and some for org, most looking for an MX record. There are requests from multiple IPs, but each IP will request hundreds of times per day. So this smells to me like an attack. What is the defense against this?
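
    Those "query (cache) ... denied" lines mean named is already refusing to serve cached answers to outsiders, which is the main thing an open resolver gets abused for. The usual hardening for an authoritative-only BIND looks something like the sketch below (standard options, not your exact config):

        // named.conf options for an authoritative-only server:
        options {
            recursion no;                 // never recurse for anyone
            allow-query { any; };         // still answer for zones we host
            allow-query-cache { none; };  // and never hand out cached data
        };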

  • Stand alone or free application to backup ADAM / AD LDS database files

    - by Darqer
    Do you know any small, standalone, free tool that can be run from the console to back up / restore ADAM / AD LDS database files (like adamntds.dit, edbres00001.jrs, etc.)? I tried stopping the ADAM service and copying these files to another location, but afterwards I was unable to restore ADAM from them. I know that on Windows Server 2003 I could use a backup tool provided by Microsoft, but it seems to be unavailable on Windows Server 2008.
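
    On Windows Server 2008 the console-mode tool for this is dsdbutil.exe, which can take an IFM-style snapshot of an AD LDS instance. The instance name and target path below are illustrative, and the quoting follows the ntdsutil convention, so it is worth double-checking on your box:

        dsdbutil "activate instance MyAdamInstance" "ifm" "create full C:\adlds-backup" quit quit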

  • IIS7 Compression CSS files only compressed when dynamic compression is enabled

    - by Paul
    If anyone can help it would be appreciated. I would like to enable compression for static files within IIS7 (for the sake of simplicity I'll just refer to static CSS files for the time being). The problem I'm getting is that CSS files are only compressed when both dynamic and static compression are enabled in IIS for the website. What I really want to achieve is CSS (static file) compression whilst leaving the dynamic (aspx) pages uncompressed for the time being (to avoid unnecessary CPU load). I am puzzled as to why leaving just 'static compression' enabled causes CSS files to be returned uncompressed. My applicationHost.config file has not been altered and looks like this:

        <httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
            <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
            <staticTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </staticTypes>
            <dynamicTypes>
                <add mimeType="text/*" enabled="true" />
                <add mimeType="message/*" enabled="true" />
                <add mimeType="application/x-javascript" enabled="true" />
                <add mimeType="*/*" enabled="false" />
            </dynamicTypes>
        </httpCompression>

    The server-wide compression setting within IIS is set to 'Dynamic Disabled' and 'Static Enabled' on the Server > Features > Compression page. The website compression setting (Server > Sites > MyWebsite > Features > Compression) is where I am enabling and disabling dynamic compression as detailed above. Any help would really help me get unstuck on this. Thanks
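
    One thing worth trying alongside the UI switches is stating the intent explicitly in the site's web.config via the standard urlCompression section, which separates the two knobs cleanly. A sketch; the attribute names are standard IIS7, but behaviour on your build is worth verifying:

        <configuration>
          <system.webServer>
            <!-- static on, dynamic off, independently of each other -->
            <urlCompression doStaticCompression="true" doDynamicCompression="false" />
          </system.webServer>
        </configuration>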

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server.

    There's the backup server... a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GE NIC.

    We were also sold CommVault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and CommVault both sent respective engineers, who set up the library and the software. CommVault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS over NFS at a few mountpoints, like /home and some stuff in /mnt.

    When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. CommVault seems to only ever give me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than CommVault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-#

    Alas, I found a different problem with Bacula. CommVault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously.

    Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution...
    Configuration files:

    bacula-dir.conf:

        #
        # Default Bacula Director Configuration file

        Director {                            # define myself
          Name = backuphost-1-dir
          DIRport = 9101                      # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          Password = "yourekiddingright"      # Console password
          Messages = Daemon
          DirAddress = 0.0.0.0
          #DirAddress = 127.0.0.1
        }

        JobDefs {
          Name = "DefaultFileJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        JobDefs {
          Name = "DefaultTapeJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = "SpectraLogic"
          Messages = Standard
          Pool = AllTapes
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
          Prefer Mounted Volumes = no
        }

        #
        # Define the main nightly save backup job
        #   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultFileJob"
        }

        Job {
          Name = "BackupThisVolume"
          JobDefs = "DefaultTapeJob"
          FileSet = "SpecialVolume"
        }

        #Job {
        #  Name = "BackupClient2"
        #  Client = backuphost-12-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultFileJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog
          # Arguments to make_catalog_backup.pl are:
          #   make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                       # run after main backup
        }

        #
        # Standard Restore template, to be changed by Console program
        #   Only one such job is needed for all Jobs/Clients/Storage ...
        #
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /srv/bacula/restore
        }

        FileSet {
          Name = "SpecialVolume"
          Include {
            Options {
              signature = MD5
            }
            File = /mnt/SpecialVolume
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options {
              signature = MD5
            }
            File = /usr/sbin
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options {
              signature = MD5
            }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = backuphost-1-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "surelyyourejoking"      # password for FileDaemon
          File Retention = 30 days            # 30 days
          Job Retention = 6 months            # six months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        #
        # Second Client (File Services) to backup
        #   You should change Name, Address, and Password before using
        #
        #Client {
        #  Name = backuphost-12-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "i'mnotjokinganddontcallmeshirley"  # password for FileDaemon 2
        #  File Retention = 30 days            # 30 days
        #  Job Retention = 6 months            # six months
        #  AutoPrune = yes                     # Prune expired Jobs/Files
        #}

        # Definition of file storage device
        Storage {
          Name = File
          # Do not use "localhost" here
          Address = localhost                 # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "lalalalala"
          Device = FileStorage
          Media Type = File
        }

        Storage {
          Name = "SpectraLogic"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-1
          Device = Drive-2
          Media Type = LTO5
          Autochanger = yes
        }

        # Generic catalog service
        Catalog {
          Name = MyCatalog
          # Uncomment the following line if you want the dbi driver
          # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
        }

        # Reasonable message delivery -- send most everything to email address
        #   and to the console
        Messages {
          Name = Standard
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
          #
          # WARNING! the following will create a file that you must cycle from
          #          time to time as it will grow indefinitely. However, it will
          #          also keep all your messages if they scroll off the console.
          #
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        #
        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
          Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
          Maximum Volumes = 100               # Limit number of Volumes in Pool
        }

        Pool {
          Name = AllTapes
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 31 days          # one Moth
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        #
        # Restricted console used by tray-monitor to get the status of the director
        #
        Console {
          Name = backuphost-1-mon
          Password = "LastFMalsostorePasswordsLikeThis"
          CommandACL = status, .status
        }

    bacula-sd.conf:

        #
        # Default Bacula Storage Daemon Configuration file
        #

        Storage {                             # definition of myself
          Name = backuphost-1-sd
          SDPort = 9103                       # Director's port
          WorkingDirectory = "/var/lib/bacula"
          Pid Directory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          SDAddress = 0.0.0.0
          # SDAddress = 127.0.0.1
        }

        #
        # List Directors who are permitted to contact Storage daemon
        #
        Director {
          Name = backuphost-1-dir
          Password = "passwordslinplaintext"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the storage daemon
        #
        Director {
          Name = backuphost-1-mon
          Password = "totalinsecurityabound"
          Monitor = yes
        }

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /srv/bacula/archive
          LabelMedia = yes;                   # lets Bacula label unlabeled media
          Random Access = Yes;
          AutomaticMount = yes;               # when device opened, read it
          RemovableMedia = no;
          AlwaysOpen = no;
        }

        Autochanger {
          Name = SpectraLogic
          Device = Drive-1
          Device = Drive-2
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg4
        }

        Device {
          Name = Drive-1
          Drive Index = 0
          Archive Device = /dev/nst0
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        Device {
          Name = Drive-2
          Drive Index = 1
          Archive Device = /dev/nst1
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        #
        # Send all messages to the Director,
        #   mount messages also are sent to the email address
        #
        Messages {
          Name = Standard
          director = backuphost-1-dir = all
        }

    bacula-fd.conf:

        #
        # Default Bacula File Daemon Configuration file
        #

        #
        # List Directors who are permitted to contact this File daemon
        #
        Director {
          Name = backuphost-1-dir
          Password = "hahahahahaha"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the file daemon
        #
        Director {
          Name = backuphost-1-mon
          Password = "hohohohohho"
          Monitor = yes
        }

        #
        # "Global" File daemon configuration specifications
        #
        FileDaemon {                          # this is me
          Name = backuphost-1-fd
          FDport = 9102                       # where we listen for the director
          WorkingDirectory = /var/lib/bacula
          Pid Directory = /var/run/bacula
          Maximum Concurrent Jobs = 20
          #FDAddress = 127.0.0.1
          FDAddress = 0.0.0.0
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = backuphost-1-dir = all, !skipped, !restored
        }
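
    For anyone reproducing this: the quickest way to see whether the SD will ever open the second drive is to fire two jobs at the autochanger at once and watch it. A bconsole sketch using the job name from this config, assuming duplicate jobs are allowed (the default):

        * run job=BackupThisVolume yes
        * run job=BackupThisVolume yes
        * status storage=SpectraLogic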

  • Windows 7 "Backup completed but some files were skipped"

    - by Andrew Coleson
    I set up Windows 7 Pro to back up my files to a network path (woohoo!) and chose to back up "data for newly created users, libraries" and my user folder (no system image). All went fine (although the first backup took ~12 hours for some ridiculous reason), but at the end it gave me a message that "Your backup completed, but some files were skipped. Click to see which files." I checked, and the "files" skipped were my 3 network-mapped drives, which is perfectly fine and reasonable behavior (I certainly don't need it to back up my network-mapped drives as part of my local PC backup), but in the Backup and Restore center it warns me that my last backup was "Never", and the Action Center now has a permanent "Check your backup results" issue. Is there any way to set up the backup to exclude the network-mapped drives, or to tell it that I really don't mind that it skipped drives I never asked it to back up?

  • rsync doesn’t sync files

    - by modi
    Hi, I'm using rsync (with Cygwin) to sync 2 local folders. The folders contain binary files. I'm using the following command:

        rsync.exe -av dir1/ dir2/

    but the files in dir2 were only partially updated; a few files still differ. Does anybody know of a problem with rsync on Windows? Should I use some other flags? Thanks
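
    Two flags often sort out exactly this symptom on Windows, where coarse filesystem timestamp granularity and timezone quirks can make rsync's quick check (size plus mtime) misjudge files. A sketch:

        rsync.exe -av --modify-window=1 dir1/ dir2/   # tolerate 1s of timestamp skew
        rsync.exe -av --checksum dir1/ dir2/          # or compare content hashes (slower)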

  • Access to files on Windows 2003 server from Mac

    - by Clinton
    Hi, I can access the Windows 2003 server from my Mac, but can't see all the files. I am using Snow Leopard and have followed fixes online. They all get me connected, and I can see all the folders, but not all the files within these folders appear. Someone else using a Mac can see all the files, which is very odd. Any help much appreciated. C
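
    A useful first step is to take Finder out of the equation and list the share from Terminal; if smbclient (bundled with OS X of that era) shows the files, the problem is on the Finder side rather than in the server's ACLs. Server, share and user names below are illustrative:

        smbclient //server2003/share -U yourusername -c 'ls'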

  • Debian/Redmine: Upgrade multiple instances at once

    - by Davey
    I have multiple Redmine instances. Let's call them InstanceA and InstanceB. InstanceA and InstanceB share the same Redmine installation on Debian. Suppose I wanted to install Redmine 1.3 on both instances - how would I do that? After upgrading the core files I would have to migrate the databases. What I would like to know is: can I migrate all databases in a single action? Normally I would do something like:

        rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=InstanceA

    for each instance, but this would get tedious if you have 50+ instances. Thanks in advance!

    EDIT: The README.Debian file that's in the (Debian) Redmine package states:

        SUPPORTS SETUP AND UPGRADES OF MULTIPLE DATABASE INSTANCES
        This redmine package is designed to automatically configure database
        BUT NOT the web server. The default database instance is called
        "default". A debconf facility is provided for configuring several
        redmine instances. Use dpkg-reconfigure to define the instances
        identifiers.

    But I can't figure out what to do with the "debconf facility".

    EDIT2: My environment is a default Debian 6.0 "Squeeze" installation with a default Redmine (aptitude install redmine) installation on a default libapache2-mod-passenger. I have set up two instances with dpkg-reconfigure redmine.
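
    Absent a packaged "migrate everything" target, a small shell loop over the instance identifiers does the same job in one action. The identifier list below is illustrative; it could equally be read from wherever the debconf-generated per-instance configs live:

        for instance in InstanceA InstanceB; do
            rake -s db:migrate RAILS_ENV=production X_DEBIAN_SITEID=$instance
        done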

  • What files to backup on Lighttpd+MySQL+PHP server

    - by Tomaszs
    I have a VPS with CentOS 5. I would like to create a backup of:

    - all my config files: tweaks to the database, PHP and server configuration
    - the databases
    - cron settings
    - website files
    - installed applications and their settings (?)

    What files should I take into account? I don't want to miss any file that will be necessary to restore my webserver quickly in case of any failure. And I don't want to back up the whole VPS, because it holds about 30 GB of data.
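
    As a sketch of what that list tends to boil down to in practice (paths illustrative; adjust to where your sites and crontabs actually live):

        # dump databases first - copying live MySQL files is unreliable:
        mysqldump --all-databases > /root/backup/databases.sql
        # config (/etc), per-user crontabs, web content, plus the dump:
        tar czf /root/backup/vps-essentials.tar.gz \
            /etc /var/spool/cron /var/www /root/backup/databases.sql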
