Search Results

Search found 676 results on 28 pages for 'nfs'.

Page 19 of 28

  • DMZ and LAN on the same Windows Storage Server 2008 R2

    - by Sergei
We are moving from an EMC Celerra NS20 to Windows Storage Server 2008 R2. I would like to use the deduplication feature in WSS (Single Instance Storage) for hosting data for our external FTP server. The idea is to use the NFS server on WSS as a datastore for a Linux FTP server located in the DMZ, and CIFS services for servers in the LAN. Using the Celerra fileserver I was able to create multiple instances of fileservers with multiple virtual interfaces and separate filesystems, so all data and networks were separated. Would it be possible to do something similar on WSS?

    Read the article

  • forcing iTunes tags to the file itself

    - by user21458
How do I make iTunes save tagging information to the file itself? I tagged a file and then loaded it into a different iTunes library on a different computer via an NFS share. The tagging info wasn't present, which leads me to believe the tagging info is only stored in the iTunes DB. Update: I'm specifically concerned about movie files, so if you RClick - Get Info - Options - Media Kind (Movies/TV/etc) and Video - Show/Episode/Season, these tags don't seem to be saved to the file itself, which is lame.

    Read the article

  • vsftpd hangs at "150 Here comes the directory listing."

    - by Rikr
In a vsftpd server environment that shares various directories from NFS mount points, I can log in without problems, but when I send the first "ls", vsftpd gives me the directory listing:

        lftp [email protected]:~ ls
        -rw-rw-rw-   1 1160   1016   392 Jun 06 09:28 test.gif

    ...but it never gives me the prompt back (lftp client). In the server log I can see that the last message is "150 Here comes the directory listing." Why does this happen?
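
    This symptom - the listing is sent but the session then stalls - is commonly associated with data-connection problems, e.g. passive-mode ports blocked by a firewall between client and server. A hedged vsftpd.conf sketch that pins the passive port range so it can be opened explicitly (the port numbers are arbitrary examples, not from the post):

        # /etc/vsftpd.conf -- pin the passive-mode data ports (example range)
        pasv_enable=YES
        pasv_min_port=40000
        pasv_max_port=40100

    The matching firewall rule would then need to allow TCP 40000-40100 in addition to port 21.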

    Read the article

  • solaris 10 - custom jumpstart menu.lst

    - by romant
Is it possible to include the config.tar (which encompasses the rules and before/after scripts…) on the CD-ROM itself, instead of serving it from a web server over http://? Namely, I am trying to do something along the lines of:

        title Solaris
        kernel$ /boot/multiboot kernel/$ISADIR/unix install cdrom:/config/config.tar dhcp -B install_media=cdrom
        module$ /boot/$ISADIR/x86.miniroot

    Yet it seems Solaris only supports HTTP or NFS as the source for config.tar, and not the CD-ROM itself. Any ideas? Thank you.

    Read the article

  • /etc/init.d/rc: 317: sed: Permission Denied Ubuntu 9.04

    - by sxanness
I recently added NFS to my Ubuntu server and edited /etc/fstab to mount the network file system. After a reboot I am now getting the following error several times on the console, and the system will not boot: /etc/init.d/rc: 317: sed: Permission Denied Any advice? I have commented out the lines that I added to /etc/fstab and the issue still persists. Thank you.
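
    A diagnostic sketch, assuming a live CD or recovery shell is available (none of these paths beyond /etc/init.d/rc come from the post): check what the failing line actually invokes and whether sed's filesystem came up with broken permissions or noexec.

        sed -n '310,320p' /etc/init.d/rc   # see what the failing line runs
        type -a sed                        # confirm /bin/sed is the one found first
        ls -l /bin/sed                     # expect -rwxr-xr-x root root
        mount | grep -E 'noexec|/bin'      # a noexec or misordered mount can cause this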

    Read the article

  • Automatically mount partition when needed

    - by David Parunakian
Hello, can anyone suggest a method of mounting a partition (e.g. an NFS share) when a user changes into the directory it is set to be mounted on, rather than at system startup? So far I've been unable to come up with such a method, aside from editing Bash's built-in cd implementation and forcing an fstab/mtab check before the working directory is actually changed.
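
    What's being described is on-demand mounting, which autofs provides without patching cd: the kernel automounter intercepts the directory access and mounts the share transparently. A minimal sketch, assuming the share is exported as server:/export/data and the autofs package is installed (names and paths are placeholders):

        # /etc/auto.master -- mounts live under /mnt/auto, unmounted after 60s idle
        /mnt/auto  /etc/auto.data  --timeout=60

        # /etc/auto.data -- map file: "cd /mnt/auto/data" triggers the mount
        data  -fstype=nfs,rw  server:/export/data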

    Read the article

  • How can I put services together between different servers?

    - by poz2k4444
For a school project, I have to run different services in a lab environment where I'll have 6 computers working as servers. Which services can be put together, and which cannot, in order to prevent security risks, and considering that if one service goes down, it should affect the function of the server farm as little as possible? The services are: MySQL, HTTP for the intranet, HTTPS, DHCP, IPP, SMTP, LDAP, VPN, SSH, NTP, DNS, and NFS. I'll use Linux.

    Read the article

  • Fix static IP address problems in Ubuntu 10.04

    - by jane
I used Network Manager in Ubuntu 10.04 to set a static IP address and assigned one that was already in use. Now my computer will not boot (NFS crashes). I booted from a live CD to change the configuration on the file system in /etc/network/interfaces, but the file looks to be the default. Where does Network Manager (the GUI from System - Preferences) store its configuration, so I can overwrite it, enter the correct IP address, and have a happy working computer again? Thanks!!
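
    One recovery sketch (not from the post): Network Manager typically keeps user-created connections in its own settings rather than /etc/network/interfaces, which is likely why that file looked untouched. From the live CD you can instead give the interface a static stanza in /etc/network/interfaces on the installed system, which Network Manager leaves alone. The addresses below are placeholders - pick one that is actually free:

        # /etc/network/interfaces on the installed system (example addresses)
        auto eth0
        iface eth0 inet static
            address 192.168.1.50
            netmask 255.255.255.0
            gateway 192.168.1.1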

    Read the article

  • Mount network drive

    - by user971155
I'm using Linux (Ubuntu); there's a Linux (Fedora) server that I can log in to with SSH. Is there any possibility to mount it with an fstab entry like this?

        //server/dir /mnt/my_mount_dir cifs username=login,password=11111,uid=1000,iocharset=utf8,codepage=unicode,unicode 0 0

    As far as I'm concerned, this construction uses SMB as the primary tool. Is there any possibility to use NFS or another approach, because with SMB it tends to have permission and symlink collisions? PS: It would be great if you could also provide some links. Thanks.
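
    Since the post asks for an NFS alternative, a minimal sketch (assumes nfs-kernel-server can be installed on the Fedora box; the export path and client subnet are placeholders). NFS preserves Unix permissions and symlinks natively, which addresses the collisions mentioned:

        # server (Fedora): /etc/exports
        /srv/dir  192.168.1.0/24(rw,sync,no_subtree_check)

        # server: publish the export
        sudo exportfs -ra

        # client (Ubuntu): /etc/fstab
        server:/srv/dir  /mnt/my_mount_dir  nfs  rw  0  0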

    Read the article

  • Replication - between pools in the same system

    - by Steve Tunstall
OK, I fully understand that it's been a LONG time since I've blogged with any tips or tricks on the ZFSSA, and I'm way behind. Hey, I just wrote TWO BLOGS ON THE SAME DAY!!! Make sure you keep scrolling down to see the next one too, or you may have missed it. To celebrate, for the one or two of you out there who are still reading this, I've got something for you. The first TWO people who make any comment below, with your real name and email so I can contact you, will get some cool Oracle SWAG that I have to give away. Don't get excited, it's not an iPad, but it's pretty good stuff. Only the first two, so if you already see two below, then settle down.

    Now, let's talk about Replication and Migration. I have talked before about Shadow Migration here: https://blogs.oracle.com/7000tips/entry/shadow_migration Shadow Migration lets one take an NFS or CIFS share in one pool on a system and migrate that data over to another pool in the same system. That's handy, but right now it's only for file systems like NFS and CIFS. It will not work for LUNs. LUN shadow migration is a roadmap item, however.

    So... what if you have a ZFSSA cluster with multiple pools, and you have a LUN in one pool but later decide it's best if it was in the other pool? No problem. Replication to the rescue. What's that? Replication is only for replicating data between two different systems? Who told you that? We've been able to replicate to the same system now for a few code updates back. The instructions below will also work just fine if you're setting up replication between two different systems. After replication is complete, you can easily break replication, change the new LUN into a primary LUN, and then delete the source LUN. Bam.

    Step 1: Set up a target system. In our case, the target system is ourself, but you still have to set it up like it's far away. Go to Configuration-->Services-->Remote Replication. Click the plus sign and set up the target, which is the ZFSSA you're on now.

    Step 2: Now you can go to the LUN you want to replicate. Take note which Pool and Project you're in. In my case, I have a LUN in Pool2 called LUNp2 that I wish to replicate to Pool1.

    Step 3: In my case, I made a Project called "Luns" and it has LUNp2 inside of it. I am going to replicate the Project, which will automatically replicate all of the LUNs and/or Filesystems inside of it. Now, you can also replicate from the Share level instead of the Project. That will only replicate the share, and not all the other shares of a project. If someone tells you that if you replicate a share, it always replicates all the other shares also in that Project, don't listen to them. Note below how I can choose not only the Target (which is myself), but also which Pool to replicate it to. So I choose Pool1.

    Step 4: I did not choose a schedule or pick the "Continuous" button, which means my replication will be manual only. I can now push the Manual Replicate button on my Actions list and you will see it start. You will see both a barber-pole animation and an update in the status bar at the top of the screen that a replication event has begun. This also goes into the event log.

    Step 5: The status bar will also log an event when it's done.

    Step 6: If you go back to Configuration-->Services-->Remote Replication, you will see your event.

    Step 7: Done.

    To see your new replica, go to the other Pool (Pool1 for me), and click the "Replica" area below the words "Filesystems | LUNs". Here, you will see any replicas that have come in from any of your sources. It's a simple matter from here to break the replication, which will change this to a "Local" LUN, and then delete the original LUN back in Pool2.

    OK, that's all for now, but I promise to give out more tricks sometime in November!!! There's very exciting stuff coming down the pipe for the ZFSSA, both new hardware and new software features that I'm just drooling over. That's all I can say, but contact your local sales SC to get an NDA roadmap talk if you want to hear more.

    Happy Halloween,
    Steve

    Read the article

  • How do you limit root partition disk access to allow the drive to go into standby mode?

    - by Casey
When there are no users on my system, I would like the hard disk to spin down to a low-power state. I realize that this might not be 100% achievable for a straight 24 hours, but it seems reasonable that the system could remain idle for a few hours at a time when it is not in use. My system is headless and running a limited number of services. The primary services are: exim4, mythtv-backend, nfs, samba, cups, apt-cacher-ng. Assume that the drives are already enabled to go into standby mode. Also, it's not acceptable to increase the write-back timeout, since my system is not on a UPS.
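
    Two building blocks that usually go together here (a sketch, not from the post; fatrace is one of several ways to find writers, and the timeout value is an example):

        # find out which processes keep touching the disk (run for a while, then review)
        sudo fatrace

        # tell the drive to enter standby after 20 minutes of idle (-S 240 = 240 * 5s)
        sudo hdparm -S 240 /dev/sda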

    Read the article

  • Oracle Database 11g Release 2 is SAP certified for Unix and Linux platforms.

    - by jenny.gelhausen
SAP announces certification of Oracle Database 11g Release 2 on all available UNIX and Linux platforms. This certification comes along with the immediate availability of the following important options and features:

        * Advanced Compression Option (table, RMAN backup, expdp, DG Network)
        * Real Application Testing
        * Oracle Database 11g Release 2 Database Vault
        * Oracle Database 11g Release 2 RAC
        * Advanced Encryption for tablespaces, RMAN backups, expdp, DG Network
        * Direct NFS
        * Deferred Segments
        * Online Patching

    All of the above functionality has been fully integrated within the SAP products, so it can be utilized and managed from within the SAP solution stack. All required migration steps can be done fully online. Learn why Oracle is the #1 Database for Deploying SAP Applications. SAP Certification announcement

    Read the article

  • Unable to get screen signal after purging fglrx

    - by Boris
I thought that my ATI driver was not running well, so I wanted to reinstall it completely. I did:

        sudo apt-get remove --purge fglrx*
        sudo apt-get remove --purge xserver-xorg-video-ati xserver-xorg-video-radeon

    and after a reboot I wanted to install the ATI driver, BUT at boot there is no more signal to my screen. Since then, every time I turn on my PC it gets to the purple screen and then the screen shuts down! Note that even if the screen is off, the PC seems to be running almost well: I'm still able to use my network to access data shared with NFS. Using a live USB I have no screen problem. I tried to plug my screen into an alternative output, but it did not work. I tried CTRL+ALT+F1 while on the purple screen, but it does not do anything; the screen shuts down anyway. I'm going to try the SHIFT thing and learn from the blackscreen wiki...
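
    A recovery sketch, assuming the purge removed the open-source driver's X stack along with fglrx (SSH over that still-working network, or the recovery-mode root shell, gets you a prompt):

        sudo apt-get update
        sudo apt-get install --reinstall xserver-xorg-core xserver-xorg-video-ati
        sudo rm -f /etc/X11/xorg.conf     # drop any fglrx-generated X config
        sudo reboot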

    Read the article

  • WebCenter Content shared folders for clustering

    - by Kyle Hatlestad
When configuring a WebCenter Content (WCC) cluster, one of the things which makes it unique from some other WebLogic Server applications is its requirement for a shared file system. This is actually no different than 10g and previous versions of UCM when it ran directly on a JVM. And while it is simple enough to say it needs a shared file system, there are some crucial details in how those directories are configured, and if they aren't followed, you may run into some unwanted behavior. This blog post will go into the details on how exactly the file systems should be split and what options are required.

    Beyond documents being stored on the file system and/or database and metadata being stored in the database along with other structured data, there is other information being read and written to on the file system. Information such as user profile preferences, workflow item state information, metadata profiles, and other details are stored in files. In addition, for certain processes within WCC, each of the nodes needs to know what the other nodes are doing so they don't step on each other. WCC keeps track of this through the use of lock files on the file system. Because of this, each node of the WCC cluster must have access to the same file system, just as they have access to the same database.

    WCC uses its own locking mechanism based on files, so it also needs to have access to those files without file attribute caching and without locking being done by the client (node). If one of the nodes accesses a certain status file and it happens to be cached, that node might attempt to run a process which another node is already working on. Or if a particular file is locked by one of the node clients, this could interfere with access by another node. Unfortunately, disabling file attribute caching on the file share can impact performance. So it is important to only disable caching and locking on the particular folders which require it.

    When configuring WebCenter Content after deploying the domain, it asks for 3 different directories: Content Server Instance Folder, Native File Repository Location, and Weblayout Folder. And starting in PS5, it now asks for the User Profile Folder. Even if you plan on storing the content in the database, you still need to establish Native File (Vault) and Weblayout directories. These will be used for handling temporary files, cached files, and files used to deliver the UI.

    Of these directories, the only folder which needs to have file attribute caching and locking disabled is the 'Content Server Instance Folder'. So when establishing this share through NFS or a clustered file system, be sure to specify those options. For instance, if creating the share through NFS, use the 'noac' and 'nolock' mount options. For the other directories, caching and locking should be enabled to provide best performance for those locations.
These directory path configurations are contained within the <domain dir>\ucm\cs\bin\intradoc.cfg file:

        #Server System Properties
        IDC_Id=UCM_server1

        #Server Directory Variables
        IdcHomeDir=/u01/fmw/Oracle_ECM1/ucm/idc/
        FmwDomainConfigDir=/u01/fmw/user_projects/domains/base_domain/config/fmwconfig/
        AppServerJavaHome=/u01/jdk/jdk1.6.0_22/jre/
        AppServerJavaUse64Bit=true
        IntradocDir=/mnt/share_no_cache/base_domain/ucm/cs/
        VaultDir=/mnt/share_with_cache/ucm/cs/vault/
        WeblayoutDir=/mnt/share_with_cache/ucm/cs/weblayout/

        #Server Classpath variables

        #Additional Variables
        #NOTE: UserProfilesDir is only available in PS5 - 11.1.1.6.0
        UserProfilesDir=/mnt/share_with_cache/ucm/cs/data/users/profiles/

    In addition to these folder configurations, it's also recommended to move node-specific folders to local disk to avoid unnecessary traffic to the shared directory. So on each node, go to <domain dir>\ucm\cs\bin\intradoc.cfg and add these additional configuration entries:

        VaultTempDir=<domain dir>/ucm/<cs>/vault/~temp/
        TraceDirectory=<domain dir>/servers/<UCM_serverN>/logs/
        EventDirectory=<domain dir>/servers/<UCM_serverN>/logs/event/

    And of course, don't forget the cluster-specific configuration values to add as well. These can be added through Admin Server -> General Configuration -> Additional Configuration Variables or directly in the <IntradocDir>/config/config.cfg file:

        ArchiverDoLocks=true
        DisableSharedCacheChecking=true
        ServiceAllowRetry=true     (use only with Oracle RAC Database)
        PublishLockTimeout=300000  (time can vary depending on publishing time and number of nodes)

    For additional information and details on clustering configuration, I highly recommend reviewing document [1209496.1] on the support site. In addition, there is a great step-by-step guide on setting up a WebCenter Content cluster [1359930.1].
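
    To make the earlier point concrete, a sketch of the two kinds of NFS mounts on each node (server name and export paths are placeholders, not from the post): only the share backing IntradocDir disables attribute caching and client-side locking.

        # /etc/fstab on each WCC node -- server and exports are examples
        nas:/export/wcc_instance  /mnt/share_no_cache    nfs  rw,noac,nolock  0  0
        nas:/export/wcc_shared    /mnt/share_with_cache  nfs  rw              0  0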

    Read the article

  • A human-friendlier Samba name mangling

    - by Alex
Most of our computers run Ubuntu, but two of them dual-boot into Windows, and when we have guests over, they typically also bring Windows computers. Thus, in addition to using NFS, our file server (Ubuntu Server) also runs Samba. And since we use Ubuntu mostly, we like to take advantage of its advantages over Windows, such as being able to use the characters \:*?"<>| in a file name. The problem, of course, is that Windows doesn't accept those characters in file names, so Samba has to translate the file name into something more acceptable. The way it does this, however, I find obnoxious: the file name Episode 182 - Exorcist 2: The Heretic.mp4, for instance, turns into E4Q82R~Y.MP4. This is a terrible "correction". Is there a way to make Samba's mangling a little more friendly to humans? Is it possible to "correct" it to something like Episode 182 - Exorcist 2_ The Heretic.mp4 instead, where the illegal characters are simply substituted?
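
    One hedged option is Samba's catia VFS module, which substitutes each Windows-illegal character with a look-alike Unicode character instead of mangling the whole name. A sketch of a per-share setup; the mapping list is the example from the vfs_catia documentation (share name and path are placeholders) and can be trimmed to taste:

        # smb.conf -- per-share character substitution instead of 8.3-style mangling
        [media]
           path = /srv/media
           vfs objects = catia
           catia:mappings = 0x22:0xa8,0x2a:0xa4,0x2f:0xf8,0x3a:0xf7,0x3c:0xab,0x3e:0xbb,0x3f:0xbf,0x5c:0xff,0x7c:0xa6

    With this, the ':' in Episode 182 - Exorcist 2: The Heretic.mp4 would show up to Windows clients as '÷' rather than the whole name collapsing to E4Q82R~Y.MP4.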

    Read the article

  • How to mount NAS folders via direct wired connection?

    - by Pavel Vlasov
There are two machines connected to each other via a network cable. One is a notebook running Ubuntu 12.04. The other is a Western Digital NAS running Debian. The NAS has some files shared via Samba. I am not sure what it is called, but under Windows these files are accessible via the path \\mybooklive\public. I know there is NFS - probably it is preferable over Samba... So, how do I get my files accessible from Ubuntu when the cable is plugged in?
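
    A sketch of the quickest route, assuming the public share allows guest access (if name resolution fails over the direct cable, substitute the NAS's IP address for "mybooklive"):

        sudo apt-get install cifs-utils
        sudo mkdir -p /mnt/mybooklive
        sudo mount -t cifs //mybooklive/public /mnt/mybooklive -o guest,uid=$(id -u)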

    Read the article

  • EPM Architecture: Foundation

    - by Marc Schumacher
    This post is the first of a series that is going to describe the EPM System architecture per component. During the following weeks a couple of follow up posts will describe each component. If applicable, the component will have its standard port next to its name in brackets. EPM Foundation is Java based and consists of two web applications, Shared Services and Workspace. Both applications are accessed by browser through Oracle HTTP Server (OHS) or Internet Information Services (IIS). Communication to the backend database is done by JDBC. The file system to store Lifecycle Management (LCM) artifacts can be either local or remote (e.g. NFS, network share). For authentication purposes, the EPM Product Suite can connect to external directories or databases. Interaction with other EPM Suite components like product specific Lifecycle Management connectors or Reporting and Analysis Web happens through HTTP protocol. The next post will cover Reporting and Analysis.

    Read the article

  • Uploading or attaching files located on a shared drive doesn't work?

    - by Alex
I have this odd, quite minor, but annoying issue that I am quite perplexed about. Whenever I try to upload a file via my browser (let's say attach a file to an email in Gmail), I click the 'Browse' button and it opens the standard file-selection dialog, which doesn't show network drives. Furthermore, if I try to drag a file from a network drive into Gmail, it doesn't work either; it just doesn't let me do that. This issue has been around for quite some time now, and I am just curious if this is something on my side or if it's a bug or a misconfiguration of some sort. FWIW, I am currently running 10.10, and the network drive is a Samba share on a NAS. This happens in FF and Chrome, and it only happens with Samba mounts. As a matter of fact, NFS volumes located on the same network operate perfectly fine.
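
    One hedged explanation: shares mounted through the GNOME file manager go through GVFS, which plain file dialogs and non-GTK apps can't always see, whereas the NFS volumes are kernel mounts with real paths. A sketch that mounts the Samba share at a real path instead (server, share, and uid are placeholders):

        # /etc/fstab -- kernel-level CIFS mount that every application can browse
        //nas/share  /mnt/nas  cifs  guest,uid=1000,iocharset=utf8  0  0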

    Read the article

  • Ubuntu 12.04 PXELINUX does not boot RHEL Kernel and Initrd

    - by utpal
I have successfully set up a PXE server on Ubuntu 12.04 with DNSMASQ for the DHCP proxy service, TFTPD-HPA for TFTP, NFS-KERNEL-SERVER, APACHE2, and SYSLINUX for the pxelinux.0 bootloader needed for PXE boot, using the following post: http://ubuntuforums.org/showthread.php?t=1606910 I was successfully able to PXE boot a client to a Ubuntu 12.04 Live CD. Next, I want to PXE boot a client to a RHEL 6.5 x64 kernel and initrd image. I don't want to install, just boot a client so that it mounts the initrd and I can get a minimal filesystem on the client. How can I do that? Please help!!
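
    A sketch of the PXELINUX side, assuming vmlinuz and initrd.img from the RHEL 6.5 ISO's images/pxeboot/ directory have been copied to rhel65/ under the TFTP root; the NFS export and IP are placeholders, and rdshell asks RHEL 6's dracut initramfs to drop to a shell if no root can be mounted:

        # pxelinux.cfg/default -- paths and addresses are examples
        label rhel65
          kernel rhel65/vmlinuz
          append initrd=rhel65/initrd.img root=nfs:192.168.0.1:/export/rhel65 ro rdshell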

    Read the article

  • How should I synchronize configurations and data across computers?

    - by lfaraone
    Imagine I have three Ubuntu computers home, laptop, beach-house. They all have the same version of Ubuntu, 10.04 installed, and are kept up to date from the repositories. I use f-spot, thunderbird, and google-chrome on all of the computers. Is there a way to keep the data and configuration in sync across them, without requiring constant connectivity for normal (non-synchronous) usage? For example, they should be usable without network connectivity, so something like NFS won't work. An ideal solution would not require manual action to start the syncing process.
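
    Not an answer from the post, but one common pattern is a bidirectional synchronizer such as Unison, which reconciles changes over SSH and leaves each machine fully usable offline between runs. A sketch; host name and paths are placeholders, and it should run while the applications are closed so half-written databases aren't propagated:

        # reconcile the local Thunderbird profile with the copy on "home" over SSH
        unison -batch ~/.thunderbird ssh://home//home/me/.thunderbird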

    Read the article

  • Ubuntu not mounted?

    - by z3matt
In the live CD I went into the terminal, and when I do 'sudo update-grub' it responds: /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?). Here's the breakdown of my drive:

        sda1 - vfat - Windows 7: FAT32
        sda2 -
        sda3 - ntfs - Windows Vista/7: NTFS - Windows 7
        sda3/Wubi -
        sda4 - Grub2
        sda5 - Ubuntu 12.04.1 LTS
        sda6 -
        sda7 -
        sda8 - BIOS Boot Partition

    Also at the top of the page it states: "No boot loader is installed in the MBR of /dev/sda". When my computer boots, it goes into GRUB and has options for Windows 7 and Windows Memory Test, but no option for Ubuntu. I want to run a dual boot. Any and all help is appreciated and welcomed.
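
    The usual repair from the live CD is a chroot GRUB reinstall (a sketch; it assumes the Ubuntu root really is sda5 - the grub-probe error above is exactly what happens when update-grub runs without the virtual filesystems bind-mounted):

        sudo mount /dev/sda5 /mnt
        for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
        sudo chroot /mnt update-grub
        sudo chroot /mnt grub-install /dev/sda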

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server.

    There's the backup server... A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC.

    We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt.

    When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-#

    Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

        - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
        - Setting Prefer Mounted Volumes = no in the Job Definition
        - Setting multiple devices in the Autochanger resource

    Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution...
Configuration Files:

    bacula-dir.conf

        #
        # Default Bacula Director Configuration file
        Director {                            # define myself
          Name = backuphost-1-dir
          DIRport = 9101                      # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          Password = "yourekiddingright"      # Console password
          Messages = Daemon
          DirAddress = 0.0.0.0
          #DirAddress = 127.0.0.1
        }

        JobDefs {
          Name = "DefaultFileJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        JobDefs {
          Name = "DefaultTapeJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = "SpectraLogic"
          Messages = Standard
          Pool = AllTapes
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
          Prefer Mounted Volumes = no
        }

        #
        # Define the main nightly save backup job
        #   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultFileJob"
        }

        Job {
          Name = "BackupThisVolume"
          JobDefs = "DefaultTapeJob"
          FileSet = "SpecialVolume"
        }

        #Job {
        #  Name = "BackupClient2"
        #  Client = backuphost-12-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultFileJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog
          # Arguments to make_catalog_backup.pl are:
          #   make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                       # run after main backup
        }

        #
        # Standard Restore template, to be changed by Console program
        #   Only one such job is needed for all Jobs/Clients/Storage ...
        #
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /srv/bacula/restore
        }

        FileSet {
          Name = "SpecialVolume"
          Include {
            Options {
              signature = MD5
            }
            File = /mnt/SpecialVolume
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options {
              signature = MD5
            }
            File = /usr/sbin
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options {
              signature = MD5
            }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = backuphost-1-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "surelyyourejoking"      # password for FileDaemon
          File Retention = 30 days            # 30 days
          Job Retention = 6 months            # six months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        #
        # Second Client (File Services) to backup
        #   You should change Name, Address, and Password before using
        #
        #Client {
        #  Name = backuphost-12-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
        #  File Retention = 30 days            # 30 days
        #  Job Retention = 6 months            # six months
        #  AutoPrune = yes                     # Prune expired Jobs/Files
        #}

        # Definition of file storage device
        Storage {
          Name = File
          # Do not use "localhost" here
          Address = localhost                 # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "lalalalala"
          Device = FileStorage
          Media Type = File
        }

        Storage {
          Name = "SpectraLogic"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-1
          Device = Drive-2
          Media Type = LTO5
          Autochanger = yes
        }

        # Generic catalog service
        Catalog {
          Name = MyCatalog
          # Uncomment the following line if you want the dbi driver
          # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
        }

        # Reasonable message delivery -- send most everything to email address
        #   and to the console
        Messages {
          Name = Standard
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
          #
          # WARNING! the following will create a file that you must cycle from
          #   time to time as it will grow indefinitely. However, it will
          #   also keep all your messages if they scroll off the console.
          #
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        #
        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
          Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
          Maximum Volumes = 100               # Limit number of Volumes in Pool
        }

        Pool {
          Name = AllTapes
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 31 days          # one month
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        #
        # Restricted console used by tray-monitor to get the status of the director
        #
        Console {
          Name = backuphost-1-mon
          Password = "LastFMalsostorePasswordsLikeThis"
          CommandACL = status, .status
        }

    bacula-sd.conf

        #
        # Default Bacula Storage Daemon Configuration file
        #
        Storage {                             # definition of myself
          Name = backuphost-1-sd
          SDPort = 9103                       # Director's port
          WorkingDirectory = "/var/lib/bacula"
          Pid Directory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          SDAddress = 0.0.0.0
          # SDAddress = 127.0.0.1
        }

        #
        # List Directors who are permitted to contact Storage daemon
        #
        Director {
          Name = backuphost-1-dir
          Password = "passwordslinplaintext"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the storage daemon
        #
        Director {
          Name = backuphost-1-mon
          Password = "totalinsecurityabound"
          Monitor = yes
        }

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /srv/bacula/archive
          LabelMedia = yes;                   # lets Bacula label unlabeled media
          Random Access = Yes;
          AutomaticMount = yes;               # when device opened, read it
          RemovableMedia = no;
          AlwaysOpen = no;
        }

        Autochanger {
          Name = SpectraLogic
          Device = Drive-1
          Device = Drive-2
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg4
        }

        Device {
          Name = Drive-1
          Drive Index = 0
          Archive Device = /dev/nst0
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        Device {
          Name = Drive-2
          Drive Index = 1
          Archive Device = /dev/nst1
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        #
        # Send all messages to the Director,
        #   mount messages also are sent to the email address
        #
        Messages {
          Name = Standard
          director = backuphost-1-dir = all
        }

    bacula-fd.conf

        #
        # Default Bacula File Daemon Configuration file
        #

        #
        # List Directors who are permitted to contact this File daemon
        #
        Director {
          Name = backuphost-1-dir
          Password = "hahahahahaha"
        }

        #
        # Restricted Director, used by tray-monitor to get the
        #   status of the file daemon
        #
        Director {
          Name = backuphost-1-mon
          Password = "hohohohohho"
          Monitor = yes
        }

        #
        # "Global" File daemon configuration specifications
        #
        FileDaemon {                          # this is me
          Name = backuphost-1-fd
          FDport = 9102                       # where we listen for the director
          WorkingDirectory = /var/lib/bacula
          Pid Directory = /var/run/bacula
          Maximum Concurrent Jobs = 20
          #FDAddress = 127.0.0.1
          FDAddress = 0.0.0.0
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = backuphost-1-dir = all, !skipped, !restored
        }

    Read the article
