Search Results

Search found 46 results on 2 pages for 'bacula'.

Page 2/2

  • Backup tape compression

    - by pufferfish
    What things should I check to confirm that compression is actually happening on our tape backup system? Although the tapes are marked as 200G/520G (native/compressed) capacity, they seem to fill up before the 200G mark (some at less than 100G). I'm using:

      - Sony AIT-4 tape autochanger
      - Sony SDX4-200C (AIT-4) tapes
      - Ubuntu Lucid
      - Bacula

    I've tried checking hardware compression with tapeinfo -f /dev/nst0, which gives:

      Product Type: Tape Drive
      Vendor ID: 'SONY '
      Product ID: 'SDX-900V '
      Revision: '0102'
      Attached Changer API: No
      SerialNumber: '0001000036'
      MinBlock: 2
      MaxBlock: 8388608
      SCSI ID: 1
      SCSI LUN: 0
      Ready: yes
      BufferedMode: yes
      Medium Type: Not Loaded
      Density Code: 0x33
      BlockSize: 0
      DataCompEnabled: yes
      DataCompCapable: yes
      DataDeCompEnabled: yes
      CompType: 0x3
      DeCompType: 0x3
      BOP: yes
      Block Position: 0
      Partition 0 Remaining Kbytes: 201778000
      Partition 0 Size in Kbytes: 201779000
      ActivePartition: 0
      EarlyWarningSize: 0
      NumPartitions: 0
      MaxPartitions: 0

    ... so I presume it's on. Note: the Bacula documentation says hardware compression needs to be enabled with "system tools such as mt".
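
    One hedged way to probe this, assuming only the stock mt-st and mtx tools already used in the question: DataCompEnabled: yes above suggests compression is on, so the missing capacity may simply be incompressible (already-compressed) data, but the drive can be checked and toggled explicitly:

      # Inspect the drive's compression flags (DataCompEnabled / DataCompCapable)
      tapeinfo -f /dev/nst0 | grep -i comp

      # Explicitly enable hardware compression via the tape driver
      mt -f /dev/nst0 compression 1

      # Rough test: stream highly compressible data and see how far past the
      # native mark the tape gets compared with already-compressed data
      dd if=/dev/zero of=/dev/nst0 bs=1M count=1024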

    Read the article

  • Dell External SAS 5/E HBA and Hyper-V

    - by JohnyD
    I have a Dell R710 running Win2008 R2 + Hyper-V with dual SAS 5/E HBAs. I'm building a Linux VM to install Bacula on, and I need to connect it to my Dell PowerVault 124T via the SAS HBA. I've been doing some looking online and I have yet to find a straightforward answer on how to connect a SAS HBA to a VM, let alone a Linux VM. The flavor is Ubuntu 32-bit.

    Read the article

  • Possible to simulate a NDMP backup?

    - by Sandra
    I'd like to try Amanda's and Bacula's NDMP backup features, but I don't want to try them out on the live NAS just yet. Ideally I would like to find out which one suits me best, and get familiar with it, before making a real NDMP backup from the NAS. Question: Is it somehow possible to simulate an NDMP backup with some Linux hosts? Or, formulated another way: does there exist an NDMP daemon I can install on a Linux host so that it pretends to be a NAS?

    Read the article

  • Backup linux to ftp server

    - by Alakdae
    What do you use for backups to an FTP server? I've tried the setup with Amanda and virtual tapes on the FTP server mounted with curlftpfs, and I'm not satisfied with it; I just don't feel confident about Amanda. I also cannot use anything that relies on rsync on the FTP-mounted filesystem, because it only creates the directories and doesn't create files, as it cannot execute "mkstemp". I've been thinking about Bacula, but I can't find any good HOWTO for it.
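
    One hedged option: duplicity speaks FTP natively (via ncftp), so it needs no curlftpfs mount and never calls mkstemp on the remote side; the host and credentials below are placeholders:

      # Encrypted incremental backup straight to the FTP server;
      # a fresh full backup is forced once a month
      export FTP_PASSWORD='secret'          # placeholder credentials
      duplicity --full-if-older-than 1M \
          /srv/data ftp://backupuser@ftp.example.com/backups/data

      # Show which backup chains and sets exist on the remote side
      duplicity collection-status ftp://backupuser@ftp.example.com/backups/data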

    Read the article

  • Fix X11 forwarding on OSX

    - by Such
    I am looking for a way to fix/debug an X11 forwarding session on OSX. Here is my situation: from my Mac I connect to an Ubuntu workstation with ssh -X (tried ssh -Y as well). X11 forwarding works perfectly with firefox, for instance: X11/Quartz is started automatically on OSX and firefox is displayed. X11 forwarding does not work with bat (the Bacula graphical console): X11 is started but no window is displayed, and there are no errors (/private/var/log/system.log). When I try the same from another Ubuntu workstation, it works perfectly for both firefox and bat. I guess the problem is on the OSX side then. I tried switching some options in X11 but nothing works. Would you have any idea on how to move forward? Thanks!
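
    A hedged debugging sketch for this situation, using only standard OpenSSH and X11 tools; if a bare xterm forwards fine while bat does not, the forwarding itself is healthy and the problem is in bat/Qt against OSX's X server:

      # Verbose output shows whether X11 forwarding is actually negotiated
      ssh -v -Y user@workstation 2>&1 | grep -i x11

      # On the remote end: DISPLAY should point at the forwarded display
      # (e.g. localhost:10.0) and xauth should hold a matching cookie
      echo $DISPLAY
      xauth list

      # Minimal X client as a control case
      xterm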

    Read the article

  • Managing service passwords with Puppet

    - by Jeff Ferland
    I'm setting up my Bacula configuration in Puppet. One thing I want to do is ensure that each password field is different. My current thought is to hash the hostname with a secret value, so that each file daemon has a unique password and that password can be written to both the director configuration and the file server. I definitely don't want to use one universal password, as that would permit anybody who might compromise one machine to get access to any machine through Bacula. Is there another way to do this other than using a hash function to generate the passwords? Clarification: This is NOT about user accounts for services. This is about the authentication tokens (to use another term) in the client/server files. Example snippet:

      Director {                            # define myself
        Name = <%= hostname %>-dir
        QueryFile = "/etc/bacula/scripts/query.sql"
        WorkingDirectory = "/var/lib/bacula"
        PidDirectory = "/var/run/bacula"
        Maximum Concurrent Jobs = 3
        Password = "<%= somePasswordFunction %>"   # Console password
        Messages = Daemon
      }
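
    A minimal sketch of the hashing idea itself, written as shell for clarity; in Puppet this would live in a custom function or in the ERB template, and both the hostname and the secret below are placeholders:

      # Deterministic per-host token: the same inputs on the director and on
      # the client yield the same password, so both config files agree
      hostname=client1.example.com           # placeholder host
      secret='some-long-private-value'       # shared secret, keep out of VCS
      echo -n "${hostname}${secret}" | sha256sum | awk '{print $1}'

    An HMAC construction (openssl dgst -sha256 -hmac "$secret") would be a slightly more robust variant than plain concatenation.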

    Read the article

  • Linux server is only using 60% of memory, then swapping

    - by Kamil Kisiel
    I've got a Linux server that's running our Bacula backup system. The machine is grinding like mad because it's going heavily into swap. The problem is, it's only using 60% of its physical memory! Here's the output from free -m:

      free -m
                   total       used       free     shared    buffers     cached
      Mem:          3949       2356       1593          0          0          1
      -/+ buffers/cache:       2354       1595
      Swap:         7629       1804       5824

    and some sample output from vmstat 1:

      procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
       r  b    swpd    free  buff   cache   si    so     bi    bo    in     cs us sy id wa st
       0  2 1843536 1634512     0    4188   54    13   2524   666     2      1  1  1 89  9  0
       1 11 1845916 1640724     0     388 2700  4816 221880  4879 14409 170721  4  3 63 30  0
       0  9 1846096 1643952     0       0 4956   756 174832   804 12357 159306  3  4 63 30  0
       0 11 1846104 1643532     0       0 4916   540 174320   580 10609 139960  3  4 64 29  0
       0  4 1846084 1640272     0    2336 4080   524 140408   548  9331 118287  3  4 63 30  0
       0  8 1846104 1642096     0    1488 2940   432 102516   457  7023  82230  2  4 65 29  0
       0  5 1846104 1642268     0    1276 3704   452 126520   452  9494 119612  3  5 65 27  0
       3 12 1846104 1641528     0     328 6092   608 187776   636  8269 113059  4  3 64 29  0
       2  2 1846084 1640960     0     724 5948     0 111480     0  7751 116370  4  4 63 29  0
       0  4 1846100 1641484     0     404 4144  1476 125760  1500 10668 105358  2  3 71 25  0
       0 13 1846104 1641932     0       0 5872   828 153808   840 10518 128447  3  4 70 22  0
       0  8 1846096 1639172     0    3164 3556   556  74884   580  5082  65362  2  2 73 23  0
       1  4 1846080 1638676     0     396 4512    28  50928    44  2672  38277  2  2 80 16  0
       0  3 1846080 1628808     0    7132 2636     0  28004     8  1358  14090  0  1 78 20  0
       0  2 1844728 1618552     0   11140 7680     0  12740     8   763   2245  0  0 82 18  0
       0  2 1837764 1532056     0  101504 2952     0  95644    24   802   3817  0  1 87 12  0
       0 11 1842092 1633324     0    4416 1748 10900 143144 11024  6279 134442  3  3 70 24  0
       2  6 1846104 1642756     0       0 4768   468  78752   468  4672  60141  2  2 76 20  0
       1 12 1846104 1640792     0     236 4752   440 140712   464  7614  99593  3  5 58 34  0
       0  3 1846084 1630368     0    6316 5104     0  20336     0  1703  22424  1  1 72 26  0
       2 17 1846104 1638332     0    3168 4080  1720 211960  1744 11977 155886  3  4 65 28  0
       1 10 1846104 1640800     0     132 4488   556 126016   584  8016 106368  3  4 63 29  0
       0 14 1846104 1639740     0    2248 3436   428 114188   452  7030  92418  3  3 59 35  0
       1  6 1846096 1639504     0    1932 5500   436 141412   460  8261 112210  4  4 63 29  0
       0 10 1846104 1640164     0    3052 4028   448 147684   472  7366 109554  4  4 61 30  0
       0 10 1846100 1641040     0    2332 4952   632 147452   664  8767 118384  3  4 63 30  0
       4  8 1846084 1641092     0     664 4948   276 152264   292  6448  98813  5  5 62 28  0

    Furthermore, the output of top sorted by CPU time seems to support the theory that swap is what's bogging down the system:

      top - 09:05:32 up 37 days, 23:24, 1 user, load average: 9.75, 8.24, 7.12
      Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie
      Cpu(s): 1.6%us, 1.4%sy, 0.0%ni, 76.1%id, 20.6%wa, 0.1%hi, 0.2%si, 0.0%st
      Mem:  4044632k total, 2405628k used, 1639004k free,   0k buffers
      Swap: 7812492k total, 1851852k used, 5960640k free, 436k cached

        PID USER     PR NI  VIRT RES SHR S %CPU %MEM     TIME+   TIME COMMAND
       4174 root     17  0 63156 176  56 S    8  0.0   2138:52  35,38 bacula-fd
       4185 root     17  0 63352 284 104 S    6  0.0   1709:25  28,29 bacula-sd
        240 root     15  0     0   0   0 D    3  0.0 831:55.19 831:55 kswapd0
       2852 root     10 -5     0   0   0 S    1  0.0 126:35.59 126:35 xfsbufd
       2849 root     10 -5     0   0   0 S    0  0.0 119:50.94 119:50 xfsbufd
       1364 root     10 -5     0   0   0 S    0  0.0 117:05.39 117:05 xfsbufd
         21 root     10 -5     0   0   0 S    1  0.0  48:03.44  48:03 events/3
       6940 postgres 16  0 43596   8   8 S    0  0.0  46:50.35  46:50 postmaster
       1342 root     10 -5     0   0   0 S    0  0.0  23:14.34  23:14 xfsdatad/4
       5415 root     17  0 1770m 108  48 S    0  0.0  15:03.74  15:03 bacula-dir
         23 root     10 -5     0   0   0 S    0  0.0  13:09.71  13:09 events/5
       5604 root     17  0 1216m 500 200 S    0  0.0  12:38.20  12:38 java
       5552 root     16  0 1194m 580 248 S    0  0.0  11:58.00  11:58 java

    Here's the same sorted by virtual memory image size:

      top - 09:08:32 up 37 days, 23:27, 1 user, load average: 8.43, 8.26, 7.32
      Tasks: 173 total, 1 running, 172 sleeping, 0 stopped, 0 zombie
      Cpu(s): 3.6%us, 3.4%sy, 0.0%ni, 62.2%id, 30.2%wa, 0.2%hi, 0.3%si, 0.0%st
      Mem:  4044632k total, 2404212k used, 1640420k free,   0k buffers
      Swap: 7812492k total, 1852548k used, 5959944k free, 100k cached

        PID USER     PR NI  VIRT RES SHR S %CPU %MEM     TIME+   TIME COMMAND
       5415 root     17  0 1770m  56  44 S    0  0.0  15:03.78  15:03 bacula-dir
       5604 root     17  0 1216m 492 200 S    0  0.0  12:38.30  12:38 java
       5552 root     16  0 1194m 476 200 S    0  0.0  11:58.20  11:58 java
       4598 root     16  0  117m  44  44 S    0  0.0   0:13.37   0:13 eventmond
       9614 gdm      16  0 93188   0   0 S    0  0.0   0:00.30   0:00 gdmgreeter
       5527 root     17  0 78716   0   0 S    0  0.0   0:00.30   0:00 gdm
       4185 root     17  0 63352 284 104 S   20  0.0   1709:52  28,29 bacula-sd
       4174 root     17  0 63156 208  88 S   24  0.0   2139:25  35,39 bacula-fd
      10849 postgres 18  0 54740 216 108 D    0  0.0   0:31.40   0:31 postmaster
       6661 postgres 17  0 49432   0   0 S    0  0.0   0:03.50   0:03 postmaster
       5507 root     15  0 47980   0   0 S    0  0.0   0:00.00   0:00 gdm
       6940 postgres 16  0 43596  16  16 S    0  0.0  46:51.39  46:51 postmaster
       5304 postgres 16  0 40580 132  88 S    0  0.0   6:21.79   6:21 postmaster
       5301 postgres 17  0 40448  24  24 S    0  0.0   0:32.17   0:32 postmaster
      11280 root     16  0 40288  28  28 S    0  0.0   0:00.11   0:00 sshd
       5534 root     17  0 37580   0   0 S    0  0.0   0:56.18   0:56 X
      30870 root     30 15 31668  28  28 S    0  0.0   1:13.38   1:13 snmpd
       5305 postgres 17  0 30628  16  16 S    0  0.0   0:11.60   0:11 postmaster
      27403 postfix  17  0 30248   0   0 S    0  0.0   0:02.76   0:02 qmgr
      10815 postfix  15  0 30208  16  16 S    0  0.0   0:00.02   0:00 pickup
       5306 postgres 16  0 29760  20  20 S    0  0.0   0:52.89   0:52 postmaster
       5302 postgres 17  0 29628  64  32 S    0  0.0   1:00.64   1:00 postmaster

    I've tried tuning the swappiness kernel parameter to both high and low values, but nothing appears to change the behavior here. I'm at a loss to figure out what's going on. How can I find out what's causing this?

    Update: The system is a fully 64-bit system, so there should be no question of memory limitations due to 32-bit issues.

    Update 2: As I mentioned in the original question, I've already tried tuning swappiness to all sorts of values, including 0. The result is always the same, with approximately 1.6 GB of memory remaining unused.

    Update 3: Added top output to the above info.
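
    One hedged line of investigation, not from the question itself: on multi-socket hardware this pattern (free memory coexisting with heavy swap, and swappiness having no effect) is often NUMA-related, with one memory node exhausted while another sits idle. Assuming numactl/numastat are installed:

      # Per-node memory totals: look for one node nearly full while another is free
      numactl --hardware
      numastat

      # If this prints 1, the kernel prefers reclaiming (and swapping) on the
      # local node over allocating from a remote node
      cat /proc/sys/vm/zone_reclaim_mode

      # Where a given process's pages actually live (PID 5415 is bacula-dir above)
      head /proc/5415/numa_maps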

    Read the article

  • Automatically wake up notebooks not on the ethernet

    - by gletscher
    I am looking for an automated backup system, and I like Bacula. I have 3 notebooks and a desktop computer that need regular backup. Now, I don't want to leave them running all night just to do the backups, so I was thinking I could use wake-on-LAN to have Bacula wake up the machines, do the backups, and shut them down afterwards. While this may work with devices on the ethernet, it won't work with the notebooks on the wifi. So is it possible to have the notebooks scheduled to automatically wake up from suspend or shutdown? Or is it possible to intercept a shutdown command if it is after a certain hour and call the Bacula director to start the backup first? I'm new to controlling the Linux system using scripts, so any hints on how and where to start are greatly appreciated. Thanks a lot for your help, input and ideas.
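
    For the wired machines, a rough sketch of the wake-then-backup idea (MAC address, hostname and job name below are placeholders); wake-on-LAN generally cannot reach suspended wifi notebooks, so those would instead set a local RTC timer before sleeping:

      #!/bin/bash
      # Wake the desktop, wait until it answers ping, then start its Bacula job
      wakeonlan 00:11:22:33:44:55
      until ping -c1 -W2 desktop1 >/dev/null 2>&1; do sleep 5; done
      echo "run job=Desktop1Backup yes" | bconsole

      # On a notebook, before suspending: wake from suspend at 02:00 via the RTC
      rtcwake -m mem -t "$(date -d 'tomorrow 02:00' +%s)"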

    Read the article

  • SQLVDI error - attempt to release mutex not owned by caller

    - by Chris W
    I've started getting some errors in the App event log of one of our database servers (Windows 2003 & SQL Server 2005). The nightly full database backups are completing successfully; however, immediately after the job success is written to the event log, there is a run of entries that say:

      SQLVDI: Loc=CVDS. Desc=Release(ClientAliveMutex). ErrorCode=(288) Attempt to release mutex not owned by caller.

    There are five of these logged; the server itself has more than 20 databases on it, which are all backed up successfully. The server is backed up by Bacula using a VSS backup. Has anyone got any ideas what would be causing the errors? They seem to have started after a re-boot on Friday to install some patches, which included KB960089. Edit: After getting the errors for a few days, they've now stopped without any action on my part other than letting the backups continue as they were. It may be a coincidence, but they stopped after Bacula completed its weekly full rather than the daily incremental backup.

    Read the article

  • No free disk space ;[

    - by skomak
    Hi, I have a weird situation: the Linux df command says there is no free disk space:

      [root@backup cache]# df -h
      Filesystem            Size  Used Avail Use% Mounted on
      /dev/sda3              72G   70G     0 100% /
      /dev/sda1             190M   11M  170M   7% /boot
      tmpfs                 248M     0  248M   0% /dev/shm

    but du -sh /* says:

      [root@backup cache]# du -sh /*
      4.0K    /bacula-restores
      7.4M    /bin
      5.4M    /boot
      3.6T    /data
      116K    /dev
      55M     /etc
      204K    /home
      76M     /lib
      16K     /lost+found
      12K     /media
      0       /misc
      16K     /mnt
      8.0K    /mount
      0       /net
      8.0K    /opt
      0       /proc
      2.3G    /root
      32M     /sbin
      8.0K    /selinux
      168K    /share
      8.0K    /srv
      0       /sys
      361M    /test
      20K     /tmp
      3.2G    /usr
      1.5G    /var

    Could you tell me where the problem is? Where is my space? I can't figure it out :(
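
    Two common culprits worth checking here (a hedged suggestion, not from the post): space held by deleted-but-still-open files, which df counts but du cannot see, and files written under a mount point before the filesystem was mounted over it. Note also that ext3 reserves 5% for root by default (about 3.6G of 72G), which df reports as unavailable:

      # Deleted files still held open by some process
      lsof +L1

      # Look underneath the mount points by bind-mounting / elsewhere
      mkdir /mnt/rootonly && mount --bind / /mnt/rootonly
      du -sh /mnt/rootonly/data
      umount /mnt/rootonly

      # Reserved-blocks setting on the full filesystem
      tune2fs -l /dev/sda3 | grep -i 'reserved block'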

    Read the article

  • Reliable Backup Solution for Linux for Complete System Restoration

    - by Chris S
    What's the best backup solution for Linux that can completely restore the entire filesystem to a blank hard drive (including partitioning) after an old drive dies? I'm currently running a few Ubuntu machines, some with RAID-1 and others without RAID (mostly laptops). I'd like to implement a backup solution that can take incremental snapshots of the entire filesystem, so that if I were to replace all the hard drives in a machine, I could use the backup to restore a perfect copy of the previous filesystem. Unfortunately, nearly all the backup solutions I've found seem to be glorified rsync scripts, which only back up some files and have no easy way to restore once the entire filesystem is gone. Some of the more complicated solutions, like Bacula, might do what I need, but require a complicated server/client setup and are notoriously difficult to maintain. I've heard that Apple's TimeMachine utility has this ability, and I've had similar success taking differential disk images with Acronis True Image on Windows, but of course neither of these works on Linux. Is there anything comparable for Ubuntu?
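
    For what it's worth, a sketch of the bare-metal pieces that plain rsync scripts usually omit (assumes MBR partitioning and GRUB; device names are examples):

      # Save the partition table and boot sector alongside the file backup
      sfdisk -d /dev/sda > /backup/sda.sfdisk
      dd if=/dev/sda of=/backup/sda.mbr bs=512 count=1

      # File-level copy with hard links, ACLs, xattrs and numeric IDs preserved
      rsync -aHAX --numeric-ids --one-file-system / /backup/rootfs/

      # Restore outline: repartition, mkfs, rsync back, reinstall the bootloader
      #   sfdisk /dev/sda < /backup/sda.sfdisk
      #   grub-install /dev/sda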

    Read the article

  • Lightweight, low cost enterprise backup solution

    - by Scott
    Looking for a backup solution primarily for Windows clients (XP/7) that will either back up to 2 different servers (1 on site, 1 off site over the internet; it can be our own server), or back up to 1 server which we would then somehow back up offsite/over the internet. By lightweight, I mean the backup client software should not eat up much memory and processor, since some of the client machines are older. I am used to using CrashPlan for home use: the pricing is nice for the amount of backup I get, it works great, it is easy to install and get going, and I can back up to my own machines locally and over the net. However, the price is going to be a little steep for enterprise-level backup of 1500+ machines. Possibly Zmanda and Bacula are good choices to consider? Are they lightweight? Can the clients/agents be set to go over the net and/or to multiple backup servers?

    Read the article

  • Server Backup Solutions - compiling?

    - by Webnet
    I've been researching backup solutions for a LAMP environment, to back up our databases and files alike. I'm looking for something open source with a UI (so I'm less likely to screw it up). I downloaded http://www.bacula.org/en/ and a few others, but they all talk about compiling first. This doesn't seem like something I should need to do; is there a Linux package that handles backups that I don't know about? I should also specify that I'm looking to set up a backup server which backs up from several locations.
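
    On Debian/Ubuntu there should be no need to compile; a sketch assuming the metapackage names available at the time (bacula-server, bacula-client, and the bat GUI console in bacula-console-qt):

      # On the backup server: director + storage daemon (pulls in a DB backend)
      sudo apt-get install bacula-server

      # On each machine being backed up: just the file daemon
      sudo apt-get install bacula-client

      # Optional graphical console (bat)
      sudo apt-get install bacula-console-qt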

    Read the article

  • Backup Solr home

    - by user226188
    I'm new to Solr: I've successfully installed Tomcat, the Solr 4.3.1 webapp, and two collections on a CentOS 6.4 machine. Now my server is in production and I need to make backups of Solr, so I would like to know the best way to back it up. For the moment I'm doing: stop Tomcat, tar my Solr home, start Tomcat; but I've read that this is not a good solution? Moreover, it means stopping the whole of Tomcat, which hosts other webapps besides Solr. I've also heard that there is a script named "backup" in the Solr home's bin folder, but my bin folder is empty :( I don't want to set up another slave server with replication; for me that's not a backup solution, because my backups are supposed to be sent to a Bacula backup server every night. Is there no builtin solution I can build a script around, like mysqldump for MySQL servers? Thanks for the help!
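
    One approach that avoids stopping Tomcat (a hedged sketch; the core names below are placeholders): Solr's ReplicationHandler can snapshot a live index over HTTP, and the resulting snapshot directories can then be picked up by the nightly Bacula job:

      #!/bin/bash
      # Ask each core's replication handler for an online snapshot,
      # keeping only the two most recent ones
      for core in collection1 collection2; do
          curl "http://localhost:8983/solr/${core}/replication?command=backup&numberToKeep=2"
      done
      # Snapshots appear as snapshot.<timestamp> under each core's data/ directory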

    Read the article

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only.

      /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
      /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump

    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema, and finally the data? I did the following:

      pg_restore --format=c -s -C -d template1 xxx.schema.dump
      pg_restore --format=c -a -d xxx xxx.data.dump

    The first restore creates the database with empty tables, but the second gives many errors like this one:

      pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734"
      DETAIL: Key (contentid)=(1474566) is not present in table "Table23".

    Any ideas?
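
    A hedged sketch of one way through the foreign-key problem: a data-only restore fires constraint checks row by row, but pg_restore's --disable-triggers option (run as a superuser) suspends them while the data loads; names follow the question's xxx placeholders:

      # 1. Create the database and restore the schema, as in the question
      pg_restore --format=c -s -C -d template1 xxx.schema.dump

      # 2. Load the data with triggers and FK checks suspended
      pg_restore --format=c -a --disable-triggers -d xxx xxx.data.dump

    Alternatively, a single dump without -s/-a sidesteps the issue entirely, since pg_restore then loads all data before creating the foreign-key constraints.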

    Read the article

  • Should the virtualization host be allowed to run any service?

    - by Giordano
    I recently set up a virtualization server for the small company I'm running. This server runs a few virtual machines that are used for development, testing, etc. My business partner works from a remote location, so I also installed a VPN server on the virtualization host to make it possible for him to reach the company services safely. Moreover, again on the virtualization host, I installed Bacula to perform the backups of the data. Is it advisable/good practice to do so, or should I create one more virtual machine for backups and VPN? Is it a bad idea to run these services on the host itself? If yes, why? Thanks in advance!

    Read the article

  • What server setup for a small web development company? [closed]

    - by Giordano
    I co-own a company with a friend of mine and we have decided to buy a new server to support our business (our current server is an Asus EEE Box, working great but too limited :) ). I should mention that we are web developers, but occasionally we do small-office sysadmin work. Thus, 99% of the time we work on GNU/Linux (mainly Ubuntu), but from time to time we need to set up a Windows environment to assist some customers (e.g. set up a temporary SQL Server 2008). Our requirements:

      - Low budget: we don't want the cheapest solution out there, but we can't afford to spend too much. The budget could be ~1000-1500€ (before VAT).
      - Robustness: we would like to set up a RAID array, and maybe have an external disk where we can store backups.
      - Virtualization: we need to be able to set up a few servers for development. The scenario is something like this (~8 appliances running in parallel): a Redmine + Git server, a Bacula server, an FTP server, and 3-4 virtual appliances that could be set up on demand to test our applications or support a customer (LAMP, Tomcat + PostgreSQL, SQL Server).
      - Support: if something breaks down, it shouldn't be too difficult to find a replacement.

    Now, given the main requirements, there are some doubts we need to clarify:

      1. Do you suggest buying a prepackaged solution (for example a customized Dell PowerEdge T110 or T310) or assembling the server ourselves (buying the separate components)?
      2. What RAID configuration do you suggest? I was thinking of RAID 1 (probably cheaper) or RAID 5. Should we buy a hardware RAID controller, or is it OK to use software RAID (mdadm, as sketched below)? If hardware, which controller do you suggest?
      3. What processor do you suggest (Intel Xeon, i3, i5, i7, AMD)?
      4. How much RAM? (I was thinking at least 8GB, ~1GB per appliance.)
      5. What virtualization software do you recommend? VMware seems to be the best choice, but what about Xen or KVM? We don't want to buy licenses at the moment, so we would like to consider only free options.
      6. What OS do you recommend? We know Ubuntu, Debian and Gentoo very well (we would like to use Ubuntu Server); however, it seems a lot of people go for CentOS.

    Thanks in advance if you can help us with this! It's our first "serious" server, so many doubts popped up :) Please feel free to add further recommendations if you have some to share ;) Have a nice day
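
    On the software-RAID question above, a minimal mdadm RAID 1 sketch (device names are examples; run only against empty disks):

      # Mirror two disks into /dev/md0
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

      # Persist the array definition and watch the initial sync
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      cat /proc/mdstat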

    Read the article

  • Good Secure Backups for Developers at Home

    - by slashmais
    What is a good, secure method to do backups for programmers who do research & development at home and cannot afford to lose any work? Conditions:

      1. The backups must ALWAYS be within reasonably easy reach.
      2. An internet connection cannot be guaranteed to be always available.
      3. The solution must be either FREE or priced within reason, and subject to 2 above.

    Status Report: This is for now only considering free options. The following open-source projects are suggested in the answers (here & elsewhere):

      - BackupPC is a high-performance, enterprise-grade system for backing up Linux, WinXX and MacOSX PCs and laptops to a server's disk.
      - Storebackup is a backup utility that stores files on other disks.
      - mybackware: These scripts were developed to create SQL dump files for basic disaster recovery of small MySQL installations.
      - Bacula is [...] to manage backup, recovery, and verification of computer data across a network of computers of different kinds. In technical terms, it is a network-based backup program.
      - AutoDL 2 and Sec-Bk: AutoDL 2 is a scalable, transport-independent automated file transfer system. It is suitable for uploading files from a staging server to every server on a production server farm [...] Sec-Bk is a set of simple utilities to securely back up files to a remote location, even a public storage location.
      - rsnapshot is a filesystem snapshot utility for making backups of local and remote systems.
      - rbme: Using rsync for backups [...] you get perpetual incremental backups that appear as full backups (for each day) and thus allow easy restore or further copying to tape etc.
      - Duplicity backs up directories by producing encrypted tar-format volumes and uploading them to a remote or local file server. [...] uses librsync [for] incremental archives.

    Other possibilities:

      - Using a Distributed Version Control System (DVCS) such as Git (/Easy Git), Bazaar or Mercurial answers the need to have the backup available locally (see the sketch below).
      - Use free online storage space as a remote backup, e.g.: compress your work/backup directory and mail it to your gmail account.

    Strategies: see crazyscot's answer.
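
    A hedged sketch combining the last two ideas (assumes a git work tree, a GPG key, and mutt; paths and addresses are placeholders): a bundle is a single file carrying full history, so it satisfies condition 1 on a USB stick and can be mailed offsite whenever a connection happens to be up (condition 2):

      #!/bin/bash
      # Pack the repository's entire history into one portable file
      stamp=$(date +%F)
      git bundle create "/media/usbstick/work-$stamp.bundle" --all

      # Encrypt a copy and mail it to an offsite mailbox when online
      gpg -e -r me@example.com -o "/tmp/work-$stamp.bundle.gpg" \
          "/media/usbstick/work-$stamp.bundle"
      echo "offsite backup $stamp" | \
          mutt -s "backup $stamp" -a "/tmp/work-$stamp.bundle.gpg" -- me@gmail.com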

    Read the article
