Search Results

Search found 7592 results on 304 pages for 'dev'.

Page 113/304

  • How to use SUSE Linux as a small router

    - by Mingo
    I have 2 subnets, 192.168.1.0/24 and 192.168.2.0/24, and one SUSE Linux box with 2 interfaces, eth0 and eth1. I want to configure the SUSE box as a router so that these 2 subnets can communicate with each other. These are my steps:
    1. Set eth0's IP to 192.168.1.254 and eth1's IP to 192.168.2.254.
    2. Add routes on the box: route add -net 192.168.1.0 netmask 255.255.255.0 dev eth0 and route add -net 192.168.2.0 netmask 255.255.255.0 dev eth1
    3. Set the gateway for 192.168.1.0/24 to 192.168.1.254, and the gateway for 192.168.2.0/24 to 192.168.2.254.
    I am not sure whether this will work or not. Is there some step I am missing?
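
    One step these three do not cover is kernel IP forwarding, without which the box will not route between eth0 and eth1 at all. A minimal sketch of enabling it (the sysctl.conf line is the usual way to make it survive a reboot):

        # enable forwarding for the running kernel
        sysctl -w net.ipv4.ip_forward=1
        # make it persistent across reboots
        echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf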

    Read the article

  • TRIM in CentOS 5.x?

    - by Frank Farmer
    I've got a bunch of CentOS 5 boxes with Intel X25 drives (X25-M in dev, X25-E in prod, I think). We're seeing severely degraded disk performance on one of our dev boxes (which easily does 5+ GB of writes every day, meaning we write the full drive's worth of data several times a month). The box in question: Intel X25-M, ext3 (which doesn't support TRIM), CentOS 5, VMware ESXi. Wikipedia mentions that newer versions of hdparm (which CentOS 5 doesn't include) can bulk-TRIM free blocks. This utility also sounds potentially useful: http://blog.patshead.com/2009/12/a-quick-and-dirty-wipersh-fix-for-intel-x25-m.html Disk write performance has dropped to <1 MB/sec while copying a 300 MB directory on this system, as of a month or so ago -- it used to be able to perform the same copy operation at least 5 times faster. What can I do to recover performance on this system?
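
    Before and after any TRIM attempt, it helps to have a repeatable number for raw write throughput rather than a directory-copy timing. A hedged sketch (the target path is an example; oflag=direct bypasses the page cache so the SSD itself is measured):

        dd if=/dev/zero of=/path/on/the/ssd/testfile bs=1M count=1024 oflag=direct
        rm /path/on/the/ssd/testfile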

    Read the article

  • How do I convert a Linux disk image into a sparse file?

    - by endolith
    I have a bunch of disk images, made with ddrescue, on an EXT partition, and I want to reduce their size without losing data, while still being mountable. How can I fill the empty space in the image's filesystem with zeros, and then convert the file into a sparse file so this empty space is not actually stored on disk? For example:
        > du -s --si --apparent-size Jimage.image
        120G Jimage.image
        > du -s --si Jimage.image
        121G Jimage.image
    This actually only has 50G of real data on it, though, so the second measurement should be much smaller. This supposedly will fill empty space with zeros:
        cat /dev/zero > zero.file
        rm zero.file
    But if sparse files are handled transparently, it might actually create a sparse file without writing anything to the virtual disk, ironically preventing me from turning the virtual disk image into a sparse file itself. :) Does it? Note: For some reason, sudo dd if=/dev/zero of=./zero.file works when cat does not on a mounted disk image.
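
    A sketch of one common sequence, assuming the image is loop-mounted at /mnt/image (paths are examples): zero the free space inside the image so unused blocks become runs of zeros, then rewrite the file with sparse detection so those runs are stored as holes:

        sudo dd if=/dev/zero of=/mnt/image/zero.file bs=1M   # stops when the filesystem fills up
        sudo rm /mnt/image/zero.file
        sudo umount /mnt/image
        cp --sparse=always Jimage.image Jimage.sparse        # re-copy, punching holes at zero runs
        mv Jimage.sparse Jimage.image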

    Read the article

  • Apache virtual host proxy to nginx for ruby

    - by Kevin Brown
    I'm running a few PHP sites off Apache and want to start Rails dev. I've installed RVM/nginx and can get to my Ruby site by going to websiteroot.com:8000... How do I pass ruby.websiteroot.com to websiteroot.com:8000? What's the best way for me to route a subdomain for Ruby dev? I'd switch to nginx completely if it weren't for all my PHP sites -- it seems easier to just proxy for Ruby. Advice? My nginx config looks like this:
        server {
            listen 8000;
            server_name website.com;
            root /home/me/sites/ruby_folder/public;
            ...
        }
    My Apache config looks like this:
        <VirtualHost *:80>
            ServerName ruby.website.com
            ProxyPreserveHost on
            ProxyPass / http://127.0.0.1:8000
            ProxyPassReverse / http://127.0.0.1:8000
        </VirtualHost>
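
    For the proxy vhost to work at all, mod_proxy and mod_proxy_http have to be loaded. A hedged sketch for a Debian/Ubuntu-style Apache layout (on a stock httpd.conf layout the equivalent LoadModule lines go in by hand):

        sudo a2enmod proxy proxy_http
        sudo apachectl configtest && sudo apachectl graceful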

    Read the article

  • Installing a Fedora 11 filesystem from an ISO into a VM image

    - by okhalid
    Hi, I need to install Fedora 11 on my monitor-less Linux box, which is running somewhere in a data center. I will use Fedora 11 as a virtual machine. What I already know/have:
    1) How to create LVM partitions and create an ext3 filesystem
    2) How to mount the LVM partition and the ISO image
    3) How to run the partition with Xen as a virtual machine
    What I need:
    1) To install the Fedora 11 filesystem into an LVM partition (let's say /dev/fedora11) from an ISO image, so that I have all the directories /root, /bin, /sys etc. under /dev/fedora11
    Any help would be much appreciated! NOTE: I don't have a monitor for this server, so I need to do it via SSH.

    Read the article

  • Ext3 partition doesn't mount on Snow Leopard using MacFUSE

    - by Fez
    I'm dual-booting OS X and Ubuntu on a Macbook 4,1. I'm trying to mount my Linux partition in OS X. I installed MacFUSE 2.0.3,2 and fuse-ext2-0.0.7 on Snow Leopard 10.6.5. I created the directory /Volumes/Ubuntu and tried to mount the disk there using the command:
        fuse-ext2 /dev/disk0s4 /Volumes/Ubuntu/
    This is the output I get:
        fuse-ext2: version:'0.0.7', fuse_version:'27' [main (../../fuse-ext2/fuse-ext2.c:324)]
        fuse-ext2: enter [do_probe (../../fuse-ext2/do_probe.c:30)]
        fuse-ext2: Error while trying to open /dev/disk0s4 (rc=13) [do_probe (../../fuse-ext2/do_probe.c:34)]
        fuse-ext2: Probe failed [main (../../fuse-ext2/fuse-ext2.c:340)]
    Any clue what's going wrong? Thanks!
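
    The rc=13 looks like errno 13, EACCES (permission denied), i.e. the probe cannot even open the raw /dev/disk0s4 node as a normal user, which is separate from any ext3 problem. A hedged first step is simply to repeat the same mount as root:

        sudo fuse-ext2 /dev/disk0s4 /Volumes/Ubuntu/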

    Read the article

  • Linux SW RAID: whole disk or per-partition?

    - by Steve Pomeroy
    I have inherited a machine which has 2 physical disks and uses Linux SW RAID(1). Both disks are partitioned, and the partitions are all individual arrays (/dev/md0, /dev/md6, etc.). Those arrays are then mounted (/boot, /home, etc., even /tmp). As RAID is designed to mitigate physical failures, is there any reason why one would use this technique over whole-disk arrays that are then partitioned (perhaps using LVM)? This seems prone to more potential issues, but may have some special properties that I haven't been able to glean. I'm planning on moving this setup to: disks → SW RAID(1) → LVM, as I'll be making multiple VMs out of the one machine, but wanted to make sure I knew what I was doing when I got rid of the old setup.
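
    For reference, a minimal sketch of the whole-disk layout described above, with device and volume names as examples only: one mirror across the raw disks, LVM on top, and the individual mounts becoming logical volumes:

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        pvcreate /dev/md0
        vgcreate vg0 /dev/md0
        lvcreate -L 20G -n root vg0
        lvcreate -L 5G -n home vg0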

    Read the article

  • Upgrade MySQL on Plesk on Windows

    - by Cyril Gupta
    I just got a nasty surprise when I installed a website in Unicode Hindi (an Indian language) on a server: all freshly entered Unicode data is turning into question marks on the server. On my dev machine it works perfectly. I found that I have MySQL version 5.0.45 on the server (installed by default by Plesk, I guess). On my dev machine I have version 5.1.33. I believe the problem could be due to the version difference; the new version of MySQL apparently has better support for Unicode than the older one. I want to upgrade MySQL on my Windows Server machine with Plesk installed on it. I am reluctant to just install the new version using the MySQL installer because Plesk maintains some custom settings for MySQL, and I am afraid the new version could change those settings and break my DB. Can anyone tell me whether I have to do anything special to upgrade MySQL under Plesk on Windows, or can I just use the new version's installer?
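
    Before upgrading, it may be worth ruling out a character-set mismatch, since literal question marks usually come from the connection or database charset rather than the server version (MySQL 5.0 already supports utf8). A hedged check, with the admin user and the mydb database as examples, run from wherever the MySQL client binary lives on the Plesk box:

        mysql -u admin -p -e "SHOW VARIABLES LIKE 'character_set%';"
        mysql -u admin -p -e "SHOW CREATE DATABASE mydb;"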

    Read the article

  • How to choose which Python version to install in Gentoo

    - by Shamanu4
    Hello, I'm using Gentoo Linux and I want to install Python 2.5, but there's a problem. emerge -av python shows:
        These are the packages that would be merged, in order:
        Calculating dependencies... done!
        [ebuild U ] dev-lang/python-3.1.2-r3 [3.1.1-r1] USE="gdbm ipv6 ncurses readline ssl threads (wide-unicode%*) xml -build -doc -examples -sqlite* -tk -wininst (-ucs2%)" 9,558 kB
        [ebuild U ] app-admin/python-updater-0.8 [0.7] 8 kB
    and there are ebuilds for more versions:
        # ls /usr/portage/dev-lang/python
        ChangeLog files Manifest metadata.xml python-2.4.6.ebuild python-2.5.4-r4.ebuild python-2.6.4-r1.ebuild python-2.6.5-r2.ebuild python-3.1.2-r3.ebuild
    How do I choose the ebuild that I want (python-2.5.4-r4)?
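
    A minimal sketch of the usual Portage answer, assuming python-2.5.4-r4 is the target: give emerge an exact-version atom, then point the system interpreter at it with eselect (which accepts either the number from its list or the name):

        emerge -av "=dev-lang/python-2.5.4-r4"
        eselect python list
        eselect python set python2.5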

    Read the article

  • Debian modem problems

    - by Raafat
    Hey there guys... I'm a new Debian user; it looks like a very good choice for me -- everything is stable, free and easy to use. The problem is, I'm using my modem to establish a dial-up connection to the internet (PPP) (a very old, stupid way I'm forced to use for now), using the KPPP application to do that, and nothing is working properly for me. It seems like it didn't recognize my modem or something. I already tried a few things, and now I know my modem is on /dev/tty0, so I made a link to it at /dev/modem and queried the modem using KPPP, and it responded with something like: ATI: ATI0: ATI1: ... ATI7: with a text box to fill in next to each one of these ATIs. Now, when I press connect in KPPP, it says "modem ready", and that's it. BTW, my modem is an MDC AC'97. Any suggestions please...
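
    One hedged suggestion: /dev/tty0 is a virtual console, not a serial port, so it will never answer AT commands, and MDC AC'97 parts are softmodems that need a userspace driver exposing their own device node. On Debian the package I would try first is sl-modem-daemon (name worth verifying for your release; it lives in non-free and typically exposes the modem as /dev/ttySL0):

        sudo apt-get install sl-modem-daemon
        # then point KPPP at /dev/ttySL0 instead of /dev/modem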

    Read the article

  • How to tell X.org to reload input device module? (Working around suspend-to-ram crash on Acer laptop)

    - by Vi
    When X.org boots up, the Synaptics touchpad works well. But when I remove the module, X falls back to /dev/input/mice and doesn't use the normal driver even when the touchpad is available again. Xorg.0.log:
        ...
        (II) XINPUT: Adding extended input device "Synaptics Touchpad" (type: TOUCHPAD)
        (--) Synaptics Touchpad: touchpad found
        # { rmmod psmouse && echo mem > /sys/power/state && modprobe psmouse; }
        (WW) : No Device specified, looking for one...
        (II) : Setting Device option to "/dev/input/mice"
        ...
    How do I tell X.org to try its InputDevice again (without restarting the X server)? P.S. rmmod psmouse is needed to prevent crashing of the Acer Extensa 5220 when resuming from suspend-to-RAM. Update: found the answer myself: doing
        xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1
    after reloading the kernel module reloads the touchpad. Now suspend-to-RAM works OK.
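
    A sketch of tying both workarounds into one suspend script, built only from the commands already in the question (run as root, with DISPLAY and XAUTHORITY set so xinput can reach the running X server):

        #!/bin/sh
        rmmod psmouse                  # avoid the Extensa 5220 resume crash
        echo mem > /sys/power/state    # suspend; execution resumes here on wake
        modprobe psmouse
        xinput set-int-prop "Synaptics Touchpad" "Device Enabled" 8 1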

    Read the article

  • Create new partition on live production CentOS server

    - by Kimmel
    I have a production server that is running CentOS. I'd like to create a partition on the server without having to reinstall everything. I have CLI and VNC access to the remote server. Is there any way that I can create a partition safely? Here's my output from fdisk -l:
        Disk /dev/sda: 85.9 GB, 85899345920 bytes
        255 heads, 63 sectors/track, 10443 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00033d5e
        Device Boot    Start       End      Blocks   Id  System
        /dev/sda1 *        1     10444    83885056   83  Linux
    Thanks.
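
    A hedged first check is whether there is any unallocated space left to carve a new partition from; fdisk's cylinder arithmetic makes that hard to read, while parted can print free space directly and is safe to run read-only on a live box:

        parted /dev/sda unit GB print free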

    Read the article

  • Cannot install passenger with Nginx

    - by Luc
    Hello, I have a Rack application that I want to migrate from Ruby 1.8.7 + Apache + Passenger to Ruby 1.9.1 + Nginx + Passenger. I have made up the following script for a quick all-in-one install, and it raises an error... Here is the installation script (a basic one with all the steps I need to install everything on a fresh Ubuntu 10.04 Lucid Lynx box):
        # Nginx sources
        cd /tmp
        wget http://nginx.org/download/nginx-0.7.66.tar.gz
        tar xzf nginx-0.7.66.tar.gz
        cd nginx-0.7.66
        # openssl required for SSL/TLS
        sudo apt-get install openssl
        sudo apt-get install libssl-dev
        # Compilation stuff
        sudo apt-get install zlib1g-dev
        # Ruby interpreter 1.9.1
        sudo apt-get install ruby1.9.1 ruby1.9.1-dev rubygems1.9.1 irb1.9.1 ri1.9.1 rdoc1.9.1 build-essential nginx libopenssl-ruby1.9.1
        # Make sure the default ruby uses version 1.9.1
        sudo update-alternatives --install /usr/bin/ruby ruby /usr/bin/ruby1.9.1 400 --slave /usr/share/man/man1/ruby.1.gz ruby.1.gz /usr/share/man/man1/ruby1.9.1.1.gz --slave /usr/bin/ri ri /usr/bin/ri1.9.1 --slave /usr/bin/irb irb /usr/bin/irb1.9.1 --slave /usr/bin/rdoc rdoc /usr/bin/rdoc1.9.1
        sudo update-alternatives --config ruby
        # Passenger (rake-0.8.7, fastthread-1.0.7, rack-1.1.0, passenger-2.2.14)
        sudo gem install passenger
        # Activate Passenger in nginx, select option 2 to use the nginx sources downloaded above
        cd /var/lib/gems/1.9.1/gems/passenger-2.2.14/bin
        sudo ./passenger-install-nginx-module
    And this is the error message I got:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/ContentHandler.c
        gcc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Wunused-function -Wunused-variable -Wunused-value -Werror -g -I src/core -I src/event -I src/event/modules -I src/os/unix -I /tmp/pcre-8.00 -I objs -I src/http -I src/http/modules -I src/mail \
            -o objs/addon/nginx/StaticContentHandler.o \
            /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c: In function ‘passenger_static_content_handler’:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/ext/nginx/StaticContentHandler.c:71: error: ‘ngx_http_request_t’ has no member named ‘zero_in_uri’
        make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1
        make[1]: Leaving directory `/tmp/nginx-0.7.66'
        make: *** [build] Error 2
        --------------------------------------------
        It looks like something went wrong
        Please read our Users guide for troubleshooting tips:
        /var/lib/gems/1.9.1/gems/passenger-2.2.14/doc/Users guide Nginx.html
    I do not understand the reason for this error. Is this a compatibility problem? Hope you have some clues :) Thanks a lot, Luc
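
    For what it's worth, zero_in_uri is a field that was removed from nginx's request struct in releases close to the one being built here, so this reads like a Passenger 2.2.14 vs nginx 0.7.66 version mismatch rather than a broken box. A hedged way out is to update Passenger and rerun the installer, or let the installer fetch an nginx it knows how to patch (option 1):

        sudo gem update passenger
        cd /var/lib/gems/1.9.1/gems/passenger-*/bin
        sudo ./passenger-install-nginx-module    # option 1 downloads and builds a compatible nginx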

    Read the article

  • Amazon EC2: Not able to open web application even though the port is opened

    - by learner
    I have a t1.micro instance with a public DNS name that looks similar to ec2-184-72-67-202.compute-1.amazonaws.com (some numbers changed). On this machine, I am running a Django app:
        $ sudo python manage.py runserver --settings=vlists.settings.dev
        Validating models...
        0 errors found
        Django version 1.4.1, using settings 'vlists.settings.dev'
        Development server is running at http://127.0.0.1:8000/
    I have opened port 8000 through the AWS console. Now when I hit http://ec2-184-72-67-202.compute-1.amazonaws.com:8000 in Chrome, I get "Oops! Google Chrome could not connect to..." What is it that I am doing wrong?
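
    The log line gives it away: the dev server is listening only on 127.0.0.1, so nothing outside the instance can reach it no matter what the security group allows. Binding to all interfaces is the usual fix; a minimal sketch with the same settings flag:

        python manage.py runserver 0.0.0.0:8000 --settings=vlists.settings.dev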

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • scanning only works under "sudo" (Ubuntu)

    - by JoelFan
    When I try to scan, using simple-scan, the UI says "Failed to scan -- Unable to connect to scanner". When I run it from the command line I get:
        joel@home:/usr/bin$ simple-scan -d
        ** (simple-scan:6554): DEBUG: Starting Simple Scan 2.32.0.1, PID=6554
        ** (simple-scan:6554): DEBUG: Restoring window to 600x400 pixels
        ** (simple-scan:6554): DEBUG: sane_init () -> SANE_STATUS_GOOD
        ** (simple-scan:6554): DEBUG: SANE version 1.0.22
        ** (simple-scan:6554): DEBUG: Requesting redetection of scan devices
        ** (simple-scan:6554): DEBUG: Processing request
        ** (simple-scan:6554): DEBUG: Requesting scan at 300 dpi from device '(null)'
        ** (simple-scan:6554): DEBUG: scanner_scan ("(null)", 300, SCAN_SINGLE)
        ** (simple-scan:6554): DEBUG: sane_get_devices () -> SANE_STATUS_GOOD
        ** (simple-scan:6554): DEBUG: Device: name="brother2:bus4;dev1" vendor="Brother" model="MFC-210C" type="USB scanner"
        ** (simple-scan:6554): DEBUG: Processing request
        ** (simple-scan:6554): DEBUG: sane_open ("brother2:bus4;dev1") -> SANE_STATUS_IO_ERROR
        ** (simple-scan:6554): WARNING **: Unable to get open device: Error during device I/O
    FYI, I have already done:
        joel@home:~$ sudo chmod a+rwx /dev/bus/usb
        joel@home:~$ sudo chmod a+rwx /dev/bus/usb/*
    If I run under sudo:
        joel@home:~$ sudo simple-scan
    it works. How can I get simple-scan to work without sudo?
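
    The chmod on /dev/bus/usb only lasts until the device nodes are recreated, so a hedged, more durable approach is group membership: on Ubuntu, SANE access is usually granted through the scanner group (some Brother USB models also want lp), followed by a fresh login:

        sudo adduser joel scanner
        # log out and back in, then verify:
        groups
        scanimage -L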

    Read the article

  • Error in using DBAN on HDD

    - by John Watson
    I am using DBAN to erase an HDD. DBAN is loaded from a CD, and the BIOS boot order has been set to favour the CD drive. On starting the laptop, the system boots from the CD and the DBAN interface can be seen. DBAN detects two storage devices, the HDD and the SD card. My HDD is 320 GB but DBAN says 298 GB. It erases the SD card, but when I try to erase the HDD, it gives the following error:
        DBAN finished with non-fatal errors.
        *ERROR /dev/sdb (process crash)
        *ERROR /dev/sda (process crash)
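
    As an aside, the 320 GB vs 298 GB gap is only decimal versus binary units (320 x 10^9 bytes is about 298 GiB), not missing space. If DBAN keeps crashing on the internal disk, a hedged fallback is a single-pass zero wipe from any Linux live CD; this is destructive, so the device name must be double-checked against fdisk -l first:

        fdisk -l
        dd if=/dev/zero of=/dev/sdX bs=1M    # sdX is a placeholder for the disk to wipe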

    Read the article

  • Repair GRUB bootloader for multiboot system

    - by user1715324
    I have two hard disks – one 1 TB and the other 250 GB. I installed the OSes in the following order:
    1. Windows 7 on the first hard disk (1 TB)
    2. After that, Kubuntu 12.04 on a partition (/dev/sdb7) on the second hard disk (250 GB)
    The second drive also contains an NTFS partition. Now, Kubuntu's bootloader was installed on the second hard drive's MBR (and successfully detected Windows 7). So, whenever I wanted to load Windows I used to select the first hard disk from the BIOS boot menu, and the second hard disk whenever I wanted to load Kubuntu. I know I could have set the second hard disk as the default drive; still, I preferred this method. The problem started when I installed Linux Mint 13 on the second hard disk (/dev/sdb3) and overwrote Kubuntu's original MBR. Now GRUB just detects Mint and Windows. The MBR on the 1 TB hard disk is untouched. Is there a way I can modify the MBR on the second hard disk now so that it will show both Kubuntu and Mint?
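
    The MBR itself should not need another rewrite; Mint's GRUB just has to be told about the Kubuntu install on /dev/sdb7. A minimal sketch from the running Mint system (os-prober is what update-grub uses to find other distributions):

        sudo os-prober
        sudo update-grub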

    Read the article

  • Cannot find 127.0.0.1 or vhost with localhost Apache on Mac

    - by Charly Palencia
    I was working with localhost:81 for a long time with vhosts and all was right. BUT right now I need to work on port 80, and I changed httpd.conf and httpd-vhosts to use port 80. Now, in the browser, localhost works OK, but 127.0.0.1 and the vhost cannot find the server. My configuration is:
    * My local machine is OS X Lion
    * MAMP
    * httpd.conf:
        ServerName localhost:80
    * httpd-vhosts:
        NameVirtualHost localhost
        <VirtualHost localhost>
            DocumentRoot "/Users/chalien/projects/ownProjects/PHP"
            ServerName example.dev
        </VirtualHost>
    * /private/etc/hosts:
        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1       example.dev
    * /private/etc/services:
        http    80/udp  www www-http    # World Wide Web HTTP
        http    80/tcp  www www-http    # World Wide Web HTTP
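
    A quick, read-only way to see how this Apache actually parsed the vhosts and whether anything answers on port 80 (apachectl here may need to be MAMP's own copy rather than the system one, which is an assumption about the install):

        sudo apachectl -S             # dump the parsed virtual host table
        sudo apachectl configtest
        curl -I http://127.0.0.1/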

    Read the article

  • software RAID array not starting in initramfs on Debian

    - by Jasper
    One of my Debian servers (kernel 2.6.30-AMD64) refuses to start the software RAID array that houses the root partition in the initramfs. It dumps me at a BusyBox console. When I follow the necessary steps to continue booting, it works fine (start the array with mdadm -A, then have LVM scan the volumes with pvscan and then vgchange -ay). I've tried starting with the boot option rootdelay=10 to no avail. I've also updated the initramfs and unpacked it to inspect whether it really tries to assemble the RAID array (it does). Output before dumping to the console:
        mount: mounting none on /dev failed: No such device
        W: devtmpfs not available, falling back to tmpfs for /dev
    and then some LVM messages saying it can't find the volumes holding the root partitions. Does anybody have a clue how I could fix this?
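
    A hedged thing to check is whether the mdadm.conf baked into the initramfs still matches the array; a changed UUID or hostname after a rebuild is a classic cause of exactly this drop-to-BusyBox behaviour. A sketch of regenerating both from the running system:

        mdadm --detail --scan                              # compare against /etc/mdadm/mdadm.conf
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # only if the ARRAY line is missing
        update-initramfs -u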

    Read the article

  • Multicast doesn't seem to be working on RHEL 5.5

    - by NullUser
    I'm trying to install Oracle Grid Infrastructure on two machines. Their documentation states: "You must enable multicasting for the cluster on the IP address subnet ranges 224.0.0.0/24 and 230.0.1.0/24". So I ran:
        route add -net 224.0.0.0/24 dev eth2
        route add -net 230.0.1.0/24 dev eth2
    route -n produces:
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        230.0.1.0       0.0.0.0         255.255.255.0   U     0      0        0 eth2
        224.0.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth2
        # and others
    An ifconfig eth2 shows, among other things, UP BROADCAST RUNNING MULTICAST. However, when I run their multicast test utility, it fails me:
        Test for Multicast address 230.0.1.0
        Sep  3 19:40:39 | Multicast Failed for eth2 using address 230.0.1.0:42000
        Test for Multicast address 224.0.0.251
        Sep  3 19:41:10 | Multicast Failed for eth2 using address 224.0.0.251:42001
    What am I doing wrong?
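
    The routes are probably not the whole story on RHEL 5; a common cause is the default iptables ruleset dropping the test traffic, or the NIC never actually joining the groups. A hedged set of read-only checks on each node:

        ip maddr show dev eth2    # confirm the interface has joined multicast groups
        iptables -nvL             # look for REJECT/DROP rules that would catch UDP 42000-42001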

    Read the article

  • How do I debug this FS error on a flash device?

    - by abc
    I have console access to an embedded Linux device. This device has flash memory, part of which is partitioned as a FAT filesystem. It's running linux-2.6.31. However, I am seeing these errors on the console these days, and the FAT filesystem becomes read-only:
        111109:154925 FAT: Filesystem error (dev loop0)
        111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0)
        111109:154925 FAT: Filesystem error (dev loop0)
        111109:154925 fat_get_cluster: invalid cluster chain (i_pos 0)
    I cannot understand why this happened. What is the root cause, and what is the fix? I would appreciate answers that can point me to how to investigate the possible root cause of this issue on the device.
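
    Since the errors name dev loop0, the damaged FAT image is whatever file that loop device is bound to, and the kernel is flagging the filesystem read-only after hitting the bad cluster chain, which matches what is being seen. A hedged first round of investigation (the checker may be installed as dosfsck or fsck.vfat, and it needs the image unmounted):

        losetup /dev/loop0           # show which backing file loop0 maps to
        umount /dev/loop0
        dosfsck -v -r /dev/loop0     # inspect and optionally repair the FAT structures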

    Read the article

  • Recover an HP recovery partition

    - by eric.chartier
    I have a (semi-)dead hard drive with an HP recovery partition on it. My goal is to:
    1. Buy a new hard drive
    2. Copy the recovery partition to a file ( dd if=/dev/sdb1 of=~/recovery.bak )
    3. Make a new partition of 12000 MB with Windows 7
    4. Copy the recovery partition back to the new drive ( dd if=~/recovery.bak of=/dev/sdb1 )
    5. Then press F11 when the laptop boots.
    However, this doesn't work. Any idea why? Edit: I suspect F11 doesn't work because the laptop tries to boot from the primary partition of the drive, which is my Windows partition. Does anyone have any experience dealing with stuff like this?
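
    Two hedged things to verify before blaming the image: the restored partition must be at least as large as the image (dd will otherwise run out of space and truncate the copy), and HP's F11 recovery typically cares about the partition table entry (type Id and boot flag) as well as the contents, so those are worth matching against the old disk. A sketch of the checks, device names as in the question:

        ls -l ~/recovery.bak
        blockdev --getsize64 /dev/sdb1    # must be >= the image size
        fdisk -l /dev/sdb                 # compare the Id column and boot flag with the old disk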

    Read the article
