Search Results

Search found 1873 results on 75 pages for 'tom crossman'.

Page 18/75 | < Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • How to migrate data from a Firebird database to PostgreSQL on Linux

    - by Tom Feiner
    Are there any good tools to migrate existing Firebird databases to PostgreSQL on Linux systems? I've looked at FBExport, which can dump the data as INSERT statements, but it's mainly written to export/import from one Firebird DB to another, not as a migration tool. There's also a Firebird-to-PostgreSQL Win32 tool, but it only runs on Win32 systems. Is there any good tool to do this, or should I just roll my own?
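
    One possible route, sketched below, is to let FBExport produce plain INSERT statements and feed them straight to psql. The FBExport flags are from memory and the paths, credentials and table name are placeholders, so check fbexport's own help output before relying on this:

        # dump one table from the Firebird database as INSERT statements (flags assumed, verify locally)
        fbexport -Si -D /data/legacy.fdb -U SYSDBA -P masterkey -F customers.sql -Q "SELECT * FROM customers"
        # load into an already-created PostgreSQL schema
        psql -U postgres -d newdb -f customers.sql

    For more than a handful of tables it's usually worth scripting a loop over table names rather than doing each one by hand.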

    Read the article

  • Will my system fsck when I reboot?

    - by Tom Newton
    ...and how do I find out? Say I am about to reboot a server. I would like to minimize downtime, so I'm thinking about wrapping reboot in an alias that says "hang on buddy, you're going to hit an fsck on boot". Next question: what's the best way to say "let's do it next time"? Set the last check date? I know tune2fs can set a bunch of parameters, but how would I read them?
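
    A minimal sketch of both halves, assuming an ext3/ext4 filesystem on /dev/sda1 and a sysvinit-style distro whose boot scripts honour /forcefsck:

        # read the counters fsck uses to decide whether to check at boot
        tune2fs -l /dev/sda1 | grep -iE 'mount count|last checked|check interval'
        # force a check on the next reboot instead
        touch /forcefsck
        # or defer periodic checks entirely by zeroing the counters (use with care)
        tune2fs -c 0 -i 0 /dev/sda1

    Comparing "Mount count" against "Maximum mount count" is essentially what the boot scripts do, so a reboot alias can warn before the fact.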

    Read the article

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck. Some are Intel e1000s, which don't. I've just noticed on two machines, one with an Intel NIC and one with a Realtek, that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server to get it to PXE boot into a rebuild environment, initially everything is fine. The server gets a DHCP allocation, and PXE boots into the Ubuntu preseed environment. On one or two machines, it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a busybox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC. Here's the relevant dhcpd.conf entry:

        host xeon16-ghz240-gb48-node1 {
            hardware ethernet BC:AE:C5:07:1F:18;
            filename "pxelinux.0";
            next-server 192.168.123.80;
        }

    This is what ip link on the evil machine looks like. Only one NIC is actually connected (deliberately). As you can see, the NIC that's in the dhcpd config is not marked as UP, and the link that is UP isn't the one in DHCP. So far I've seen this on two brands of dual-NIC configuration. Does anyone know a) what's causing it, and b) what we can do about it?
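
    As a quick sanity check when filling in dhcpd.conf, something like this (run from the installer's busybox shell or any live environment) lists every interface alongside its MAC, so you can confirm which physical port the host entry actually matches:

        # print interface name and MAC address for every NIC the kernel sees
        for nic in /sys/class/net/*; do
            printf '%s  %s\n' "$(basename "$nic")" "$(cat "$nic/address")"
        done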

    Read the article

  • Time Machine not recognizing sparsebundle

    - by Tom
    I've created a sparsebundle just as suggested at http://www.readynas.com/?p=253; however, Time Machine fails to recognize the mounted image. The sparsebundle is stored on an external drive, and I have already executed:

        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

    I have also verified that the naming format of the sparsebundle is: MachineName_.sparsebundle. I'm running the latest version of Snow Leopard (10.6.7).
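
    For comparison, this is roughly how such a bundle is usually created. The size, volume name and the suffix in the filename are placeholders; the suffix is normally the machine's en0 MAC address with the colons removed:

        # allow Time Machine to use unsupported volumes
        defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1
        # create the sparse bundle on the external drive (values are examples only)
        hdiutil create -size 300g -type SPARSEBUNDLE -fs "HFS+J" \
            -volname "Time Machine Backups" /Volumes/External/MyMac_001a2b3c4d5e.sparsebundle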

    Read the article

  • Firefox Won't Save My Google Cookies Between Restarts

    - by Tom Purl
    I'm using Firefox 11 on Ubuntu. For some strange reason, Firefox won't save my Google cookies between browser restarts. I have to log in to Gmail every time I restart my browser, even if I click on the check box that tells Google to remember me. The strange thing is that Firefox does actually store some Gmail cookies when I log in. It's just that those cookies disappear after restarting Firefox. The especially strange thing is that this only seems to happen with *.google.com URLs. I haven't noticed this problem with any other site that I use. Please note that I tried to see if this was a plugin-related problem. I therefore started Firefox in safe mode and turned off all plugins. I then logged into Gmail and told it to remember me. I then shut down Firefox and started it the same way in safe mode. I got the same bad results. Has anyone else ever seen anything like this before? Is there a reason that Firefox seems to be blacklisting Google cookies?

    Read the article

  • Output php mail calls to log file

    - by Tom McQuarrie
    This question relates to the question found here: Find the php script that's sending mails. I'm trying to do the exact same thing but can't get the log to output what I need. I'm not too experienced with Server Fault, and ideally I'd post my follow-up on the original question, or PM adam to see if he ever found a solution, but it looks as though Server Fault doesn't work that way. I can post an "answer" but that's definitely not what this is. I have a script located at /usr/local/bin/sendmail-php-logged, with the following:

        #!/bin/sh
        logger -p mail.info sendmail-php: site=${HTTP_HOST}, client=${REMOTE_ADDR}, script=${SCRIPT_NAME}, filename=${SCRIPT_FILENAME}, docroot=${DOCUMENT_ROOT}, pwd=${PWD}, uid=${UID}, user=$(whoami)
        /usr/sbin/sendmail -t -i $*

    This is logging to /var/log/maillog, but as Adam mentions in his question, none of the server variables work. The output I'm getting is:

        Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:16:21 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:17:03 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:17:05 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
        Oct 4 12:17:11 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/var/www/html/aro_chroot/sites/arocms, uid=48, user=apache
        Oct 4 12:17:14 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
        Oct 4 12:17:29 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root
        Oct 4 12:17:41 fluke logger: sendmail-php: site=, client=, script=, filename=, docroot=, pwd=/root, uid=0, user=root

    User ID, current user, and pwd are all working, probably because they're globally accessible script resources and not specific to PHP, like all the others are. I've tried using other server variables as per labradort's instructions, but no joy.
    Here are some sample tests:

        logger -p mail.info sendmail-php SCRIPT_NAME: ${SCRIPT_NAME}
        logger -p mail.info sendmail-php SCRIPT_FILENAME: ${SCRIPT_FILENAME}
        logger -p mail.info sendmail-php PATH_INFO: ${PATH_INFO}
        logger -p mail.info sendmail-php PHP_SELF: ${PHP_SELF}
        logger -p mail.info sendmail-php DOCUMENT_ROOT: ${DOCUMENT_ROOT}
        logger -p mail.info sendmail-php REMOTE_ADDR: ${REMOTE_ADDR}
        logger -p mail.info sendmail-php SCRIPT_NAME: $SCRIPT_NAME
        logger -p mail.info sendmail-php SCRIPT_FILENAME: $SCRIPT_FILENAME
        logger -p mail.info sendmail-php PATH_INFO: $PATH_INFO
        logger -p mail.info sendmail-php PHP_SELF: $PHP_SELF
        logger -p mail.info sendmail-php DOCUMENT_ROOT: $DOCUMENT_ROOT
        logger -p mail.info sendmail-php REMOTE_ADDR: $REMOTE_ADDR

    And the output:

        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
        Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
        Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
        Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
        Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:
        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_NAME:
        Oct 4 12:58:02 fluke logger: sendmail-php SCRIPT_FILENAME:
        Oct 4 12:58:02 fluke logger: sendmail-php PATH_INFO:
        Oct 4 12:58:02 fluke logger: sendmail-php PHP_SELF:
        Oct 4 12:58:02 fluke logger: sendmail-php DOCUMENT_ROOT:
        Oct 4 12:58:02 fluke logger: sendmail-php REMOTE_ADDR:

    I'm running PHP 5.3.10. Unfortunately register_globals is on, for compatibility with legacy systems, but you wouldn't think that would cause the environment variables to stop working. If someone can give me some hints as to why this might not be working I'll be a very happy man :)
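
    A sketch of a possible workaround, assuming PHP 5.3 or later and that php.ini lives at /etc/php.ini: PHP can log every mail() call itself via its mail.log and mail.add_x_header directives, which records the calling script's path without relying on the sendmail wrapper ever seeing PHP's environment:

        # append the mail-logging directives to php.ini and reload Apache (paths assumed)
        echo 'mail.add_x_header = On' >> /etc/php.ini
        echo 'mail.log = /var/log/php-mail.log' >> /etc/php.ini
        touch /var/log/php-mail.log && chown apache /var/log/php-mail.log
        service httpd reload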

    Read the article

  • Django Dying on Shared Hosting Environment (Too Many MySQL Connections)

    - by Tom
    I've had a Django site up and running on HostGator (client requirement), following these instructions, for a few weeks now. I had seen two error emails about pages dying with (1040: Too many MySQL connections) but had never been able to recreate the problem. As of today, the site is completely unresponsive and all pages, even the static files, are dying with that error. Two questions: 1) What can I do to fix this (other than caching more stuff)? 2) Why would static files be dying like that? I can request them directly without a problem, so how are they getting run through Django? The shared hosting setup doesn't allow for a <Location> block, but there's a flag in the rewrite rule that says only requests for files that don't exist in the filesystem should be processed. All of my static files exist on the system, though they are symbolic links, if that matters.

    Read the article

  • Unable to access router even though internet works

    - by Tom Kaufmann
    I had access to my router and my internet was also working fine, but I was trying to set up a port forward of 80 to my local machine and in the process I made a mistake. I went into Remote Management, and for port 80 there were a few options like LAN, WAN, All. I accidentally clicked "All" and then clicked "Disable". The problem is that I am no longer able to access my router at 192.168.1.1, although my internet still works. If I ping 192.168.1.1 I get a response, but I can no longer open 192.168.1.1 in a browser. How can I fix this issue? I am using a ZyXEL P-660HN-T1A router supplied by my ISP.
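
    Before resorting to a factory reset, it may be worth checking whether any management service is still listening on the LAN side; the port list below is only a guess at what a router like this might expose:

        # see which management ports (if any) still answer on the LAN interface
        nmap -p 21,22,23,80,443,8080 192.168.1.1
        # if a telnet CLI is still enabled, it can often be used to turn the web UI back on
        telnet 192.168.1.1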

    Read the article

  • xrandr shows two displays (LVDS1), but how can I use VGA1 only?

    - by Tom Fishman
    We're running Ubuntu 11 on this hardware: a Foxconn R20-D2 barebone (Intel Atom D510, Intel NM10, Intel GMA 3150). There is no integrated display (it is a barebone box). I connected an external VGA monitor to it. However, xrandr shows two displays:

        Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
        LVDS1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
           1024x768 60.0*+
           800x600 60.3 56.2
           640x480 59.9
        VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 519mm x 324mm
           1920x1200 60.0 +
           1600x1200 60.0
           1680x1050 60.0
           1280x1024 76.0 75.0 72.0 60.0
           1440x900 75.0 59.9
           1152x864 75.0
           1024x768 75.1 70.1 60.0*
           832x624 74.6
           800x600 72.2 75.0 60.3
           640x480 72.8 75.0 66.7 60.0
           720x400 70.1

    But I don't have two displays. How can I get rid of LVDS1 and use only VGA1? The direct result is that I'm seeing a 1024x768 resolution on my VGA display, because the OS is using "mirror" mode, which uses the lower resolution of the two. Turning off the mirror is not a solution. I want to fix it. Related logs:

        ...
        [ 20.019] (II) intel(0): Creating default Display subsection in Screen section "Default Screen Section" for depth/fbbpp 24/32
        [ 20.019] (==) intel(0): Depth 24, (--) framebuffer bpp 32
        [ 20.019] (==) intel(0): RGB weight 888
        [ 20.019] (==) intel(0): Default visual is TrueColor
        [ 20.019] (II) intel(0): Integrated Graphics Chipset: Intel(R) Pineview G
        [ 20.019] (--) intel(0): Chipset: "Pineview G"
        [ 20.019] (**) intel(0): Relaxed fencing enabled
        [ 20.019] (**) intel(0): Wait on SwapBuffers? enabled
        [ 20.019] (**) intel(0): Triple buffering? enabled
        [ 20.019] (**) intel(0): Framebuffer tiled
        [ 20.019] (**) intel(0): Pixmaps tiled
        [ 20.020] (**) intel(0): 3D buffers tiled
        [ 20.020] (**) intel(0): SwapBuffers wait enabled
        [ 20.020] (==) intel(0): video overlay key set to 0x101fe
        [ 20.020] (II) intel(0): Output LVDS1 has no monitor section
        [ 20.020] (II) intel(0): found backlight control interface /sys/class/backlight/intel_backlight
        [ 20.080] (II) intel(0): Output VGA1 has no monitor section
        [ 20.080] (II) intel(0): EDID for output LVDS1
        [ 20.081] (II) intel(0): Not using default mode "320x240" (doublescan mode not supported)
        [ 20.081] (II) intel(0): Not using default mode "400x300" (doublescan mode not supported)
        [ 20.081] (II) intel(0): Not using default mode "400x300" (doublescan mode not supported)
        [ 20.081] (II) intel(0): Not using default mode "512x384" (doublescan mode not supported)
        ...
        [ 20.082] (II) intel(0): Not using default mode "960x600" (doublescan mode not supported)
        [ 20.082] (II) intel(0): Printing probed modes for output LVDS1
        [ 20.082] (II) intel(0): Modeline "1024x768"x60.0 65.00 1024 1048 1184 1344 768 771 777 806 -hsync -vsync (48.4 kHz)
        [ 20.082] (II) intel(0): Modeline "800x600"x60.3 40.00 800 840 968 1056 600 601 605 628 +hsync +vsync (37.9 kHz)
        [ 20.082] (II) intel(0): Modeline "800x600"x56.2 36.00 800 824 896 1024 600 601 603 625 +hsync +vsync (35.2 kHz)
        [ 20.082] (II) intel(0): Modeline "640x480"x59.9 25.18 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz)
        [ 20.149] (II) intel(0): EDID for output VGA1
        [ 20.149] (II) intel(0): Manufacturer: BNQ Model: 771b Serial#: 6595
        [ 20.149] (II) intel(0): Year: 2008 Week: 16
        [ 20.149] (II) intel(0): EDID Version: 1.3
        [ 20.149] (II) intel(0): Analog Display Input, Input Voltage Level: 0.700/0.700 V
        ...
        [ 20.152] (II) intel(0): Modeline "640x480"x60.0 25.20 640 656 752 800 480 490 492 525 -hsync -vsync (31.5 kHz)
        [ 20.152] (II) intel(0): Modeline "720x400"x70.1 28.32 720 738 846 900 400 412 414 449 -hsync +vsync (31.5 kHz)
        [ 20.152] (II) intel(0): Output LVDS1 connected
        [ 20.152] (II) intel(0): Output VGA1 connected
        [ 20.152] (II) intel(0): Using exact sizes for initial modes
        [ 20.152] (II) intel(0): Output LVDS1 using initial mode 1024x768
        [ 20.152] (II) intel(0): Output VGA1 using initial mode 1024x768
        [ 20.152] (II) intel(0): Using default gamma of (1.0, 1.0, 1.0) unless otherwise stated.
        ...
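
    Assuming an xrandr-capable session, the quickest thing to try from a terminal is simply to force the phantom panel off and let VGA1 pick its preferred mode:

        # turn the non-existent internal panel off and run the VGA output at its native resolution
        xrandr --output LVDS1 --off --output VGA1 --auto

    If that works, one common way to make it permanent is an xorg.conf Monitor section that marks LVDS1 as ignored (Option "Ignore" "true"), or simply running the command from a session startup script.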

    Read the article

  • Why is uploading to S3 so slow?

    - by Tom Marthenal
    I am using s3cmd to upload to S3:

        # s3cmd put 1gb.bin s3://my-bucket/1gb.bin
        1gb.bin -> s3://my-bucket/1gb.bin [1 of 1]
        366706688 of 1073741824 34% in 371s 963.22 kB/s

    I am uploading from Linode, which has an outgoing bandwidth cap of 50 Mb/s according to support (roughly 6 MB/s). Why am I getting such slow upload speeds to S3, and how can I improve them?

    Update: Uploading the same file via SCP to an m1.medium EC2 instance (SCP from my Linode to the instance's EBS drive) gives about 44 Mb/s according to iftop (any compression done by the cipher is not a factor).

    Traceroute: Here's a traceroute to the server it's uploading to (according to tcpdump).

        # traceroute s3-1-w.amazonaws.com.
        traceroute to s3-1-w.amazonaws.com. (72.21.194.32), 30 hops max, 60 byte packets
        1 207.99.1.13 (207.99.1.13) 0.635 ms 0.743 ms 0.723 ms
        2 207.99.53.41 (207.99.53.41) 0.683 ms 0.865 ms 0.915 ms
        3 vlan801.tbr1.mmu.nac.net (209.123.10.9) 0.397 ms 0.541 ms 0.527 ms
        4 0.e1-1.tbr1.tl9.nac.net (209.123.10.102) 1.400 ms 1.481 ms 1.508 ms
        5 0.gi-0-0-0.pr1.tl9.nac.net (209.123.11.62) 1.602 ms 1.677 ms 1.699 ms
        6 equinix02-iad2.amazon.com (206.223.115.35) 9.393 ms 8.925 ms 8.900 ms
        7 72.21.220.41 (72.21.220.41) 32.610 ms 9.812 ms 9.789 ms
        8 72.21.222.141 (72.21.222.141) 9.519 ms 9.439 ms 9.443 ms
        9 72.21.218.3 (72.21.218.3) 10.245 ms 10.202 ms 10.154 ms
        10 * * *
        11 * * *
        12 * * *
        13 * * *
        14 * * *
        15 * * *
        16 * * *
        17 * * *
        18 * * *
        19 * * *
        20 * * *
        21 * * *
        22 * * *
        23 * * *
        24 * * *
        25 * * *
        26 * * *
        27 * * *
        28 * * *
        29 * * *
        30 * * *

    The latency looks reasonable, at least until the server stopped responding to ping requests.
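
    One quick experiment, sketched below, is to see whether a single TCP stream is the limiting factor by pushing several pieces of the file concurrently; the chunk size and key names are arbitrary:

        # split the file and upload the pieces in parallel with backgrounded s3cmd processes
        split -b 256M 1gb.bin part_
        for p in part_*; do
            s3cmd put "$p" "s3://my-bucket/parts/$p" &
        done
        wait

    If the aggregate rate scales with the number of streams, per-connection throughput (latency and TCP window size on the path to S3) is the bottleneck rather than the Linode cap.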

    Read the article

  • How to create ftp server on Webmin?

    - by Tom
    Hello. Firstly, I'm new at VPS hosting, so I'm not that good at SSH and so on. I have a VPS running CentOS 5.4 x64 with LAMP (Linux, Apache, MySQL, PHP), and I'm really confused because I can't find FTP management, subdomains, or even email addresses in my Webmin panel. Should I install additional modules? Thanks
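
    Webmin only shows modules for services that are actually installed, so an FTP daemon has to be added first. A minimal sketch for CentOS 5 (package and service names assumed; Webmin's bundled FTP module targets ProFTPD, which typically needs an extra repository, whereas vsftpd is in the stock repos):

        # install and start an FTP server from the base repositories
        yum install vsftpd
        service vsftpd start
        chkconfig vsftpd on

    After that, use Webmin's Refresh Modules action and the relevant entries should appear under Servers; mail and DNS management likewise only show up once something like Postfix/Sendmail and BIND are installed.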

    Read the article

  • Synergy: server refused client

    - by Tom
    I am trying to get Synergy up and running from a new Windows 7 computer, to an XP computer. The Synergy installations seemed to go fine. I configured each to start on start-up, which they seem to, but the client won't connect. I get repeated: "server refused client with our name" messages in the log output. The Synergy FAQ says to "Add the client to the server's configuration file." But I can't seem to find an instruction set on how to do this. I've looked and looked, but I'm lost...
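
    For the text-file route, the server's config wants an entry for every screen, keyed by each machine's screen name (which defaults to its hostname). A minimal synergy.conf sketch, with made-up screen names for the Windows 7 server and the XP client:

        section: screens
            win7-desktop:
            xp-laptop:
        end
        section: links
            win7-desktop:
                right = xp-laptop
            xp-laptop:
                left = win7-desktop
        end

    The name the client reports in the "server refused client with our name" message is the one that has to appear in the screens section (or be mapped via an aliases section).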

    Read the article

  • I can't uninstall Git

    - by Tom
    I was researching Git, so I downloaded the Windows version to test it out on a repository on GitHub. After about 30 minutes I couldn't work out how to use it, so I decided I probably wouldn't need a distributed repository as our projects aren't that big, and went back to what I know - SVN. I uninstalled all the Git-related stuff I'd put on my PC (or so I thought), but have now got an irritating problem where, if I open any folder, I get an error message saying:

        Hello [ERROR] Could not find git path

    As you can imagine, this is a real pain. Does anyone have any ideas on how to fix it?

    Read the article

  • What can I do to prevent my user folder from being tampered with by malicious software?

    - by Tom Wijsman
    Let's assume some things:

    Back-ups do run every X minutes, yet the things I save should be permanent.
    There's a firewall and virus scanner in place, yet there happens to be a zero-day attack on me.
    I am using Windows. (Although feel free to append Linux / OS X parts to your answer.)

    Here is the problem: any software can change anything inside my user folder. Tampering with the files could cost me my life, whether it's accessing, modifying or wiping them. So, what I want to ask is:

    Is there a permission-based way to disallow programs from accessing my files in any way by default?
    Extending on the previous question, can I ensure certain programs can only access certain folders?
    Are there other less obtrusive ways than using Comodo? Or can I make Comodo less obtrusive?

    For example, the solution should be proof against (DO NOT RUN): del /F /S /Q %USERPROFILE%

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server.

    Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GE NIC. We were also sold Commvault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server runs Ubuntu 12.04 server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt.

    When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent.

    Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, so out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-#

    Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    Setting Prefer Mounted Volumes = no in the Job Definition
    Setting multiple devices in the Autochanger resource.

    Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems.

    I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution.
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • CentOS running inside VMware as WebServer times out on outside connection

    - by Tom Hart
    I have a CentOS machine running inside VMware with PHP and Apache set up on it. If I open a browser on the VM and go to either localhost or 192.168.0.3, I get a phpinfo page I made in /var/www/html/index.php. But if I go to 192.168.0.3 in a browser on the host (Windows 7), it times out. I can ping the IP address from Windows and get a response; I just can't reach it through the browser. Does anyone have any ideas what I need to do to get this working? This is my first time using a VM and I'm getting lost in the network settings.
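
    Since ping works but TCP port 80 doesn't, the guest's firewall is a likely suspect: CentOS ships with iptables rules that allow ICMP and SSH but not HTTP. A sketch of the check and the fix (default chain and port assumed):

        # list the current rules; look for a REJECT that comes before any ACCEPT on dpt:80
        iptables -L -n --line-numbers
        # open HTTP and make the rule persistent across reboots
        iptables -I INPUT -p tcp --dport 80 -j ACCEPT
        service iptables save

    It's also worth confirming the VM is on bridged networking if the host is meant to reach it at that LAN-style address.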

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html for example). So I created a Webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this allows newly created files to be 774). The shared folder is owned by the group Webdev with chmod g+s, so that new files/folders also belong to the group Webdev.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE the file has permissions of 644 on the server (not group-writable). However, when I make a new file by connecting with Cyberduck (an SFTP client) the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. After some trial and error I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a Mac, a newly created file is 644 by default. When the client uploads a file that's already 644 it stays 644 on the server side (the umask is kind of useless in this situation).

    I've also tried creating ACL permissions for that folder, but an uploaded file from my Mac via SCP doesn't get the default ACL permissions. In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students is modifying a file created by someone else and they try to upload or save it, Coda tries to also do a chmod but fails because that user isn't the owner of the file.

    My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, the newly created files on the server would behave the same (644), which is the default on a Mac. Other options:

    1) Either I teach the students to use (ssh/chmod) with their IDE to change their own file permissions when uploading.
    2) I make all the students' Macs use a default umask of 0002, which would upload files with the right permissions.
    3) Write a cron script to fix the file permissions every 5 to 15 minutes. (This option I think is the worst if students are working together at the same time.)

    Is there any way that I could make all files that are uploaded via SCP have the default file permissions of 664, even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites?

    Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command
    Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
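
    If the cron route wins out, a sketch of what it could look like as an /etc/cron.d entry (the path and interval are examples); it simply re-asserts group ownership and group write on everything under the shared folder:

        # /etc/cron.d/fix-webdev-perms -- blunt, but workable for a small class site
        */10 * * * * root chgrp -R Webdev /var/www/shared && chmod -R g+w /var/www/shared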

    Read the article

  • how to run multiple shell scripts in parallel

    - by tom smith
    I've got a few test scripts, each of which runs a test PHP app. Each script runs forever. So cat.sh, dog.sh, and foo.sh each run a PHP script, and each shell script runs the PHP app in a loop, so it runs forever, sleeping after each run. I'm trying to figure out how to run the scripts in parallel and, at the same time, see the output of the PHP apps in the stdout/term window. I thought simply doing something like this in a shell script would be sufficient, but it's not working:

        foo.sh > &2
        dog.sh > &2
        cat.sh > &2

    What happens is:

        foo.sh runs foo.php once, and it runs correctly
        dog.sh runs dog.php in a never-ending loop; it runs as expected
        cat.sh runs cat.php in a never-ending loop *** this never runs!!!

    It appears that the shell script never gets to run cat.sh. If I run cat.sh by itself in a separate window/term, it runs as expected... Thoughts/comments?
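
    A minimal sketch of one way to do it (script names assumed to be executable and in the current directory): the original wrapper runs the scripts sequentially, so cat.sh never starts because dog.sh never exits; backgrounding each one with & is what makes them parallel, while stdout/stderr stay attached to the terminal:

        #!/bin/sh
        ./foo.sh &
        ./dog.sh &
        ./cat.sh &
        wait   # block here; all three keep printing to this terminal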

    Read the article

  • Apache doesn't immediately notice a change in the document root

    - by Tom
    We use capistrano for website deployments and our Apache document root is a symlink to a particular code release. The deployment procedure switches the symlink from the old release to the new release as the final step of the deployment. We are migrating our webservers from real servers running RHEL 5.6 to Amazon EC2 virtual machines running Ubuntu 11.10, and the new servers are suffering from a problem where Apache doesn't immediately notice the change to its document root when the symlink is switched. It can take a second or so (and I think I've even seen it take a couple of minutes). It's kind of like Apache has cached the physical path of the symlink for some time. Does anyone know some Apache settings I could look at to get it to "scan" for changes to its served files quicker?

    Thoughts:

    I read that the disks on virtual machines are much slower (since they are network-attached storage). Perhaps the filesystem cache somehow works differently too? If so, is there anything that can be done?

    The website runs PHP code. Perhaps there is some PHP config difference between RHEL and Ubuntu? I checked realpath_cache_ttl but both servers have it commented out, e.g.:

        ; Duration of time, in seconds for which to cache realpath information for a given
        ; file or directory. For systems with rarely changing files, consider increasing this
        ; value.
        ; http://www.php.net/manual/en/ini.core.php#ini.realpath-cache-ttl
        ;realpath_cache_ttl = 120

    We do use the APC opcode cache but don't think it's the issue, due to experimentation. The PHP code is in different file paths for each deployment and we ensure stat=1.

    Here is a similar question that is very interesting: 294107 - but it doesn't provide an answer for me.

    One solution would be to reload Apache every time we modify the document root symlink. I'll do this if we can't find another solution.
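
    One thing worth ruling out, sketched below: a commented-out realpath_cache_ttl doesn't disable the cache, it just leaves PHP's built-in default in place (120 seconds on PHP 5.3), which is in the same ballpark as the delay described. Checking the effective values on both the RHEL and Ubuntu boxes is cheap:

        # show the realpath cache settings PHP is actually running with
        # (run against the same php.ini Apache uses, or expose phpinfo() temporarily)
        php -i | grep -i realpath_cache
        # and, for completeness, the APC stat setting
        php -i | grep -i apc.stat

    If the TTL turns out to be the culprit, lowering it, or issuing an apache2ctl graceful as part of the capistrano deploy, is less invasive than a full restart.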

    Read the article

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the 'clicking' noise you'd expect from most failing drives, and I am able to view the data, but I am unable to actually retrieve it. I attempted to use SpinRite to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that its used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I run Windows' "Error Checking" tool and ask it to "scan for and attempt recovery of bad sectors," the tool begins to run and then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive as being 'Healthy'. I dropped the drive off at a data recovery store and they said that "the data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error:

        The type of the file system is RAW.
        CHKDSK is not available for RAW drives.

    Before going the route of more expensive data recovery, I'm wondering whether these symptoms sound familiar to anyone. Other questions: I'm willing to continue trying tools such as TestDisk and/or PhotoRec (the majority of the data I'd like to salvage is photos), but how long should I expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux, so I welcome any suggestions for utilities, tools and strategies with which you've had success.
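
    On the Linux side, a common first move is to stop reading the failing disk through the filesystem at all and instead take a raw image with GNU ddrescue, then run TestDisk/PhotoRec against the image. A sketch, assuming the drive shows up as /dev/sdb and there is at least 400GB free under /mnt/rescue:

        # image the whole disk, skipping and retrying bad areas; the map file lets you resume
        ddrescue -d -r3 /dev/sdb /mnt/rescue/disk.img /mnt/rescue/disk.map
        # then carve photos and other files out of the image rather than the dying drive
        photorec /mnt/rescue/disk.img

    Run time is dominated by how many sectors are bad: a mostly readable 400GB drive can image in a few hours at typical USB 2.0 speeds, while heavy retrying can stretch that to days.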

    Read the article

  • BGP Multihomed/Multi-location best practice

    - by Tom O'Connor
    We're in the process of designing a new iteration of our network in which we improve resilience by adding a second datacentre with an identical configuration of servers to our primary location. To achieve network connectivity, we're looking into a couple of possible methods. See earlier questions http://serverfault.com/questions/86736/best-way-to-improve-resilience and http://serverfault.com/questions/101582/dns-round-robin-failover-and-load-balancing. I'm pretty convinced that BGP is the right way to go about this, and this question is not about RRDNS.

    1) If we have 2 locations, do we announce the same IP address block from both locations?

    2) Suppose we did this, but had a management SSH interface on x.x.x.50 in datacentre A and on x.x.x.150 in datacentre B. What is the best-practice mechanism for handling that? Because if I were nearest to A, all my traffic would go to x.50, but if I attempted to connect to x.150 I'd not be able to connect, because that address wouldn't be valid at A, only at B.

    Is the best solution to announce two different netblocks, one at each location (which brings back the need for RRDNS), or to announce a single block and run some form of VPN between the two sites for management traffic?

    Read the article

  • After Effects: Mac to PC

    - by Tom
    Is it possible to open an After Effects file that was created on a Mac with a PC? I don't know what version of AE was used on the Mac side, but I want to open it with CS3 on a PC laptop.

    Read the article

  • Hints on diagnosing performance issue in OpenBSD firewall

    - by Tom
    My OpenBSD 4.6 pf firewall has started having really bad performance in the past few weeks. I've isolated the firewall (as opposed to the WAN connection, switch, cable, etc.) as the problem, but need a hint on how to diagnose or fix it further. The facts:

    The normal setup is: DSL Modem - FW Ext. NIC - FW Int. NIC - Switch - Laptop.
    The normal setup described above gives only 25 Kbps!
    Plugging the laptop straight into the DSL modem gives a 1 Mbps connection (full speed, as advertised). Therefore, the DSL connection seems to be OK.
    Plugging the laptop directly into the firewall's internal NIC (bypassing the switch) also gives only 25 Kbps. Therefore, the switch does not seem to be the problem.
    I've replaced the ethernet cables, but it didn't help.

    Here's the weird thing: reloading the ruleset (/sbin/pfctl -Fa -f /etc/pf.conf) causes the laptop's connection to go up to 1 Mbps (i.e. full speed) for a few minutes before it gradually degrades back down to 25 Kbps again. Any ideas on what's wrong or how I could further diagnose the problem?
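
    One clue in there: pfctl -Fa flushes the state table as well as the rules, so the temporary recovery after a reload points at something state-related (a full state table, or a wedged state entry). These are cheap to check before digging deeper:

        # current state count, limits and memory usage
        pfctl -s info
        pfctl -s memory
        pfctl -s states | wc -l

    If the count is pressing against the states hard limit, raising it with a "set limit states" line in pf.conf (or finding whatever is leaking connections) would be the next step.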

    Read the article

  • Ubuntu - why would /var/log/dmesg stop updating after boot? does not show panic/cpu_hung errors which the console shows

    - by Tom G
    So I have an Ubuntu 10.04 install VM on a host. Latest 2.6.38-15-server kernel . /var/log/dmesg displays only the bootup but will stop recording after that. It will not show the trace/cpu_hung errors I am trying to troubleshoot. /var/log/dmesg.0 , dmesg.1 nothing - I did a string search for the text that displays on the console during the crash and NOTHING gets logged anywhere in /var/log/* . I have to call into the provider and ask them to take a screenshot of the console since nothing shows in dmesg. Why would /var/log/dmesg not record kernel panics, or such?

    Read the article

< Previous Page | 14 15 16 17 18 19 20 21 22 23 24 25  | Next Page >