Search Results


  • Firefox isn't using my download manager (flash videos)

    - by John22
    I installed "Free Download Manager." I see the plugin in Tools-Add-ons (it doesn't have any options). I use several different flash video downloaders, because I haven't found one that works period on any site. When I save the video with two I tried, they are being downloaded by Firefox's default download manager (which means simultaneously - which is why I installed the download manager - I need them to download one at a time - in a prioritized queue.) [I used to use Flashgot (long ago), and it worked with some download manager I had installed - but over time it failed to see most videos. I installed Flashgot again, and it still fails to see anything but images and video ads.] Currently, I have to manually start Free Download Manager (from outside of Firefox), start the download in Firefox, stop it, copy the link location from Firefox's download menu, and then add it manually in Free Download Manager. Yuck. Do I need a different download manager (that takes over - recommendations?), or did I somehow install this one wrong or miss a setting somewhere in Firefox? Thanks for any help.


  • Parsing the output of "uptime" with bash

    - by Keek
I would like to save the output of the uptime command into a CSV file from a Bash script. Since the uptime command formats its output differently depending on the time since the last reboot, I came up with a pretty heavy solution based on case, but there is surely a more elegant way of doing this.

uptime output:

    8:58AM up 15:12, 1 user, load averages: 0.01, 0.02, 0.00

desired result:

    15:12,1 user,0.00 0.02 0.00,

current code:

    case "`uptime | wc -w | awk '{print $1}'`" in   # count the number of words in the uptime output
    10) # e.g.: 8:16PM up 2:30, 1 user, load averages: 0.09, 0.05, 0.02
        echo -n `uptime | awk '{ print $3 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $8,$9,$10 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
        ;;
    12) # e.g.: 1:41pm up 105 days, 21:46, 2 users, load average: 0.28, 0.28, 0.27
        echo -n `uptime | awk '{ print $3,$4,$5 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $6,$7 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $10,$11,$12 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
        ;;
    13) # e.g.: 12:55pm up 105 days, 21 hrs, 2 users, load average: 0.26, 0.26, 0.26
        echo -n `uptime | awk '{ print $3,$4,$5,$6 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $7,$8 }' | awk '{gsub ( ",","" ) ; print $0 }'`","`uptime | awk '{ print $11,$12,$13 }' | awk '{gsub ( ",","" ) ; print $0 }'`","
        ;;
    esac
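A possibly more elegant route (a sketch, checked only against the three sample formats above): split on commas and anchor on the labels instead of counting words. The load averages are always the last three comma-separated fields, the user count sits just before them, and whatever lies between "up" and the user count is the uptime:

    uptime | awk -F', *' '{
        sub(/^.* up +/, "", $1)                  # drop the clock time and "up" from the first field
        sub(/load averages?: */, "", $(NF-2))    # drop the label from the first load figure
        up = $1
        for (i = 2; i <= NF - 4; i++)            # re-join multi-part uptimes ("105 days 21:46")
            up = up " " $i
        printf "%s,%s,%s %s %s,\n", up, $(NF-3), $(NF-2), $(NF-1), $NF
    }'

Since the only thing that varies between the formats is how many comma-separated pieces the uptime portion spans, re-joining those pieces in the loop collapses all three cases into one command.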


  • Eclipse grinds to a halt when building workspace

    - by Chris Thompson
Hi all, This is a bit of a vague question because, frankly, I don't even know where to begin diagnosing the issue. My Eclipse (Galileo) installation grinds to a complete halt when it's building the workspace, to the point where I can't even type. I know the Android SDK I have installed is a major culprit, because I can watch the memory usage go through the roof (via the built-in heap monitor) when the Android SDK content loader starts up. Every time I save a file, though, the program just stops. The message at the bottom of the screen says "Building workspace (74%)" and sits there for about 30 seconds before completing and returning performance to normal. I have a few other plugins installed (Maven, SVN, etc.) but I'm assuming the main issue is Android. Has anybody had similar issues or any luck correcting this sort of problem? If there's any more information you think would be helpful, just let me know... I didn't want to do a core dump on this question... I'm running it on Windows 7 64-bit, for what it's worth. Thanks! Chris


  • Internal Use Chat/Desktop Platform

    - by Malnizzle
I did some looking, but I didn't see an SF question directly related to this. I need to find a platform for internal/LAN-only chat and desktop sharing. We've got a lot of people having to get up and walk over to another person's office for simple items, and we could save a lot of time collectively if we had a solution with a client app similar to gchat or AIM, where one could right-click a name and request a desktop sharing session (to or from). This wouldn't be for product presentations or long training sessions, and wouldn't need more than 2 people in the session; we use GoToMeeting for that. I've looked at TeamViewer, and it has the features I want (plus a lot more that wouldn't help), but it seems geared more to large enterprise support, and it is not exactly cheap. I've also looked at OCS some, but it seems heavy for what I am trying to do; I don't need anything relating to phone usage. If the general opinion says otherwise, I could be talked back into OCS if I am wrong about the weight of the product. We use GoToAssist for remote support of clients (which I am a fan of), but I don't think it fits here; correct me if I am wrong. All the workstations are XP/7. Thanks!


  • DFSR NTFS Permissions Not Working!??!

    - by megadood
I have two Windows 2008 Standard servers running DFSR okay: I can create a file on one server and it is replicated to the other, etc. The namespace shared folder on each server is shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1. I right-click the folder and configure the NTFS permissions as I would like, for example: Administrators Full Control / one user Read/Write access / no other users in the user list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly. I right-click the folder and go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so it should be denied access. I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, no access denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have 1 DFS share, and all the subfolders under it are a mixture of private and public folders, so I really need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests, all users can access any DFSR folder under namespace\share, which is quite worrying. Thanks


  • Looking for a help desk ticketing system..

    - by Dan
Hi guys, I'm looking for a good help desk ticket solution. It must perform the following actions for it to be useful:

- It needs to have a single point of contact via email, e.g. [email protected]
- If we receive a telephone call (or an email outside of the system) we need to be able to create a ticket as if it had been added via the single point of contact; this needs to be done with ease in order to save time.
- Certain people within our organisation deal with certain customers, so if the email / custom-input support call as mentioned in bullet 2 is picked up as having a relationship with that certain person in our organisation, it needs to be sent to them / put in their queue for them to work on.
- If a person is out of office or sick, any tickets sent to them must be forwarded to somebody else or put into a separate pool of tickets that anybody can access. Perhaps have an agent that sorts through tickets in the pool and assigns them to anybody who is available, preferably the person with the fewest tickets in their queue/list.
- Once a customer emails and the system logs it, they immediately get a response with a ticket number and maybe details of who is dealing with the problem.
- Any correspondence in relation to a particular ticket is automatically grouped into a single thread, and not made into a load of separate tickets; i.e. the system scans incoming email subjects for ticket numbers and associates the mail with an existing ticket if that number exists.

Any help is much appreciated. Thanks

P.S. I have taken a look at OTRS but I'm not feeling it, so unless someone can convince me, I guess I'm after an alternative.


  • How to Move SMS from iPhone to Mac?

    - by seda16
SMS is the main way many of us communicate with others, so you have probably saved many messages on your iPhone. There are many reasons you might need to back up your iPhone SMS to the Mac. For example, your family or friends have sent you something important and you want to keep a copy in case you delete it by accident, or you just need to back up your SMS for other use. So today let's talk about how to move SMS from iPhone to Mac. It is very easy if you use an app to help you; I always use the iPhone to Mac transfer from Amacsoft to copy SMS from iPhone to Mac. Now let me tell you how to use this app:

Step 1: Connect iPhone to Mac. First of all, install and launch the iPhone to Mac transfer, then connect your iPhone to the Mac. The iPhone to Mac transfer will recognise your iPhone automatically, and all the information on your iPhone will be shown in the interface.

Step 2: Select SMS and start the export. Now you can see many choices on the left; find "SMS" and click it, and all the SMS on your iPhone will be listed on the right. Select and check those you want to move, then just click "Export" at the top to start the transfer. Wait just a few minutes and the transfer will be done.

Great! You have finished the transfer now; it's really very easy, right? I believe it won't be a problem if you want to transfer your SMS from iPhone to Mac. By the way, you can also use this Amacsoft iPhone to Mac transfer to move other kinds of files, like photos, songs, etc. If you're a Windows user, you can use the iPhone to PC transfer on the same site to move SMS from iPhone to your PC with the same steps. Good luck!


  • Server high memory usage at same time every day

    - by Sam Parmenter
Right, we moved one of our main sites onto a new AWS box with plenty of grunt, as it would allow us more control than we had before and future-proof ourselves. About a month ago we started running into issues with high memory usage at the same time every day. In the morning an export is run to export data to a file, which is then FTPed to a local machine for processing. The issues were coinciding with the rough time of the export, but when we didn't run the export one day, the server still ran into the same issues. The export has since been run at other times in the day to monitor memory usage and see if it spikes. The conclusion is that the export is fine and barely touches the sides memory-wise; no noticeable change in memory usage. When the issue happens, its effect is to kill mysql and require us to restart the process. We think it might be a mysql memory issue, but it might just be that mysql is the first to feel it. Looking at the logs, there is no particular query run before the memory usage hits 90%. When it strikes at about 9:20am, the memory usage spikes from a near-constant 25% to 98% and very quickly kills mysql to save itself. It usually takes about 3-4 minutes to die. There are no cron jobs running at that time of day, and we haven't noticed a spike in traffic over the period of the issues. Any help would be massively appreciated! Thanks.
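In the meantime, a minimal logging sketch (the script name and log path are placeholders) that can be run from cron every minute, so the process list at spike time is on record:

    #!/usr/bin/env bash
    # mem-watch.sh: append a timestamped snapshot of overall memory use and
    # the five biggest resident processes. Install with a crontab entry like:
    #   * * * * * /usr/local/sbin/mem-watch.sh
    {
        date '+%F %T'
        free -m | awk 'NR==2 {printf "used %sMB of %sMB\n", $3, $2}'
        ps -eo rss,pid,comm --sort=-rss | head -6    # header line plus top five
        echo
    } >> /var/log/mem-watch.log

Comparing the 9:19 and 9:20 snapshots should show whether mysql balloons on its own or some other process squeezes it out.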


  • Windows Image Backup - renamed folder now restore cannot find any backups

    - by Schneider
A while back I decided to create a couple of Windows Image Backups of my workstation at various points during installation from clean. While doing this, I decided to rename the folders containing the VHDs from 'Backup <Date>' to something else of my choosing; I didn't bother testing at the time that the restore still worked. Now I come to use these backups to do a bare-metal restore to a different computer. The problem is that restore cannot 'see' any of the backups. I have deduced that maybe I need to rename them back to the 'Backup <Date>' pattern; unfortunately I cannot determine the exact values that would have originally been used. I have tried my best guess but the images still cannot be found. I have tried doing both a network and a USB HDD restore; no luck on either. P.S. I know I can retrieve files from within the VHDs; the problem is I am trying to save myself the time of reinstalling lots of big applications, not trying to recover data.


  • Error applying iptables rules using iptables-restore

    - by John Franic
Hi, I'm using Ubuntu 9.04 on a VPS. I'm getting an error when I apply an iptables rule set. Here is what I have done:

1. Saved the existing rules:

       iptables-save > /etc/iptables.up.rules

2. Created iptables.test.rules and added some rules to it:

       nano /etc/iptables.test.rules

These are the rules I added:

    *filter

    # Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
    -A INPUT -i lo -j ACCEPT
    -A INPUT -i ! lo -d 127.0.0.0/8 -j REJECT

    # Accepts all established inbound connections
    -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

    # Allows all outbound traffic
    # You can modify this to only allow certain traffic
    -A OUTPUT -j ACCEPT

    # Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
    -A INPUT -p tcp --dport 80 -j ACCEPT
    -A INPUT -p tcp --dport 443 -j ACCEPT

    # Allows SSH connections
    #
    # THE -dport NUMBER IS THE SAME ONE YOU SET UP IN THE SSHD_CONFIG FILE
    #
    -A INPUT -p tcp -m state --state NEW --dport 22- j ACCEPT

    # Allow ping
    -A INPUT -p icmp -m icmp --icmp-type 8 -j ACCEPT

    # log iptables denied calls
    -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

    # Reject all other inbound - default deny unless explicitly allowed policy
    -A INPUT -j REJECT
    -A FORWARD -j REJECT

    COMMIT

After editing, when I try to apply the rules with

    iptables-restore < /etc/iptables.test.rules

I get the following error:

    iptables-restore: line 42 failed

Line 42 is COMMIT, and if I comment that out I get:

    iptables-restore: COMMIT expected at line 43

I'm not sure what the problem is: it's expecting COMMIT, but if COMMIT is there it gives an error. Could it be due to the fact I'm using a VPS? My provider uses OpenVZ for virtualization.
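A sketch for isolating which rule the kernel actually objects to (iptables-restore applies the whole table when it reaches COMMIT, so a rejected rule tends to be reported against the COMMIT line; on OpenVZ containers, modules such as the state match and the LOG target are often not enabled by the host): replay each -A line through the iptables command directly and see which one errors out.

    grep '^-A' /etc/iptables.test.rules | while read -r rule; do
        # eval keeps the quoted --log-prefix argument intact
        eval "iptables $rule" || echo "kernel rejected: $rule"
    done
    # this loads rules into the live tables as it goes; flush with "iptables -F" afterwards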


  • How can I most efficiently batch resize images on a Mac?

    - by Nick Douglas
I've been batch-resizing images through Preview (OS X) through the menu bar, but I want a simpler workflow, since I do this a dozen times a day.

What I want:

1. Select a group of image files in Finder.
2. Hit a button or two (menu item or keyboard shortcut) to do the following:
   a. Scale all the pictures to 600 pixels wide
   b. Save as JPG files at 75% quality

What I also want:

- All of the above, plus step a(1): crop images to 200 pixels height

I can do all that manually, to a batch of files, through Preview. I can do it one at a time with some keyboard shortcuts in Photoshop or Pixelmator. Automator (using Preview) can scale to 600 pixels on the longest dimension, but it doesn't let me specify width (it can scale specifically to width before cropping height). It can change to JPG, but it can't specify image quality. And I can assign a keyboard shortcut to the whole process. Is that my best option on a Mac? Can I accomplish this more efficiently through another app like Quicksilver?
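For what it's worth, OS X also ships a command-line image tool, sips, which can do the scale-to-width and JPEG-quality steps in one pass. A sketch (the output naming is made up, and the formatOptions percentage should be checked against the sips man page on your release); wrapped in an Automator "Run Shell Script" action, it gives the select-files-and-hit-a-shortcut workflow:

    for f in "$@"; do
        out="${f%.*}-600w.jpg"
        # resample to 600px wide and write out as JPEG at ~75% quality
        sips --resampleWidth 600 -s format jpeg -s formatOptions 75 \
             "$f" --out "$out" >/dev/null
    done

If the man page on your system confirms the flag, the optional 200-pixel crop would be a second sips pass with --cropToHeightWidth 200 600.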


  • Arch linux - strange behaviour after installing fglrx

    - by kosto
I have a problem with drivers on Arch Linux. I installed Catalyst through the unofficial catalyst repo, as the wiki says:

    pacman -S catalyst catalyst-utils
    aticonfig --initial

After this operation I rebooted the system. KDM loaded successfully, but when I tried to switch to a console (Ctrl+Alt+F1/F2/F3) I saw only some strange dots, as if the pixels from the text were scattered across the whole screen. I was able to go back to KDM and enter the account details, though. This gave me a hang just before KDE loaded. Here's a video where I'm showing the above actions: http://glothriel.org/arch/arch_problem.ogg Anybody know what caused the problem? I can still chroot to fix some issues. Thanks for your interest. The same thing happens on GNOME/GDM; that's my second try at installing Catalyst on Arch. The open drivers suck the battery 2x faster.

___________EDIT_____________

OK, I found a solution, so I'm posting it in case someone else shares my problem. Catalyst does not support KMS, so you need to disable it from GRUB. You must know where your /etc and /boot partitions are mounted. If you have only one partition for /, it's even simpler. Mount / on /mnt:

    mount /dev/sdaX /mnt    # where X is the number of the partition where your / is installed
    arch-chroot /mnt
    nano /etc/default/grub

and add the line:

    GRUB_CMDLINE_LINUX="nomodeset"

Save and quit, then run (this will delete your Windows GRUB configuration):

    grub-mkconfig -o /boot/grub/grub.cfg
    exit
    umount /mnt
    reboot


  • Project and Business Document Organization

    - by dassouki
How do you organize and maintain edits, revisions, and the relationships between:

- Proposals
- Contracts
- Change Orders
- Deliverables
- Projects

How do you organize your projects for re-usability? For example, is there a way to add tags to projects to make them more accessible? What's a good data structure for dumping all my files on an internet server for easy access? Presently, my work folder is set up as follows:

    /work/
      /projects
        /project_a
          /final                (includes all final documents)
            /contracts
            /rfp_rfq
            /change_orders
            /communications     (logs all emails, faxes, and meeting notes and minutes)
            /financial
              /paid
              /unpaid
            /reports
          /old                  (includes all documents that didn't make it into project_a/final/)
        /project_b
          ... same as above ...
      /references
        /technical_references
        /gov_regulations
        /data_sources
        /books
        /topic_based            (each area of my expertise has a folder with references in it)
      /business_contacts
        /contacts.xls           (file contains all my contacts)
      /banking
        /banking.xls            (contains a list of all paid and unpaid invoices as well as some cool stats)
        /quicken                (to do my taxes and yada yada)
          /year
      /education                (courses I've taken)
        /webinars
        /seminars
        /online_courses
      /publications             (includes the publications I've made)
        /publication_id

We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and to go back and forth easily. What are your suggestions for improving my processes? I'm open to closed- and open-source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.


  • Take snapshot of drawing using FingerPaint

    - by Rashmi.B
I am using MyView for drawing content on a canvas, based on the FingerPaint API demo app. I want to capture whatever I have written on the canvas, but when I use View v1 = myview.getRootView() it returns only the blank canvas and not the content. I want to save my drawing to the SD card. Following is my code; let me know what I need to change:

    v1 = myview.getRootView();
    System.out.println("v1 value = " + v1);
    v1.buildDrawingCache(true);
    v1.measure(MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED),
               MeasureSpec.makeMeasureSpec(0, MeasureSpec.UNSPECIFIED));
    //v1.layout(0, 0, v1.getMeasuredWidth(), v1.getMeasuredHeight());
    v1.layout(0, 0, 100, 100);
    //Bitmap b = Bitmap.createBitmap(v1.getDrawingCache());
    myview.mBitmap = Bitmap.createBitmap(v1.getDrawingCache());
    System.out.println("BITMAP VALue = " + myview.mBitmap);
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    //b.compress(Bitmap.CompressFormat.JPEG, 40, bytes);
    File f = new File(Environment.getExternalStorageDirectory() + File.separator + "rashmitest.jpg");
    try {
        f.createNewFile();
        FileOutputStream fo = new FileOutputStream(f);
        fo.write(bytes.toByteArray());
    } catch (Exception e) {
        e.printStackTrace();
    }
    v1.setDrawingCacheEnabled(false);

myview is an object of the class MyView that extends View.


  • I must clear my cmos to be able to boot

    - by Fredou
I have had this ASUS P7P55D-E Pro for about 8 months (got it last July), and for the last 3-4 days I cannot boot without clearing my CMOS. What I have is:

- Seasonic M12D 750W
- ASUS P7P55D-E Pro
- Intel Core i5 760 Quad Core Processor Lynnfield LGA1156
- XFX GeForce® 8800 GT Alpha Dog 512MB DDR3 Standard (PV-T88P-YDF4)
- 2x Corsair XMS3 CMX4GX3M2A1600C7 4GB DDR3 2X2GB DDR3-1600 CL 7-8-7-20

I tried to remove all the unnecessary stuff: HD/DVD/PCI cards/USB cables/etc. I tried with only 1 DIMM filled instead of my 4, each one individually; it didn't work. I tried changing the battery (there go a few dollars to nowhere); didn't work. If I don't reset the CMOS, it sometimes gets stuck on the RAM LED, sometimes on the BOOT DEVICE LED; when this happens, it's stuck on CPU speed detection. When I boot right after the reset, I MUST click on the F2 option (boot with default BIOS settings). If I go into the BIOS and save/restart, I have to reset it again. Once booted, everything is rock solid stable: tried memtest, CPU stress, etc., without issue. What should be my next step? Trying a new PSU? (I need to find one...) Doing an RMA? (I need this MB since it's my only computer...) Something else?


  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR.)

A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via Fibre Channel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GbE NIC. We were also sold CommVault Backup (non-NDMP).

Here's where it gets really complicated. Spectra Logic and CommVault both sent respective engineers, who set up the library and the software. CommVault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS over NFS at a few mountpoints, like /home and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

So the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. CommVault seems to only ever give me 200-250GB/hr throughput, and out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than CommVault, then we'd be able to say "**$.$ Refunds Plz $.$**"

#-#-#

Alas, I found a different problem with Bacula. CommVault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously.

Things I've tried:

- Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
- Setting Prefer Mounted Volumes = no in the Job Definition
- Setting multiple devices in the Autochanger resource

The documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution..
Configuration Files:

*bacula-dir.conf*

    #
    # Default Bacula Director Configuration file
    Director {                            # define myself
      Name = backuphost-1-dir
      DIRport = 9101                      # where we listen for UA connections
      QueryFile = "/etc/bacula/scripts/query.sql"
      WorkingDirectory = "/var/lib/bacula"
      PidDirectory = "/var/run/bacula"
      Maximum Concurrent Jobs = 20
      Password = "yourekiddingright"      # Console password
      Messages = Daemon
      DirAddress = 0.0.0.0
      #DirAddress = 127.0.0.1
    }

    JobDefs {
      Name = "DefaultFileJob"
      Type = Backup
      Level = Incremental
      Client = backuphost-1-fd
      FileSet = "Full Set"
      Schedule = "WeeklyCycle"
      Storage = File
      Messages = Standard
      Pool = File
      Priority = 10
      Write Bootstrap = "/var/lib/bacula/%c.bsr"
    }

    JobDefs {
      Name = "DefaultTapeJob"
      Type = Backup
      Level = Incremental
      Client = backuphost-1-fd
      FileSet = "Full Set"
      Schedule = "WeeklyCycle"
      Storage = "SpectraLogic"
      Messages = Standard
      Pool = AllTapes
      Priority = 10
      Write Bootstrap = "/var/lib/bacula/%c.bsr"
      Prefer Mounted Volumes = no
    }

    #
    # Define the main nightly save backup job
    #   By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
    Job {
      Name = "BackupClient1"
      JobDefs = "DefaultFileJob"
    }

    Job {
      Name = "BackupThisVolume"
      JobDefs = "DefaultTapeJob"
      FileSet = "SpecialVolume"
    }

    #Job {
    #  Name = "BackupClient2"
    #  Client = backuphost-12-fd
    #  JobDefs = "DefaultJob"
    #}

    # Backup the catalog database (after the nightly save)
    Job {
      Name = "BackupCatalog"
      JobDefs = "DefaultFileJob"
      Level = Full
      FileSet = "Catalog"
      Schedule = "WeeklyCycleAfterBackup"
      # This creates an ASCII copy of the catalog
      # Arguments to make_catalog_backup.pl are:
      #   make_catalog_backup.pl <catalog-name>
      RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
      # This deletes the copy of the catalog
      RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
      Write Bootstrap = "/var/lib/bacula/%n.bsr"
      Priority = 11                       # run after main backup
    }

    #
    # Standard Restore template, to be changed by Console program
    #   Only one such job is needed for all Jobs/Clients/Storage ...
    #
    Job {
      Name = "RestoreFiles"
      Type = Restore
      Client = backuphost-1-fd
      FileSet = "Full Set"
      Storage = File
      Pool = Default
      Messages = Standard
      Where = /srv/bacula/restore
    }

    FileSet {
      Name = "SpecialVolume"
      Include {
        Options {
          signature = MD5
        }
        File = /mnt/SpecialVolume
      }
      Exclude {
        File = /var/lib/bacula
        File = /nonexistant/path/to/file/archive/dir
        File = /proc
        File = /tmp
        File = /.journal
        File = /.fsck
      }
    }

    # List of files to be backed up
    FileSet {
      Name = "Full Set"
      Include {
        Options {
          signature = MD5
        }
        File = /usr/sbin
      }
      Exclude {
        File = /var/lib/bacula
        File = /nonexistant/path/to/file/archive/dir
        File = /proc
        File = /tmp
        File = /.journal
        File = /.fsck
      }
    }

    Schedule {
      Name = "WeeklyCycle"
      Run = Full 1st sun at 23:05
      Run = Differential 2nd-5th sun at 23:05
      Run = Incremental mon-sat at 23:05
    }

    # This schedule does the catalog. It starts after the WeeklyCycle
    Schedule {
      Name = "WeeklyCycleAfterBackup"
      Run = Full sun-sat at 23:10
    }

    # This is the backup of the catalog
    FileSet {
      Name = "Catalog"
      Include {
        Options {
          signature = MD5
        }
        File = "/var/lib/bacula/bacula.sql"
      }
    }

    # Client (File Services) to backup
    Client {
      Name = backuphost-1-fd
      Address = localhost
      FDPort = 9102
      Catalog = MyCatalog
      Password = "surelyyourejoking"      # password for FileDaemon
      File Retention = 30 days            # 30 days
      Job Retention = 6 months            # six months
      AutoPrune = yes                     # Prune expired Jobs/Files
    }

    #
    # Second Client (File Services) to backup
    #   You should change Name, Address, and Password before using
    #
    #Client {
    #  Name = backuphost-12-fd
    #  Address = localhost2
    #  FDPort = 9102
    #  Catalog = MyCatalog
    #  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
    #  File Retention = 30 days            # 30 days
    #  Job Retention = 6 months            # six months
    #  AutoPrune = yes                     # Prune expired Jobs/Files
    #}

    # Definition of file storage device
    Storage {
      Name = File
      # Do not use "localhost" here
      Address = localhost                 # N.B. Use a fully qualified name here
      SDPort = 9103
      Password = "lalalalala"
      Device = FileStorage
      Media Type = File
    }

    Storage {
      Name = "SpectraLogic"
      Address = localhost
      SDPort = 9103
      Password = "linkedinmakethebestpasswords"
      Device = Drive-1
      Device = Drive-2
      Media Type = LTO5
      Autochanger = yes
    }

    # Generic catalog service
    Catalog {
      Name = MyCatalog
      # Uncomment the following line if you want the dbi driver
      # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
      dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
    }

    # Reasonable message delivery -- send most everything to email address
    # and to the console
    Messages {
      Name = Standard
      mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
      operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
      mail = root@localhost = all, !skipped
      operator = root@localhost = mount
      console = all, !skipped, !saved
      #
      # WARNING! the following will create a file that you must cycle from
      #  time to time as it will grow indefinitely. However, it will
      #  also keep all your messages if they scroll off the console.
      #
      append = "/var/lib/bacula/log" = all, !skipped
      catalog = all
    }

    #
    # Message delivery for daemon messages (no job).
    Messages {
      Name = Daemon
      mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
      mail = root@localhost = all, !skipped
      console = all, !skipped, !saved
      append = "/var/lib/bacula/log" = all, !skipped
    }

    # Default pool definition
    Pool {
      Name = Default
      Pool Type = Backup
      Recycle = yes                       # Bacula can automatically recycle Volumes
      AutoPrune = yes                     # Prune expired volumes
      Volume Retention = 365 days         # one year
    }

    # File Pool definition
    Pool {
      Name = File
      Pool Type = Backup
      Recycle = yes                       # Bacula can automatically recycle Volumes
      AutoPrune = yes                     # Prune expired volumes
      Volume Retention = 365 days         # one year
      Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
      Maximum Volumes = 100               # Limit number of Volumes in Pool
    }

    Pool {
      Name = AllTapes
      Pool Type = Backup
      Recycle = yes
      AutoPrune = yes                     # Prune expired volumes
      Volume Retention = 31 days          # one month
    }

    # Scratch pool definition
    Pool {
      Name = Scratch
      Pool Type = Backup
    }

    #
    # Restricted console used by tray-monitor to get the status of the director
    #
    Console {
      Name = backuphost-1-mon
      Password = "LastFMalsostorePasswordsLikeThis"
      CommandACL = status, .status
    }

bacula-sd.conf

    #
    # Default Bacula Storage Daemon Configuration file
    #
    Storage {                             # definition of myself
      Name = backuphost-1-sd
      SDPort = 9103                       # Director's port
      WorkingDirectory = "/var/lib/bacula"
      Pid Directory = "/var/run/bacula"
      Maximum Concurrent Jobs = 20
      SDAddress = 0.0.0.0
      # SDAddress = 127.0.0.1
    }

    #
    # List Directors who are permitted to contact Storage daemon
    #
    Director {
      Name = backuphost-1-dir
      Password = "passwordslinplaintext"
    }

    #
    # Restricted Director, used by tray-monitor to get the
    #   status of the storage daemon
    #
    Director {
      Name = backuphost-1-mon
      Password = "totalinsecurityabound"
      Monitor = yes
    }

    Device {
      Name = FileStorage
      Media Type = File
      Archive Device = /srv/bacula/archive
      LabelMedia = yes;                   # lets Bacula label unlabeled media
      Random Access = Yes;
      AutomaticMount = yes;               # when device opened, read it
      RemovableMedia = no;
      AlwaysOpen = no;
    }

    Autochanger {
      Name = SpectraLogic
      Device = Drive-1
      Device = Drive-2
      Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
      Changer Device = /dev/sg4
    }

    Device {
      Name = Drive-1
      Drive Index = 0
      Archive Device = /dev/nst0
      Changer Device = /dev/sg4
      Media Type = LTO5
      AutoChanger = yes
      RemovableMedia = yes;
      AutomaticMount = yes;
      AlwaysOpen = yes;
      RandomAccess = no;
      LabelMedia = yes
    }

    Device {
      Name = Drive-2
      Drive Index = 1
      Archive Device = /dev/nst1
      Changer Device = /dev/sg4
      Media Type = LTO5
      AutoChanger = yes
      RemovableMedia = yes;
      AutomaticMount = yes;
      AlwaysOpen = yes;
      RandomAccess = no;
      LabelMedia = yes
    }

    #
    # Send all messages to the Director,
    #  mount messages also are sent to the email address
    #
    Messages {
      Name = Standard
      director = backuphost-1-dir = all
    }

bacula-fd.conf

    #
    # Default Bacula File Daemon Configuration file
    #

    #
    # List Directors who are permitted to contact this File daemon
    #
    Director {
      Name = backuphost-1-dir
      Password = "hahahahahaha"
    }

    #
    # Restricted Director, used by tray-monitor to get the
    #   status of the file daemon
    #
    Director {
      Name = backuphost-1-mon
      Password = "hohohohohho"
      Monitor = yes
    }

    #
    # "Global" File daemon configuration specifications
    #
    FileDaemon {                          # this is me
      Name = backuphost-1-fd
      FDport = 9102                       # where we listen for the director
      WorkingDirectory = /var/lib/bacula
      Pid Directory = /var/run/bacula
      Maximum Concurrent Jobs = 20
      #FDAddress = 127.0.0.1
      FDAddress = 0.0.0.0
    }

    # Send all messages except skipped files back to Director
    Messages {
      Name = Standard
      director = backuphost-1-dir = all, !skipped, !restored
    }
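Before digging further into the configs, a hardware sanity check may be worth it (a sketch; the slot and drive numbers are guesses for this changer): load a tape into each drive with mtx and stream to both at once. If that works, the FC path can feed two drives, and the problem is narrowed to Bacula's scheduling.

    # load slot 1 into drive 0, slot 2 into drive 1
    mtx -f /dev/sg4 load 1 0
    mtx -f /dev/sg4 load 2 1
    # drive both LTO5 drives simultaneously with throwaway data
    tar cf /dev/nst0 /usr &
    tar cf /dev/nst1 /usr &
    wait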


  • Using pre-made patch cables on a punch down block?

    - by Trevor Harrison
I need to add a 24 port switch to my wiring closet. In the (distant) past, I usually just punched each port of the switch to a 110 block on the wall (using hand-made cables), and cross-connected between that and the 110 block that has the runs to each workstation. To save time, I'm thinking of buying 12 pre-made drop cables, cutting them in half (so 24 single-ended cables), and punching those to my 110 block. The things I'm worried about are wire type (i.e. solid vs. stranded) and color scheme. I really don't know if they still use different wire types, but I remember that being an issue at one point. Can anyone comment on this? (I definitely won't feel comfortable trying to punch stranded wiring onto my 110 block.) Also, picking up a random pre-built cable I had lying around, I noticed that the color scheme used didn't appear to be T568B but T568A, which would clash with the rest of my wall. Anyone know of an online source that specifies these things? I've looked at www.cablesforless.com (which does have nicer prices) and www.cablestogo.com (which seems stupid expensive) so far. Cables For Less doesn't specify the wiring scheme; Cables To Go does specify T568B. Both seem to specify stranded wires instead of solid.


  • How can I get the root account to generate an acceptable ssh key?

    - by Jamie
On an Ubuntu machine I did the following:

    ~$ sudo su -
    [sudo] password for jamie:
    root@mydomain:~# ssh-keygen -t rsa
    Generating public/private rsa key pair.
    Enter file in which to save the key (/root/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /root/.ssh/id_rsa.
    Your public key has been saved in /root/.ssh/id_rsa.pub.
    The key fingerprint is:
    12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0 [email protected]
    The key's randomart image is:
    +--[ RSA 2048]----+
    |                 |
    |                 |
    |                 |
    |                 |
    |                 |
    |                 |
    |                 |
    |                 |
    +-----------------+
    root@mydomain:~# cat /root/.ssh/id_rsa.pub | ssh -p 443 [email protected] 'cat > authorized_keys'
    [email protected]'s password:
    root@mydomain:~# ssh -p 443 [email protected]
    [email protected]'s password:

It's asking me for a password. However, using a regular account, the following works:

    $ cd ; ssh-keygen -t rsa ; cat ~/.ssh/id_rsa.pub | ssh [email protected] 'cat >> ~/.ssh/authorized_keys'
    $ ssh [email protected]
    Last login: Thu Oct 24 14:48:41 2013 from 173.45.232.105
    [[email protected] ~]$

Which leads me to believe it's not an issue of authorized_keys versus authorized_keys2 or permissions. Why does the 'root' account accessing the remote 'jamie' account not work? The remote machine is CentOS, if that's relevant.
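A couple of checks that may narrow this down (a sketch, reusing the redacted address from the transcript). Note that the root attempt piped the key to 'cat > authorized_keys', which lands in the remote login directory, while the working one appended to '~/.ssh/authorized_keys':

    # client-side: show which keys are offered and why they fail
    ssh -vvv -p 443 [email protected] 2>&1 | grep -Ei 'offering|denied|authentications'

    # server-side: confirm the key is where sshd looks, with sane permissions
    ssh -p 443 [email protected] 'ls -ld ~ ~/.ssh ~/.ssh/authorized_keys; head -c 40 ~/.ssh/authorized_keys'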


  • Restoring open software after a restart event in windows

    - by Doltknuckle
I find that at the end of a long day, I sometimes have a large number of programs running, all of which I will need to use tomorrow. Normally this isn't an issue: I can simply lock the machine and come back tomorrow. My problem arises when Windows Update launches in the middle of the night and force-restarts my computer. That in turn closes all my open software. I of course save everything regularly so I don't lose anything, but I waste time reopening all of those resources whenever there is a restart. [EDIT] I should clarify that I still want to be able to restart my computer when an update comes down; preventing the restart only delays the problem until later. I should have been more specific, in that I want to be able to recover my working environment after a restart for any reason: things like scheduled maintenance, power loss, updates, and software installs. [EDIT] I can't simply have them set up to launch at startup, because those files change from week to week. So I need something that monitors what I have open and gives me the option to "recover" those software sessions when I log back in. Anyone have any suggestions on what I can do? I'd even be willing to purchase software to do this for me if that is the only option. Thanks


  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
I have an audiobook that I've downloaded via their download manager, and it's loaded into their cutesy little audio program that they force you to use. I can play the book just fine using their proprietary software, and while it's annoying when using my PC, it's utterly UNBEARABLE when I try to listen to it on my Blackberry. The program is INSANELY slow; it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book it takes me around 15 minutes to finally get to where I was. I've looked everywhere for how to transcode the book to .MP3, but evidently with their current format it's extremely convoluted (and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work). Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to just make the CDs and then rip them to .MP3. In order to avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD burning process. Just in case I've not explained myself very well, here is an overview of what I intend to do:

1. "Burn" a set of audio CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player, otherwise I'll be forced to use the audiobook app)
2. Mount an .ISO in Virtual CloneDrive
3. Rip the audio tracks on the mounted .ISO to .MP3s
4. Repeat steps 2-3 until the entire book is in .MP3 format
5. Copy the .MP3s to my Blackberry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer

EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd suppose that when you "insert a blank CD-R" the app would then ask you what file to save to.)


  • Is it possible to rate limit based on host headers? i.e. not just on ip address

    - by Blankman
I have a web service endpoint that I am building where people will post an XML file, and it will really get pounded, with over 1K requests per second. Most of those requests will be rate-limited, though. The problem is that the rate limiting is done by the web application, by looking up the source_id in the XML; if that source is over x requests per minute, the file is not processed further. I was wondering if I could do the rate-limit check earlier in the processing somehow and thus save the 50K file from going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a host header so even the XML file doesn't have to be parsed and loaded into memory. Is it possible to just look at host headers and not load the entire 50K XML file into memory? I really appreciate your insights, as this takes more knowledge of the entire TCP/IP stack etc.


  • Windows 8 auto-hibernate from sleep not working on Retina MacBook Pro

    - by frenchglen
I have a similar question to this one, only my context is the 15" Retina MacBook Pro and Windows 8. I have just the original Mac OS X Mountain Lion on there, then Windows 8 via Boot Camp. No rEFIt installed. (I just press ALT every time I restart Windows, actually as a security measure to stop tech-unsavvy thugs who, if the laptop is stolen, think it's only a Mac and don't discover my Windows install as quickly as they would have, by which time I can remotely activate various anti-theft Mac apps and nab them that way.)

SO: as the related question asks, why isn't it behaving like it should? The Windows 7 FAQ states:

Will sleep eventually drain my laptop battery? If your laptop battery charge gets critically low while the computer is asleep, Windows automatically puts the laptop into hibernation mode.

But this is just not happening on my rMBP under Windows 8. It seems EVERY time I set the laptop to sleep (when it reaches 10%), then arrive home, plug it in and hope to simply resume my work, it has NOT saved the session to disk and I lose ALL my work. Whose fault is it? Win 8's (a bug, grr)? Or Apple's EFI system (maybe fixable via editing EFI options, or do I have to install rEFIt to make it work, perhaps)? Or maybe changing Windows power options can somehow fix the problem? Thanks for your help.


  • How do I get a Mac to request a new IP address from another DHCP server running in parallel while Ne

    - by huyqt
Hello, I have an interesting situation. I'm trying to use a Linux-based machine to allow Macs to NetBoot (similar to PXE boot) by running a DHCP service in parallel with the "global" DHCP server. The local DHCP server hands out IPs in a private subnet, e.g. 10.168.0.10-10.168.254.254, while the "global" DHCP server hands out IPs from the range 10.0.0.1 - 10.0.1.254. The local DHCP range is only supposed to be used for Preboot Execution Environment and NetBoot. The local DHCP server is something I have control over, but I do not have access to the global DHCP server. I have a filter to only allow members with the vendor strings "AAPLBSDPC/i386" and "PXEClient". PXE works fine, but NetBoot has a quirk. Apple systems that haven't been connected to the network yet can NetBoot fine, but once one grabs a "real" IP address from the global DHCP server, it will "save" it and request it the next time we want it to NetBoot (which the local DHCP server won't give it).

This is what I want:

    Mar 30 10:52:28 dev01 dhcpd: DHCPDISCOVER from 34:15:xx:xx:xx:xx via eth1
    Mar 30 10:52:29 dev01 dhcpd: DHCPOFFER on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
    Mar 30 10:52:31 dev01 dhcpd: DHCPREQUEST for 10.168.222.46 (10.168.0.1) from 34:15:xx:xx:xx:xx via eth1
    Mar 30 10:52:31 dev01 dhcpd: DHCPACK on 10.168.222.46 to 34:15:xx:xx:xx:xx via eth1
    Mar 30 10:52:32 dev01 in.tftpd[5890]: tftp: client does not accept options
    Mar 30 10:52:53 dev01 in.tftpd[5891]: tftp: client does not accept options
    Mar 30 10:52:53 dev01 in.tftpd[5893]: tftp: client does not accept options
    Mar 30 10:52:54 dev01 in.tftpd[5895]: tftp: client does not accept options

This is what I get when the client already has a "stored" IP:

    Mar 30 10:51:29 dev01 dhcpd: DHCPDISCOVER from 00:25:xx:xx:xx:xx via eth1
    Mar 30 10:51:30 dev01 dhcpd: DHCPOFFER on 10.168.222.45 to 00:25:xx:xx:xx:xx via eth1
    Mar 30 10:51:31 dev01 dhcpd: DHCPREQUEST for 10.0.0.61 (10.0.0.1) from 00:25:xx:xx:xx:xx via eth1: ignored (not authoritative).

Do you have any suggestions? It would be much appreciated.


  • What would keep a Microsoft Word AutoNew() macro from running?

    - by Chris Nelson
I'm using Microsoft Office 2003 and creating a bunch of template documents to standardize some tasks. I know it's standard practice to put the templates in a certain place where Office expects to find them, but that won't work for me. What I want is to have "My Template Foo.dot", "My Template Bar.dot", etc. in the "My Foo Bar Stuff" folder on a shared drive, and users will double-click on a template to create a new Foo or Bar. What I'd really like is for the user to double-click on the Foo template, be prompted for a couple of items related to their task (e.g., a project number), and have a script in the template change the name that Save will default to, to something like "Foo for Project 1234.doc". I asked on Google Groups and got an answer that worked... for a while. Then my AutoNew macro stopped kicking in when I created a new document by double-clicking on the template. I have no idea why, or how to debug it. I'm a software engineer with 25+ years of experience, but a complete Office automation noob. Specific solutions and pointers to "this is how to automate Word" FAQs are welcome. Thanks.


  • Amazon EC2 Reserved Instances: "Heavy Utilization" clarification

    - by gravyface
Should be another easy one here, but I need clarification on what they define as "heavy utilization" for Reserved Instance types. From their website:

Heavy Utilization RIs – Heavy Utilization RIs offer the most absolute savings of any Reserved Instance type. They're most appropriate for steady-state workloads where you're willing to commit to always running these instances in exchange for our lowest hourly usage fee. With this RI, you pay a little higher upfront payment than Medium Utilization RIs, a significantly lower hourly usage fee, and you're charged that lower hourly rate for every hour in the Reserved Instance term you purchase. Using Heavy Utilization RIs, you can save up to 41% for a 1-year term and 58% for a 3-year term vs. running On-Demand Instances. If you're trying to find a break-even utilization, you're economically advantaged using Heavy Utilization RIs (vs. On-Demand Instances) if you plan to use your instance more than 43% of a 1-year term or 79% of a 3-year term.

I'm assuming that if I'm planning on running a 24/7 web server, then regardless of how many resources I consume (bandwidth, CPU cycles, memory), I would want to go with a Heavy Utilization Reserved Instance? This one web server in particular will likely barely budge the CPU, but it needs to be up and running 24/7. I'm not 100% sure what they're defining as "heavy".
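The 43%/79% figures fall out of simple arithmetic: a Heavy Utilization RI bills its hourly fee for every hour of the term whether or not the instance is running, so its total cost is fixed, and the break-even point is that fixed cost divided by what the same hours would cost on-demand. A sketch with made-up prices (substitute real figures from the pricing page):

    # hypothetical prices, for illustration only
    ondemand=0.12      # on-demand $/hr
    upfront=195        # Heavy Utilization RI upfront fee, 1-year term
    ri_hourly=0.05     # Heavy Utilization RI $/hr, billed for all hours

    awk -v od="$ondemand" -v up="$upfront" -v rh="$ri_hourly" 'BEGIN {
        hours = 8760                      # hours in a 1-year term
        ri_total = up + rh * hours        # fixed cost, independent of usage
        printf "break-even utilization: %.0f%%\n", 100 * ri_total / (od * hours)
    }'

For a 24/7 web server, utilization is 100% by this definition, so a Heavy Utilization RI wins whenever that ratio comes out under 100%; the quoted text suggests "heavy" refers to hours powered on, not CPU load or bandwidth.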

