Search Results

Search found 73305 results on 2933 pages for 'copy run start'.


  • Running rsync on network connect

    I have one Mac which is always on and is my main computer. I also have a MacBook, and I'm trying to sync my iPhoto library between them. I can successfully use rsync to sync the files, and I'm using cron to have it run once a day. In reality the MacBook isn't always on, so I'm looking for a way to run rsync whenever the two computers are connected to the same Wi-Fi network. I'm guessing the best approach is to somehow run rsync when the AirPort connects. What's the best way to do this?
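
    A minimal sketch of one approach, assuming the always-on Mac answers ping at the hypothetical hostname desktop.local and that the iPhoto library paths shown are illustrative; cron can run this every few minutes, and it only syncs when the desktop is reachable:

        #!/bin/sh
        # Sync the iPhoto library only when the always-on Mac is on the network.
        # "desktop.local" and the library paths are placeholders.
        if ping -c 1 desktop.local >/dev/null 2>&1; then
            rsync -az --delete "$HOME/Pictures/iPhoto Library/" \
                desktop.local:"Pictures/iPhoto Library/"
        fi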

    Read the article

  • Automating the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    1) First of all, is this a good approach? 2) It is not clear how to specify the repository and the working copy with svn. 3) How can I run svn only when the mysqldump is done and not before, avoiding conflicts? Any other tips? Thanks
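
    A sketch of one way to sequence the two steps (the paths, credentials and working-copy location are placeholders): putting both commands in a small script and chaining them with && means svn only runs after mysqldump and gzip have finished successfully, which also addresses question 3.

        #!/bin/sh
        # Dump the database into an svn working copy, then commit only on success.
        cd /home/backup/db-workingcopy || exit 1
        mysqldump -u root -pPASSWORD database_name > dump.sql \
            && gzip -f dump.sql \
            && svn add --force dump.sql.gz 2>/dev/null \
            && svn commit -m "Committing the working copy containing the database dump"

    The crontab entry then points at the script rather than at the individual commands, e.g. 0 3 * * * /home/backup/backup_db.sh (time and path illustrative).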

    Read the article

  • VMWare Host needing ESXi 5.0 - Running on a 2008 R2 Host?

    - by Great Big Al
    I have a VMware image supplied by my phone system provider which manages a contact management interface; they tell me that VMware ESXi 5.0 is the highest supported host. I'm used to MS Hyper-V and have no VMware experience. At present I have this guest running on a simple desktop PC running ESXi 5.0.0. I'd ideally like to run this guest (it's Windows 7 with their software already installed and configured) on a Windows 2008 R2 server I have available. As I said, I'm used to Hyper-V, and I can't easily identify whether there is a version of VMware that runs on a Windows 2008 R2 server host and supports guests built for ESXi 5.0. Is there such a product, what's it called, and with one guest can I run it without purchasing a license? Thanks

    Read the article

  • Kerberos: Running an app with a parameter using krenew

    - by Mihai Todor
    I need to run an application under krenew, but the application also needs to receive a parameter on the command line, and I need to send its output to a file. From the documentation, it looks like this should do the trick:

        krenew -t -- sh -c 'compute-job > /afs/local/data/output'

    but, unfortunately, when I run the command

        krenew -s -- sh -c './my_app config.xml > results/test.txt &'

    the application just dies after a while, and I can see from the output of ps aux that krenew is not running alongside my_app. I am not sure what the parameter -t does, and as far as I can see, if I run krenew -s ./my_app, it works properly. I hope someone can clarify this.
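
    One thing worth checking, sketched using only the poster's own command: with the trailing & inside the quoted string, sh puts my_app in the background and exits immediately, so the child that krenew is watching is gone almost at once. Keeping the shell in the foreground and backgrounding the whole krenew invocation instead may behave differently:

        # Background the whole krenew + sh pair, not the command inside sh -c.
        krenew -s -- sh -c './my_app config.xml > results/test.txt' &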

    Read the article

  • How do I add color syntax highlighting to GNU emacs?

    - by Alex Reynolds
    I have two versions of emacs available to me on a locked workstation:

        $ /usr/local/bin/emacs --version
        GNU Emacs 22.3.1
        $ /usr/bin/emacs --version
        GNU Emacs 21.4.1

    In both cases, my terminal type is xterm when I run emacs. When I run the v21 version, I get syntax coloring for Perl, HTML, and other modes. When I run the v22 version, I do not get syntax coloring. I would like to migrate away from the v21 version because the combination of v21 emacs, GNOME Terminal and GNU Screen is eating Ctrl-arrow key chords, which prevents me from moving quickly between words. (OS X Terminal and GNU Screen do not have this issue.) The v22 version allows use of Ctrl-arrow key combinations with GNOME Terminal and GNU Screen. How do I fix the v22 version (or ask my sysadmin to fix it) so that it once again highlights syntax and still allows me to use Ctrl-arrow key combinations?
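
    Two things worth trying, offered as a sketch rather than a known fix: telling emacs it is on a colour-capable terminal, and explicitly enabling font-lock in the init file (the ~/.emacs path is the usual default and an assumption here):

        # Advertise a colour-capable terminal to the v22 emacs.
        export TERM=xterm-256color
        # Append font-lock settings to the user's init file.
        echo '(global-font-lock-mode 1)' >> ~/.emacs
        echo '(setq font-lock-maximum-decoration t)' >> ~/.emacs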

    Read the article

  • Disable the 250-character URL limit in Internet Explorer

    - by Keltari
    Users of a SharePoint Document Library are getting this error: "The URL for this file is too long for the application. A temporary copy of this file will be opened on your computer. You must save this copy as a new file." After doing some research, it appears Internet Explorer has a limit of roughly 250 characters for a URL, and some URLs generated by SharePoint far exceed this limit; one example is 790 characters long. Is there a way to disable this limit? I have looked, but there doesn't appear to be a solution other than shortening the folder and file names in the path.

    Read the article

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then we thought we might just back the S3 files up to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
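
    On the bucket-to-bucket question specifically, a hedged sketch (the bucket names are placeholders): the AWS CLI can sync one bucket into another using server-side copies, so the objects themselves do not pass through the controlling client, only the API calls do.

        # Refresh a monthly backup bucket directly from the live bucket.
        aws s3 sync s3://live-bucket s3://backup-bucket-monthly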

    Read the article

  • SAN alternative for VMWare

    - by CogitoErgoSum
    Has anyone used something other than a SAN to run their VMware images from? We are looking to drop in two HP servers with VMware on them and run the images off a SAN. Due to the cost of a SAN, though, our CFO and VP are wondering if there are any viable alternatives (i.e. NAS) that can effectively run VMware. I can't think of any off the top of my head. If anyone can provide one, or a good article outlining why to stick with a SAN, that'd be great.

    Read the article

  • Windows Server Task Scheduler: Running scheduled executable fail-safe?

    - by Mikael Koskinen
    I have an executable which I've scheduled to run once every five minutes (using Windows' built-in Task Scheduler). It's crucial that this executable is run, because it updates a few time-critical files. But how can I react if the virtual server running the executable goes down? There should never be more than a 15-minute gap between runs. As I'm using Windows Server and its Task Scheduler, I wonder if it is possible to create some kind of a cluster which automatically handles the situation. The problem is that the server in question is running on Windows Azure, and I don't think I can create actual clusters using the virtual machines. If the problem can be solved using a 3rd-party tool, that's OK too. To generalize the question a little bit: how do I make sure that an executable is run once every 5 minutes, even if there might be server failures?

    Read the article

  • Weather Logging Software on Windows Home Server

    - by Cruiser
    I'm looking for some weather logging software that I can run as a Windows Home Server add-in, or as a service on my Home Server, so I don't need to log into my Home Server to log weather data. I have an Oregon Scientific WMR918 weather station, and an HP MediaSmart EX485 Windows Home Server. The two are currently connected through a serial Bluetooth adapter, but that shouldn't matter, as the computer sees it basically as a serial device. I'm currently using Cumulus to log data and upload to Weather Underground, but it is a regular Windows application, so I need to remain logged into my Home Server by RDP in order to run the software (I disconnect, but don't log off, so the session remains open). Ideally I would like something that runs as a service or WHS add-in, so that it runs all the time without logging in, can log data from my WMR918, and can upload to Weather Underground. Thanks!

    Read the article

  • Startup script on Ubuntu 12.04 not getting executed. Dependencies / load order.

    - by user861181
    I want to create a simple startup script on Ubuntu 12.04, myscript.sh:

        #!/bin/sh
        sudo /etc/init.d/nginx start
        cd ~/app/current
        god -c config/resque.god
        sudo /etc/init.d/redis-server start
        echo "SCRIPT RUN"

    I have it at /etc/init.d/myscript.sh. When I do sudo chkconfig --level 2345 myscript.sh I get:

        myscript.sh 2345

    When I do sudo chkconfig --add myscript.sh I get:

        insserv: warning: script 'K01myscript.sh' missing LSB tags and overrides
        insserv: warning: script 'myscript.sh' missing LSB tags and overrides
        The script you are attempting to invoke has been converted to an Upstart job, but lsb-header is not supported for Upstart jobs.
        insserv: warning: script 'dbus' missing LSB tags and overrides
        ....
        myscript.sh 0:off 1:off 2:on 3:on 4:on 5:on 6:off

    EDIT: I checked boot.log and it turns out that the script is run, but the problem is that god is not loaded yet when the script is executed. Apparently I want to load this script as the very last thing at startup (or somehow check whether god is loaded and only then run the script).
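
    One direction worth sketching, under the assumption that SysV-style init scripts with LSB headers are still honoured on this machine alongside Upstart: an LSB header whose Required-Start is $all asks insserv to order the script after every other boot script, which is roughly the "run last" behaviour described above. The header goes at the top of /etc/init.d/myscript.sh, right after the shebang:

        ### BEGIN INIT INFO
        # Provides:          myscript
        # Required-Start:    $all
        # Required-Stop:
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start nginx, resque workers (god) and redis last
        ### END INIT INFO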

    Read the article

  • How do I perform an action if the upstart respawn limit is hit?

    - by Daniel Huckstep
    I have an upstart job:

        description "foreman"
        start on runlevel [2345]
        stop on runlevel [06]
        respawn
        respawn limit 3 60
        chdir /home/deploy/app/current
        env RAILS_ENV=production
        exec sudo -u deploy bundle exec foreman start

    We ran into a case where a rogue character in an app file caused one of the background workers to fail while the app ran normally (weird). The app worked fine, but the workers were never working. I'd like upstart to do something (send an email) if it can't start this job, since it's not entirely obvious whether everything went alright. Is there something built into upstart to handle this, or do I have to get creative?

    Read the article

  • Diagnosing extremely slow network operations

    - by Chris Becke
    The network:

        * A Windows 7 PC with 2 NICs - one connected to an old-style Ethernet hub, the other to the internet - with internet connection sharing enabled
        * An Apple iMac connected to the hub, successfully using the ICS connection to access the internet

    My problem: using the Mac, copying from the internet is fast. However, if I connect to an SMB share on the Windows 7 PC and try to copy anything of even a few KB, the copy operation is appallingly slow, with the network card (as shown in the Windows 7 control panel) at about 0.1% utilization. The NICs are 100 Mbps and show a 10x larger throughput (about 1%) if I download large files over the internet using the Mac. WTF?

    Read the article

  • One Way Sync of a Bucket With Local Directory

    - by user48651
    I have a local directory that I would like to synchronize with an S3 bucket. I have two specific requirements: 1) if a local file is the same as the remote one, do not re-transfer it to the bucket; 2) if files or directories exist in the bucket but do not exist locally, delete them. Basically, the bucket should mirror the local copy and not vice versa. I looked into the s3cmd sync command, but unfortunately requirement 2 is not fulfilled: if files exist in the bucket but not in the local copy, they are copied to the local side instead of being deleted.
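
    A hedged sketch of the mirroring direction asked for (the bucket name and local path are placeholders); both s3cmd and the AWS CLI have a delete-on-destination option, shown here with the AWS CLI:

        # Push local changes to the bucket and delete bucket objects that no
        # longer exist locally, so the bucket mirrors the directory.
        aws s3 sync /data/local-dir s3://my-bucket --delete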

    Read the article

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repositories contain a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold ASP code as well. Bonus points for a solution which works more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box - commit into the trunk - auto-deploy on the staging system - test on the staging system - merge into /branches/live/ - manually deploy on the live system.

    This works very well for one-way changes; however, we have some trouble on every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. Finally, we have done the update via the WP GUI, restored the svn admin area, added/removed the files and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following: the htdocs of each website is an svn export; each website has an svn working copy beside the htdocs directory; and a script "replays" the changes from htdocs into the working copy after a WP update (rsync'ing the changed files to the working copy, rsync'ing new files and svn add-ing them, and finally svn delete-ing the removed files). The script would have to exclude some files (like wp-config.php, uploads/temp directories, etc.). Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
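
    A minimal sketch of the "replay" script described above (all paths are placeholders and the exclude list is illustrative): rsync mirrors htdocs into the working copy, then svn picks up the additions and deletions before the commit.

        #!/bin/sh
        # Mirror the exported htdocs into the working copy, excluding local-only files.
        rsync -a --delete \
            --exclude 'wp-config.php' --exclude 'wp-content/uploads/' \
            /var/www/site/htdocs/ /var/www/site/wc/
        cd /var/www/site/wc || exit 1
        # Schedule new files for addition (already-versioned files are skipped).
        svn add --force . > /dev/null
        # Schedule files that rsync removed for deletion ("!" = missing in svn status).
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete
        svn commit -m "Replay WordPress update from htdocs"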

    Read the article

  • IF commands in a batch file

    - by Rossaluss
    I'm writing a small batch file to replace users' themes and charts in Office, and I have the batch file below that works just fine.

        cd c:\documents and settings\%username%\application data\microsoft\templates
        echo Y|rmdir charts /s
        mkdir charts
        echo Y|del "c:\documents and settings\%username%\application data\microsoft\templates\document themes\*.*"
        net use o: \\servername\sms
        copy "o:\ppt themes\charts\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\charts"
        copy "o:\ppt themes\Document Themes\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\document themes"
        c:
        net use o: /delete

    Now what I want is for the above to only run if it hasn't run before, as we'll be pushing this out to all users for around 2 weeks to catch people who aren't in every day. Is there any way to begin the script with something that looks for one of the new themes/charts already pushed down and, if it's present, skips the rest? Any help on this would be greatly appreciated, as I'm pretty new to batch files.

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script builds the proper command, but when run within the script it outputs an error. However, if the same command is run manually, everything works...??? Here is the script, based on one easily found with Google:

        #!/bin/bash
        # Export some ENV variables so you don't have to type anything
        export AWS_ACCESS_KEY_ID="accesskey"
        export AWS_SECRET_ACCESS_KEY="secretaccesskey"
        export PASSPHRASE="password"

        SOURCE=/home/
        DEST=s3+http://s3bucket
        GPG_KEY="7743E14E"

        # exclude files over 100MB
        exclude ()
        {
            find /home/jason -size +100M \
            | while read FILE; do
                echo -n " --exclude "
                echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g'   #Replace whitespace with "\ "
            done
        }

        echo "Using Command"
        echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST"

        duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST

        # Reset the ENV variables.
        export AWS_ACCESS_KEY_ID=
        export AWS_SECRET_ACCESS_KEY=
        export PASSPHRASE=

    When it runs I receive the error:

        Command line error: Expected 2 args, got 6
        Enter 'duplicity --help' for help screen.

    Any help you could offer would be greatly appreciated.
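
    A hedged sketch of one way around the quoting problem (the paths and key are the poster's own): the quote characters emitted by the exclude function are never re-parsed by the shell, so each generated word reaches duplicity as a separate argument. Writing the oversized files to a list and passing it with duplicity's --exclude-filelist option avoids building a quoted string entirely:

        # Collect files over 100MB into an exclude list, then hand the list to duplicity.
        find /home/jason -size +100M > /tmp/duplicity-excludes.txt
        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            --exclude-filelist /tmp/duplicity-excludes.txt \
            "$SOURCE" "$DEST"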

    Read the article

  • Why does MTP work on one machine, but not another?

    - by bobmcn
    I have two Dell computers, a laptop and a desktop. I reinstalled Windows XP Home on both from the same CD, then installed Service Pack 3 and ran Windows Update. When I connect my Creative Zen media player to the desktop, the MTP software that is part of XP recognizes it, and I can copy files to and from it using Windows Explorer. But when I connect the same player with the same cable to the laptop, I can see it in Windows Explorer, but none of the folders that are visible via the desktop are available, and I can't copy anything to or from it. How can I get this working on the laptop?

    Read the article

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: separate zones by query source network and return different records for LAN clients compared to WAN clients.

    I've implemented this at home on a small ALIX router with BIND 9.4: one view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records; querying domain1.tld from the WAN would give me external records. Querying domain2.tld from the LAN would give me the same records as from the WAN, as it only existed in the WAN view.

    Now I'm trying to re-implement this on a larger scale, and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list, and they suggest I copy all my zones into my LAN view. I'm hoping someone here has a better solution, because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible.

    My configuration at home resembles this:

        acl lanClients {
            192.168.22.0/24;
            127.0.0.1;
        };

        view "intranet" {
            match-clients { lanClients; };
            recursion yes;
            notify no;

            // Standard zones
            //
            zone "." {
                type hint;
                file "etc/root.hint";
            };

            zone "domain1.tld" {
                type master;
                file "intranet/domain1.tld";
            };
        };

        view "internet" {
            match-clients { !localnets; any; };
            recursion no;
            allow-transfer { slaveDNS; };
            include "master.zones";
        };

    Requests from the LAN for domain1.tld give local records; requests from the WAN give remote records. This works fine both at home and in my new BIND 9.7 on a larger scale. The difference is that at home I have somehow managed to make my LAN get remote records for domains in master.zones without specifying those zones as duplicates in the "intranet" view. Trying this on a larger scale with BIND 9.7, I get no results at all except for the zones specified in the view. What am I missing? I've tried the same configuration with BIND 9.7.

    Read the article

  • Install GRUB on a disk image

    - by Dima
    I have a disk image with 2 partitions: partition 1 has a cramfs file system (read-only) and contains all system files of the OS; partition 2 has an ext3 file system and holds only configuration files that may be changed. How can I install the GRUB 1 boot loader in the MBR of the image? I tried copying the first 446 bytes of my hard disk, and copying the GRUB files to the /boot directory on the 1st (cramfs) partition. I cannot use grub-install because I have a disk image and not the disk itself. Any ideas?
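
    A sketch of the classic GRUB Legacy approach for image files (the image path and partition number are placeholders, and it assumes the stage files live on a partition GRUB Legacy can actually read - cramfs is not among its supported filesystems, so the ext3 partition is likely the safer home for /boot/grub):

        # Drive the GRUB Legacy shell against the image file instead of a real disk.
        printf 'device (hd0) /path/to/disk.img\nroot (hd0,1)\nsetup (hd0)\nquit\n' \
            | grub --device-map=/dev/null --batch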

    Read the article

  • Automate the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    First of all, is this a good approach? It is not clear how to specify the repository and the working copy with svn. How can I run svn only when the mysqldump is done and not before, avoiding conflicts?

    Read the article

  • Giving a scanner-printer-combo a zoom function when copying?

    - by ldigas
    You know how, every time you go to a photocopying shop, the photocopiers always have a neat zoom function (they can take whatever you give them and zoom it in or out, so your copy comes out smaller or larger). I have one of those neat 3-in-one printer machines; it has a copy button on it, but it also comes with some software (the Epson SX115 is the exact model). Apart from going into some photo manipulation application, is there some way (software) to give it that feature? In short, I need something that can scan a page, scale it to, let's say, a quarter of its size, and then print it out. Does anyone know of anything like that?

    Read the article

  • After closing the ssh terminal, the thin server is down

    - by Keating Wang
    I have a Rails project running on the Thin server (1.3.1) on an Ubuntu server. I ssh to the server and start Thin with the command 'thin start -C config/thin.yml', with the following thin.yml:

        port: 3000
        log: log/thin.log
        timeout: 30
        chdir: /home/byht/56platform/dev/tracker
        environment: production
        servers: 1
        daemonize: true

    After Thin starts successfully, I visit the project and it works well. Then I close the terminal; I can still visit the pages that have already been visited, but when I visit pages that had not been visited before closing the ssh terminal, a "500" error appears on the page. I didn't find any error messages in the log file. I have tried starting Thin with nohup and sudo, but they don't help. If I sign in to the Ubuntu server locally, the problem disappears. But I need to sign in to the server over ssh to start Thin when I'm at home.

    Read the article

  • How to get LAN ip to a variable in a Windows batch file

    - by Ville Koskinen
    I'm streaming audio from my Windows 7 laptop to a sound card attached to a router. I have a little batch script to start streaming:

        REM Kill any instances of vlc
        taskkill /im vlc.exe
        "c:\Program Files\VideoLAN\VLC\vlc.exe" <parameters to start http streaming>
        REM Wait for vlc
        TIMEOUT /T 10
        REM start playback on router
        plink -ssh [email protected] -pw password killall -9 madplay
        plink -ssh [email protected] -pw password wget -q -O - http://192.1.159:8080/audio | madplay -Q --no-tty-control - &

    As you can see, the HTTP stream address is hard-coded. It would be nice to get the address dynamically so the script can be reused on other machines. Any ideas?

    Read the article

  • Opening DBF files in Oracle 10g

    - by nagaraju
    This is Nagaraju, from Hyderabad, India. I installed the Oracle 10g trial version on my system (E drive) and created one database with my name (database: nagaraju). In it I created tables, procedures, functions, sequences etc. for my project. Due to a sudden problem I formatted my machine's C drive, and now I am not able to open my database; I need all the procedures and tables which I created in it. I have now installed Oracle 10g again in another folder. How can I copy my old database into my new installation's database? Or can I copy the scripts of the procedures so that I can run them in the new database? I have all the data in the Oradata folder, like the DBF files etc. Could you please help me with how to do that?

    Read the article
