Search Results

Search found 15055 results on 603 pages for 'volume shadow copy'.

  • Automating the backup of my databases and files with cron

    - by Patrick
    Hi, I want to automate the backup of my databases and files with cron. Should I add the following lines to crontab?

        mysqldump -u root -pPASSWORD database_name | gzip > /home/backup/database_`date +\%m-\%d-\%Y`.sql.gz
        svn commit -m "Committing the working copy containing the database dump"

    1) First of all, is this a good approach? 2) It is not clear how to specify the repository and the working copy with svn. 3) How can I run svn only when the mysqldump is done and not before, avoiding conflicts? Any other tips? Thanks
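
    One way to guarantee the ordering is to wrap both steps in a script and chain them, so svn only runs after mysqldump has finished successfully. A minimal sketch, assuming the dump lands inside an svn working copy at /home/backup (paths and names are assumptions):

        #!/bin/bash
        # backup-db.sh -- dump the database, then commit the dump
        set -e -o pipefail                      # abort if any step fails
        WC=/home/backup                         # svn working copy (hypothetical path)
        DUMP="$WC/database_$(date +%m-%d-%Y).sql.gz"
        mysqldump -u root -pPASSWORD database_name | gzip > "$DUMP"
        svn add -q --force "$DUMP"              # schedule the new file for commit
        svn commit -m "Nightly database dump" "$WC"

    The crontab entry then just calls the script (note that % only needs the backslash escaping inside crontab itself, not in a script):

        30 2 * * * /home/backup/backup-db.sh >> /var/log/db-backup.log 2>&1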

  • disable 250 character URL limit in Internet Explorer

    - by Keltari
    Users of a SharePoint document library are getting this error: "The URL for this file is too long for the application. A temporary copy of this file will be opened on your computer. You must save this copy as a new file." After doing some research, it appears Internet Explorer has a limit of roughly 250 characters for a URL, and some URLs provided by SharePoint far exceed this limit; one example is 790 characters long. Is there a way to disable this limit? I have looked, but there doesn't appear to be a solution other than shortening the folder/path names.

  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500 GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, create a new backup bucket using a new, current copy of the original bucket, and repeat this process. This seems like it would work and minimize the storage/transfer costs, but I'm not sure whether duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly, to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
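
    For what it's worth, versioning does cover the upload-once case: deleting an object in a versioned bucket only inserts a delete marker, and the original version stays recoverable. A minimal sketch with the AWS CLI of enabling versioning and doing a server-side, bucket-to-bucket copy (bucket names are hypothetical):

        # keep old versions and delete markers instead of destroying objects
        aws s3api put-bucket-versioning --bucket my-uploads \
            --versioning-configuration Status=Enabled

        # server-side sync into a backup bucket; objects are copied within S3,
        # not downloaded through the machine running the command
        aws s3 sync s3://my-uploads s3://my-uploads-backup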

  • One Way Sync of a Bucket With Local Directory

    - by user48651
    I have a local directory that I would like to synchronize with an S3 bucket. I have two specific requirements: 1) if a local file is the same as the remote one, do not re-transfer it to the bucket; 2) if files or directories exist in the bucket but do not exist locally, delete them. Basically, the bucket should mirror the local copy and not vice versa. I looked into the s3cmd sync command, but unfortunately requirement 2 is not fulfilled: if files exist in the bucket but not in the local copy, they are copied to the local side instead of being deleted.
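
    A minimal sketch of two tools that do support this direction (the local path and bucket name are assumptions):

        # AWS CLI: --delete removes anything in the bucket that is gone locally
        aws s3 sync /srv/data s3://my-bucket --delete

        # newer s3cmd releases have an equivalent flag for local -> bucket syncs
        s3cmd sync --delete-removed /srv/data/ s3://my-bucket/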

  • Diagnosing extremely slow network operations

    - by Chris Becke
    The network: a Windows 7 PC with two NICs, one connected to an old-style Ethernet hub and the other to the internet, with Internet Connection Sharing enabled; and an Apple iMac connected to the hub, successfully using the ICS to access the internet.

    My problem: using the Mac, copying from the internet is fast. However, if I connect to an SMB share on the Windows 7 PC and try to copy anything more than a few KB, the copy operation is appallingly slow, with the network card (per the Windows 7 control panel) showing ~0.1% utilization. The NICs are 100 Mbps, and they show 10x the throughput (~1%) if I download large files over the internet using the Mac. WTF?

  • IF commands in a batch file

    - by Rossaluss
    I'm writing a small batch file to replace users' themes and charts in Office, and I have the batch file below, which works just fine.

        cd c:\documents and settings\%username%\application data\microsoft\templates
        echo Y|rmdir charts /s
        mkdir charts
        echo Y|del "c:\documents and settings\%username%\application data\microsoft\templates\document themes\*.*"
        net use o: \\servername\sms
        copy "o:\ppt themes\charts\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\charts"
        copy "o:\ppt themes\Document Themes\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\document themes"
        c:
        net use o: /delete

    Now what I want is for the above to run only if it hasn't run before, as we'll be pushing this out to all users for around 2 weeks to catch people who aren't in every day. Is there any way to begin the script with a check for one of the new themes/charts already pushed down, and have it not run if one is present? Any help on this would be greatly appreciated, as I'm pretty new to these batch files.
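
    One hedged approach: rather than testing for a specific theme file (whose name would have to be guessed), drop a marker file on the first successful run and exit early when it exists. A minimal batch sketch; the marker path is an assumption:

        @echo off
        rem skip everything if this script has already run for this user
        if exist "%APPDATA%\microsoft\templates\themes_updated.flag" goto :EOF

        rem ... the existing cd / rmdir / copy commands go here ...

        rem record that the update has been applied
        echo done> "%APPDATA%\microsoft\templates\themes_updated.flag"

    On XP, %APPDATA% expands to c:\documents and settings\%username%\application data, so this matches the paths already used above.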

  • Separate zone exceptions for each view in BIND

    - by Stefan M
    Problem: separate zones by query source network and return different records for LAN clients compared to WAN clients.

    I've implemented this at home on a small Alix router with BIND 9.4: one view called "lan" and one view called "wan". The "lan" view had just the root.hints file and one zone. The "wan" view had many other zones, including a copy of the one zone from the "lan" view, but with different records. Querying domain1.tld from the LAN would give me local records; querying domain1.tld from the WAN would give me external records; querying domain2.tld from the LAN would give me the same records as from the WAN, as it only existed in the WAN view.

    Now I'm trying to re-implement this on a larger scale, and suddenly my view is unable to query anything outside itself. This is natural according to the bind-users list, and they suggest I copy all my zones into my LAN view. I'm hoping someone here has a better solution, because that means I'll have to copy, and maintain, thousands of zone files in multiple views. This is unfeasible. My configuration at home resembles this:

        acl lanClients {
            192.168.22.0/24;
            127.0.0.1;
        };

        view "intranet" {
            match-clients { lanClients; };
            recursion yes;
            notify no;

            // Standard zones //
            zone "." {
                type hint;
                file "etc/root.hint";
            };

            zone "domain1.tld" {
                type master;
                file "intranet/domain1.tld";
            };
        };

        view "internet" {
            match-clients { !localnets; any; };
            recursion no;
            allow-transfer { slaveDNS; };
            include "master.zones";
        };

    Requests from the LAN for domain1.tld give local records, and requests from the WAN give remote records; that part works fine both at home and on the larger scale with BIND 9.7. The difference is that at home I have somehow managed to make my LAN get the remote records for the domains in master.zones, without specifying those zones as duplicates in the "intranet" view. Trying the same configuration on the larger scale with BIND 9.7, I get no results at all except for the zones specified in the view. What am I missing?

  • install grub on disk image

    - by Dima
    I have a disk image with 2 partitions:

    1. Partition 1 has a cramfs file system (read-only). This partition contains all the system files of the OS.
    2. Partition 2 has an ext3 file system. This partition holds only configuration files that may be changed.

    How can I install the GRUB1 boot loader in the MBR? I tried to copy the first 446 bytes of my hard disk and copy the GRUB files to the /boot directory on the 1st (cramfs) partition. I cannot use grub-install because I have a disk image and not the disk itself. Any ideas?
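
    grub-install wants a real device, but the GRUB legacy shell can map a plain file as a drive and embed itself in the image's MBR. A minimal sketch, assuming /boot/grub (stage1, stage2, e2fs_stage1_5) has been copied onto the ext3 partition; the image path is hypothetical. Note that GRUB legacy cannot read cramfs, so the stage files have to live on a filesystem it understands, which with this layout means the second (ext3) partition:

        grub --device-map=/dev/null <<'EOF'
        device (hd0) /path/to/disk.img
        root (hd0,1)
        setup (hd0)
        quit
        EOF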

  • Deploying website content via Subversion

    - by Johann
    We have recently set up a new development infrastructure and process for one of our clients. This involves the strict use of Subversion as a central source code repository. The svn repository contains a separate branch for code on the live system (/branches/live/). The repositories are used for PHP content (mainly WordPress blogs), but in future they may hold other ASP code as well. Bonus points for a solution that works more or less the same way with ASP code on Windows Server 2008 R2.

    We have two servers: one staging system and one live system. The staging system is updated regularly with the code of the trunk; the live system is updated manually. Each webroot on the servers is a working copy of either the trunk (staging system) or the live branch (live system). The current workflow is: develop on the dev's box - commit into the trunk - auto-deploy on the staging system - test on the staging system - merge into /branches/live/ - deploy manually on the live system.

    This works very well for one-way changes; however, we have some trouble with every WordPress (or plugin) update: the WP update process removes the directories and unpacks the archive of the new version. This removes the svn admin area as well, which produces a lot of errors. We could switch to SVN 1.7 with a single, global admin area, but this would only solve one part of the problem. Recently, we did the update via the WP GUI, restored the svn admin area, added/removed the files, and committed the changes to the trunk. After testing, we had to do basically the same thing on the live server (except the commit; we just reverted the changes and merged the new files from the staging system to the live system).

    I'm currently thinking of the following (a sketch follows below):

    1. The htdocs of each website is an svn export.
    2. Each website has an svn working copy beside the htdocs directory.
    3. A script "replays" the changes from htdocs into the working copy after a WP update (rsync'ing the changed files to the working copy, svn add'ing the new files, and svn delete'ing the deleted ones). The script would have to exclude some files (wp-config.php, uploads/temp directories, etc.).

    Are there better ways to do this? Unfortunately, a complete CI server is out of scope due to time and budget limitations.
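
    A minimal sketch of the replay script from point 3; paths and the exclude list are assumptions, and filenames with unusual characters would need more careful handling:

        #!/bin/bash
        # replay-wp-update.sh -- push changes from the live htdocs into the svn WC
        HTDOCS=/var/www/site/htdocs        # svn export, updated by WordPress
        WC=/var/www/site/wc                # working copy of the same branch

        # mirror htdocs into the working copy, leaving .svn metadata alone
        rsync -a --delete \
              --exclude='.svn' --exclude='wp-config.php' \
              --exclude='wp-content/uploads/' \
              "$HTDOCS/" "$WC/"

        cd "$WC"
        svn add --force . > /dev/null                             # schedule new files
        svn status | awk '/^!/ {print $2}' | xargs -r svn delete  # missing -> deleted
        svn commit -m "Replay WordPress update from htdocs"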

  • Opening DBF files in Oracle 10g

    - by nagaraju
    This is Nagaraju, from Hyderabad, India. I installed the Oracle 10g trial version on my system (E drive) and created one database with my name (database: nagaraju), in which I created the tables, procedures, functions, sequences, etc. for my project. Due to a sudden problem I formatted my machine's C drive, and now I am not able to open my database, but I need all the procedures and tables I created in it. I have now installed Oracle 10g again in another folder. How can I copy my old database into my new installation's database? Or can I copy the scripts of the procedures so that I can run them in the new database? I have all the data in the oradata folder, like the DBF files etc. Could you please help me with how to do that?

  • Giving a scanner-printer combo a zoom function when copying?

    - by ldigas
    You know how, every time you go to a photocopying shop, the photocopiers always have a neat zoom function (it can take whatever you give it and zoom it in or out, so your copy comes out smaller or larger). I got one of those neat 3-in-1 printer machines. It has a copy button on it, but it also has some software that comes with it (the model is an Epson SX115, to be exact). Apart from going into some photo manipulation application, is there some way (software) to give it that feature? So, in short, I need something that can scan a page, scale it to, let's say, a quarter of its size, and then print it out. Anyone know of anything like that?
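
    A hedged sketch of one software route, assuming a Linux box with SANE, ImageMagick, and CUPS installed (on Windows, the bundled Epson software would have to do the scaling instead); resolutions and sizes are assumptions:

        scanimage --resolution 300 > page.pnm    # scan the page via SANE
        convert page.pnm -resize 50% small.png   # 50% per side = a quarter of the area
        lp small.png                             # print on the default printer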

  • Multiple Copies of Windows Calculator

    - by Brian Boatright
    Just did a clean install of Windows 7 x64. I have a Microsoft Ergonomic Keyboard 4000 and use the calculator key a lot. Previously I could hit it and get multiple copies of Calculator to pop up; now it will only show one copy of Calculator. I tried adding a shortcut to the calculator app, but it has the same limitation. However, if I click the calculator icon, it will open a new one each time. How can I fix this so each time I press the calculator key it will open a new copy?

  • How can I install a custom (patched) PECL extension?

    - by JKS
    I'm trying to use the htscanner PECL extension on my CentOS 5 / PHP 5.2.6 machine, but there's a bug in the latest version where a newline character is added to the end of every php_value directive. This behavior causes my include_path and error_log values not to work. The bug and the patch are documented on the PECL site: http://pecl.php.net/bugs/bug.php?id=16891 I've downloaded the latest version, applied the patch, and re-compressed the package, but I can't get the PECL installer to accept it, or any local package for that matter. I've tried every variation of the pecl install syntax that I can think of, and the only times I'm able to get it to work, it downloads an online copy first and ignores the local copy. Can anyone recommend a method for installing a PECL extension from a local file? Thanks for your consideration.
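
    Two hedged options. pecl install does accept a path to a local tarball (pecl install ./htscanner-x.y.z.tgz), provided the package.xml inside survived the re-compression; if the installer keeps rejecting it, building the patched source directly with phpize sidesteps PECL packaging entirely. A minimal sketch; version numbers, patch level, and paths are assumptions:

        pecl download htscanner                  # fetch the official tarball
        tar xzf htscanner-*.tgz && cd htscanner-*
        patch -p0 < /path/to/newline-fix.patch   # the patch from the bug report
        phpize && ./configure && make && make install
        echo "extension=htscanner.so" > /etc/php.d/htscanner.ini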

  • Why does MTP work on one machine, but not another?

    - by bobmcn
    I have two Dell computers, a laptop and a desktop. I reinstalled Windows XP Home on both from the same CD, then installed Service Pack 3 and ran Windows Update. When I connect my Creative Zen media player to the desktop, the MTP software that is part of XP recognizes it, and I can copy files to and from it using Windows Explorer. But when I connect the same player with the same cable to the laptop, I can see it in Windows Explorer, but none of the folders that are visible via the desktop are available, and I can't copy anything to or from it. How can I get this working on the laptop?

  • Limit of a DVD drive

    - by user23950
    I have a Lite-On DVD drive, and I'm going to copy lots of files from maybe 40-70 DVDs. I've been using this drive for about 4-7 months now, and I have also burned lots of DVDs with it. I've copied the 2nd DVD, but the drive is making a sound, a sound that I do not hear frequently. Does it depend on the DVDs that I'm reading, or is my DVD drive getting old? How many DVDs do you think the drive can copy without threatening its health?

  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players?

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else.

    Most of the footage comes from a Sony handycam (one of those mini-DVD ones), so it isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded. I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25 fps, 720x576, RGBA-8bit, 16:9, interlaced bottom fields first. When I've finished the editing, I add a "fields to frames" effect (set to bottom first) to each video track. I render audio and video separately:

        Audio: AC3, 128 kbps
        Video: YUV4MPEG stream, video pipe settings:
            ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %

    Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label and combine them using cat when I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them:

        mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v

    I combine the audio and video files using ffmpeg:

        ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg

    Then I build the menus with spumux, create the DVD file system with dvdauthor, and finally write it to a DVD-R like this:

        nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally, the DVD flickered badly, so as suggested in a guide I added the "fields to frames" effect in Cinelerra. Now it doesn't flicker, but it has become jerky when there is lots of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far:

    - Removed "mpeg2video" from the Cinelerra video render pipe.
    - Removed +ilme from the render pipe.
    - Removed +ildct from the render pipe.
    - Removed +ilme from the audio/video rejoin command.
    - Removed +ildct from the audio/video rejoin command.
    - Added -alt to the render pipe.
    - Added -alt to the audio/video rejoin command.
    - Tried with and without the "frames to fields" effect in Cinelerra.

    ...and various combinations of the above. I've also tried this: change the Cinelerra fps to 50, use "fields to frames" (instead of "frames to fields"), render to an intermediate QTforlinux JPEG video stream, re-import that back into Cinelerra, add a "frames to fields" effect, and then render that output as normal (at 25 fps). I still have the same problem.

    Has anyone experienced this jerky playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)

  • Linux Live CD for old computer

    - by Joel Coehoorn
    I have a Pentium II (that's right, Pentium II) with a scant 200 MB of RAM. This was a high-end workstation in its day. The machine currently runs DOS on a RAID array, and I need to pull some data from it. I figure my best chance at this is to use a Linux live CD to copy the data to one of our Active Directory network shares (there is a network card in the machine). Unfortunately, my Linux skills are abysmal, so I'm not sure where to get started:

    1. Where should I look to find a Linux CD that will run well on such an old system?
    2. Since I'm likely going to be command-line only, what do I need to do to configure the network card and mount the network share via the command line?

    Bonus points: the exact syntax needed to copy and convert the entire volume for use in VMware Server 2.0, but really, just copying all the data should be enough.
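
    A minimal sketch of step 2 from a live-CD root shell; the interface name, server, share, and user are assumptions:

        dhclient eth0                               # get an address via DHCP
        mkdir -p /mnt/share
        mount -t cifs //fileserver/backup /mnt/share \
              -o username=someuser,domain=EXAMPLE   # prompts for the password
        cp -a /mnt/dosdisk/. /mnt/share/pii-backup/ # copy everything across

    On very old live CDs the cifs mount helper may be missing, in which case smbmount (from Samba) is the usual fallback.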

  • How do I back up my Windows partition from an Ubuntu live CD?

    - by lalli
    My Windows partition (C:) is corrupt. I'm booting from an Ubuntu live CD and trying to copy all the files from C: to my external drive, but the system expands all of the links, producing a projected copy size of 1.8 TB (my external drive is just 1 TB, and the data on C: is around 700 MB). Then I looked at dd and other backup utilities, but for everything I looked into, I couldn't figure out whether or not the image would be readable in Windows through any other app. Has anyone else tried to back up data from a corrupted Windows installation using Ubuntu?
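
    Two hedged alternatives that avoid the link-expansion problem (device and mount names are assumptions). rsync does not follow symlinks or junctions by default, so a plain file copy stays close to the real data size; ntfsclone (part of the ntfsprogs/ntfs-3g tools on the Ubuntu live CD) images the partition at the filesystem level, saving only used blocks, though the image is meant to be restored to a partition later rather than browsed directly in Windows:

        # file-level copy, links preserved as links rather than expanded
        rsync -a /media/windows/ /media/external/win-backup/

        # partition-level image, used blocks only, compressed
        sudo ntfsclone --save-image -o - /dev/sda1 | gzip > /media/external/win-c.img.gz
        # restore later with:
        #   gunzip -c win-c.img.gz | sudo ntfsclone --restore-image --overwrite /dev/sda1 -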

  • Any way to back up nginx before recompiling

    - by JM4
    I am looking to install the HttpGeoipModule for nginx, but I've learned that I have to recompile the whole thing from source in order to do so. I have a new Media Temple DV 4.0 server that comes with nginx 1.3.0 stock, and I have never had to recompile from source before, so I'm a bit nervous about making changes without being able to revert to a previous state in the event something messes up (that, and the fact that it affects a live server, so I have no idea what the downtime would be). My plan was to list the existing compile options (nginx -V shows them all, including the modules already compiled in), then rebuild from source with the options copied from above plus the ./configure --with-http_geoip_module reference. Is it possible to back up the existing nginx configuration in the event something goes wrong?
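
    A minimal sketch of the backup-and-swap approach; paths are assumptions, and the placeholder must be replaced with the full flag set reported by nginx -V:

        # save the current binary and the whole config tree first
        cp /usr/sbin/nginx /usr/sbin/nginx.bak
        cp -r /etc/nginx /etc/nginx.bak

        # rebuild with the existing flags plus GeoIP
        ./configure <flags from nginx -V> --with-http_geoip_module
        make

        # swap the binary in, test the config, restart
        cp objs/nginx /usr/sbin/nginx
        nginx -t && service nginx restart

    Reverting is then just copying the .bak files back and restarting.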

  • SSH not working after Restoring Running-Config to a Replacement Cisco Router

    - by Kyle Brandt
    One of my Cisco routers died over the weekend. Cisco sent the replacement, and I restored the config using copy tftp: running-config. Everything seems to work fine, but I can no longer ssh into the router (I can telnet). The connection is refused, so it seems it isn't listening on port 22. I had previously backed up the config by just doing ssh router 'show run' > backup_config from my workstation. So: is there anything wrong with my method of backup vs. copy running-config tftp:? I know I haven't given any debug information, but is there something typical I need to do to get SSH working?
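
    For what it's worth, RSA keys are not stored in the running config, so restoring a config onto a replacement chassis leaves the SSH server with no key pair, which matches the refused-connection symptom. A minimal sketch of regenerating one on the new router (hostname and domain values are assumptions; both must be set before the key can be generated):

        conf t
         hostname myrouter
         ip domain-name example.com
         crypto key generate rsa modulus 2048
         ip ssh version 2
        end
        write memory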

  • How can I get keyboard shortcuts for characters in Character Map that have no Alt equivalent listed?

    - by Kat
    Does anyone know how to get a complete listing of Character Map equivalents? For example, look in the Windows Character Map under Arial for ¼: it says you can type Alt+0188. But some things do not have an Alt equivalent listed; for example, ቔ only gives its Unicode code point of U+1254 and no "Alt number". Obviously you can just copy and paste, but is there a way to find an Alt equivalent for that and other characters, so one doesn't need to copy and paste each time? Or any other workaround suggestions? Thanks!
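
    One well-known workaround (hedged: it works in RichEdit-based applications such as WordPad, not everywhere) is to enable hexadecimal Alt codes in the registry, after which any character can be typed from its code point instead of needing a code-page "Alt number":

        reg add "HKCU\Control Panel\Input Method" /v EnableHexNumpad /t REG_SZ /d 1

    After logging off and back on, hold Alt, press + on the numeric keypad, type the hex code point (1254 for the example above), and release Alt.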

  • Get a file from a load balanced server in Windows Server

    - by Leandro
    I have a load-balanced server in a production environment for my application, running Windows Server 2008 R2. A web application creates and saves a file into a folder under the web root, and I need to create a job that copies this file to another server. The main idea is that a file watcher checks for the file and then copies it instantly. But how can I know which server the file is on? Please avoid "why don't you..." answers; I'd like a direct answer, if there is one.

  • What is the easiest way to get perfmon counter names into a text file?

    - by Bill Paetzke
    I'd like to create a settings file for my logman command, and I expect it to hold lots of perfmon counters. Is there an easy way to get the exact text of all the perfmon counters? The only thing I thought of was to create a perfmon counter log through the GUI and then export the list of selected counters, but I don't see an export option! I guess I could manually copy what I see on the screen, but that seems inefficient, and I'm going to be dealing with tens of counters. Maybe there is a list somewhere? That'd be easier to copy and paste from.
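
    A minimal sketch using typeperf, which ships with Windows and prints installed counter paths in the same form logman settings files expect (the object name is an assumption):

        rem list every installed counter path
        typeperf -q > all_counters.txt

        rem list a single object's counters, expanded per instance
        typeperf -qx "PhysicalDisk" > disk_counters.txt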

  • Mirroring a Linux server to an external USB hard drive

    - by DuPie
    My google-fu must be failing; I haven't been able to find a good solution for the following:

    - numerous Linux servers on commodity hardware
    - trying to make a recovery mirror copy onto external hard drives
    - the external hard drives are smaller than the source hard drives, but larger than the data
    - the external drives are connected via USB 2.0 (slow)
    - the servers hold from 20 GB to 400 GB of data
    - the servers are remote, so hands-on access is a pain
    - boot files need to be copied too
    - the external drives are currently empty

    Basically, I'm looking for a way to use a ghosting solution from inside a running Linux server to an external hard drive, without booting a CD, etc. The rsync/cpio solutions I've looked at don't work great with grub, /dev, /proc, etc. I understand that since the system isn't offline it won't be a perfect "mirror" image as files change, but that's OK. Are there any free or commercial products that would work?
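
    A minimal sketch of the rsync route with the pseudo-filesystems excluded (the mount point and exclude list are assumptions); GRUB itself is better reinstalled onto the copy from a rescue shell than mirrored file-by-file:

        rsync -aAXH --numeric-ids --delete \
              --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
              / /mnt/usb-backup/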
