Search Results

Search found 7592 results on 304 pages for 'dev c'.


  • Backing up data stored on Amazon S3

    - by Fiver
    I have an EC2 instance running a web server that stores users' uploaded files to S3. The files are written once and never change, but are retrieved occasionally by the users. We will likely accumulate somewhere around 200-500GB of data per year. We would like to ensure this data is safe, particularly from accidental deletions, and would like to be able to restore files that were deleted regardless of the reason.

    I have read about the versioning feature for S3 buckets, but I cannot seem to find whether recovery is possible for files with no modification history. See the AWS docs here on versioning: http://docs.aws.amazon.com/AmazonS3/latest/dev/ObjectVersioning.html In those examples, they don't show the scenario where data is uploaded but never modified, and then deleted. Are files deleted in this scenario recoverable?

    Then, we thought we might just back up the S3 files to Glacier using object lifecycle management: http://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html But it seems this will not work for us, as the file object is not copied to Glacier but moved to Glacier (more accurately, it seems it is an object attribute that is changed, but anyway...). So it seems there is no direct way to back up S3 data, and transferring the data from S3 to local servers may be time-consuming and may incur significant transfer costs over time.

    Finally, we thought we would create a new bucket every month to serve as a monthly full backup, and copy the original bucket's data to the new one on Day 1. Then, using something like duplicity (http://duplicity.nongnu.org/), we would synchronize the backup bucket every night. At the end of the month we would put the backup bucket's contents in Glacier storage, and create a new backup bucket using a new, current copy of the original bucket... and repeat this process. This seems like it would work and minimize the storage / transfer costs, but I'm not sure if duplicity allows bucket-to-bucket transfers directly, without bringing data down to the controlling client first.

    So, I guess there are a couple of questions here. First, does S3 versioning allow recovery of files that were never modified? Is there some way to "copy" files from S3 to Glacier that I have missed? Can duplicity or any other tool transfer files between S3 buckets directly to avoid transfer costs? Finally, am I way off the mark in my approach to backing up S3 data? Thanks in advance for any insight you could provide!
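
    A minimal sketch of the versioning-plus-backup-bucket approach using the AWS CLI; the bucket names are placeholders and credentials are assumed to be configured already. Note that a bucket-to-bucket "aws s3 sync" performs server-side copies, so the data should not pass through the client:

        # Enable versioning so a DELETE only adds a delete marker and the
        # existing version of the object remains recoverable:
        aws s3api put-bucket-versioning --bucket my-uploads \
            --versioning-configuration Status=Enabled

        # Server-side copy of new objects into a backup bucket (run nightly):
        aws s3 sync s3://my-uploads s3://my-uploads-backup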


  • Windows XP Professional Version 2002 SP3 restarts when I try to Hibernate

    - by dario_ramos
    I googled this and tried some solutions, but nothing seems to work. In the past, Hibernate worked fine. Someone told me that this can be caused by specific hardware, and I have lots of that, because this is a dev machine which we use to develop modules that interact with lots of different hardware. Moreover, this machine also uses RAID, which might have something to do with it. Is there any way to troubleshoot this by looking at some log file or using some tool?
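
    One hedged way to get something to look at, assuming the restart is actually a crash during hibernation (that diagnosis is an assumption, not something the question confirms): turn off automatic restart on failure so the stop error is displayed, then check for crash dumps. The commands below are standard XP tools:

        REM Show the blue screen instead of silently rebooting on a crash:
        wmic recoveros set AutoReboot = False
        REM After the next failed hibernate, look for a minidump to analyze:
        dir %SystemRoot%\Minidump
        REM The System log (eventvwr.msc) should also record the bugcheck.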


  • What is the garbage text that is being printed by wvdial in the terminal?

    - by Hrishi
    When I dial using wvdial, sometimes it prints some garbage text into the terminal. This is not happening every time, but in the garbage text I can see some readable strings, which are often IRC logs (from xchat) or GET requests from the browser. One of my friends told me that this is probably something it's reading from /dev/random for random entropy, but I couldn't find any supporting information. What is this text, and why is it being printed to the terminal? See the picture below for an example:


  • RHEL json gem installation requires make?

    - by Salahuddin559
    When I try to install the json gem (gem install json), at first it fails because of some dev package issue. After fixing that, it fails saying "sh: make: command not found" and "ERROR: Error installing json: ERROR: Failed to build gem native extension.". Why is it failing on make? Note this is not a Mac; this is RHEL 5 (4 or 5, not sure). Why is it not able to "build gem native extension"?
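
    A hedged sketch of installing the usual build toolchain first; the package names are the common ones, but they may vary by RHEL release and enabled repositories:

        # json is a native-extension gem, so it needs make, a C compiler,
        # and the Ruby headers at install time:
        yum install -y make gcc ruby-devel
        gem install json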


  • Use bootable VHD with VirtualBox

    - by user18151
    Hello, I want to create a bootable Windows 7 VHD using the steps mentioned at: http://www.microsoft.com/downloads/details.aspx?FamilyID=80ede31d-3509-407b-a896-0beea8705589&displaylang=en However, I wanted to know if I will be able to access the VHD using VirtualBox too. I intend to install VS2008 in the VM and use it in VirtualBox when doing quick work, and on native hardware when doing a lot of work. I don't want to mess up my actual Win7 installation with VS2008 dev work.
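
    VirtualBox can use VHD images as virtual disks, so in principle the same file can back both the native boot entry and a VM. A minimal sketch of attaching it from the host's command line; the VM name, controller name, and path are placeholders:

        VBoxManage storageattach "Win7-dev" --storagectl "SATA Controller" --port 0 --device 0 --type hdd --medium C:\VHDs\win7.vhd

    One caveat: booting the same Windows installation both natively and inside a VM tends to trigger driver re-detection and possibly re-activation, since the hardware Windows sees changes each time.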


  • Shouldn't emesene themes change the emesene indicator icon?

    - by D Connors
    As it stands right now, I can't get the emesene icon in the indicator applet (or notification area, or whatever it's named) to change. No matter what icon theme I choose, even after installing new themes, the icon is always the default one. Is that standard behavior, or is this a bug? I'm running the latest version of emesene from their PPA (1.6-dev) on Lucid Lynx.


  • IIS7 - Basic Authentication Module missing?

    - by FlySwat
    I'd like to use basic HTTP authentication to keep people out of our dev site instance, since it is unfortunately exposed to the wild internet. However, in IIS7, the only authentication modes listed are Forms, Anonymous and Impersonation. Where did the "Basic Authentication" module go, and how can I get it back?
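
    Basic Authentication is an optional IIS7 component and doesn't appear in the list until it is installed. A hedged sketch of enabling it from an elevated command prompt on Vista/Server 2008 (on Server editions it can also be added as a Role Service under Web Server in Server Manager):

        start /w pkgmgr /iu:IIS-BasicAuthentication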


  • Mount Windows share on Linux boot

    - by Delameko
    I'm running VirtualBox on Windows, with Ubuntu 10.04 installed as a VM. Whenever I log in I have to run the following command to mount my shared Windows web dev folder:

        sudo mount.vboxsf web_apps /mnt/web_apps

    Where can I put this line (minus the sudo) so that it runs once when Linux boots up? I'm guessing there must be a root .profile or .login script that runs at some point?
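
    One hedged option is an /etc/fstab entry, which mounts the share at boot rather than at login; it assumes the Guest Additions vboxsf module is available when fstab is processed:

        # /etc/fstab: share name, mount point, filesystem type
        web_apps   /mnt/web_apps   vboxsf   defaults   0   0

    Alternatively, the same mount command (without sudo) can go in /etc/rc.local, which runs once as root at the end of boot.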


  • Advice on moving a machine room to a new location?

    - by MikeJ
    Our company is moving to new offices in a couple of months, and I am responsible for looking after the move of the development servers in the company. Most of the dev equipment is in five 42U cabinets, plus a rack for switching/routing equipment. How do most people do this sort of thing? Move the cabinets whole, or extract the individual components and move the racks empty? Any advice on prep and shutdown before the move would be welcome.


  • How to tune TCP TIME_WAIT timeout on Solaris?

    - by Hongli Lai
    I'm trying to change the TCP TIME_WAIT timeout on Solaris. According to some Google results I need to run this command:

        ndd -set /dev/tcp tcp_time_wait_interval 60000

    However I get:

        operation failed: Not owner

    What am I doing wrong? I'm already running ndd as root. Is there another way to tune TIME_WAIT?
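
    A hedged first step is to confirm the parameter name by reading it, which needs no special privileges. Also worth noting as an assumption: on Solaris 10, "Not owner" even as root is a classic symptom of running ndd inside a non-global (shared-IP) zone, where TCP tunables can only be set from the global zone:

        # Read the current value to confirm the parameter exists:
        ndd -get /dev/tcp tcp_time_wait_interval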


  • HTTP benchmarking?

    - by Sam Williams
    I'm running varnish-nginx (php-fpm) and I'm using ab, but it keeps messing up:

        [root@localhost src]# ab -k -n 100000 -c 750 http://192.168.135.12/index.php
        This is ApacheBench, Version 2.0.40-dev <$Revision: 1.146 $> apache-2.0
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Copyright 2006 The Apache Software Foundation, http://www.apache.org/

        Benchmarking 192.168.135.12 (be patient)
        apr_socket_recv: Connection reset by peer (104)

    Is there anything else I can use? Or am I doing it wrong?
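
    siege and httperf are common alternatives; a rough siege equivalent of the ab run above (same concurrency, with a time limit instead of a request count) would be something like:

        siege -c 750 -t 60S http://192.168.135.12/index.php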


  • Nagios shell script cannot be executed

    - by MeinAccount
    I'm trying to monitor GitLab with nagios. I've created the command definition and shell script below, but when checking the service I'm receiving the following e-mail. How can I solve this? The file is executable.

        [...] nagios : 3 incorrect password attempts ; TTY=unknown ; PWD=/ ; USER=git ; COMMAND=/bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh

    Command definition:

        define command {
            command_name custom_check_gitlab
            command_line /var/lib/nagios/custom_plugins/check_gitlab.sh
        }

    Shell script:

        #! /bin/sh
        # [...]
        RAILS_ENV="production"

        # Script variable names should be lower-case not to conflict with
        # internal /bin/sh variables such as PATH, EDITOR or SHELL.
        app_root="/home/git/gitlab"
        app_user="git"
        unicorn_conf="$app_root/config/unicorn.rb"
        pid_path="$app_root/tmp/pids"
        socket_path="$app_root/tmp/sockets"
        web_server_pid_path="$pid_path/unicorn.pid"
        sidekiq_pid_path="$pid_path/sidekiq.pid"

        ### Here ends user configuration ###

        # Switch to the app_user if it is not he/she who is running the script.
        if [ "$USER" != "$app_user" ]; then
            sudo -u "$app_user" -H -i $0 "$@"; exit;
        fi

        # Switch to the gitlab path, if it fails exit with an error.
        if ! cd "$app_root" ; then
            echo "Failed to cd into $app_root, exiting!"; exit 1
        fi

        ### Init Script functions

        check_pids(){
            if ! mkdir -p "$pid_path"; then
                echo "Could not create the path $pid_path needed to store the pids."
                exit 1
            fi
            # If there exists a file which should hold the value of the Unicorn pid: read it.
            if [ -f "$web_server_pid_path" ]; then
                wpid=$(cat "$web_server_pid_path")
            else
                wpid=0
            fi
            if [ -f "$sidekiq_pid_path" ]; then
                spid=$(cat "$sidekiq_pid_path")
            else
                spid=0
            fi
        }

        # Checks whether the different parts of the service are already running or not.
        check_status(){
            check_pids
            # If the web server is running kill -0 $wpid returns true, or rather 0.
            # Checks of *_status should only check for == 0 or != 0, never anything else.
            if [ $wpid -ne 0 ]; then
                kill -0 "$wpid" 2>/dev/null
                web_status="$?"
            else
                web_status="-1"
            fi
            if [ $spid -ne 0 ]; then
                kill -0 "$spid" 2>/dev/null
                sidekiq_status="$?"
            else
                sidekiq_status="-1"
            fi
        }

        check_pids
        check_status

        if [ "$web_status" != "0" -a "$sidekiq_status" != "0" ]; then
            echo "GitLab is not running."
            exit 2
        fi
        if [ "$web_status" != "0" ]; then
            printf "The GitLab Unicorn webserver is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$sidekiq_status" != "0" ]; then
            printf "The GitLab Sidekiq job dispatcher is \033[31mnot running\033[0m.\n"
            exit 1
        fi
        if [ "$web_status" = "0" -a "$sidekiq_status" = "0" ]; then
            printf "GitLab and all it's components are \033[32mup and running\033[0m.\n"
            exit 0
        fi
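
    The sudo log points at the likely problem: when the plugin runs as the nagios user, the script re-executes itself with sudo -u git, and sudo prompts for a password that Nagios can never supply. A hedged sketch of a sudoers rule allowing that hop without a password; it matches the "/bin/bash -c ..." form that actually appears in the log, and it assumes the check runs as the nagios user:

        # Added via visudo, e.g. in /etc/sudoers.d/nagios-gitlab:
        nagios ALL=(git) NOPASSWD: /bin/bash -c /var/lib/nagios/custom_plugins/check_gitlab.sh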


  • Routing and ARP

    - by adolgarev
    If I have:

        > ip ro
        192.168.14.0/24 dev eth0

    another host can obtain my MAC address. But if I flush the routing info:

        > ip ro flush table main

    ARP resolution doesn't work. Broadcast packets "Who has 192.168.14.149" reach eth0, but the OS (Linux) doesn't respond, even though eth0 has the address 192.168.14.149. What connection exists between routing and ARP resolution?


  • Cross-platform restart of Apache

    - by l0b0
    I'd like to have a single command that'll restart Apache on any *nix OS. Currently I'm working with Ubuntu, which has:

        /usr/sbin/apache2ctl
        /usr/sbin/service
        no apachectl
        no httpd

    and Scientific Linux CERN 5, which has:

        /usr/sbin/apachectl
        /etc/init.d/httpd
        no apache2ctl
        no service

    I'd like to avoid using a hack like:

        which service 2>/dev/null || which /etc/init.d/httpd
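
    A minimal sketch of a wrapper that probes for whichever control command exists, covering the two layouts listed above; the function name and probe order are my own choices, not a standard:

        restart_apache() {
            # Prefer the native control scripts, then the service wrapper,
            # then fall back to the init scripts directly.
            if command -v apachectl >/dev/null 2>&1; then
                apachectl restart
            elif command -v apache2ctl >/dev/null 2>&1; then
                apache2ctl restart
            elif command -v service >/dev/null 2>&1; then
                service httpd restart 2>/dev/null || service apache2 restart
            elif [ -x /etc/init.d/httpd ]; then
                /etc/init.d/httpd restart
            elif [ -x /etc/init.d/apache2 ]; then
                /etc/init.d/apache2 restart
            else
                echo "no known Apache control command found" >&2
                return 1
            fi
        }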


  • Why is the PDF format used?

    - by dan_vitch
    I will admit that I am new to the tech/dev field. It seems to be a trend that every time I have to work with PDFs, a part of me dies. Why is this format as ubiquitous as it seems to be? Is it just non-tech people who prefer PDFs?


  • Could not upload .htaccess

    - by syalam
    I am using Springloops to automatically take my SVN repo and deploy onto my server. I am getting the following error:

        Could not upload .htaccess
        Could not upload .htaccess using BINARY transfer
        ----------------------------------------------------
        Connecting to dev.convrrt.com
        Logging in as convrrt
        Entering destination directory ~/
        Entering passive mode
        REVISION: 1 -> 30
        Getting changes
        Deleting files
        Removing directories
        Creating directories and files
        Extracting file: .htaccess...OK
        Uploading file: .htaccess [644]
        R: interrupted

    How can I diagnose this?


  • How to fix grub after moving root partition?

    - by Grzenio
    Hi, Because I am using one of the new WD disks, I am trying to align my root partition with the real sectors, as described here: http://community.wdc.com/t5/Desktop/Problem-with-WD-Advanced-Format-drive-in-LINUX-WD15EARS/m-p/10920#M631 So I copied all files to a temp location, deleted my partition (/dev/sda3), recreated it a few cylinders later (same name) and copied the files to the newly created partition. But now when I try to boot, I get my old grub menu, but after selecting my kernel version it hangs... Any idea how I can fix it?
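
    One likely culprit, offered as an assumption: recreating the partition gave it a new filesystem UUID, so any fstab or GRUB entries referencing the old UUID no longer resolve. A hedged sketch of checking and repairing from a live CD; the device names come from the question, and the menu.lst path assumes GRUB legacy:

        # Find the new UUID of the recreated partition:
        blkid /dev/sda3
        # Mount it and look for stale UUID references:
        mount /dev/sda3 /mnt
        grep UUID /mnt/etc/fstab /mnt/boot/grub/menu.lst
        # After updating those references, reinstall GRUB to the MBR:
        grub-install --root-directory=/mnt /dev/sda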


  • Tomato VPN connect but cannot ping LAN IP

    - by David Hamilton
    I've set up TomatoVPN using these settings on the server:

        TAP
        UDP
        1194
        Client address pool 10.10.9.1 - 10.10.9.254

    LAN clients are configured with 10.10.10.x. I can connect from a remote client, but pinging anything in 10.10.10.x results in a "Destination Host Unreachable" error. Here's my client configuration script:

        remote x.x.x.x 1194
        client
        dev tap0
        proto udp
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        float
        ca ca.crt
        cert client1.crt
        key client1.key
        ns-cert-type server

    Any suggestions as to how I can make this properly bridge the two networks?
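
    One hedged observation: in a TAP (bridged) setup, clients normally need addresses inside the LAN subnet itself, and a 10.10.9.x pool in front of a 10.10.10.x LAN leaves the clients on an unrouted island. In raw OpenVPN terms the fix would look like the directive below, with the pool moved into the LAN subnet (the exact range is an assumption; keep it outside the LAN's DHCP scope):

        # gateway, netmask, then the first and last pool address:
        server-bridge 10.10.10.1 255.255.255.0 10.10.10.200 10.10.10.254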


  • How can I fix problems with interlaced video jerking/flickering when played back on DVD players?

    - by Simon P Stevens
    I'm trying to make a DVD, and the final DVD jerks when played on standalone DVD players. It seems to play fine on PCs. I think the problem may be to do with interlacing settings when rendering the final output, but I'll outline the whole editing process I have followed in case I've made a mistake somewhere else.

    Most of the footage comes from a Sony handycam (one of those mini DVD ones) so isn't great quality. It was set to "high quality" (haha) and 16:9 aspect ratio when it was recorded. I copy the files directly from the mini DVDs onto the hard drive and import them into Cinelerra. In Cinelerra I set the format to 25fps, 720x576, RGBA-8bit, 16:9, interlaced bottom fields first. When I've finished the editing, I add a Fields to frames effect (set to bottom first) to each video track. I render audio and video separately:

        Audio: AC3, 128kbps
        Video: YUV4MPEG stream, video pipe settings:
            ffmpeg -f yuv4mpegpipe -i - -y -target dvd -flags +ilme+ildct mpeg2video %

    Cinelerra often crashes during the rendering, so I set it to generate a new video file at each label, and combine them using cat when I've got a successful render of each one. Once I've combined them, I use mencoder to re-index them:

        mencoder -forceidx -oac copy -ovc copy merged.m2v -o mergedReIndexed.m2v

    I combine the audio and video files using ffmpeg:

        ffmpeg -i AudioFile.ac3 -i VideoFile.m2v -target dvd -flags +ilme+ildct FinalMovie.mpg

    Then I build the menus with spumux, create the DVD file system with dvdauthor, and finally write it to a DVD-R like this:

        nice -n -20 growisofs -dvd-compat -speed=2 -Z /dev/dvd -dvd-video -V VIDEO ./ && eject /dev/dvd

    Originally the DVD flickered badly, so as suggested in a guide I added the fields to frames effect in Cinelerra. Now it doesn't "flicker", but it has become "jerky" when there is lots of motion, particularly when the camera is moving, so the whole background moves. This is what I've tried so far:

        - Removed "mpeg2video" from the Cinelerra video render pipe.
        - Removed +ilme from the render pipe.
        - Removed +ildct from the render pipe.
        - Removed +ilme from the audio/video rejoin command.
        - Removed +ildct from the audio/video rejoin command.
        - Added -alt to the render pipe.
        - Added -alt to the audio/video rejoin command.
        - Tried with and without the frames to fields effect in Cinelerra.
        - Various combinations of the above.

    I've also tried this: change the Cinelerra fps to 50, use fields to frames (instead of frames to fields), render to an intermediate QTforlinux JPEG video stream, re-import that back into Cinelerra, add a frames to fields effect, and then render that output as normal (at 25fps). I still have the same problem.

    Has anyone experienced this "jerking" playback before? Can anyone give any suggestions on how to fix it? (Like I say, it plays back fine on a PC, but not on any of the standalone players I've tried.)


  • Linux filesystem with inodes close together on the disk

    - by pts
    I'd like to make the ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use, and how should I configure it?

    As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek.

    Here are some solutions I had in mind, none of which I am satisfied with:

        - Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

        - Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

        - Use find /media/myfs on ext2, ext3 or ext4, instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

        - Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count more than 1, but that's not a problem, since I have only a few dozen such files in my use case.

        - Adjust some settings in /proc or sysctl so that inodes are locked to system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? I don't like this idea, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

        - Use a filesystem which has an online defragmentation tool, which can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device fetched into the kernel in-memory cache without seeking, and then the stat(2) operations would be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?

