Search Results

Search found 1094 results on 44 pages for 'snapshot'.

Page 7 of 44

  • Replication Services as ETL extraction tool

    - by jorg
    In my last blog post I explained the principles of Replication Services and the possibilities it offers in a BI environment. One of the possibilities I described was the use of snapshot replication as an ETL extraction tool: "Snapshot Replication can also be useful in BI environments; if you don't need a near real-time copy of the database, you can choose to use this form of replication. Besides being an alternative to Transactional Replication, it can be used to stage data so it can be transformed and moved into the data warehousing environment afterwards. In many solutions I have seen developers create multiple SSIS packages that simply copy data from one or more source systems to a staging database that serves as the source for the ETL process. The creation of these packages takes a lot of (boring) time, while Replication Services can do the same in minutes. It is possible to filter out columns and/or records and it can even apply schema changes automatically, so I think it offers enough features here. I don't know how the performance will be and whether it really works as well for this purpose as I expect, but I want to try this out soon!"

    Well, I have tried it out and I must say it worked well. I was able to let Replication Services do the work in a fraction of the time it would have cost me to do the same in SSIS. What I did was the following: configure snapshot replication for some AdventureWorks tables, which was quite simple and straightforward, and create an SSIS package that executes the snapshot replication on demand and waits for its completion. This is something that you can't do with out-of-the-box functionality. While configuring the snapshot replication, two SQL Agent jobs are created: one for the creation of the snapshot and one for the distribution of the snapshot. Unfortunately these jobs are asynchronous, which means that when you execute them they immediately report back whether the job started successfully or not; they do not wait for completion and report the result afterwards. So I had to create an SSIS package that executes the jobs and waits for their completion before the rest of the ETL process continues. Fortunately I was able to create the SSIS package with the desired functionality. I have made a step-by-step guide that will help you configure the snapshot replication and I have uploaded the SSIS package you need to execute it.

    Configure snapshot replication. The first step is to create a publication on the database you want to replicate. Connect to SQL Server Management Studio, right-click Replication and choose New Publication… The New Publication Wizard appears; click Next. Choose your "source" database and click Next, then choose Snapshot publication and click Next. You can now select the tables and other objects that you want to publish: expand Tables and select the tables that are needed in your ETL process. In the next screen you can add filters on the selected tables, which can be very useful; think about selecting only the last x days of data, for example. It's possible to filter out rows and/or columns, but in this example I did not apply any filters. Schedule the Snapshot Agent to run at a desired time; this creates a SQL Agent job which we need to execute from an SSIS package later on. Next you need to set the security settings for the Snapshot Agent, so click the Security Settings button. In this example I ran the Agent under the SQL Server Agent service account. This is not recommended as a security best practice.
    Fortunately there is an excellent article on TechNet which tells you exactly how to set up the security for replication services. Read it here and make sure you follow the guidelines! On the next screen choose to create the publication at the end of the wizard, give the publication a name (SnapshotTest) and complete the wizard. The publication is created and the articles (the tables, in this case) are added.

    Now that the publication is created successfully, it's time to create a new subscription for it. Expand the Replication folder in SSMS, right-click Local Subscriptions and choose New Subscriptions. The New Subscription Wizard appears. Select the publisher on which you just created your publication and select the database and publication (SnapshotTest). You can now choose where the Distribution Agent should run. If it runs at the distributor (push subscriptions) it causes extra processing overhead; if you use a separate server for your ETL process and databases, choose to run each agent at its subscriber (pull subscriptions) to reduce the processing overhead at the distributor. Of course we need a database for the subscription, and fortunately the wizard can create it for you: choose New database, give the database the desired name, set the desired options and click OK. You can now add multiple SQL Server subscribers, which is not necessary in this case but can be very useful. You now need to set the security settings for the Distribution Agent, so click the … button. Again, in this example I ran the Agent under the SQL Server Agent service account; read the security best practices here. Click Next. Make sure you create a synchronization job schedule again; this job is also needed by the SSIS package later on. Initialize the subscription at first synchronization, select the first box to create the subscription when finishing this wizard, and complete the wizard by clicking Finish. The subscription will be created. In SSMS you see that a new database, the subscriber, has been created. There are no tables or other objects available in it yet because the replication jobs have not run yet. Now expand the SQL Server Agent, go to Jobs and search for the job that creates the snapshot; rename this job to "CreateSnapshot". Then search for the job that distributes the snapshot and rename it to "DistributeSnapshot".

    Create an SSIS package that executes the snapshot replication. We now need an SSIS package that will take care of the execution of both jobs. The CreateSnapshot job needs to execute and finish before the DistributeSnapshot job runs, and after the DistributeSnapshot job has started the package needs to wait until it is finished before the package execution completes. The Execute SQL Server Agent Job Task is designed to execute SQL Agent jobs from SSIS. Unfortunately this SSIS task only executes the job and reports back whether the job started successfully or not; it does not report whether the job actually completed with success or failure, because these jobs are asynchronous. The SSIS package I've created does the following: it runs the CreateSnapshot job, it checks every 5 seconds with a For Loop whether the job has completed, when the CreateSnapshot job is completed it starts the DistributeSnapshot job, and again it waits until the snapshot is delivered before the package finishes successfully. Quite simple, and the package is ready to use as a standalone extract mechanism.
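    The same start-and-wait pattern the package implements can be sketched outside SSIS as well. Below is a minimal illustration (bash calling sqlcmd; the server name, the trusted-connection login and the renamed job names are assumptions) that starts the CreateSnapshot job and polls msdb every 5 seconds until it has finished, which is essentially what the package's For Loop does:

        #!/bin/bash
        # Minimal sketch: start a SQL Agent job and wait for it to finish,
        # mirroring the For Loop in the SSIS package described above.
        # Server and job names are assumptions - substitute your own.
        SERVER="localhost"          # assumed SQL Server instance
        JOB="CreateSnapshot"        # the renamed Snapshot Agent job

        sqlcmd -S "$SERVER" -E -Q "EXEC msdb.dbo.sp_start_job @job_name = N'$JOB'"

        # Poll every 5 seconds until no active execution of the job remains.
        while true; do
          running=$(sqlcmd -S "$SERVER" -E -h -1 -W -Q "SET NOCOUNT ON; \
            SELECT COUNT(*) FROM msdb.dbo.sysjobactivity a \
            JOIN msdb.dbo.sysjobs j ON j.job_id = a.job_id \
            WHERE j.name = N'$JOB' \
              AND a.start_execution_date IS NOT NULL \
              AND a.stop_execution_date IS NULL;")
          [ "$running" -eq 0 ] && break
          sleep 5
        done
        echo "$JOB completed; DistributeSnapshot can be started and watched the same way."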
    After executing the package the replicated tables are added to the subscriber database and are filled with data. Download the SSIS package here (SSIS 2008).

    Conclusion. In this example I replicated only 5 tables; I could create an SSIS package that does the same in approximately the same amount of time. But if I replicated all the 70+ AdventureWorks tables I would save a lot of time and boring work! With Replication Services you also benefit from the fact that schema changes are applied automatically, which means your entire extract phase won't break. Because a snapshot is created using the bcp utility (bulk copy) it's also quite fast, so the performance will be quite good. A disadvantage of using snapshot replication as an extraction tool is the limitation on source systems: you can only choose SQL Server or Oracle databases to act as a publisher. So if you plan to build an extract phase for your ETL process that will involve a lot of tables, think about Replication Services; it could save you a lot of time, and thanks to the Extract SSIS package I've created you can fit it neatly into your usual SSIS ETL process.

    Read the article

  • App Engine: how would you... snapshotting entities

    - by Andrew B.
    Let's say you have two kinds, Message and Contact, related by a db.ListProperty of keys on Message. A user creates a message, adds some contacts as recipients, and emails the message. Later, the user deletes one of the contact entities that was a recipient of the message. Our application should delete the appropriate Contact entity, but we want to preserve the original recipient list for the message that was sent for the user's records. In essence, we want a snapshot of the message entity at the time it was sent. If we naively delete the contact entity, though, we lose snapshot integrity; if not, we are left with an invalid key. How would you handle this situation, either in controller logic or model changes?

        class User(db.Model):
            email = db.EmailProperty(required=True)

        class Contact(db.Model):
            email = db.EmailProperty(required=True)
            user = db.ReferenceProperty(User, collection_name='contacts')

        class Message(db.Model):
            recipients = db.ListProperty(db.Key)  # contacts
            sender = db.ReferenceProperty(User, collection_name='messages')
            body = db.TextProperty()
            is_emailed = db.BooleanProperty(default=False)

    Read the article

  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on Github - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm ec2-automate-backup.sh -v "vol-myvolumeid" -k 3 However, it does not execute at all as part of my crontab (I didn't receive any emails) #some command that got commented out */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3; * * * * * date /root/logs/crontab.log; */5 * * * * date /root/logs/crontab2.log Please note that the 2nd and 3rd execute just fines as I can see the date and time in log files. What could I have missed here? The full ec2-automate-backup.sh is as follows: #!/bin/bash - # Author: Colin Johnson / [email protected] # Date: 2012-09-24 # Version 0.1 # License Type: GNU GENERAL PUBLIC LICENSE, Version 3 # #confirms that executables required for succesful script execution are available prerequisite_check() { for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date do #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions. hash $prerequisite &> /dev/null if [[ $? == 1 ]] #has exits with exit status of 70, executable was not found then echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70 fi done } #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input get_EBS_List() { case $selection_method in volumeid) if [[ -z $volumeid ]] then echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64 fi ebs_selection_string="$volumeid" ;; tag) if [[ -z $tag ]] then echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64 fi ebs_selection_string="--filter tag:$tag" ;; *) echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64 ;; esac #creates a list of all ebs volumes that match the selection string from above ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1` #takes the output of the previous command ebs_backup_list_result=`echo $?` if [[ $ebs_backup_list_result -gt 0 ]] then echo -e "An error occured when running ec2-describe-volumes. 
The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70 fi ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2` #code to right will output list of EBS volumes to be backed up: echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list" } create_EBS_Snapshot_Tags() { #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only onece snapshot_tags="" #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags if $name_tag_create then ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current" fi #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags if [[ -n $purge_after_days ]] then snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true" fi #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags if [[ -n $snapshot_tags ]] then echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:" ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected] fi } date_command_get() { #finds full path to date binary date_binary_full_path=`which date` #command below is used to determine if date binary is gnu, macosx or other date_binary_file_result=`file -b $date_binary_full_path` case $date_binary_file_result in "Mach-O 64-bit executable x86_64") date_binary="macosx" ;; "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;; *) date_binary="unknown" ;; esac #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future case $date_binary in gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;; unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;; esac } purge_EBS_Snapshots() { #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter` #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3` for snapshot_id_evaluated in $snapshot_purge_allowed do #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d) purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5` #if purge_after_date is not set then we have a problem. Need to alter user. if [[ -z $purge_after_date ]] #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date. then echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 
1>&2 | mailx -s "Error happened 5" [email protected] else #convert both the date_current and purge_after_date into epoch time to allow for comparison date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"` purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"` #perform compparison - if $purge_after_date_epoch is a lower number than $date_current_epoch than the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed if [[ $purge_after_date_epoch < $date_current_epoch ]] then echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted." ec2-delete-snapshot --region $region $snapshot_id_evaluated echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected] fi fi done } #calls prerequisitecheck function to ensure that all executables required for script execution are available prerequisite_check app_name=`basename $0` #sets defaults selection_method="volumeid" region="ap-southeast-1" #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations date_binary="" #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot name_tag_create=false #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date purge_snapshots=false #handles options processing while getopts :s:r:v:t:k:pn opt do case $opt in s) selection_method="$OPTARG";; r) region="$OPTARG";; v) volumeid="$OPTARG";; t) tag="$OPTARG";; k) purge_after_days="$OPTARG";; n) name_tag_create=true;; p) purge_snapshots=true;; *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64;; esac done #sets date variable date_current=`date -u +%Y-%m-%d` #sets the PurgeAfter tag to the number of days that a snapshot should be retained if [[ -n $purge_after_days ]] then #if the date_binary is not set, call the date_command_get function if [[ -z $date_binary ]] then date_command_get fi purge_after_date=`$date_command` echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date." fi #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input get_EBS_List #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected" for ebs_selected in $ebs_backup_list do ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current" ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1` if [[ $? != 0 ]] then echo -e "An error occured when running ec2-create-snapshot. 
The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70 else ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2` echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected] fi create_EBS_Snapshot_Tags done #if purge_snapshots is true, then run purge_EBS_Snapshots function if $purge_snapshots then echo "Snapshot Purging is Starting Now." purge_EBS_Snapshots fi cron log Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log) Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;)) Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;) Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log) Mails to root From [email protected] Tue Oct 23 06:00:01 2012 Return-Path: <[email protected]> Date: Tue, 23 Oct 2012 06:00:01 GMT From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 Content-Type: text/plain; charset=UTF-8 Auto-Submitted: auto-generated X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <HOME=/root> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=root> X-Cron-Env: <USER=root> Status: R /bin/sh: root: command not found
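    For context on the symptom above: the "/bin/sh: root: command not found" line in the cron mail, together with the minimal PATH that cron reports (X-Cron-Env: PATH=/usr/bin:/bin), is the classic signature of an /etc/crontab-style entry (which includes a user field) pasted into a per-user crontab, combined with a script that is not on cron's PATH. Below is a sketch of a root user crontab (edited with crontab -e as root) that avoids both problems; the script location and the EC2 tool paths are assumptions:

        # Explicit environment, because cron's default PATH is only /usr/bin:/bin.
        PATH=/usr/local/bin:/usr/bin:/bin
        EC2_HOME=/opt/aws/apitools/ec2      # assumption: wherever the EC2 API tools live
        JAVA_HOME=/usr/lib/jvm/jre          # assumption

        # No user field in a per-user crontab, and absolute paths throughout.
        */5 * * * * /root/bin/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 >> /root/logs/ec2-backup.log 2>&1
        *   * * * * date >> /root/logs/crontab.log
        */5 * * * * date >> /root/logs/crontab2.log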

    Read the article

  • Taking a snapshot (cloning) of a project using Linq 2 Sql

    - by JoeBlogs1999
    I have a Project entity with several child tables, eg ProjectAwards ProjectTeamMember I would like to copy the data from Project (and child tables) into a new Project record and update the Project status. eg var projectEntity = getProjectEntity(projectId); draftProjectEntity = projectEntity draftProjectEntity.Status = NewStatus context.SubmitChanges(); I found this link from Marc Gravell Its part of the way there but it updates the child records to the new draftProject, where I need it to copy.

    Read the article

  • backup of KVM VMs running on Ubuntu 12.04.1 (Precise) from a remote machine

    - by Dr. Death
    I am creating a library API which will take a backup of all the VMs running on a KVM hypervisor. My VMs can be of any type. I am taking this backup from a remote machine and need to put the backup on a remote server. I have KVM and libvirt installed on my system. Some of my VMs are LVM-based and some are normal VMs running on KVM. I researched and found an excellent Perl script for taking the backup: http://pof.eslack.org/2010/12/23/best-solution-to-fully-backup-kvm-virtual-machines/ Since I am developing this library in C++ I cannot use it, but it has given me a good understanding of how it will work. One thing I have not been able to sort out: if my VMs are not created using virt-manager but with some other tool, then the virsh list command does not show them in the list of running VMs, even though they are running perfectly on my KVM server. Is there a way to list these VMs anyhow? Secondly, when I am taking the backup from the remote machine I drop out of my ssh session as soon as my libvirt command finishes, and for every command I need to ssh again. Is there a way to avoid having to ssh each and every time? I have already set up an RSA key for ssh, but once my command finishes, control moves back to the remote machine, which then tries to find the source VM location on its own local drives, and that fails. This is the main problem I am facing. Also, for the LVM-based VMs I am able to take a live backup, but the non-LVM-based machines get suspended and I am not able to take a live backup of them. Since my library will work on the remote machine only, I might not know the VM's configuration on the KVM server, so I need to make it consistent for all the VMs. Please share anything related to this issue so that I can take live backups of the non-LVM VMs as well. I'll keep you updated on my progress and any research findings. Thanks in advance for your suggestions.
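    As a rough illustration of the remote-access part of this (host name, guest name and paths are assumptions): libvirt accepts a remote connection URI, so each virsh command, and equally a C++ program using virConnectOpen from the libvirt C API, can talk to the KVM host over an ssh-backed connection instead of needing a separate interactive ssh session per command. Note that virsh only lists domains that libvirt knows about; guests started directly with qemu-kvm outside libvirt will not appear in that list.

        # Query the KVM host from the backup machine over an ssh-backed libvirt URI.
        KVM_HOST="root@kvmserver"                 # assumption
        URI="qemu+ssh://$KVM_HOST/system"

        virsh -c "$URI" list --all                              # domains defined in libvirt
        virsh -c "$URI" dumpxml myguest > /backup/myguest.xml   # save the guest config

        # Pull a (non-LVM) disk image back to the backup machine; the image path is an
        # assumption, and for LVM-backed guests a snapshot LV would be the source instead.
        scp "$KVM_HOST:/var/lib/libvirt/images/myguest.img" /backup/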

    Read the article

  • Why do VM snapshots affect performance?

    - by Samselvaprabu
    I read in one of the VMware KB articles that snapshots directly affect VM performance. But my team keeps asking me how snapshots can affect performance. I would like to give them a solid reason behind the statement that snapshots are performance killers. Can anyone explain a little of the theory behind why snapshots actually affect performance? Is it just because the disk I/O rate would be slower?

    Read the article

  • Restoring the exact state of a linux install to a different laptop with different sized drives and other hardware

    - by user259774
    I have an IBM laptop running a Manjaro install that has already been used and settled into, with packages installed, browser profiles, etc. The drive is 60 GB, and it has a swap partition and an ext4 root partition. I need to move this profile to a Toshiba computer with a 320 GB drive. How should I go about this? My inclination would be to shut down the IBM, boot a live Linux system, dd the whole 60 GB drive to a file, boot the Toshiba to a live system, then dd the file to its 320 GB drive. Would this work? I know that it wouldn't with Windows, but I believe this is an artificially imposed limitation from Microsoft. Is this correct, or is Linux similarly limited? If not, how could I go about this? Would Clonezilla work, or would the hardware disparities prevent it from working?
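    As a rough sketch of the dd-image route being considered (device names and mount points are assumptions; verify them with fdisk -l or lsblk from the live session before running anything):

        # On the source machine, booted from a live USB, image the whole 60 GB disk:
        dd if=/dev/sda of=/mnt/usb/ibm-60gb.img bs=4M conv=sync,noerror

        # On the Toshiba, booted from a live USB, write the image to the 320 GB disk:
        dd if=/mnt/usb/ibm-60gb.img of=/dev/sda bs=4M

        # The copied partition table still describes the 60 GB layout, so the extra
        # ~260 GB shows up as unallocated space until the root partition and its
        # ext4 filesystem are grown, e.g. with gparted or fdisk plus resize2fs.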

    Read the article

  • Is there a way to create a consistent snapshot/SnapMirror across multiple volumes?

    - by Tomer Gabel
    We use a NetApp FAS 6-series filer with an application that spans multiple volumes. For backup purposes I would like to create a consistent snapshot that spans these volumes at the same point in time (or at least with an extremely low delta); additionally, we'd like to use SnapMirror to replicate the production environment to test volumes. The problem is in creating a consistent snapshot/SnapMirror, since these commands are not transactional and do not take multiple volumes as parameters. I tried scripting consecutive "snap create" or "snapmirror resync" commands via SSH, but there's always a 0.5-2 second difference between each snapshot. It's currently "good enough", but I'm seriously concerned about the consistency impact with increased load (we're currently in pre-production). Has anyone managed to create a consistent snapshot that spans several volumes? How did you pull it off?

    Read the article

  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?

    Read the article

  • Are ZFS snapshots + S3 a viable backup system for several VMs and general fileserver storage?

    - by AllanA
    I've been tasked with setting up a backup system for my small office (around 12 people). Most of our production stuff is on the AWS cloud, so what I need to back up are some small office/development files (under 100G right now), plus our operational VMs and development, which round out to a bit under 1T. I just need something reliable, convenient, and straightforward. I'm comfortable with Linux, FreeBSD, and to some extent Solaris 10, so I'm leaning toward a full server rather than an appliance system ala Openfiler or FreeNAS. What I'm contemplating is a small fileserver for general storage and nightly backups of the virtual machines, followed up by an offsite backup to Amazon's S3 storage service. It'd be the usual incremental backups nightly and full backup weekly. My question is if using ZFS snapshots, both locally and dumped to S3 via 'zfs send [-i]', is a viable backup tool? Or should I stick to using Duplicity, or some other method entirely? ZFS snapshots on the internal fileserver/backup machine sound like a perfect way to provide quick and convenient data recovery, so I'm likely to go with that for local redundancy. (If you folks see scenarios where relying on ZFS snapshots would be worse than a more traditional archiving backup, feel free to convince me.) But are snapshots flexible enough to lean on for recovery from the loss of my backup server? Or am I better off with something more traditional? (feel free to recommend free or commercial backup solutions you favor.)
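    As a sketch of the nightly cycle being considered (pool/dataset names, the bucket and the choice of s3cmd as the uploader are all assumptions):

        #!/bin/bash
        # Nightly: snapshot the dataset, dump an incremental stream since the
        # previous night, compress it and push it to S3, keeping a local copy.
        DATASET="tank/office"
        TODAY=$(date +%Y-%m-%d)
        YESTERDAY=$(date -d yesterday +%Y-%m-%d)   # GNU date syntax; differs on FreeBSD/Solaris
        OUT="/backup/office-$TODAY.zfs.gz"

        zfs snapshot "$DATASET@$TODAY"
        zfs send -i "$DATASET@$YESTERDAY" "$DATASET@$TODAY" | gzip > "$OUT"
        s3cmd put "$OUT" "s3://my-office-backups/zfs/"

        # A weekly full would use a plain "zfs send" of the latest snapshot instead of
        # the -i incremental; restores go back through "zfs receive", so each dump
        # behaves like a tape image rather than a browsable archive.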

    Read the article

  • model (3ds) stats & snapshot in linux

    - by acidzombie24
    I want to write an app that takes a model filename via the command line, creates a list of stats (poly count, scaling, as much as possible, or maybe just the stats that I would like), and then loads the model with its textures (and anything else) and draws it from multiple positions to save the images as PNGs. How would I do this? Are there utils I can use to extract data from models? How about drawing the models? My server does not have a desktop or video card; would having no video hardware be a problem?

    Read the article

  • Virtualbox, merging snapshots and base disk

    - by Henrik
    Hi, I have a virtual machine with about 30 snapshots in branches. The current development path is 22 snapshots plus the base disk. The number of files now seems to be having an impact on I/O on the dev laptop I'm using (I don't know if it is host disk performance issues with the 140GB total size spread over a lot of fragments, or just the fact that it is hitting sectors distributed across a lot of files). I would like to merge the current development branch of snapshots together with the base disk, but I am unsure whether the following command would produce the correct outcome; I am not able to boot this disk after the procedure completes (5-6 hours):

        vboxmanage clonehd "C:\VPC-Storage\.VirtualBox\Machines\CRM\Snapshots\{245b27ac-e658-470a-b978-8e62137c33b1}.vhd" "E:\crm-20100624.vhd" --format VHD --type normal

    Could anyone confirm whether this is the correct approach or not?
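    For comparison, one commonly suggested way to end up with a single flattened disk is to clone the whole machine at its current state rather than cloning one snapshot's differencing VHD. The sketch below assumes VirtualBox 4.1 or later (where clonevm exists), and the VM and clone names are placeholders:

        # A full clone of the current state merges the base disk and the whole chain
        # of differencing images into fresh, standalone disk files for the new VM.
        VBoxManage clonevm "CRM" --name "CRM-flat" --register

        # Afterwards, check that the clone's disk is a single standalone image.
        VBoxManage list hdds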

    Read the article

  • Hidden Periodic Screenshots on a corporate workstation?

    - by ssxuser80
    Can anyone recommend something that allows us to take hidden periodic screenshots of a workstation? We have a user who we believe is abusing his computer privileges. We have our suspicions that he may be playing games, etc. We need to monitor his screen without him being aware of it. Currently, the IT Department here is using Dameware Mini Remote Control to view his login sessions. But there isn't an option to set up automatic periodic screenshots. I'm hoping to find a tool that has this option and can be centrally managed as well. Thank you for your time. Any help would be greatly appreciated.

    Read the article

  • why Observable snapshot observer vector

    - by han14466
    In Observable's notifyObservers method, why does the coder use arrLocal = obs.toArray()? Why doesn't the coder iterate the vector directly? Thanks.

        public void notifyObservers(Object arg) {
            Object[] arrLocal;
            synchronized (this) {
                /* We don't want the Observer doing callbacks into
                 * arbitrary code while holding its own Monitor.
                 * The code where we extract each Observable from
                 * the Vector and store the state of the Observer
                 * needs synchronization, but notifying observers
                 * does not (should not). The worst result of any
                 * potential race-condition here is that:
                 * 1) a newly-added Observer will miss a
                 *    notification in progress
                 * 2) a recently unregistered Observer will be
                 *    wrongly notified when it doesn't care
                 */
                if (!changed)
                    return;
                arrLocal = obs.toArray();
                clearChanged();
            }
            for (int i = arrLocal.length - 1; i >= 0; i--)
                ((Observer) arrLocal[i]).update(this, arg);
        }

    Read the article

  • Netgear NV+ resize volume for snapshots

    - by kurresmack
    Hey! I have a Netgear NV+ that I want to set up snapshots on. As I do not have any space allocated for this, I just wanted to check what will happen to my data if I allocate space for snapshots. I could not find any resource for this on Google, which makes me believe that no data is affected, but I just wanted to make sure that there is no format!

    Read the article

  • Flex sending snapshot without using base64Encode

    - by atd
    var is:ImageSnapshot = myImagesnapshot;
    var str:String = ImageSnapshot.encodeImageAsBase64(is);

    As of now, I am sending my JPEG data to the server with the code above. The problem is that it almost doubles the size of the data. Is there a way to send the image data directly without using any encoding?

    Read the article

  • How do I get my snapshots in Nexus to appear in the m2eclipse dependency search?

    - by Brabster
    Hi, I've been working through the Nexus guide this weekend and I've got everything set up, to the point that I can publish a snapshot to my local nexus install. I can't seem to work out how to get m2eclipse to see the snapshot and offer it as an option in the Add Dependencies search panel. How do I do that? Thanks! In case it's of any use, my settings.xml is as follows: <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository /> <interactiveMode /> <usePluginRegistry /> <offline /> <pluginGroups /> <servers> <server> <id>localSnap</id> <username>deployment</username> <password>*****</password> </server> </servers> <mirrors> <mirror> <!--This sends everything else to /public --> <id>nexus</id> <mirrorOf>*</mirrorOf> <url>http://localhost:8080/nexus/content/groups/public</url> </mirror> </mirrors> <profiles> <profile> <id>nexus</id> <!--Enable snapshots for the built in central repo to direct --> <!--all requests to nexus via the mirror --> <repositories> <repository> <id>central</id> <url>http://central</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>central</id> <url>http://central</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <!--make the profile active all the time --> <activeProfile>nexus</activeProfile> </activeProfiles> </settings>

    Read the article

  • Begin the Clone Wars Have!

    - by Antony Reynolds
    Creating a New Virtual Machine from an Existing Virtual Disk In previous posts I described how I set up an OEL6 machine under VirtualBox that can run an 11gR2 database and FMW 11.1.1.5.  That is great if you want the DB and FMW running in the same virtual image and it has served me well for some proof of concepts and also for some testing of different JVMs.  However I also wanted to run some testing of FMW with the database running on a separate physical machine.  So in this post I will show how to take a VirtualBox image and create a new image based on the disks from that original image. What are my Options? There is more than one way to skin a cat, or in this case to create two separate VMs that can run on different hardware.  Some of the options include: Create new virtual disk images for each new VM. Clone the existing disk images and point the new VM at the cloned images. Point the new VM at the existing snapshots. #1 is too much like hard work, install OEL twice, install a database again, install FMW again, run RCU again!  Life is too short! #2 is probably the safest way of doing things.  VirtualBox allows you to clone a disk image for use in a separate machine.  However this of course duplicates the disk and means that it is now occupying 3 times the space, once for the original disk and twice more for the two clones I would need. #3 is the most space efficient way of doing things.  It does mean however that I can only run the new “cloned” images if I have access to the original image because that is where the base snapshots reside.  However this is not a problem for me as long as I remember to keep all threee images together.  So this is the approach we will follow. Snapshot, What Snapshot? As we are going to create new virtual machines based on existing snapshots we need to figure out which snapshot to use.  We do this by opening the “Media Manager” from within VirtualBox and moving the mouse over the snapshot images until we find the snapshots we want – the snapshot name is identified in the “Attached to:” comment.  In my case I wanted the FMW installed snapshot because that had a database configured for FMW alongside the FMW software.  I made a note of the filename of that snapshot (actually I just noted the first 5 characters as that was all that was needed to uniquely identify the snapshot file). When we create the new machines we will point them at the snapshot filename we have just checked. Network or NotWork? Because we want the two new machines to communicate with each other when hosted in different physical machines we can’t use the default NAT networking mode without a lot of hassle.  But at the same time we need them to have fixed IP addresses relative to each other so that they can see each other whilst also being able to see the outside world. To achieve all these requirements I created two network adapters for each machine.  Adapter 1 was a standard NAT mapping.  This will allow each machine to get a dynamic IP address (10.0.2.15 by default) that can be used to access the external world through the VBox provided NAT gateway.  This is the same as the existing configuration. The second adapter I created as a bridged adapter.  This gives the virtual machine direct access to the host network card and by using fixed IP addresses each machine can see the other.  It is important to choose fixed IP addresses that are not routable across your internal network so you don’t get any clashes with other machines on your network.  
Of course you could always get proper fixed IP addresses from your network people, but I have serveral people using my images and as long as I don’t have two instances of the same VM on the same network segment this is easier and avoids reconfiguring the network every time someone wants a copy of my VM.  If it is available I would suggest using the 10.0.3.* network as 10.0.2.* is the default NAT network.  You can check availability by pinging 10.0.3.1 and 10.0.3.2 from your host machine.  If it times out then you are probably safe to use that. Creating the New VMs Now that I had collected the data that I needed I went ahead and created the new VMs. When asked for a “Boot Hard Disk” I used the “Choose a virtual hard disk file…” link to find the snapshot I had previously selected and set that to be the existing hard disk.  I chose the previously existing SOA 11.1.1.5 install for both the new DB and FMW machines because that snapshot had the database with the RCU completed that I wanted for my DB machine and it had the SOA software installed which I wanted for my FMW machine. After the initial creation of the virtual machine go into the network setting section and enable a second adapter which will be bridged.  Make a note of the MAC addresses (the last four digits should be sufficient) of the two adapters so that you can later set the bridged adapter to use fixed IP and the NAT adapter to use DHCP. We are now ready to start the VMs and reconfigure Linux. Reconfiguring Linux Because I now have two new machines I need to change their network configuration.  In particular I need to change the hostname, update the hosts file and change the network settings. Changing the Hostname I renamed both hosts by running the hostname command as root: hostname vboxfmw.oracle.com I also edited the /etc/sysconfig file and set the correct hostname in there. HOSTNAME=vboxfmw.oracle.com Changing the Network Settings I needed to change the network configuration to give the bridged network a fixed IP address.  I first explicitly set the MAC addresses of the two adapters, because the order of the virtual adapters in the VirtualBox Manager is not necessarily the same as the order of the adapters in the guest OS.  So I went in to the System->Preferences->Network Connections screen and explicitly set the “Device MAC address” for the two adapters. Having correctly mapped the Linux adapters to the VirtualBox adapters I then set the Bridged adapter to use fixed IP addressing rather than DHCP.  There is no need for additional routing or default gateways because we expect the two machine to be on the same LAN segment. Updating the Hosts File Having renamed the machines and reconfigured the network I then updated the /etc/hosts file to refer to the new machine name add a new line to the hosts file to provide an additional IP address for my server (the new fixed IP address) add a new line for the fixed IP address of the other virtual machine 10.0.3.101      vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.2.15       vboxdb.oracle.com       vboxdb  # Added by NetworkManager 10.0.3.102      vboxfmw.oracle.com      vboxfmw # Added by NetworkManager 127.0.0.1       localhost.localdomain   localhost ::1     vboxdb.oracle.com       vboxdb  localhost6.localdomain6 localhost6 To make sure everything takes effect I restarted the server. 
Reconfiguring the Database on the DB Machine Because we changed the hostname the listener and the EM console no longer start so I need to modify the listener.ora to use the new hostname and I also need to rebuild the EM configuration because it also relies on the hostname. I edited the $ORACLE_HOME/network/admin/listener.ora and changed the listening address to the new hostname:       (ADDRESS = (PROTOCOL = TCP)(HOST = vboxdb.oracle.com)(PORT = 1521)) After changing the listener.ora I was able to start the listener using: lsnrctl start I also had to reconfigure the EM database control.  I first deconfigured it using the command: emca -deconfig dbcontrol db -repos drop This drops the repository and removes any existing registered dbcontrols. I then re-configured it using the following command: emca -config dbcontrol db -repos create This creates the EM repository and then configures and starts dbcontrol. Now my database machine is ready so I can close it down and take a snapshot. Disabling the Database on the FMW Machine I set up the database to start automatically by creating a service called “dbora”.  On the FMW machine I do not need the database running so I can prevent it auto-starting by running the following command: chkconfig –del dbora Note that because I am using a snapshot it is not a waste of disk space to have the DB installed but not used.  As long as I don’t run it, it won’t cost me anything. I can now close the FMW machine down and take a snapshot. Creating a New Domain The FMW machine is now ready to create a new domain.  When creating the domain I can point it at the second machine which is running the database.  I can potentially run these machines on two separate physical machines as long as I have the original virtual machine available to both of the physical machines. Gotchas in Snapshotting VirtualBox does not support the concept of linked machines in a network like some virtualization technologies so when creating a snapshot it is a good idea to shut both VMs down and then take a snapshot on both of them.  This is because we want to keep the database in sync with the middleware.  One way to make sure that this happens would be to place all the domain configuration files on the database server via an NFS share, this would mean that all we would need to snapshot would be the database machine because that would hold all the state and configuration. The Sky’s the Limit We have covered a simple case of having just two machines.  I have a more complicated configuration in which two machine run a RAC database off the same base OS image, and two more machines run a SOA cluster based on the same OS image.  Just remember what machine holds state and what are the consequences of taking a snapshot.
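    The database-side reconfiguration described above condenses to a short command sequence. This is only a sketch: the host names and addresses follow the examples in the post, the ORACLE_HOME path is an assumption, and running the Oracle utilities as the oracle user is an assumption as well.

        # On the cloned DB machine (vboxdb), as root:
        hostname vboxdb.oracle.com
        sed -i 's/^HOSTNAME=.*/HOSTNAME=vboxdb.oracle.com/' /etc/sysconfig/network

        # Fixed addresses for the bridged adapters, as described above.
        echo "10.0.3.101      vboxdb.oracle.com       vboxdb"  >> /etc/hosts
        echo "10.0.3.102      vboxfmw.oracle.com      vboxfmw" >> /etc/hosts

        # Point the listener at the new host name, restart it, then rebuild EM dbcontrol.
        vi $ORACLE_HOME/network/admin/listener.ora     # set HOST = vboxdb.oracle.com
        su - oracle -c "lsnrctl start"
        su - oracle -c "emca -deconfig dbcontrol db -repos drop"
        su - oracle -c "emca -config dbcontrol db -repos create"

        # On the FMW machine, stop the unused database from auto-starting.
        chkconfig --del dbora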

    Read the article

  • easiest way to convert virtualbox snapshots to tree view

    - by amir beygi
    Hi all, my VirtualBox snapshot view is like this:

        Name: Snapshot 2 (UUID: cb45aef4-54d4-4c4e-ad3e-dd7cccb6103a)
        Name: s131 (UUID: 8ec30c82-7796-4e51-8161-979f1b95fb0f)
        Name: s131 (UUID: 42066f33-969b-41f3-a779-7f6e2c45ea2c)
        Name: s131 (UUID: d71b9bc5-b862-46b5-ae4d-f88d3dd9756d)
        Name: s131 (UUID: 681896a9-7e61-4b5a-90bc-cb1bd785c6fc)
        Name: s131 (UUID: d7bf8593-d218-442d-b23b-4ee16e74087d)
        Name: s131 (UUID: e8b16fd2-7add-4294-b908-34c4e6dc79dc)
        Name: s131 (UUID: 57c3f5d7-d4ed-4a62-a7b8-5594f819e08e)
        Name: Snapshot 3 (UUID: 4a684149-9dd6-4bb2-baf5-5f590e91a344)
        Name: Snapshot 4 (UUID: d4cbaa7c-ae78-41e0-9962-46c587a9c667)
        Name: Snapshot 5 (UUID: 81567b6e-eea9-49a6-b3b8-a07f0be337d8)

    and I want to convert this text to a tree like this:

        Name: Snapshot 2 (UUID: cb45aef4-54d4-4c4e-ad3e-dd7cccb6103a)
        +--Name: s131 (UUID: 8ec30c82-7796-4e51-8161-979f1b95fb0f)
        +--Name: s131 (UUID: 42066f33-969b-41f3-a779-7f6e2c45ea2c)
        +--Name: s131 (UUID: d71b9bc5-b862-46b5-ae4d-f88d3dd9756d)
        +--Name: s131 (UUID: 681896a9-7e61-4b5a-90bc-cb1bd785c6fc)
        |  +--Name: s131 (UUID: d7bf8593-d218-442d-b23b-4ee16e74087d)
        |  +--Name: s131 (UUID: e8b16fd2-7add-4294-b908-34c4e6dc79dc)
        |  +--Name: s131 (UUID: 57c3f5d7-d4ed-4a62-a7b8-5594f819e08e)
        +--Name: Snapshot 3 (UUID: 4a684149-9dd6-4bb2-baf5-5f590e91a344)
        +--Name: Snapshot 4 (UUID: d4cbaa7c-ae78-41e0-9962-46c587a9c667)
        +--Name: Snapshot 5 (UUID: 81567b6e-eea9-49a6-b3b8-a07f0be337d8)

    or even an array that contains each line number and its parent's line number. My environment is Linux, the programming language is C, and I got these results from this shell command:

        VBoxManage snapshot s2000 showvminfo s|grep Name|grep UUID

    Read the article

  • Ivy and Snapshots (Nexus)

    - by Uberpuppy
    Hey folks, I'm using ant, ivy and nexus repo manager to build and store my artifacts. I managed to get everything working: dependency resolution and publishing. Until I hit a problem... (of course!). I was publishing to a 'release' repo in nexus, which is locked to 'disable redeploy' (even if you change the setting to 'allow redeploy' (really lame UI there imo). You can imagine how pissed off I was getting when my changes weren't updating through the repo before I realised that this was happening. Anyway, I now have to switch everything to use a 'Snapshot' repo in nexus. Problem is that this messes up my publish. I've tried a variety of things, including extensive googling, and haven't got anywhere whatsoever. The error I get is a bad PUT request, error code 400. Can someone who has got this working please give me a pointer on what I'm missing. Many thanks, Alastair fyi, here's my config: Note that I have removed any attempts at getting snapshots to work as I didn't know what was actually (potentially) useful and what was complete guff. This is therefore the working release-only setup. Also, please note that I've added the XXX-API ivy.xml for info only. I can't even get the xxx-common to publish (and that doesn't even have dependencies). Ant task: <target name="publish" depends="init-publish"> <property name="project.generated.ivy.file" value="${project.artifact.dir}/ivy.xml"/> <property name="project.pom.file" value="${project.artifact.dir}/${project.handle}.pom"/> <echo message="Artifact dir: ${project.artifact.dir}"/> <ivy:deliver deliverpattern="${project.generated.ivy.file}" organisation="${project.organisation}" module="${project.artifact}" status="integration" revision="${project.revision}" pubrevision="${project.revision}" /> <ivy:resolve /> <ivy:makepom ivyfile="${project.generated.ivy.file}" pomfile="${project.pom.file}"/> <ivy:publish resolver="${ivy.omnicache.publisher}" module="${project.artifact}" organisation="${project.organisation}" revision="${project.revision}" pubrevision="${project.revision}" pubdate="now" overwrite="true" publishivy="true" status="integration" artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]" /> </target> Couple of ivy files to give an idea of internal dependencies: XXX-Common project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_common" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_common" type="jar" ext="jar"/> <artifact name="xxx_common" type="pom" ext="pom"/> </publications> <dependencies> </dependencies> </ivy-module> XXX-API project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_api" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_api" type="jar" ext="jar"/> <artifact name="xxx_api" type="pom" ext="pom"/> </publications> <dependencies> <dependency org="com.myorg.xxx" name="xxx_common" rev="1.0" transitive="true" /> </dependencies> </ivy-module> IVY Settings.xml: <ivysettings> <properties file="${ivy.project.dir}/project.properties" /> <settings defaultResolver="chain" defaultConflictManager="all" /> <credentials host="${ivy.credentials.host}" realm="Sonatype Nexus Repository Manager" username="${ivy.credentials.username}" 
passwd="${ivy.credentials.passwd}" /> <caches> <cache name="ivy.cache" basedir="${ivy.cache.dir}" /> </caches> <resolvers> <ibiblio name="xxx_publisher" m2compatible="true" root="${ivy.xxx.publish.url}" /> <chain name="chain"> <url name="xxx"> <ivy pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/ivy-[revision].xml" /> <artifact pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <ibiblio name="xxx" m2compatible="true" root="${ivy.xxx.repo.url}"/> <ibiblio name="public" m2compatible="true" root="${ivy.master.repo.url}" /> <url name="com.springsource.repository.bundles.release"> <ivy pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <url name="com.springsource.repository.bundles.external"> <ivy pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> </chain> </resolvers> </ivysettings>

    Read the article

  • How to start a major iPhone app update in Xcode

    - by Eric
    I have an app in the iPhone app store and have released several minor updates to it. I want to begin work on some major feature additions and reorganization, but don't want to lose the source code of my most recent version in case everything goes horribly wrong. Should I start a new Xcode project from scratch and copy my existing source in? If I do this will I be able to submit the build from this new project as an update or will Apple complain that the build comes from a different Xcode project? I've seen (but not used) Xcode's "Snapshots" and "Source Control" features - are these what I'm looking for? Any help or direction greatly appreciated.

    Read the article
