Search Results

Search found 3179 results on 128 pages for 'merge replication'.

Page 39/128 | < Previous Page | 35 36 37 38 39 40 41 42 43 44 45 46  | Next Page >

  • How do I replicate a Windows server (IIS, files, configuration state)?

    - by Geo
    Maybe a better question is: what is the closest competitor to DoubleTake? I am looking to replicate a Windows production server so that we have an immediate backup in case it fails. Any ideas?
    NOTE 1: I forgot to add that this server is on the Amazon EC2 cloud.
    NOTE 2: The main problem we have is recreating configuration settings such as IIS, the FTP server, SQL Server, and the SVN server.
    NOTE 3: So far I have been given three options as answers to my original question:
    AppAssurance -- after talking to their sales team, they do not support Amazon as a cloud provider. Basically there is a technical need to be able to reboot from a disk or similar media, so an ESX virtual machine environment will work, but EC2 will not.
    Acronis -- works as a Ghost-style backup. This will work for other types of scenarios.
    Use the Amazon EC2 API -- this option is ideal, but it only works if you are developing a cloud application rather than hosting a regular application in the cloud.
    This means I am still looking for the answer. Any other ideas?
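
    For the EC2 case, the closest built-in approximation is periodically imaging the instance; a minimal sketch using the modern AWS CLI (the instance and image IDs are placeholders, and this captures state only as often as you run it):

        # create a reusable image of the running server (reboots it unless --no-reboot)
        aws ec2 create-image --instance-id i-0123456789abcdef0 \
            --name "prod-web-$(date +%Y%m%d)" --no-reboot
        # later, launch a replacement from that image
        aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type m1.large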

  • Create a kickstart configuration file from an existing configuration

    - by ÜMineiro
    Is there a script or another way to automatically generate a kickstart configuration file from the system state of an existing server, so that the file can be used to replicate (not clone) the configuration of the system in another install? I know that the anaconda-ks.cfg file is stored in the /root directory, but the system in question has been extensively changed since its installation, and that file is of no use now.
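
    There is no official tool that dumps a running system back into kickstart form, but the biggest delta since install (the package set) can be harvested and pasted into a fresh %packages section; a rough sketch (output paths are placeholders):

        # harvest the current package set for a new %packages section
        rpm -qa --qf '%{NAME}\n' | sort > /tmp/packages.txt
        # partition layout for reference when rewriting the 'part' directives
        fdisk -l > /tmp/partitions.txt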

  • Logon Failure: the target account name is incorrect after making a ghost image of a server

    - by cop1152
    I recently replaced a failing SCSI drive in a Windows 2000 server with an IDE drive. I made an image of the SCSI drive and Ghosted it onto the new one. The purpose of the machine is to give out DHCP at one location and host a couple of file shares. When I restarted the machine with the new drive, DHCP appeared to be working fine, but I cannot get to any of the shares. Instead, I get the following message when attempting to navigate using Explorer: "Logon Failure: the target account name is incorrect." It appears that this machine is not communicating with the main domain controller: changes to user accounts (performed on the domain controller) are not replicated to this machine.
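
    That error usually means the ghosted machine's computer-account password no longer matches what the domain controller expects; one hedged fix (netdom ships in the Windows 2000 Support Tools; the DC and account names below are placeholders) is to reset the secure channel, or failing that, unjoin and rejoin the domain:

        C:\> netdom resetpwd /s:dc01 /ud:MYDOMAIN\Administrator /pd:*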

  • What's the best / easiest way to combine two mailboxes on Exchange 2007?

    - by jmassey
    I've found this and this(2) (sorry, maximum hyperlink limit for new users is 1, apparently), but they both seem targeted toward much more complex cases than what I'm trying to do, and I just want to make sure I'm not missing some better approach. Here's the scenario: 'Alice' has retired. 'Bob' has taken over Alice's position. Bob was already with the organization in a different but related position, and so he already has his own Exchange account with mail, calendars, etc., that he needs to keep. I need to get all of Alice's old mail, calendar entries, etc., merged into Bob's existing stuff. Ideally, I don't want all of Alice's stuff in a separate 'recovery' folder that Bob would have to switch back and forth between to look at older items; I want it all merged into Bob's current Inbox / Calendar. I'm assuming (read: hoping) that there's a better way to do this than fiddling with permissions and exporting to and then importing from a .pst. The Office version is 2007 for everyone who uses Exchange, if that helps. Exchange is version 8.1. What (preferably step-by-step - I'm new to Exchange) is the best way to do this? I can't imagine this is an uncommon scenario, but my google-fu has failed me; there seems to be nothing on this subject that isn't geared towards far more complex scenarios. (2): http://technet.microsoft.com/en-us/library/bb201751%28EXCHG.80%29.aspx
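
    For reference, Exchange 2007 SP1's Export-Mailbox cmdlet can merge one mailbox into another, though it insists on a -TargetFolder, so a fully interleaved merge may still need the .pst route; a sketch with placeholder identities, run from the Exchange Management Shell:

        Export-Mailbox -Identity "alice" -TargetMailbox "bob" -TargetFolder "From Alice"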

  • MySQL Master-ColdMaster

    - by enedebe
    Let me explain my case: I'm on Amazon AWS and I want to be fault tolerant against an entire region failure. My basic problem is keeping the DB in sync across 2 regions. My options: Master-Master (high lag); a hand-made sync every 5 minutes; Master-ColdMaster?! (copy on the fly, but the master won't wait for the other region to commit). In my system we can afford losing a piece of data (we're not a bank) - the last inserts into the DB - but we cannot afford more than 10 minutes of downtime. The database is small and the level of inserts is low, and I don't want normal usage to be slowed by waiting for the other region to commit. Is the third solution possible? And most importantly, once the primary fails, how can we detect that and swap the roles (master-coldmaster -- coldmaster-master)? Is there a clean way to restore after a failure? Thanks!
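
    Option 3 as described is essentially stock asynchronous replication: the master commits locally and ships binlog events without waiting for the remote region. A sketch of the cold-master side and the promotion step (hosts, credentials, and log coordinates are placeholders):

        -- on the cold master in the second region
        CHANGE MASTER TO
          MASTER_HOST = 'db.region-a.example.com',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '...',
          MASTER_LOG_FILE = 'mysql-bin.000042',
          MASTER_LOG_POS = 4;
        START SLAVE;

        -- on failover: stop applying and open it up for writes
        STOP SLAVE;
        SET GLOBAL read_only = OFF;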

  • How do I make a Windows virtual machine replicate to another datacenter/cloud?

    - by zippy
    We have a Windows 2008 VM running IIS and SQL Server Express (it's an all-in-one web application). We need to have another copy at our secondary datacenter site. What is the best way to do this? It doesn't have to be running all the time, but it has to have an almost-current copy of the VM. I took a look at VMware Fault Tolerance and, after the heart attack at the price, I started looking for another solution. If need be, I wouldn't mind copying it over to a cloud VM provider, if I can find one that lets me upload my own VMs and start them up without any conversion process.
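
    If the VM lives on a VMware host, one blunt but cheap approach is a scheduled export instead of true fault tolerance; a sketch using VMware's ovftool against a standalone host (host and VM names are placeholders, and the VM should be powered off or quiesced for a clean copy):

        ovftool "vi://root@esx01.example.com/Win2008-AllInOne" /backups/Win2008-AllInOne.ova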

  • Mirroring an SVN repository

    - by cardy
    I have an SVN repository and I'd like to have it duplicated over multiple machines for availability purposes. Right now, when my VPS goes down I'm unable to connect to the repository, and this is very annoying. The easiest (and most expensive) solution is to set up two identical machines and make them work like clones. I'd like to know if there are any alternatives (involving 2 machines). Ideally I would have two VPSes in different datacenters, so if one goes down I can rely on the other. Thanks. I need a mirror both for read and write, not only for read. The SVN repos are Berkeley DB based.
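
    For the read side, Subversion ships svnsync, which maintains a read-only mirror (writes still have to go to the primary, so this covers availability of checkouts, not commits); a sketch with placeholder URLs, noting the mirror must allow revprop changes via its pre-revprop-change hook:

        # one-time setup: mirror URL first, source URL second
        svnsync init https://mirror.example.com/repo https://primary.example.com/repo
        # run from cron, or from a post-commit hook on the primary
        svnsync sync https://mirror.example.com/repo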

  • Concatenate two mp4 videos into one file

    - by Jer
    I recently ripped my DVD collection to play it on my media center PC, and several of the movies are two-disc sets (e.g. The Lord of the Rings). Since I ripped each DVD individually, that leaves me with two video files for some movies. I am using Ubuntu Linux - how can I concatenate two MP4/H.264 videos into a single MP4 video file? Preferably from the command line, and without re-encoding everything in the process (although I can try figuring out a video editor like Pitivi if that's the only solution).
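
    Two common lossless routes, sketched with placeholder filenames (neither re-encodes, so the streams must share codec parameters):

        # GPAC's MP4Box concatenates at the container level
        MP4Box -cat part1.mp4 -cat part2.mp4 full-movie.mp4

        # newer ffmpeg can do the same with the concat demuxer and stream copy
        printf "file 'part1.mp4'\nfile 'part2.mp4'\n" > list.txt
        ffmpeg -f concat -i list.txt -c copy full-movie.mp4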

  • Using an AWS EC2 server to host a busy website, and I need to set up load balancing

    - by Philip Isaacs
    My company has one EC2 server running on AWS, with a MySQL DB and Apache on the same instance. This one instance hosts a website built on the PHP Zend Framework. The site runs like crap when it starts to get busy with a lot of traffic, so I'm looking for some advice on how to set up something that can handle the load better. My first question is: should I move the MySQL DB onto a separate EC2 instance, or perhaps use AWS's RDS service, which looks like a nice option? I'm sort of new to some of this, but I'm guessing I'll need at least two EC2 instances for serving the website and some sort of load-balancing mechanism to distribute traffic. But maybe not, I'm not sure. Also, what are some best practices for replicating the data so that it stays in sync on both instances? Okay, I know these are a lot of questions, but I don't know where to start, so any advice will help.
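
    The usual first split is exactly this: move MySQL off the web box (or onto RDS) and put the web instances behind Elastic Load Balancing; a sketch with the modern AWS CLI (name, zone, and instance IDs are placeholders):

        aws elb create-load-balancer --load-balancer-name web-lb \
            --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
            --availability-zones us-east-1a
        aws elb register-instances-with-load-balancer --load-balancer-name web-lb \
            --instances i-aaaa1111 i-bbbb2222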

  • Is ROBOCOPY good as a backup tool?

    - by dasko
    Seeing how it does not do a write verify unless scripted (or is that even needed?), is it a decent option for dumping a few folders to another server? I am just worried that the data, after being copied, might be corrupted without my being aware of it. I have used it in the past without issue, but am seeking feedback in case I missed something obvious.
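
    Robocopy itself has no post-copy verify, so the common mitigation is mirroring plus logging, optionally followed by an independent checksum pass; a sketch with placeholder paths:

        robocopy C:\data \\backupsrv\data /MIR /R:2 /W:5 /V /LOG:C:\logs\backup.log
        rem /MIR mirrors the tree; the log records any file that failed to copy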

  • Scaling a LAMP website hosted on EC2

    - by Gublooo
    Hello, I'm very new to all this - I've recently managed to launch my website on EC2. As a next step, I want to learn how to scale the website. I have a general idea, but wanted some input from the experts about how to go about it. My website is based on LAMP but also has a Red5 server, which allows users to record messages and is also used for playing them back. Currently this is the architecture I'm planning for initial scaling. Deploy four small EC2 instances for the following purposes:
    Instance-1: on this instance I will run the MySQL database.
    Instance-2: on this instance I will run the Red5 server.
    Instance-3 & Instance-4: these two instances will be used to deploy the website and will have Apache running on them. They will communicate with the MySQL server on Instance-1 and the Red5 server on Instance-2 using the internal IP addresses. As and when required, I will launch another instance of the same kind.
    EBS: I will have an EBS volume of, say, 50 GB where all the MySQL data will be stored. Red5 will also use this volume to store the video messages.
    Load balancer: use the load balancer provided by Amazon to balance Instance-3 and Instance-4.
    This is what I have in mind. I could be way off, so please bear with me. Also, I have not taken into account scaling the MySQL server, as I currently have no idea how that will be done and whether or not it is necessary initially. I am aware that Amazon provides auto scaling and MySQL scaling as well, but I don't want to get into that right now. Your feedback is appreciated. Thanks
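
    For the Instance-3/4 to Instance-1 link, the web servers just need a MySQL account that is valid from the internal network; a sketch (database, user, and address range are placeholders):

        -- on Instance-1, the MySQL box
        GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.*
          TO 'webapp'@'10.%' IDENTIFIED BY '...';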

  • glusterfs mounts get unmounted when 1 of the 2 bricks goes offline

    - by Shiquemano
    I have an odd case where 1 of the 2 replicated glusterfs bricks will go offline and take all of the client mounts down with it. As I understand it, this should not be happening: it should fail over to the brick that is still online, but this hasn't been the case. I suspect this is due to a configuration issue.
    Here is a description of the system:
    - 2 gluster servers on dedicated hardware (gfs0, gfs1)
    - 8 client servers on VMs (client1, client2, client3, ..., client8)
    - Half of the client servers are mounted with gfs0 as the primary, and the other half are pointed at gfs1.
    Each of the clients is mounted with the following entry in /etc/fstab:

        /etc/glusterfs/datavol.vol /data glusterfs defaults 0 0

    Here is the content of /etc/glusterfs/datavol.vol:

        volume datavol-client-0
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs0
        end-volume

        volume datavol-client-1
            type protocol/client
            option transport-type tcp
            option remote-subvolume /data/datavol
            option remote-host gfs1
        end-volume

        volume datavol-replicate-0
            type cluster/replicate
            subvolumes datavol-client-0 datavol-client-1
        end-volume

        volume datavol-dht
            type cluster/distribute
            subvolumes datavol-replicate-0
        end-volume

        volume datavol-write-behind
            type performance/write-behind
            subvolumes datavol-dht
        end-volume

        volume datavol-read-ahead
            type performance/read-ahead
            subvolumes datavol-write-behind
        end-volume

        volume datavol-io-cache
            type performance/io-cache
            subvolumes datavol-read-ahead
        end-volume

        volume datavol-quick-read
            type performance/quick-read
            subvolumes datavol-io-cache
        end-volume

        volume datavol-md-cache
            type performance/md-cache
            subvolumes datavol-quick-read
        end-volume

        volume datavol
            type debug/io-stats
            option count-fop-hits on
            option latency-measurement on
            subvolumes datavol-md-cache
        end-volume

    The config above is the latest attempt at making this behave properly. I have also tried the following entry in /etc/fstab:

        gfs0:/datavol /data glusterfs defaults,backupvolfile-server=gfs1 0 0

    This was the entry for half of the clients, while the other half had:

        gfs1:/datavol /data glusterfs defaults,backupvolfile-server=gfs0 0 0

    The results were exactly the same as the above configuration. Both configs connect everything just fine; they just don't fail over. Any help would be appreciated.
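
    One knob that often matters here (a sketch, not from the post; assumes a GlusterFS version with the gluster CLI and reuses the volume name from the question): clients block on a dead brick for network.ping-timeout seconds (42 by default) before failing over, which can look like the mount going away.

        # check the current setting, then shorten the window during which clients hang
        gluster volume info datavol
        gluster volume set datavol network.ping-timeout 10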

  • Cloning a VM to add a new MySQL Slave

    - by Ben Holness
    I am in the process of adding a new slave to a replicated mysql setup. All of the slave nodes are virtual machines. If I clone one of the nodes to a new VM, then start it with no networking, stop mysql, change the server-id in my.cnf to a new id and then restart mysql and networking, will it all work correctly, or will mysql get confused because it used to be a different server id? OS: Ubuntu 10.10 VM Platform: VMWare 5 MySQL : Server version: 5.1.49-1ubuntu8.1-log (Ubuntu)
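
    A minimal sketch of the post-clone steps under those assumptions (Debian-style paths; the new server-id value is a placeholder - the key point is that it must be unique in the topology, while the copied master.info preserves the clone's replication position):

        sudo service mysql stop
        # give the clone its own identity before it talks to the master
        sudo sed -i 's/^server-id.*/server-id = 7/' /etc/mysql/my.cnf
        sudo service mysql start
        mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running'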

  • Replicate portion of an LDAP directory to external server

    - by colemanm
    We're in the process of setting up a Jabber server on Amazon EC2 right now, and we'd like to have our internal users authenticate via LDAP so we don't have to create/manage a separate set of user accounts than the master directory in the office. My question is: is there a way to copy, unidirectionally, a segment of our internal LDAP directory (the user accounts OU) to an external LDAP server and authenticate Jabber against that? We're trying to work around having our externally hosted machines out in the cloud accessing our internal network directly... If we can replicate in one direction only a subset of the user accounts, then if that gets compromised we don't necessarily have a critical security breach into our internal network.
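
    OpenLDAP can do exactly this one-way subset copy with syncrepl on the EC2 consumer; a sketch for slapd.conf (suffix, DNs, and host are placeholders, and the provider must grant the replica account read access to just that OU):

        syncrepl rid=001
                 provider=ldap://ldap.internal.example.com
                 type=refreshOnly
                 interval=00:00:10:00
                 searchbase="ou=Users,dc=example,dc=com"
                 scope=sub
                 bindmethod=simple
                 binddn="cn=replicator,dc=example,dc=com"
                 credentials=secret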

  • Sharing / replicating EBS across AWS nodes

    - by skrat
    I would like to use a single EBS store across multiple EC2 nodes (web/app servers). I've read some articles on snapshot sharing, but that doesn't suit what we need. We use the filesystem for storing DB record attachments, so if one such attachment gets created, we need it to be immediately available to all nodes (to serve). So far only NFS seems viable, but it's a pain to configure and maintain. Another option could be storing those attachments on S3 instead, but that would cut us off from doing any analysis of that data. This must be quite a common problem when scaling in AWS; what solutions are there?
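
    If S3 wins out, the write path can stay trivially simple while a local copy keeps analysis jobs possible; a sketch with s3cmd (bucket and paths are placeholders):

        # on the node that receives the upload
        s3cmd put /data/attachments/12345.pdf s3://example-attachments/12345.pdf
        # any other node can fetch it on demand
        s3cmd get s3://example-attachments/12345.pdf /data/attachments/12345.pdf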

  • What are the typical methods used to scale up/out email storage servers?

    - by nareshov
    What I've tried: I have two email storage architectures, old and new.
    Old: courier-imapds on several (18+) 1TB-storage servers. If one of them shows signs of running out of disk space, we migrate a few email accounts to another server. The servers have no replicas, and no backups either.
    New: dovecot2 on a single huge server with 16TB (SATA) storage and a few SSDs. We store fresh mail on the SSDs and run a doveadm purge to move mail older than a day to the SATA disks. There is an identical server which has a max-15-minute-old rsync backup from the primary server. Higher-ups/management wanted to pack as much storage as possible into each server in order to minimise the cost of SSDs per server. The rsync'ing is done because GlusterFS wasn't replicating well under that kind of high small/random IO. Scaling out was expected to be done by provisioning another pair of such huge servers. On facing disk-crunch issues like in the old architecture, email accounts would be moved manually.
    Concerns/doubts: I'm not convinced that the synchronously-replicated filesystem idea works well for heavy random/small IO. GlusterFS isn't working for us yet, and I'm not sure if there's another filesystem out there for this use case. The idea was to keep identical pairs and use DNS round-robin for email delivery and IMAP/POP3 access. If one of the servers went down for whatever reason (planned/unplanned), we'd move the IP to the other server in the pair. In filesystems like Lustre, I get the advantage of a single namespace, whereby I do not have to worry about manually migrating accounts around and updating MAILHOME paths and other metadata/data.
    Questions: What are the typical methods used to scale up/out with the traditional software (courier-imapd / dovecot)? Does traditional software that stores on a locally mounted filesystem pose a roadblock to scaling out with minimal "problems"? Does one have to rewrite (parts of) it to work with object storage of some sort, such as OpenStack Object Storage?
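
    For the record, the SSD-to-SATA move described above maps onto dovecot's ALT storage; a sketch (username and paths are placeholders, and this assumes mdbox/sdbox mailboxes):

        # in dovecot.conf: primary storage on SSD, ALT path on SATA, e.g.
        #   mail_location = mdbox:/ssd/mail/%u:ALT=/sata/mail/%u
        doveadm altmove -u someuser savedbefore 1d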

  • Looking for a solid redirection infrastructure

    - by isoman
    We have critical servers (web servers and databases) that are fully replicated, except for the reverse proxy that we use to hide the internal stuff. This proxy acts like a router that filters and redirects traffic to the main server, and switches to the failover if the main one is down. We want to find an alternative to this proxy, because a single entry point is not enough. Is there any company with a solid and redundant infrastructure that offers redirection to an IP and allows quick switching to another one?

  • Git push from post-receive

    - by meka
    I have two servers; let's call them first and second. The first one is where the real development is done, and the second one should be the replica. What I would like to do is put "git push" in the post-receive hook, but there is one problem: post-receive is executed as the user doing the git push to the first server, so I can't just chmod 600 a passphrase-less SSH key for a single user. What is the best practice for this? Thanks!
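
    One common pattern (a sketch; hostname, paths, and key are placeholders) is to have every developer push as a single shared git user, so the hook can safely use that one user's passphrase-less deploy key:

        #!/bin/sh
        # hooks/post-receive on the first server
        # GIT_SSH_COMMAND needs a reasonably recent git; older versions can
        # point GIT_SSH at a one-line wrapper script instead
        export GIT_SSH_COMMAND="ssh -i /home/git/.ssh/mirror_key"
        git push --mirror git@second.example.com:/srv/repo.git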

  • Pushing image changes to multiple servers

    - by gms8994
    I need the ability to push images out to multiple servers whenever they're updated. I've looked at network filesystems, but they're all but worthless due to their speed. Images can be uploaded to any one of 3 servers, and would then need to be copied to the other 2. Any suggestions? I'm open to trying just about anything. EDIT: Graphics data (jpg, gif, png, etc.), Linux only. We're currently using rsync, but having it work back and forth is getting cumbersome. It's all on the local network.
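
    Since rsync is already in place, one incremental step is triggering it from inotify so uploads propagate as they land; a rough sketch assuming inotify-tools is installed (paths and hostnames are placeholders; in practice you would also exclude rsync's own temp files so the three servers don't echo pushes back at each other):

        inotifywait -m -r -e close_write --format '%w%f' /var/www/images |
        while read f; do
            for host in web2 web3; do
                rsync -az "$f" "$host:$f"   # mirror the new file to the peers
            done
        done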

  • How can I sync Access databases and keep them up-to-date?

    - by user327472
    I have an Access database on my server. We split it up, and we use the front-end database to search data and add new records or reports on the local computers. If we update or add a new record, it is written to the back-end database. I want to use this database in another building with other servers, and those servers have no direct connection. How can I sync both back-end databases to keep the data up to date? These details may be useful: it's a large amount of data - about 25,750 client records. I guess there are more than 25 tables, at about 80 MB.

  • Move MySQL master

    - by Noodles
    I currently have a master DB server (let's call it db1) and 6 slaves (slave1-6). I've set up a new server (db2) as a slave of db1, and it's in sync. I want to change all the slaves to use db2 instead of db1, but with minimal downtime/data loss. At the moment the only way I can think of doing it is: shut down our website (so data stops being written to db1), wait until all the slaves are up to date, flush logs on db1 and shut it down, reset master on db2, and change all slaves to point to db2 with log position = 0. Is this the right way to do it, or is there a way to do it without taking the site offline?
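
    The repointing step itself is just a CHANGE MASTER on each slave once everything has caught up; a sketch (credentials are placeholders, and position 4 is the start of a binlog that was just reset on db2):

        STOP SLAVE;
        CHANGE MASTER TO
          MASTER_HOST = 'db2',
          MASTER_USER = 'repl',
          MASTER_PASSWORD = '...',
          MASTER_LOG_FILE = 'mysql-bin.000001',
          MASTER_LOG_POS = 4;
        START SLAVE;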

  • How do I sync a subset of tables between two databases on the same MySQL server?

    - by Mike
    I would like to be able to sync a subset of tables between two MySQL databases that are running on the same server. One of the databases acts as the master, where inserts, updates, and deletes can be made. The second database uses those same tables for read-only operations. I do not want to use federated tables to achieve this. The long-term goal is to separate the 2 databases onto multiple servers. The second database, which has the subset of tables as read-only, may also be replicated a few times over and distributed geographically for load and performance purposes, each with unique data.... Once that is achieved, I plan to use the binlog to replicate those specific tables on the secondary databases. In the meantime, I'd like to keep these tables in sync. Is there a more elegant way to do this than using a cronjob and mysqldump?
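
    The cronjob-and-mysqldump baseline the post mentions can at least be a one-liner; a sketch with placeholder names (--single-transaction keeps the dump consistent for InnoDB without locking the master tables):

        mysqldump --single-transaction master_db table1 table2 | mysql readonly_db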

  • StoreGeneratedPattern T4 EntityFramework concern

    - by LoganWolfer
    Hi everyone, here's the situation: I use SQL Server 2008 R2, SQL replication, Visual Studio 2010, Entity Framework 4, and C# 4. The course of action from our DBA is to use a rowguid column for SQL replication to work with our setup, and every one of these columns needs its StoreGeneratedPattern property set to Computed.
    The problem: every time the T4 template regenerates our EDMX (ADO.NET Entity Data Model) file (for example, when we update it from our database), I need to go into the EDMX XML file manually and add this property to every one of them. It has to go from this:

        <Property Name="rowguid" Type="uniqueidentifier" Nullable="false" />

    to this:

        <Property Name="rowguid" Type="uniqueidentifier" Nullable="false" StoreGeneratedPattern="Computed" />

    The solution: I'm trying to find a way to customize an ADO.NET EntityObject Generator T4 file so that it generates StoreGeneratedPattern="Computed" for every rowguid I have. I'm fairly new to T4; I have only customized AddView and AddController T4 templates for ASP.NET MVC 2, like List.tt for example. I've looked through the EF T4 file, and I can't seem to find where in this monster I could do that (and how). My best guess is somewhere in this part of the file, lines 544 to 618 of the original ADO.NET EntityObject Generator T4 file:

        ////////
        //////// Write PrimitiveType Properties.
        ////////
        private void WritePrimitiveTypeProperty(EdmProperty primitiveProperty, CodeGenerationTools code)
        {
            MetadataTools ef = new MetadataTools(this);
        #>

            /// <summary>
            /// <#=SummaryComment(primitiveProperty)#>
            /// </summary><#=LongDescriptionCommentElement(primitiveProperty, 1)#>
            [EdmScalarPropertyAttribute(EntityKeyProperty=<#=code.CreateLiteral(ef.IsKey(primitiveProperty))#>, IsNullable=<#=code.CreateLiteral(ef.IsNullable(primitiveProperty))#>)]
            [DataMemberAttribute()]
            <#=code.SpaceAfter(NewModifier(primitiveProperty))#><#=Accessibility.ForProperty(primitiveProperty)#> <#=code.Escape(primitiveProperty.TypeUsage)#> <#=code.Escape(primitiveProperty)#>
            {
                <#=code.SpaceAfter(Accessibility.ForGetter(primitiveProperty))#>get
                {
        <#+
                    if (ef.ClrType(primitiveProperty.TypeUsage) == typeof(byte[]))
                    {
        #>
                    return StructuralObject.GetValidValue(<#=code.FieldName(primitiveProperty)#>);
        <#+
                    }
                    else
                    {
        #>
                    return <#=code.FieldName(primitiveProperty)#>;
        <#+
                    }
        #>
                }
                <#=code.SpaceAfter(Accessibility.ForSetter((primitiveProperty)))#>set
                {
        <#+
                    if (ef.IsKey(primitiveProperty))
                    {
                        if (ef.ClrType(primitiveProperty.TypeUsage) == typeof(byte[]))
                        {
        #>
                    if (!StructuralObject.BinaryEquals(<#=code.FieldName(primitiveProperty)#>, value))
        <#+
                        }
                        else
                        {
        #>
                    if (<#=code.FieldName(primitiveProperty)#> != value)
        <#+
                        }
        #>
                    {
        <#+
                        PushIndent(CodeRegion.GetIndent(1));
                    }
        #>
                    <#=ChangingMethodName(primitiveProperty)#>(value);
                    ReportPropertyChanging("<#=primitiveProperty.Name#>");
                    <#=code.FieldName(primitiveProperty)#> = StructuralObject.SetValidValue(value<#=OptionalNullableParameterForSetValidValue(primitiveProperty, code)#>);
                    ReportPropertyChanged("<#=primitiveProperty.Name#>");
                    <#=ChangedMethodName(primitiveProperty)#>();
        <#+
                    if (ef.IsKey(primitiveProperty))
                    {
                        PopIndent();
        #>
                    }
        <#+
                    }
        #>
                }
            }
            private <#=code.Escape(primitiveProperty.TypeUsage)#> <#=code.FieldName(primitiveProperty)#><#=code.StringBefore(" = ", code.CreateLiteral(primitiveProperty.DefaultValue))#>;
            partial void <#=ChangingMethodName(primitiveProperty)#>(<#=code.Escape(primitiveProperty.TypeUsage)#> value);
            partial void <#=ChangedMethodName(primitiveProperty)#>();
        <#+
        }

    Any help would be appreciated. Thanks in advance.
    EDIT: I haven't found an answer to this problem yet; if anyone has ideas on how to automate this, it would really be appreciated.

  • JBossCacheService: "exception occurred in cache put" error after changing cache mode to REPL_SYNC

    - by logoin
    We have a horizontal cluster set up on JBoss 4.2. Session replication worked fine until we changed the cache mode from REPL_ASYNC to REPL_SYNC to fix an issue. We then started to see warnings for some session failovers:

        [org.jboss.web.tomcat.service.session.InstantSnapshotManager.ROOT] Failed to replicate session
        java.lang.RuntimeException
        bc [local7.warning] JBossCacheService: exception occurred in cache put
        ...
        org.jboss.web.tomcat.service.session.JBossCacheWrapper.put(JBossCacheWrapper.java:147)
        org.jboss.web.tomcat.service.session.JBossCacheService.putSession(JBossCacheService.java:315)
        org.jboss.web.tomcat.service.session.JBossCacheClusteredSession.processSessionRepl(JBossCacheClusteredSession.java:125)

    Does anyone have any idea why this happens and how to fix it if we want to keep using REPL_SYNC? Any help is appreciated. Thanks!
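
    One knob worth checking (a sketch; the attribute lives in the session cache config, e.g. tc5-cluster-service.xml in JBoss 4.x, and the value is a placeholder): with REPL_SYNC each put blocks until all peers acknowledge, so a tight SyncReplTimeout turns slow peers into exactly this kind of cache-put exception.

        <!-- give synchronous replication more headroom before a put fails -->
        <attribute name="CacheMode">REPL_SYNC</attribute>
        <attribute name="SyncReplTimeout">20000</attribute>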
