Search Results

Search found 3179 results on 128 pages for 'merge replication'.

Page 54/128 | < Previous Page | 50 51 52 53 54 55 56 57 58 59 60 61  | Next Page >

  • Get Trained in Storage

    - by mseika
    Oracle University has scheduled the following OPN-only storage course:
    Course: Pillar Axiom 600 Install and MaxRep Replication
    Dates: 14-18 Jul 2014 and 27-31 Oct 2014
    Location: Edinburgh
    You will gain the knowledge and skills necessary to install, administer, configure, and maintain Pillar Axiom 600 SAN Storage and Pillar Axiom MaxRep Replication. More details and online registration. Remember: your OPN discount will be applied to the standard prices shown on Oracle University web pages. For assistance in booking and more information, contact the Oracle University Service Desk: eMail: [email protected], Telephone: 01 189 249 066

    Read the article

  • Git branching / rebasing good practices

    - by Pawel Krupinski
    I have the following scenario with 3 branches:
    - Master
    - MyBranch, branched off Master for the purpose of developing a new feature of the system
    - MyBranchLocal, branched off MyBranch as my local copy of that branch
    MyBranch is being rebased against and pushed to by other developers (who are working on the same feature as I am). As the owner of MyBranch I want to keep it in sync with Master by rebasing. I also need to merge the changes I make on MyBranchLocal into MyBranch. What is a good way to do that? A couple of possible scenarios I have tried so far:
    I.   1. Commit change to MyBranchLocal  2. Rebase MyBranch against Master  3. Rebase MyBranchLocal against MyBranch  4. Merge MyBranch with MyBranchLocal
    II.  1. Commit change to MyBranchLocal  2. Merge MyBranch with MyBranchLocal  3. Rebase MyBranch against Master  4. Rebase MyBranchLocal against MyBranch
    III. 1. Commit change to MyBranchLocal  2. Rebase MyBranch against Master  3. Merge MyBranch with MyBranchLocal  4. Rebase MyBranchLocal against MyBranch
    I already know that scenario III seems to mess up the commit history a lot, potentially duplicating commits. What is your experience? Which scenario do you recommend?
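    For concreteness, here is scenario I written out as plain git commands (just a sketch using the branch names above; the commit message is made up):
    git checkout MyBranchLocal
    git commit -am "local change"      # 1. commit change to MyBranchLocal
    git checkout MyBranch
    git rebase Master                  # 2. rebase MyBranch against Master
    git checkout MyBranchLocal
    git rebase MyBranch                # 3. rebase MyBranchLocal against MyBranch
    git checkout MyBranch
    git merge MyBranchLocal            # 4. merge MyBranch with MyBranchLocal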

    Read the article

  • Setting up a project for Eclipse using Maven

    - by egaga
    Hi, I'm trying to start modifying an existing application with Eclipse. Actually I had it working before, but I deleted the project, and now with "mvn eclipse:eclipse" I get the following:
    [INFO] Resource directory's path matches an existing source directory. Resources will be merged with the source directory src/main/resources
    [INFO] ------------------------------------------------------------------------
    [ERROR] BUILD ERROR
    [INFO] ------------------------------------------------------------------------
    [INFO] Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
    [INFO] ------------------------------------------------------------------------
    [INFO] Trace
    org.apache.maven.lifecycle.LifecycleExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:583)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:512)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:482)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:330)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:291)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:142)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:336)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:129)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:287)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
        at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
        at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
        at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
    Caused by: org.apache.maven.plugin.MojoExecutionException: Request to merge when 'filtering' is not identical. Original=resource src/main/resources: output=target/classes, include=[atlassian-plugin.xml], exclude=[**/*.java], test=false, filtering=true, merging with=resource src/main/resources: output=target/classes, include=[], exclude=[atlassian-plugin.xml|**/*.java], test=false, filtering=false
        at org.apache.maven.plugin.eclipse.EclipseSourceDir.merge(EclipseSourceDir.java:302)
        at org.apache.maven.plugin.eclipse.EclipsePlugin.extractResourceDirs(EclipsePlugin.java:1605)
        at org.apache.maven.plugin.eclipse.EclipsePlugin.buildDirectoryList(EclipsePlugin.java:1490)
        at org.apache.maven.plugin.eclipse.EclipsePlugin.createEclipseWriterConfig(EclipsePlugin.java:1180)
        at org.apache.maven.plugin.eclipse.EclipsePlugin.writeConfiguration(EclipsePlugin.java:1043)
        at org.apache.maven.plugin.ide.AbstractIdeSupportMojo.execute(AbstractIdeSupportMojo.java:511)
        at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:451)
        at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:558)
        ... 16 more

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows:
    merge :: Tree a -> Tree a -> Tree a    --// O(log n) where n is size of input trees
    singleton :: a -> Tree a               --// O(1)
    empty :: Tree a                        --// O(1)
    listToTree :: [a] -> Tree a            --// Supposedly O(n)
    listToTree = listToTreeR . (map singleton)
    listToTreeR :: [Tree a] -> Tree a
    listToTreeR [] = empty
    listToTreeR (x:[]) = x
    listToTreeR xs = listToTreeR (mergePairs xs)
    mergePairs :: [Tree a] -> [Tree a]
    mergePairs [] = []
    mergePairs (x:[]) = [x]
    mergePairs (x:y:xs) = merge x y : mergePairs xs
    This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs is dependent on the length of the list and the sizes of the trees. The length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step and h=1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)). I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
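    In other words (a sketch, ignoring floors and ceilings), the bound I am trying to show is
    \sum_{h=1}^{\log n} (2^h - 1)\,\log\!\left(\frac{n}{2^h}\right) = O(n)
    but I can't see how to get there.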

    Read the article

  • How to use void*

    - by Rondogiannis Aristophanes
    I am implementing a simple merge function and I have got stuck, as the compiler gives me errors that I cannot explain. Here is my merge function:
    void merge(void *a, int beg, int middle, int end, int (*cmp)(const void*, const void*)) {
        std::stack<void*> first;
        std::stack<void*> second;
        for(int i = beg; i < middle; i++) {
            first.push(a+i);
        }
        for(int i = middle; i < end; i++) {
            second.push(a+i);
        }
        for(int i = beg; i < end; i++) {
            if(first.empty()) {
                void *tmp = second.top();
                second.pop();
                a+i = tmp;
            } else if(second.empty()) {
                void *tmp = first.top();
                first.pop();
                a+i = tmp;
            } else if(cmp(first.top(), second.top())) {
                void *tmp = first.top();
                first.pop();
                a+i = tmp;
            } else {
                void *tmp = second.top();
                second.pop();
                a+i = tmp;
            }
        }
    }
    And here is the error:
    sort.h: In function `void merge(void*, int, int, int, int (*)(const void*, const void*))':
    sort.h:9: error: pointer of type `void *' used in arithmetic
    sort.h:12: error: pointer of type `void *' used in arithmetic
    sort.h:19: error: pointer of type `void *' used in arithmetic
    sort.h:19: error: non-lvalue in assignment
    sort.h:23: error: pointer of type `void *' used in arithmetic
    sort.h:23: error: non-lvalue in assignment
    sort.h:27: error: pointer of type `void *' used in arithmetic
    sort.h:27: error: non-lvalue in assignment
    sort.h:31: error: pointer of type `void *' used in arithmetic
    sort.h:31: error: non-lvalue in assignment
    Can anyone help me? TIA.
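    For context, the usual qsort-style workaround I have seen (a sketch, not my code; the element size parameter and the helper name are made up) is to pass the element size explicitly and do the index arithmetic on a char* view, copying elements with memcpy instead of assigning through a void*:
    #include <cstring>   // std::memcpy
    #include <cstddef>   // std::size_t
    // Hypothetical helper: address of element i in an array of `size`-byte elements.
    inline void* elem(void* base, std::size_t size, int i) {
        return static_cast<char*>(base) + static_cast<std::size_t>(i) * size;  // arithmetic on char*, not void*
    }
    // Then, instead of `a+i = tmp;`, one would write:
    // std::memcpy(elem(a, size, i), tmp, size);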

    Read the article

  • git rebase onto remote updates

    - by Blake Chambers
    I work with a small team that uses git for source code management. Recently we have been using topic branches to keep track of features, then merging them into master locally and pushing them to a central git repository on a remote server. This works great when no changes have been made in master: I create my topic branch, commit it, merge it into master, then push. Hooray. However, if someone has pushed to origin before I do, my commits are not fast-forward, so a merge commit ensues. This also happens when a topic branch needs to merge with master locally to ensure my changes work with the code as of now. So we end up with merge commits everywhere and a git log rivaling a friendship bracelet. So, rebasing is the obvious choice. What I would like is to:
    - create topic branches holding several commits
    - checkout master and pull (fast-forward because I haven't committed to master)
    - rebase topic branches onto the new head of master
    - rebase topics against master (so the topics start at master's head), bringing master up to my topic head
    My current way of doing this is listed below:
    git checkout master
    git rebase master topic_1
    git rebase topic_1 topic_2
    git checkout master
    git rebase topic_2
    git branch -d topic_1 topic_2
    Is there a faster way to do this?

    Read the article

  • Query crashes MS Access

    - by user284651
    THE TASK: I am in the process of migrating a DB from MS Access to Maximizer. In order to do this I must take 64 tables in MS Access and merge them into one. The output must be in the form of a TAB or CSV file, which will then be imported into Maximizer. THE PROBLEM: It seems Access is unable to perform a query this complex, as it crashes any time I run the query. ALTERNATIVES: I have thought about a few alternatives, and would like to do the least time-consuming one, while also taking advantage of any opportunities to learn something new:
    1. Export each table to CSV, import the CSVs into SQLite, and then write a query there to do what Access fails to do (merge 64 tables).
    2. Export each table to CSV and write a script that reads each one and merges the CSVs into a single CSV.
    3. Somehow connect to the MS Access DB (via an API) and write a script to pull data from each table and merge it into a CSV file.
    QUESTION: What do you recommend?

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are so many records that my insert/update is taking too long and my session is getting killed by the governor. Here's pseudocode of my update process:
    sqlsel := 'SELECT col1, col2, col3, sysdate
               FROM table2@remote_location dpi
               WHERE (col1, col2, col3) IN
                 ( SELECT col1, col2, col3 FROM table2@remote_location
                   MINUS
                   SELECT DISTINCT col1, col2, col3 FROM table1 mpc
                   WHERE facility = '''||load_facility||''' )';
    EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;
    I've tried the MERGE statement:
    MERGE INTO table1 t1
    USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2
    ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
    WHEN NOT MATCHED THEN
      INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
      VALUES (t2.col1, t2.col2, t2.col3, sysdate)
    But there seems to be a confirmed bug on versions prior to Oracle 10.2.0.4 in the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it in another way so it performs better? Thanks.
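    One workaround I am considering (just a sketch; the staging table name and column types are made up) is to copy the remote rows into a local staging table first, so the MERGE itself runs entirely against local tables:
    CREATE GLOBAL TEMPORARY TABLE table2_stage
      ( col1 VARCHAR2(50), col2 VARCHAR2(50), col3 VARCHAR2(50) )
      ON COMMIT PRESERVE ROWS;
    INSERT INTO table2_stage (col1, col2, col3)
      SELECT col1, col2, col3 FROM table2@remote_location;
    MERGE INTO table1 t1
    USING table2_stage t2
    ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
    WHEN NOT MATCHED THEN
      INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
      VALUES (t2.col1, t2.col2, t2.col3, sysdate);
    Whether that is actually faster than the IN/MINUS query is something I would still have to measure.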

    Read the article

  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***
    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the necessary assemblies they need, and if we for example do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (Download from here). It can be used to merge several assemblies into one assembly that contains all types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:
    "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll
    This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from ClassLibrary1.dll, using the /attr switch. For more info on the ILMerge command line tool, see the above link. This approach works, but requires a little bit too much knowledge for the developers creating builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.
    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.
    /// <summary>
    /// Activity for merging a list of assemblies into one, using ILMerge
    /// </summary>
    public sealed class ILMergeActivity : BaseCodeActivity
    {
        /// <summary>
        /// A list of file paths to the assemblies that should be merged
        /// </summary>
        [RequiredArgument]
        public InArgument<IEnumerable<string>> InputAssemblies { get; set; }
        /// <summary>
        /// Full path to the generated assembly
        /// </summary>
        [RequiredArgument]
        public InArgument<string> OutputFile { get; set; }
        /// <summary>
        /// Which input assembly the attributes for the generated assembly should be copied from.
        /// Optional. If not specified, the first input assembly will be used
        /// </summary>
        public InArgument<string> AttributeFile { get; set; }
        /// <summary>
        /// Kind of assembly to generate, dll or exe
        /// </summary>
        public InArgument<TargetKindEnum> TargetKind { get; set; }
        // If your activity returns a value, derive from CodeActivity<TResult>
        // and return the value from the Execute method.
        protected override void Execute(CodeActivityContext context)
        {
            string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
            TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));
            ILMerge m = new ILMerge();
            m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
            m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
            m.OutputFile = OutputFile.Get(context);
            m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context)) ? AttributeFile.Get(context) : InputAssemblies.Get(context).First();
            m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory());
            m.Merge();
            TrackMessage(context, "Generated " + m.OutputFile);
        }
    }
    [Browsable(true)]
    public enum TargetKindEnum
    {
        Dll,
        Exe
    }
    NB: The activity inherits from a BaseCodeActivity class, which is an internal helper class that contains some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard… ) The custom activity has the following input arguments:
    - InputAssemblies: A list with the (full) paths to the assemblies to merge
    - OutputFile: The name of the resulting merged assembly
    - AttributeFile: Which assembly to use as the template for the attributes of the merged assembly. This argument is optional and if left blank, the first assembly in the input list is used
    - TargetKind: Decides what type of assembly to create, either a dll or an exe
    Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has the following custom process parameters (screenshot in the original post). The Assemblies To Merge argument is passed into a FindMatchingFiles activity to locate all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. The complete sequence of activities that performs the merge operation is located at the end of the Try, Compile, Test and Associate… sequence (screenshot in the original post): it splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges the two assemblies into a new one, and the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies (screenshots in the original post).

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to backup using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support the replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to backup to an appliance located in a secure office far away from the server room. Offsite backups to external hard drive could be managed from the appliance. The benefits of such a setup would be: 1) very unlikely that fire or random theft would affect both server-room backup and "remote" backup appliance 2) offsite backups could be managed by multiple trusted people without granting access to server room 3) Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history. I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists? Preferably a tabletop, low to medium cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • Virtual Fileserver

    - by Sergei
    Hi, we are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter is using HP kit and an EVA 4400 as SAN, we cannot have the Celerra there, as EMC support for Celerra does not work with a non-EMC array. I have searched for possible options, and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me. We are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources. The other option would be to have a virtual fileserver as part of the vmware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had a bad experience with Windows servers used as fileservers (memory leaks, for one thing) in the past, and this was one of the reasons we moved to the Celerra. What are the other options? We need something as reliable as the Celerra with as many options as possible. For example, Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU, and replication. Also, we would need to replicate the NAS data to the failover site. We could use block-level replication, SAN-to-SAN, but this would mean wasted bandwidth, as we need only a subset of the folders to be replicated. We used CA XSoft for Windows servers in the past, and Celerra has an option for Celerra replication. Thank you very much in advance. Please ask me if I missed any details!

    Read the article

  • What is the fastest way to clone an INNODB table within the same server?

    - by Vic
    Our development server is a replication slave of our production server. We have a script that developers use if they want to run their applications/bug fixes against fresh data. That script looks like this:
    dbs=( analytics auth logs users )
    server=localhost
    conn="-h ${server} -u ${username} --password=${password}"
    # Stop the replication client so we don't encounter weird data.
    echo "STOP SLAVE" | mysql ${conn}
    # Bunch of bulk insert optimizations
    echo "SET autocommit=0" | mysql ${conn}
    echo "SET unique_checks=0" | mysql ${conn}
    echo "SET foreign_key_checks=0" | mysql ${conn}
    # Restore all databases and tables.
    for sourcedb in ${dbs[*]}
    do
        destdb=${prefix}${sourcedb}
        echo "Dropping database ${destdb}..."
        echo "DROP DATABASE IF EXISTS ${destdb}" | mysql ${conn}
        echo "CREATE DATABASE ${destdb}" | mysql ${conn}
        # First, all the tables.
        for table in `echo "SHOW FULL TABLES WHERE Table_type <> 'VIEW'" | mysql $conn $sourcedb | tail -n +2`; do
            if [[ "${table}" != 'BASE' && "${table}" != 'TABLE' && "${table}" != 'VIEW' ]] ; then
                createTable=`echo "SHOW CREATE TABLE ${table}"|mysql -B -r $conn $sourcedb|tail -n +2|cut -f 2-`
                echo "Restoring ${destdb}/${table}..."
                echo "$createTable ;" | mysql $conn $destdb
                insertData="INSERT INTO ${destdb}.${table} SELECT * FROM ${sourcedb}.${table}"
                echo "$insertData" | mysql $conn $destdb
            fi
        done
    done
    echo "SET foreign_key_checks=1" | mysql ${conn}
    echo "SET unique_checks=1" | mysql ${conn}
    echo "COMMIT" | mysql ${conn}
    # Restart the replication client
    echo "START SLAVE" | mysql ${conn}
    All of these operations are, as I mentioned, within the same server. Is there a faster way to clone the tables that I'm not seeing? They're all INNODB tables. Thanks!

    Read the article

  • PHP on several servers with session-sharing

    - by Etu
    There are certainly other threads about this, but I have one more question. We are about to scale the website at work to have more than one server, and we need to share the sessions between the servers. We have been looking into different solutions; one is memcached, using Memcached as the session handler in PHP. That will probably work. The idea would be to run memcached on every machine and let all webservers access all the other servers' memcached instances, and then we have shared sessions between the machines, yay. (We have no resources to set up sticky sessions yet, that's a later project. We need this running, and we need this running now, and we will load balance with DNS for a starter.) But then... if I want to take one server down, say for maintenance, or a server crashes, or whatever reason, I don't want the users to just lose their sessions and have to start from the beginning... That's why we need some kind of replication, which Memcached does not support. Then I found repcached (http://repcached.lab.klab.org/), which has multi-master replication of memcached, which is great and is what I want. But does it work with 2 machines? Say 3, 5, 10? For future scaling. I also looked into redis (http://redis.io/), which also seems great, but is a bit more "shaky" with the PHP session handler support, and has no multi-master replication. The thing is that I like to use memcached, but I want to be able to power down one of two boxes without losing half of the sessions. Any suggestions?
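    For the record, pointing PHP's sessions at memcached is mostly configuration (a sketch assuming the pecl memcached extension, not the older memcache one; the hostnames are placeholders):
    ; php.ini
    session.save_handler = memcached
    session.save_path    = "web1:11211,web2:11211"
    The open question for us is what happens to the keys stored on web1 when that box goes away, which is exactly why we are looking at repcached or similar.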

    Read the article

  • Create DFS replica from a NAS drive

    - by Mark
    We have two offices, at two different locations. In one we have a NAS with some shares. We also have a Domain Controller running Windows 2003 R2. We have set up a second Domain Controller, also on Windows 2003 R2, to put in the second office. What we would also like is to replicate the NAS drive onto the second Domain Controller so that the second office has a local copy, and their changes are replicated back to the NAS. Is there a way to set up DFS replication to do this? Or will it only work with local folders on each server? Update 1 Sept: Based on the answer below, I think I need to add some clarification. The real issue is that the NAS which hosts the shared folder we want to replicate is external to both servers, and we have a particular share mapped to, say, S:. The replication setup doesn't seem to accept network shares external to the server as candidates for replication. I can understand why; I just need confirmation that DFSR will only work with block devices that are local on at least one server. Is this the case?

    Read the article

  • How to upgrade a 1.4.3 TortoiseSVN-created repository to 1.6.x?

    - by SiegeX
    A few years ago, TortoiseSVN 1.4.3 was deployed to our software development team, and we are now looking at upgrading the client to the latest 1.6.x version. I had hoped this upgrade would be transparent, with the additional features and modifications being client-side. For the most part this was true, except for a very important feature -- merging. When I try to merge a feature branch back into trunk I get a show-stopping "Merge tracking not supported" error. Here are some facts worth noting:
    - When the repo was first created (before I was on board), it was created via the TortoiseSVN client itself.
    - We do not have an 'svn server daemon' per se; rather, the repository folders/database reside on a share folder that is accessible from our workstation machines via file:///. This was actually an eye opener for me; I had always thought there was some SVN server daemon we were talking to.
    - We do not have any access to the underlying machine hosting the SVN share other than the ability to read/write to the share itself. I don't even know what OS the machine is running on. This share server was chosen because its drives are backed up nightly by our IT group.
    - In all honesty, we really don't need the merge tracking feature, although it would be nice to have. For the time being it would be sufficient to be able to use a 1.6.x TortoiseSVN client on the 1.4.3 repository and have it merge (sans tracking) without error.
    So now the question becomes: how does one upgrade a client-created 1.4.3 repo to a 1.6.x-compatible version without access to the underlying machine the repo resides on? I was hoping the TortoiseSVN client itself had the ability to do this, but that does not appear to be the case. Will I be forced to copy the entire repo over to my local drive, run some svn commands to upgrade the repo locally, then copy the repo back to the share point? If so, will doing this break any compatibility with the 1.4.3 clients in case we can't upgrade them all at the same time? Thanks for the help.
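    If copying the repository around is the only way, I assume the procedure would look roughly like this (a sketch; the paths are made up, and it assumes a 1.5-or-newer svnadmin and that nobody commits while the copy is out):
    rem copy the repository from the share to a local folder
    xcopy /E /I \\fileserver\svn\repo C:\temp\repo
    rem upgrade the repository filesystem format in place
    svnadmin upgrade C:\temp\repo
    rem copy the upgraded repository back to the share
    xcopy /E /I C:\temp\repo \\fileserver\svn\repo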

    Read the article

  • Command line raw image processing tools in Linux?

    - by ???
    I'm wondering if there is any command to process raw images, for example:
    cat raw1.img | raw2jpg -w 640 -h 480 -pitch 1024 -pixelformat R8G8B8
    and more examples:
    cat raw1.img raw2.img >y-merge.img
    tr='transpose -pitch 1024 -depth 24'
    cat <(cat raw1.img | $tr) <(cat raw2.img | $tr) | transpose -pitch 480 >x-merge.img
    and something like this:
    cat gamebitmap.dat | ( w=`readint32` h=`readint32` raw2png -w $w -h $h -depth 24 -pixelformat R8G8B8 ) | png2svg -extractoutline -fuzzy -error 8 -smooth
    Seems a little tricky, but is it possible? Does ImageMagick support such raw formats?
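    For comparison, ImageMagick's convert can at least read headerless RGB data if the geometry and depth are given explicitly, something like this (a sketch; the size values are only examples, and row pitch/padding would still have to be handled separately):
    convert -size 640x480 -depth 8 rgb:raw1.img raw1.jpg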

    Read the article

  • How to convert a CHM file into a single HTML file?

    - by ruslan
    I have tried many different CHM-to-HTML utilities, but I am having a difficult time finding one that is able to produce a single HTML file. I can decompile a CHM file using hh.exe, but I don't know how to easily merge the resulting files into a single HTML file, all while preserving the correct order of pages. Is there a free tool which can do this? If not, how can I merge the HTML files in order?
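    For reference, the decompile step I mean is just this (a sketch; the folder name is a placeholder):
    hh.exe -decompile output_folder myhelp.chm
    which extracts the original HTML pages, images, and the .hhc table of contents into output_folder.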

    Read the article

  • Windows programs to create timeline charts?

    - by justshams
    I would like to create a chart for my source control depicting the trunk and all the branches, with various details like creation date, merge date, created revision, merge revision, close revision, etc. I want it to look like this: (image in the original post). I have looked into an application called SmartDraw, but was unable to get the required kind of output from it. It would be awesome if the data could be generated by reading an Excel file as input. The software would need to run on Windows XP SP3.

    Read the article

  • Sync Framework for Compact Framework

    - by CF_Maintainer
    Has anyone got Sync Framework to work on a mobile device as a sync mechanism in place of RDA or merge replication? If so, could you point me to any available resources? If one were to start a greenfield Compact Framework-based application, what would one use as the sync mechanism (Sync Framework / RDA / merge replication / any other...)? Thanks

    Read the article

  • How to flatten an already filled out PDF form using iTextSharp

    - by andryuha
    I'm using iTextSharp to merge a number of PDF files together into a single file. I'm using the method described in the official iTextSharp tutorials, specifically here, which merges files page by page via PdfWriter and PdfImportedPage. It turns out some of the files I need to merge are filled-out PDF forms, and with this method of merging the form data is lost. I've seen several examples of using PdfStamper to fill out forms and flatten them. What I can't find is a way to flatten an already filled out PDF form and, hopefully, merge it with the other files without saving the flattened version first. Thanks
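    For reference, the flattening part on its own looks something like this with PdfStamper (a sketch; the file names are placeholders):
    // Flatten a filled-in form into a new file using iTextSharp
    PdfReader reader = new PdfReader("filledForm.pdf");
    using (FileStream fs = new FileStream("flattened.pdf", FileMode.Create))
    {
        PdfStamper stamper = new PdfStamper(reader, fs);
        stamper.FormFlattening = true;   // bake the field values into the page content
        stamper.Close();
    }
    reader.Close();
    What I am after is doing this without writing "flattened.pdf" to disk first, ideally straight into the merged document.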

    Read the article

  • .gitconfig error

    - by Tanner
    I edited my .gitconfig file to add support for LabView and it appears that I did something that Git doesn't exactly like. The problem is it (Git) doesn't tell me what it doesn't like. What did I do wrong? The error message doesn't help much either: "fatal: bad config file line 13 in c:/Users/Tanner/.gitconfig"
    [gui]
        recentrepo = C:/Users/Tanner/Desktop/FIRST 2010 Beta/Java/LoganRover
    [user]
        name = Tanner Smith
        email = [email protected]
    [merge "labview"]
        name = LabView 3-Way Merge
        driver = “C:\Program Files\National Instruments\Shared\LabVIEW Merge\LVMerge.exe” “C:\Program Files\National Instruments\LabVIEW 8.6\LabVIEW.exe” %O %B %A %A
        recursive = binary
    And I'm not seeing a line 13, but usually that would mean something is wrong at the end? I don't know, Git is new to me.

    Read the article

  • git strategy to have a set of commits limited to a particular branch

    - by becomingGuru
    I need to merge between dev and master frequently. I also have a commit that I need to apply to dev only, for things to work locally. Earlier I only merged from dev to master, so I had a branch production_changes that contained the "undo commit" of the dev-only commit, and from master I merged this. That used to work fine. Now, each time I merge from dev to master and vice versa, I am having to cherry-pick and apply the same commit again and again :(. Which is UGLY. What strategy can I adopt so that I can seamlessly merge between the 2 branches, yet retain some of the changes on only one of those branches?

    Read the article
