Search Results

Search found 1815 results on 73 pages for 'percona xtradb cluster'.

Page 46/73 | < Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >

  • What alternatives do I have if I want a distributed multi-master database?

    - by Jonas
    I will build a system where I want to reduce single points of failure, and I need a database. Are there any (free) relational database systems that handle multi-master setups well (i.e., where it is easy to add and remove nodes), or is it better to go with a NoSQL database? From what I understand, a key-value store handles this better. What database system do you recommend for a multi-master (cluster) setup?

    Read the article

  • Table clusters in SQLServer

    - by Bruno Martinez
    In Oracle, a table cluster is a group of tables that share common columns and store related data in the same blocks. When tables are clustered, a single data block can contain rows from multiple tables. For example, a block can store rows from both the employees and departments tables rather than from only a single table: http://download.oracle.com/docs/cd/E11882_01/server.112/e10713/tablecls.htm#i25478 Can this be done in SQL Server?

    Read the article

  • Multicore programming: what's necessary to do it?

    - by Casey
    I have a quad-core processor and I would really like to take advantage of all those cores when I'm running quick simulations. The problem is that I'm only familiar with the small Linux cluster we have in the lab, and I'm using Vista at home. What sort of things do I want to look into for multicore programming with C or Java? What is the lingo I want to google? (A starter sketch follows this entry.) Thanks for the help.

    Read the article
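
    For C, the lingo to google is "pthreads" and "OpenMP"; for Java it is "threads", "thread pools", and the java.util.concurrent package. Below is a minimal Java sketch, assuming the simulations are independent of one another; runSimulation() is a hypothetical stand-in for one quick simulation:

        import java.util.*;
        import java.util.concurrent.*;

        public class MulticoreDemo {
            // Placeholder workload; swap in a real simulation here.
            static double runSimulation(long seed) {
                Random r = new Random(seed);
                double acc = 0;
                for (int i = 0; i < 10_000_000; i++) acc += r.nextDouble();
                return acc;
            }

            public static void main(String[] args) throws Exception {
                // One thread per core keeps a quad-core fully busy.
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                List<Callable<Double>> tasks = new ArrayList<>();
                for (long seed = 0; seed < 16; seed++) {
                    final long s = seed;
                    tasks.add(() -> runSimulation(s));
                }
                // invokeAll blocks until every task has finished.
                for (Future<Double> f : pool.invokeAll(tasks))
                    System.out.println(f.get());
                pool.shutdown();
            }
        }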

  • using qsub (sge) with multi-threaded applications

    - by dan12345
    I want to submit a multi-threaded job to the cluster network I'm working with, but the qsub man page is not clear on how this is done. By default, I guess it just sends it off as a normal job regardless of the multi-threading, but this might cause problems, i.e. sending many multi-threaded jobs to the same computer and slowing things down. Does anyone know how to accomplish this? Thanks. The batch server system is SGE.

    Read the article

  • Running OpenMPI on Windows XP

    - by iamweird
    Hi there. I'm trying to build a simple cluster based on Windows XP. I compiled OpenMPI-1.4.2 successfully, and tools like mpicc and ompi_info work too, but I can't get my mpirun working properly. The only output I can see is:

        Z:\>orterun --hostfile z:\hosts.txt -np 2 hostname
        [host0:04728] Failed to initialize COM library. Error code = -2147417850
        [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2\orte\mca\ess\hnp\ess_hnp_module.c at line 218
        --------------------------------------------------------------------------
        It looks like orte_init failed for some reason; your parallel process is
        likely to abort. There are many reasons that a parallel process can
        fail during orte_init; some of which are due to configuration or
        environment problems. This failure appears to be an internal failure;
        here's some additional information (which may only be relevant to an
        Open MPI developer):
          orte_plm_init failed -- Returned value Error (-1) instead of ORTE_SUCCESS
        --------------------------------------------------------------------------
        [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\openmpi-1.4.2\orte\runtime\orte_init.c at line 132
        --------------------------------------------------------------------------
        It looks like orte_init failed for some reason; your parallel process is
        likely to abort. There are many reasons that a parallel process can
        fail during orte_init; some of which are due to configuration or
        environment problems. This failure appears to be an internal failure;
        here's some additional information (which may only be relevant to an
        Open MPI developer):
          orte_ess_set_name failed -- Returned value Error (-1) instead of ORTE_SUCCESS
        --------------------------------------------------------------------------
        [host0:04728] [[8946,0],0] ORTE_ERROR_LOG: Error in file ..\..\..\..\openmpi-1.4.2\orte\tools\orterun\orterun.c at line 543

    where z:\hosts.txt appears as follows:

        host0
        host1

    Z: is a shared network drive available to both host0 and host1. What is my problem and how do I fix it?

    Upd: OK, this problem seems to be fixed. It seems to me that the WideCap driver and/or software components caused this error to appear. A "clean" machine runs a local task successfully. Anyway, I still cannot run a task across even 2 machines; I'm getting the following message:

        Z:\>mpirun --hostfile z:\hosts.txt -np 2 hostname
        connecting to host1
        username: cluster
        password: ********
        Save Credential? (Y/N) y
        [host0:04728] This feature hasn't been implemented yet.
        [host0:04728] Could not connect to namespace cimv2 on node host1. Error code = -2147024891
        --------------------------------------------------------------------------
        mpirun was unable to start the specified application as it encountered an
        error. More information may be available above.
        --------------------------------------------------------------------------

    I googled a little and did all the things described here: http://www.open-mpi.org/community/lists/users/2010/03/12355.php but I'm still getting the same error. Can anyone help me?

    Upd2: Error code -2147024891 might be the WMI error WBEM_E_INVALID_PARAMETER (0x80041008), which occurs when one of the parameters passed to the WMI call is not correct. Does this mean that the problem is in the OpenMPI source code itself? Or maybe it's because of wrong/outdated wincred.h and credui.lib I used while building OpenMPI from the source code?

    Read the article

  • Windows Azure: Parallelization of the code

    - by veda
    I have some matrix multiplication operations, and I want to parallelize their execution across multiple processors. On a high-performance computing cluster this can be done using MPI (Message Passing Interface). Likewise, can I do some parallelization in the cloud using multiple worker roles? Is there any means of doing that? (A decomposition sketch follows this entry.)

    Read the article
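
    The usual decomposition is to partition the rows of the result matrix among the workers. Below is a hedged, platform-neutral sketch in Java: threads stand in for worker roles, and on Azure each row band would instead be a work item that a worker role pulls from a queue. All names are illustrative, not an Azure API.

        import java.util.concurrent.*;

        public class ParallelMatMul {
            // Computes C = A x B with each "worker" owning a band of rows of C.
            public static double[][] multiply(double[][] a, double[][] b,
                                              int workers) throws Exception {
                int n = a.length, m = b[0].length, k = b.length;
                double[][] c = new double[n][m];
                ExecutorService pool = Executors.newFixedThreadPool(workers);
                int band = (n + workers - 1) / workers;   // rows per worker
                for (int w = 0; w < workers; w++) {
                    final int lo = w * band, hi = Math.min(n, lo + band);
                    pool.submit(() -> {
                        for (int i = lo; i < hi; i++)     // this worker's rows
                            for (int j = 0; j < m; j++) {
                                double s = 0;
                                for (int x = 0; x < k; x++) s += a[i][x] * b[x][j];
                                c[i][j] = s;
                            }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS); // wait for all bands
                return c;
            }
        }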

  • Ushahidi - How to make the markers stay on the map on zoom change?

    - by Guttemberg
    I am using the Ushahidi Web-2.7.3 platform (see: http://ti5.net.br/provedorlegal.com.br), and when I zoom in beyond a certain level, the clustered markers disappear from the map. I also tested this on an older version of a site (see: http://movimentofichalimpa.org/mapa), where the clustered markers do not disappear on zooming in, but just become ungrouped, as is normal with a cluster strategy. How can I make the markers remain on the map when zooming in?

    Read the article

  • Cassandra replication system - how it works

    - by inquisitor
    Does Cassandra replicate only on writes (with the chosen consistency level)? Is there any auto-replicate option for absent nodes, if I want symmetric data on every node? If I plug a new node into the cluster there is no auto-replication, so how do I sync data from the other nodes to the new one? If I want replication like multi-master (2 nodes) with a slave backup (1 node), as known from MySQL, what is the proper way to set up and manage that in Cassandra (3 nodes)? How about two nodes?

    Read the article

  • Nutch search always returns 0 results

    - by darbour
    I have set up Nutch 1.0 on a cluster. It has been set up and has successfully crawled; I copied the crawl directory using dfs -copyToLocal and set the value of searcher.dir in the nutch-site.xml file located in the Tomcat directory to point to that directory. Still, when I try to search I receive 0 results. Any help would be greatly appreciated.

    Read the article

  • Drawing a polygon around groups of datapoints in MATLAB

    - by Hossein
    Hi, I have a set of datapoints, each of which belongs to a certain cluster (group). I need to draw a polygon around each of these clusters. Does anyone know how to do it? PS: It doesn't matter whether or not the polygon passes through the actual datapoints; I just need them to be wrapped in a polygon. (A sketch of one approach follows this entry.) Thanks

    Read the article
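
    The enclosing polygon of a cluster is its convex hull; in MATLAB, convhull(x, y) returns the hull indices directly, so one call per cluster solves this. For reference, here is a hedged, language-neutral sketch of the underlying algorithm (Andrew's monotone chain) in Java; points are {x, y} pairs and all names are illustrative:

        import java.util.*;

        public class ConvexHull {
            // Cross product of OA and OB; > 0 means a counter-clockwise turn.
            static double cross(double[] o, double[] a, double[] b) {
                return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0]);
            }

            // Returns the hull vertices of one cluster in counter-clockwise order.
            static List<double[]> hull(List<double[]> pts) {
                List<double[]> p = new ArrayList<>(pts);
                p.sort((u, v) -> u[0] != v[0] ? Double.compare(u[0], v[0])
                                              : Double.compare(u[1], v[1]));
                int n = p.size();
                if (n < 3) return p;
                double[][] h = new double[2 * n][];
                int k = 0;
                for (int i = 0; i < n; i++) {                 // lower hull
                    while (k >= 2 && cross(h[k-2], h[k-1], p.get(i)) <= 0) k--;
                    h[k++] = p.get(i);
                }
                for (int i = n - 2, t = k + 1; i >= 0; i--) { // upper hull
                    while (k >= t && cross(h[k-2], h[k-1], p.get(i)) <= 0) k--;
                    h[k++] = p.get(i);
                }
                // The last point repeats the first; drop it.
                return new ArrayList<>(Arrays.asList(h).subList(0, k - 1));
            }
        }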

  • Erlang: What's a good way to automatically assign node names?

    - by mwt
    I want to have an EC2-based cluster that can grow and shrink at will. No node will be special in any way, nor do I want them to have to coordinate their names with any other nodes. I don't want to hard-code the names, since I want to use one image and spin instances up as needed. I understand nodes have to have names to communicate, though. What's a good strategy for automatically and dynamically coming up with a name at start-script time?

    Read the article

  • Switch User in RedHat like XP [closed]

    - by rd42
    In our cluster of RedHat 4 & 5 machines, if someone locks the computer and walks away, nobody else can use it. Is there a feature in RedHat 5, Gnome, KDE, etc. that would allow for the option of switching users at the lock screen, so more than one person can be logged in? Thanks, rd42

    Read the article

  • Splitting a list based on another list values in Mathematica

    - by Max
    In Mathematica I have a list of point coordinates

        size = 50;
        n = 20;
        points = Table[{RandomInteger[{0, size}], RandomInteger[{0, size}]}, {i, 1, n}];

    and a list of cluster indices these points belong to (one index per point, hence n = 20):

        clusterIndices = {1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 2, 1, 2, 1, 1, 1, 1, 1, 1, 1};

    What is the easiest way to split the points into two separate lists based on the clusterIndices values?

    Read the article

  • Changing the block size of a dfs file in Hadoop

    - by Sam
    I found that my map tasks are currently inefficient when parsing one particular set of files (2 TB total). I'd like to change the block size of those files in the Hadoop DFS (from 64 MB to 128 MB). I can't find how to do this in the documentation for only one set of files rather than the entire cluster. Does anyone know the command that would change the block size when I upload the files (i.e., copy from local to the DFS)? Thanks! (A sketch follows this entry.)

    Read the article
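
    The block size is a per-file, client-side setting, so overriding it in the uploading client affects only the files that client writes, not the rest of the cluster. On the command line the classic form is "hadoop fs -D dfs.block.size=134217728 -put local.dat /data/". A minimal Java sketch of the same idea, with hypothetical paths:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class UploadWith128MBBlocks {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Applies only to files created through this client.
                conf.setLong("dfs.block.size", 128L * 1024 * 1024);
                FileSystem fs = FileSystem.get(conf);
                fs.copyFromLocalFile(new Path("/local/data/part-00000"),
                                     new Path("/user/sam/data/part-00000"));
            }
        }

    (Newer Hadoop releases renamed the property to dfs.blocksize; the 64 MB default mentioned in the question suggests the older dfs.block.size name applies here.)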

  • Smooth redeployment of WAR in production?

    - by stephanos
    I was wondering if there is a 'smooth way' of redeploying a Java WAR to a production server (no cluster, no OSGi)? All I can come up with is stop server, update file, restart server. And 10 minutes beforehand I need to display a maintenance warning on the site. What's your approach?

    Read the article

  • Good resource for studying Database High Availability techniques

    - by Invincible
    Hello, can anybody suggest a good resource/book on database high-availability techniques? Also on the high availability of system software like intrusion prevention systems or web servers. I am treating high availability as a global term which covers clustering, cloud computing, replication, replica management, and distributed synchronization for clusters. Thanks in advance!

    Read the article

  • SQL Server Redirection

    - by MrTehee
    We are switching from a SQL cluster to a mirrored solution. The problem is that we have a bunch of programs that would have to switch connection strings to handle the failover. Is there any way we can set up a redirect or proxy that would take any legacy requests and forward them to the mirrored solution?

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system, since we have a Barracuda box already that will handle in/out spam/virus filtering. On the back end I'm planning on 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs, and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows:

    1. Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?

    2. I'm trying to figure out disk layouts and I'm unsure whether to use all local disk, or some local and some SAN via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.
       Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 1 - mail stores.
       Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - mail stores and scratch space for eseutil.
       Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores.
       Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs.

    3. I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache, and I'm thinking older architecture. Am I better off with more cores and cache while sacrificing clock speed?

    4. I am planning on adding the new E2K7 cluster to the E2K3 server, then moving each mailbox over, all at once, then removing the old server. This seems more complicated than simply getting rid of the 2003 server, adding the 2007 cluster, and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my own time, whereas a cutover means it all needs to work at once. If I go with the cutover method, can I prebuild the servers and add them to the domain right after removing the 2003 server, or can't I? I think the answer is no, and the migration is my only real option if I want to prebuild.

    5. I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?

    6. Lastly, if I have a mix of Outlook 2000, 2003, and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At the time of cutover we'll be at about 90% 2007, but we will have some older stuff around.

    My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients; does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN. Thank you all very much for the assistance in advance and I look forward to discussion of these points! Regards...Michael

    Read the article

  • Recommendations for distributed processing/distributed storage systems

    - by Eddie
    At my organization we have a processing and storage system spread across two dozen Linux machines that handles over a petabyte of data. The system right now is very ad hoc; processing automation and data management are handled by a collection of large Perl programs on independent machines. I am looking at distributed processing and storage systems to make the system easier to maintain, to evenly distribute load and data with replication, and to grow in disk space and compute power. The system needs to be able to handle millions of files varying in size between 50 megabytes and 50 gigabytes. Once created, the files will not be appended to, only replaced completely if need be. The files need to be accessible via HTTP for customer download. Right now, processing is automated by Perl scripts (that I have complete control over) which call a series of other programs (that I don't have control over, because they are closed source) that essentially transform one data set into another. No data mining happening here. Here is a quick list of things I am looking for:

    Reliability: These data must be accessible over HTTP about 99% of the time, so I need something that does data replication across the cluster.
    Scalability: I want to be able to add more processing power and storage easily and rebalance the data across the cluster.
    Distributed processing: Easy and automatic job scheduling and load balancing that fits the processing workflow I briefly described above.
    Data location awareness: Not strictly required, but desirable. Since data and processing will be on the same set of nodes, I would like the job scheduler to schedule jobs on or close to the node the data is actually on, to cut down on network traffic.

    Here is what I've looked at so far:

    Storage management:
    GlusterFS: Looks really nice and easy to use, but doesn't seem to have a way to figure out what node(s) a file actually resides on to supply as a hint to the job scheduler.
    GPFS: Seems like the gold standard of clustered filesystems. Meets most of my requirements except, like GlusterFS, data location awareness.
    Ceph: Seems way too immature right now.

    Distributed processing:
    Sun Grid Engine: I have a lot of experience with this and it's relatively easy to use (once it is configured properly, that is). But Oracle got its icy grip around it and it no longer seems very desirable.

    Both:
    Hadoop/HDFS: At first glance it looked like Hadoop was perfect for my situation: distributed storage and job scheduling, and it was the only thing I found that would give me the data location awareness that I wanted. But I don't like the NameNode being a single point of failure. Also, I'm not really sure the MapReduce paradigm fits the type of processing workflow that I have. It seems like you need to write all your software specifically for MapReduce instead of just using Hadoop as a generic job scheduler.
    OpenStack: I've done some reading on this, but I'm having trouble deciding whether it fits well with my problem or not.

    Does anyone have opinions or recommendations for technologies that would fit my problem well? Any suggestions or advice would be greatly appreciated. Thanks!

    Read the article

  • javax.naming.InvalidNameException using Oracle BPM and weblogic when accessing directory

    - by alfredozn
    We are getting this exception when we start our cluster (2 managed servers, 1 admin); we have deployed only the EARs corresponding to OBPM 10.3.1 SP1 on WebLogic 10.3. When the server cluster starts, one of the managed servers (the first to start) gets overloaded and runs out of connections to the directory DB because of this repeated error. It looks like the engine is trying to get the info from the LDAP server, but I don't know why it is building a wrong query.

        fuego.directory.DirectoryRuntimeException: Exception [javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'].
            at fuego.directory.DirectoryRuntimeException.wrapException(DirectoryRuntimeException.java:85)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:163)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:110)
            at fuego.directory.hybrid.ldap.Repository.selectById(Repository.java:38)
            at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipantsInternal(MSADGroupValueProvider.java:124)
            at fuego.directory.hybrid.msad.MSADGroupValueProvider.getAssignedParticipants(MSADGroupValueProvider.java:70)
            at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:149)
            at fuego.directory.hybrid.ldap.Group$7.getValue(Group.java:152)
            at fuego.directory.hybrid.ldap.LDAPResult.getValue(LDAPResult.java:76)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.setInfo(LDAPOrganizationGroupAccessor.java:352)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:121)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.build(LDAPOrganizationGroupAccessor.java:114)
            at fuego.directory.hybrid.ldap.LDAPOrganizationGroupAccessor.fetchGroup(LDAPOrganizationGroupAccessor.java:94)
            at fuego.directory.hybrid.HybridGroupAccessor.fetchGroup(HybridGroupAccessor.java:146)
            at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at fuego.directory.provider.DirectorySessionImpl$AccessorProxy.invoke(DirectorySessionImpl.java:756)
            at $Proxy66.fetchGroup(Unknown Source)
            at fuego.directory.DirOrganizationalGroup.fetch(DirOrganizationalGroup.java:275)
            at fuego.metadata.GroupManager.loadGroup(GroupManager.java:225)
            at fuego.metadata.GroupManager.find(GroupManager.java:57)
            at fuego.metadata.ParticipantManager.addNestedGroups(ParticipantManager.java:621)
            at fuego.metadata.ParticipantManager.buildCompleteRoleAssignments(ParticipantManager.java:527)
            at fuego.metadata.Participant$RoleTransitiveClousure.build(Participant.java:760)
            at fuego.metadata.Participant$RoleTransitiveClousure.access$100(Participant.java:692)
            at fuego.metadata.Participant.buildRoles(Participant.java:401)
            at fuego.metadata.Participant.updateMembers(Participant.java:372)
            at fuego.metadata.Participant.<init>(Participant.java:64)
            at fuego.metadata.Participant.createUncacheParticipant(Participant.java:84)
            at fuego.server.persistence.jdbc.JdbcProcessInstancePersMgr.loadItems(JdbcProcessInstancePersMgr.java:1706)
            at fuego.server.persistence.Persistence.loadInstanceItems(Persistence.java:838)
            at fuego.server.AbstractInstanceService.readInstance(AbstractInstanceService.java:791)
            at fuego.ejbengine.EJBInstanceService.getLockedROImpl(EJBInstanceService.java:218)
            at fuego.server.AbstractInstanceService.getLockedROImpl(AbstractInstanceService.java:892)
            at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:743)
            at fuego.server.AbstractInstanceService.getLockedImpl(AbstractInstanceService.java:730)
            at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:144)
            at fuego.server.AbstractInstanceService.getLocked(AbstractInstanceService.java:162)
            at fuego.server.AbstractInstanceService.unselectAllItems(AbstractInstanceService.java:454)
            at fuego.server.execution.ToDoItemUnselect.execute(ToDoItemUnselect.java:105)
            at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
            at fuego.transaction.TransactionAction.startNestedTransaction(TransactionAction.java:527)
            at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:548)
            at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
            at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
            at fuego.server.execution.DefaultEngineExecution.executeAutomaticWork(DefaultEngineExecution.java:62)
            at fuego.server.execution.EngineExecution.executeAutomaticWork(EngineExecution.java:42)
            at fuego.server.execution.ToDoItem.executeAutomaticWork(ToDoItem.java:261)
            at fuego.ejbengine.ItemExecutionBean$1.execute(ItemExecutionBean.java:223)
            at fuego.server.execution.DefaultEngineExecution$AtomicExecutionTA.runTransaction(DefaultEngineExecution.java:304)
            at fuego.transaction.TransactionAction.startBaseTransaction(TransactionAction.java:470)
            at fuego.transaction.TransactionAction.startTransaction(TransactionAction.java:551)
            at fuego.transaction.TransactionAction.start(TransactionAction.java:212)
            at fuego.server.execution.DefaultEngineExecution.executeImmediate(DefaultEngineExecution.java:123)
            at fuego.server.execution.EngineExecution.executeImmediate(EngineExecution.java:66)
            at fuego.ejbengine.ItemExecutionBean.processMessage(ItemExecutionBean.java:209)
            at fuego.ejbengine.ItemExecutionBean.onMessage(ItemExecutionBean.java:120)
            at weblogic.ejb.container.internal.MDListener.execute(MDListener.java:466)
            at weblogic.ejb.container.internal.MDListener.transactionalOnMessage(MDListener.java:371)
            at weblogic.ejb.container.internal.MDListener.onMessage(MDListener.java:327)
            at weblogic.jms.client.JMSSession.onMessage(JMSSession.java:4547)
            at weblogic.jms.client.JMSSession.execute(JMSSession.java:4233)
            at weblogic.jms.client.JMSSession.executeMessage(JMSSession.java:3709)
            at weblogic.jms.client.JMSSession.access$000(JMSSession.java:114)
            at weblogic.jms.client.JMSSession$UseForRunnable.run(JMSSession.java:5058)
            at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516)
            at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201)
            at weblogic.work.ExecuteThread.run(ExecuteThread.java:173)
        Caused by: javax.naming.InvalidNameException: CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp: [LDAP: error code 34 - 0000208F: NameErr: DSID-031001BA, problem 2006 (BAD_NAME), data 8349, best match of: 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp,dc=televisa,dc=com,dc=mx' ^@]; remaining name 'CN=Alvarez Guerrero Bernardo DEL:ca9ef28d-3b94-4e8f-a6bd-8c880bb3791b,CN=Deleted Objects,DC=corp'
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2979)
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2794)
            at com.sun.jndi.ldap.LdapCtx.searchAux(LdapCtx.java:1826)
            at com.sun.jndi.ldap.LdapCtx.c_search(LdapCtx.java:1749)
            at com.sun.jndi.toolkit.ctx.ComponentDirContext.p_search(ComponentDirContext.java:368)
            at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:338)
            at com.sun.jndi.toolkit.ctx.PartialCompositeDirContext.search(PartialCompositeDirContext.java:321)
            at javax.naming.directory.InitialDirContext.search(InitialDirContext.java:248)
            at fuego.jndi.FaultTolerantLdapContext.search(FaultTolerantLdapContext.java:612)
            at fuego.directory.hybrid.ldap.JNDIQueryExecutor.selectById(JNDIQueryExecutor.java:136)
            ... 67 more

    Read the article
