Search Results

Search found 3589 results on 144 pages for 'cluster computing'.

Page 33/144

  • What applications can be used in a Red Hat/CentOS cluster?

    - by Sandra
    Hi, when I look at the Red Hat cluster manuals, they only explain how to install the cluster but not what applications can use it. I am new to clusters, so I don't know these things =) Let's say I want a 3-node high-performance cluster; what applications would work with it? Also, how does an application talk to the cluster? Does the application need to have been written to support clusters? Sandra
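
    For the high-performance-computing case specifically, applications generally do need to be written for the cluster, most often against a message-passing library such as MPI; the scheduler launches one copy of the program per node and MPI carries the traffic between them. A minimal sketch of what that looks like (not from the question; it assumes the mpi4py package and an MPI runtime such as Open MPI installed on every node):

        # hello_cluster.py - minimal MPI job, launched across nodes with e.g.:
        #   mpirun -np 3 --hostfile hosts python hello_cluster.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD      # communicator spanning all launched processes
        rank = comm.Get_rank()     # this process's id within the job
        size = comm.Get_size()     # total number of processes

        # Rank 0 splits the work; every rank receives one chunk.
        chunks = [list(range(i, 100, size)) for i in range(size)] if rank == 0 else None
        work = comm.scatter(chunks, root=0)

        partial = sum(x * x for x in work)                 # each node computes its share
        total = comm.reduce(partial, op=MPI.SUM, root=0)   # combine on rank 0

        if rank == 0:
            print("sum of squares 0..99 =", total)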

    Read the article

  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an Active-Active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT the use of Windows NLB. I'm aware that this may not be a best practice or a recommended setup; however, the setup is to be configured as follows: two web servers running IIS 7.5 (needing common storage for web content) for HA, and another set of two servers in an active-passive SQL cluster for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I'd appreciate a reply covering both the pros and cons of this setup.

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...
      - for best write performance
      - for use only with embedded Linux
      - for better reliability (random power failures may occur)
      - using a 64 kB cluster size
    I'm using an 8 GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to handle random power failures during writes better. However, I kept noticing that my write performance is always best with the FAT32 pre-installed by Kingston; if I reformat the card, even with FAT32, the performance suffers. Browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64 kB cluster size. "Risks of reformatting: Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient."
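
    One way to judge whether a given format actually helps on this card is to benchmark aligned writes directly on the device. A crude sketch (the mount point and sizes below are placeholders, not from the question):

        # bench_write.py - rough write-throughput test for a mounted SD card.
        import os, time

        PATH = "/mnt/sdcard/bench.tmp"   # hypothetical mount point of the card
        CHUNK = 64 * 1024                # write in 64 kB units, matching the cluster size
        TOTAL = 32 * 1024 * 1024         # 32 MB of test data
        buf = os.urandom(CHUNK)

        start = time.time()
        with open(PATH, "wb") as f:
            for _ in range(TOTAL // CHUNK):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())         # force data onto the card, not just the page cache
        elapsed = time.time() - start

        print("%.2f MB/s" % (TOTAL / elapsed / 1024 / 1024))
        os.remove(PATH)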

    Read the article

  • Need to reformat SQL Cluster Disk. How do I recover my SQL installation?

    - by I.T. Support
    We need to reformat the SQL cluster disk in our SQL cluster. The drive contains the shared installation files for SQL Server as well as the databases. My concern is how SQL Server and the cluster will react after we wipe the disk resource. Questions: Is there a defined procedure for this? How should we back up and restore the disk? After the reformat, how do we get the clustered SQL Server back online? Thanks

    Read the article

  • Fortran recursion segmentation faults

    - by ConnorG
    Hey all - I have to design and implement a Fortran routine to determine the size of clusters on a square lattice, and it seemed extremely convenient to code the subroutine recursively. However, whenever my lattice size grows beyond a certain value (around 200 per side), the subroutine consistently segfaults. Here's my cluster-detection routine:

        RECURSIVE SUBROUTINE growCluster(lattice, adj, idx, area)
            INTEGER, INTENT(INOUT) :: lattice(:), area
            INTEGER, INTENT(IN)    :: adj(:,:), idx
            lattice(idx) = -1
            area = area + 1
            IF (lattice(adj(1,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(1,idx), area)
            IF (lattice(adj(2,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(2,idx), area)
            IF (lattice(adj(3,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(3,idx), area)
            IF (lattice(adj(4,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(4,idx), area)
        END SUBROUTINE growCluster

    where adj(1,n) represents the north neighbor of site n, adj(2,n) represents the west, and so on. What would cause the erratic segfault behavior? Is the cluster just "too huge" for large lattice sizes?
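
    Recursion that deep usually dies by exhausting the call stack: a single cluster spanning a ~200x200 lattice can nest tens of thousands of calls, each with its own stack frame, and stack overflow manifests exactly as an erratic segfault. The standard fix is to keep the pending sites on an explicit stack in heap memory instead. A sketch of the same flood fill (in Python for brevity; in Fortran the same shape works with an allocatable integer array used as a stack):

        # Iterative flood fill: same cluster-growing logic as the recursive
        # routine above, but bounded by heap memory rather than the call stack.
        def grow_cluster(lattice, adj, start):
            """Mark the cluster containing `start` with -1 and return its area.
            adj[site] yields the 4 neighbors (north, west, south, east)."""
            stack = [start]
            area = 0
            while stack:
                idx = stack.pop()
                if lattice[idx] <= 0:      # already visited or empty site
                    continue
                lattice[idx] = -1          # mark visited
                area += 1
                for nbr in adj[idx]:
                    if lattice[nbr] > 0:
                        stack.append(nbr)
            return area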

    Read the article

  • IP failover with 2 nodes on different subnets: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine:
      - another instance was installed on the second server without problems
      - MySQL (running on the same machine as Redmine) was configured as master-master replication
    Because the nodes are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun 5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============
        Online: [ node1 node2 ]
        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms
        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures

    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd is run with the configuration file below (on node1), /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any

    Read the article

  • In the days of modern computing, in 'typical business apps' - why does performance matter?

    - by Prog
    This may seem like an odd question to some of you. I'm a hobbyist Java programmer. I have developed several games, an AI program that creates music, another program for painting, and similar stuff. I tell you this to show that I have experience in programming, but not in professional development of business applications. I see a lot of talk on this site about performance. People often debate what would be the most efficient algorithm in C# to perform a task, or why Python is slow and Java is faster, etc. What I'm trying to understand is: why does this matter? There are specific areas of computing where I see why performance matters: games, where tens of thousands of computations happen every second in a constant update loop, or low-level systems that other programs rely on, such as OSs and VMs. But for the normal, typical, high-level business app, why does performance matter? I can understand why it used to matter, decades ago. Computers were much slower and had much less memory, so you had to think carefully about these things. But today we have so much memory to spare and computers are so fast: does it actually matter if a particular Java algorithm is O(n^2)? Will it actually make a difference to the end users of this typical business app? When you press a GUI button in a typical business app, and behind the scenes it invokes an O(n^2) algorithm, in these days of modern computing - do you actually feel the inefficiency? My question is split in two: (1) In practice, does performance matter today in a typical, normal business program? (2) If it does, please give me real-world examples of places in such an application where performance and optimizations are important.
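
    To make the trade-off concrete, here is a small experiment (mine, not the asker's; timings vary by machine) showing where an O(n^2) choice stops being invisible even on fast hardware:

        # Deduplicate a list two ways and time them: O(n^2) vs O(n).
        import time

        def dedupe_quadratic(items):       # membership test scans a list: O(n^2) overall
            seen, out = [], []
            for x in items:
                if x not in seen:
                    seen.append(x)
                    out.append(x)
            return out

        def dedupe_linear(items):          # membership test hits a hash set: O(n) overall
            seen, out = set(), []
            for x in items:
                if x not in seen:
                    seen.add(x)
                    out.append(x)
            return out

        data = list(range(20000))
        for fn in (dedupe_quadratic, dedupe_linear):
            t = time.time()
            fn(data)
            print(fn.__name__, "%.3fs" % (time.time() - t))
        # Typical result: the quadratic version takes seconds, the linear one
        # milliseconds - and at a few hundred thousand records the gap becomes
        # minutes vs instant, which a user pressing a GUI button will feel.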

    Read the article

  • Why is Python used for high-performance/scientific computing (but Ruby isn't)?

    - by Cyclops
    There's a quote from a PyCon 2011 talk that goes: "At least in our shop (Argonne National Laboratory) we have three accepted languages for scientific computing. In this order they are C/C++, Fortran in all its dialects, and Python. You'll notice the absolute and total lack of Ruby, Perl, Java." It was in the more general context of high-performance computing. Granted, the quote is only from one shop, but another question about languages for HPC also lists Python as one to learn (and not Ruby). Now, I can understand C/C++ and Fortran being used in that problem space (and Perl/Java not being used). But I'm surprised that there would be a major difference in Python and Ruby use for HPC, given that they are fairly similar. (Note - I'm a fan of Python, but have nothing against Ruby.) Is there some specific reason why the one language took off? Is it about the libraries available? Some specific language features? The community? Or maybe just historical contingency, and it could have gone the other way?
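
    The answer most often given is the library ecosystem: NumPy and the SciPy stack let Python act as glue around array math that actually executes in compiled C/Fortran (BLAS/LAPACK), which is the role shops like the one quoted use it for. A small illustration, assuming NumPy is installed:

        # Python as HPC "glue": the heavy loops below run in compiled BLAS code.
        import numpy as np

        n = 2000
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        c = a @ b                  # ~16 billion flops dispatched to optimized BLAS
        print("||A.B|| =", np.linalg.norm(c))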

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation, and as part of this needed to describe some of the basic architecture options available for consideration. While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options, while trying to explain some of their strengths and weaknesses, was a good place to start. These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.

    The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment. Each of these options provides differing levels of resilience in the case of failure or disaster, with the later options also providing more scope for performance tuning and scalability.

    Some of the options presented below make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.

    (A) Single Servers

    This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is, however, the cheapest and easiest to implement of the options available. Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster. The common edition of BizTalk used in single-server implementations is the Standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the Enterprise edition of BizTalk.

    (B) Single BizTalk Server with Clustered SQL Servers

    This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the Standard edition, but to go beyond this would require the Enterprise edition. While this option offers improved resilience over option (A), it still presents a potential single point of failure at the BizTalk server. Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster. The common edition of BizTalk used in single-server implementations is the Standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this BizTalk edition is limited to scaling up the implementation and not scaling out the number of servers in use. You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the Enterprise edition of BizTalk.

    (C) Clustered BizTalk Servers with Clustered SQL Servers

    This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the Enterprise edition of the product. Clustering of two SQL servers is possible with the Standard edition, but to go beyond this would require the Enterprise edition. The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires. This might be seen as the middle-cost option, providing a good level of protection in the case of failure and a decent level of future-proofing, but at a higher cost than the single BizTalk server implementations.

    (D) Clustered BizTalk Servers with Clustered SQL Servers - with disaster recovery/service continuity

    This option is similar to (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the Enterprise edition of the product. Clustering of two SQL servers is possible with the Standard edition, but to go beyond this would require the Enterprise edition. As with (C), the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires.

    In this scenario, however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation, depending on the network implementation: making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved. This is obviously an expensive and complex option, but it does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • Application throws NotSerializableException when run on a JBoss cluster

    - by Kalpana
    Environment: JBoss 5.1.0, JBoss Seam 2.2.0. While trying to get my application running in a clustered environment, I am getting the following exception after login (post-login, we try to store the currentUser in the JBoss Seam session context). How can this be resolved?

        java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at java.util.ArrayList.writeObject(ArrayList.java:570)
            at sun.reflect.GeneratedMethodAccessor339.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at java.util.HashMap.writeObject(HashMap.java:1001)
            at sun.reflect.GeneratedMethodAccessor338.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.serialize(SimpleCachableMarshalledValue.java:271)
            at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.writeExternal(SimpleCachableMarshalledValue.java:252)
            at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1390)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:460)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallMap(CacheMarshaller200.java:569)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:370)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallCommand(CacheMarshaller200.java:519)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:314)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallCommand(CacheMarshaller200.java:519)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:314)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(CacheMarshaller200.java:191)
            at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(CacheMarshaller200.java:136)
            at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:182)
            at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:52)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:369)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:341)
            at org.jboss.cache.util.concurrent.WithinThreadExecutor.submit(WithinThreadExecutor.java:82)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:206)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:748)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:716)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:721)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:161)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:135)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:107)
            at org.jboss.cache.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:160)
            at org.jboss.cache.interceptors.ReplicationInterceptor.visitPutDataMapCommand(ReplicationInterceptor.java:113)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:131)
            at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(AbstractVisitor.java:60)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.TxInterceptor.attachGtxAndPassUpChain(TxInterceptor.java:301)
            at org.jboss.cache.interceptors.TxInterceptor.handleDefault(TxInterceptor.java:283)
            at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(AbstractVisitor.java:60)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.CacheMgmtInterceptor.visitPutDataMapCommand(CacheMgmtInterceptor.java:97)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:178)
            at org.jboss.cache.interceptors.InvocationContextInterceptor.visitPutDataMapCommand(InvocationContextInterceptor.java:64)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.InterceptorChain.invoke(InterceptorChain.java:287)
            at org.jboss.cache.invocation.CacheInvocationDelegate.invokePut(CacheInvocationDelegate.java:705)
            at org.jboss.cache.invocation.CacheInvocationDelegate.put(CacheInvocationDelegate.java:519)
            at org.jboss.ha.cachemanager.CacheManagerManagedCache.put(CacheManagerManagedCache.java:277)
            at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.JBossCacheWrapper.put(JBossCacheWrapper.java:148)
            at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.AbstractJBossCacheService.storeSessionData(AbstractJBossCacheService.java:405)
            at org.jboss.web.tomcat.service.session.ClusteredSession.processSessionReplication(ClusteredSession.java:1166)
            at org.jboss.web.tomcat.service.session.JBossCacheManager.processSessionRepl(JBossCacheManager.java:1937)
            at org.jboss.web.tomcat.service.session.JBossCacheManager.storeSession(JBossCacheManager.java:309)
            at org.jboss.web.tomcat.service.session.InstantSnapshotManager.snapshot(InstantSnapshotManager.java:51)
            at org.jboss.web.tomcat.service.session.ClusteredSessionValve.handleRequest(ClusteredSessionValve.java:147)
            at org.jboss.web.tomcat.service.session.ClusteredSessionValve.invoke(ClusteredSessionValve.java:94)
            at org.jboss.web.tomcat.service.session.LockingValve.invoke(LockingValve.java:62)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)

        16:38:35,789 ERROR [CommandAwareRpcDispatcher] java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty
        16:38:35,789 WARN [/a12] Failed to replicate session YwBL69cG-zdm0m5CvzNj3Q__
        java.lang.RuntimeException: Failure to marshal argument(s)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:374)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:341)
            at org.jboss.cache.util.concurrent.WithinThreadExecutor.submit(WithinThreadExecutor.java:82)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:206)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:748)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:716)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:721)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:161)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:135)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:107)
            at org.jboss.cache.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:160)
            at org.jboss.cache.interceptors.ReplicationInterceptor.visitPutDataMapCommand(ReplicationInterceptor.java:113)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:131)
            at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(AbstractVisitor.java:60)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.TxInterceptor.attachGtxAndPassUpChain(TxInterceptor.java:301)
            at org.jboss.cache.interceptors.TxInterceptor.handleDefault(TxInterceptor.java:283)
            at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(AbstractVisitor.java:60)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.CacheMgmtInterceptor.visitPutDataMapCommand(CacheMgmtInterceptor.java:97)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:178)
            at org.jboss.cache.interceptors.InvocationContextInterceptor.visitPutDataMapCommand(InvocationContextInterceptor.java:64)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.InterceptorChain.invoke(InterceptorChain.java:287)
            at org.jboss.cache.invocation.CacheInvocationDelegate.invokePut(CacheInvocationDelegate.java:705)
            at org.jboss.cache.invocation.CacheInvocationDelegate.put(CacheInvocationDelegate.java:519)
            at org.jboss.ha.cachemanager.CacheManagerManagedCache.put(CacheManagerManagedCache.java:277)
            at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.JBossCacheWrapper.put(JBossCacheWrapper.java:148)
            at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.AbstractJBossCacheService.storeSessionData(AbstractJBossCacheService.java:405)
            at org.jboss.web.tomcat.service.session.ClusteredSession.processSessionReplication(ClusteredSession.java:1166)
            at org.jboss.web.tomcat.service.session.JBossCacheManager.processSessionRepl(JBossCacheManager.java:1937)
            at org.jboss.web.tomcat.service.session.JBossCacheManager.storeSession(JBossCacheManager.java:309)
            at org.jboss.web.tomcat.service.session.InstantSnapshotManager.snapshot(InstantSnapshotManager.java:51)
            at org.jboss.web.tomcat.service.session.ClusteredSessionValve.handleRequest(ClusteredSessionValve.java:147)
            at org.jboss.web.tomcat.service.session.ClusteredSessionValve.invoke(ClusteredSessionValve.java:94)
            at org.jboss.web.tomcat.service.session.LockingValve.invoke(LockingValve.java:62)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)

    Read the article

  • Sharing transactions between web applications, which run in the same cluster

    - by pihentagy
    We (will) have the following architecture:
      - Base.war will be a self-contained Spring/Hibernate application
      - All applications will run under Glassfish, and may be clustered
      - E1.war will sit on top of Base.war, extending its functionality
      - There could be further extensions (E2.war, E3.war, ...) sitting on top of Base.war
      - Either war could start a transaction, and transactions could span between wars
      - Without shutting down Base.war, or any other Ex.war, it should be possible to upgrade an Ey.war
    Is there a solution for this with a Spring/Hibernate/Glassfish environment?

    Read the article

  • Cluster Graph Visualization using python

    - by AlgoMan
    I am assembling the different visualization tools that are available in the Python language. I found Treemap (http://pypi.python.org/pypi/treemap/1.05). Can you suggest some other tools that are available? I am exploring different ways of visualizing web data.
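
    For cluster-style graphs specifically, one commonly suggested pairing is NetworkX for the graph structure plus matplotlib for rendering. A small sketch, assuming both packages are installed (the graph data is made up):

        # Draw two loosely connected clusters with NetworkX + matplotlib.
        import networkx as nx
        import matplotlib.pyplot as plt

        G = nx.Graph()
        G.add_edges_from([("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
                          ("x", "y"), ("y", "z"), ("x", "z"),   # cluster 2
                          ("c", "x")])                          # bridge edge

        pos = nx.spring_layout(G, seed=42)   # force-directed layout groups clusters visually
        nx.draw(G, pos, with_labels=True, node_color="lightblue")
        plt.show()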

    Read the article

  • Best practices for building a simple, scalable cluster on Amazon EC2 for a Java web app

    - by Alex B
    I want to build a Java web app and deploy it on EC2. It will be written in Java and will use MySQL. I was hoping to get some pointers on the actual deployment process and configuration. In particular I'm interested in the following topics:
      - machine images (DIY vs ready-made)
      - MySQL replication and backup to S3
      - ways of deploying and redeploying the app to EC2 without interruptions
      - firewalls?
      - load balancing and auto scaling
      - cloudtools (or alternative tools)
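
    For scripting the EC2 side of such a deployment (launching instances, opening firewall ports), the boto library is one common choice. A rough sketch, under the assumption that boto is installed and AWS credentials are configured; the AMI id, key pair, and group name are placeholders:

        # Launch an EC2 instance and open SSH/HTTP with boto (classic EC2 API).
        import boto.ec2

        conn = boto.ec2.connect_to_region("us-east-1")   # credentials from env/config

        # The security group acts as the firewall: allow SSH and HTTP.
        sg = conn.create_security_group("webapp", "Java web app nodes")
        sg.authorize(ip_protocol="tcp", from_port=22, to_port=22, cidr_ip="0.0.0.0/0")
        sg.authorize(ip_protocol="tcp", from_port=80, to_port=80, cidr_ip="0.0.0.0/0")

        reservation = conn.run_instances(
            "ami-12345678",             # placeholder AMI id
            instance_type="m1.small",
            key_name="my-keypair",      # placeholder key pair
            security_groups=["webapp"],
        )
        print("started:", reservation.instances[0].id)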

    Read the article

  • Castle scheduler - cluster

    - by cvista
    Hi, we're using the Castle scheduler component: http://using.castleproject.org/display/Comp/Castle.Components.Scheduler?showChildren=false I have a WCF service which creates the tasks, and that does its job fine. I then have a console app running (it will be a Windows service eventually) which should keep an eye out for tasks to run. The thing is, each creates its own scheduler in the DB, but they both have the same cluster id. Should the console app be able to run the tasks created by the WCF service? If not, how can I make it do that? Cheers w://

    Read the article

  • Why does GetClusterShape return null when the cluster specification was retrieved through the GetClusteredShapes method?

    - by Markus Olsson
    Suppose I have a Virtual Earth shape layer called shapeLayer1 (my creative energy is apparently at an all-time low). When I call the GetClusteredShapes method, I get an array of VEClusterSpecification objects that represent each and every one of my currently visible clusters; no problem there. But when I call the GetClusterShape() method, it returns null... null! Why on earth would it do that? I used Firebug to confirm that the private variable of the VEClusterSpecification that's supposed to hold a reference to the shape is indeed null, so it's not the method that's causing the problem. Some have suggested that this is actually documented behavior: "Returns null if a VEClusterSpecification object was returned from the VEShapeLayer.GetClusteredShapes Method". But looking at the current MSDN documentation for the VEShape class, it says: "Returns if a VEClusterSpecification object was returned from the VEShapeLayer.GetClusteredShapes Method". Is this a bug or a feature? Are there any known workarounds or (if it is a bug) some plan for when they are going to fix it?

    Read the article

  • What is the best way for communication between cluster nodes

    - by Tom
    I have an application written in a combination of ASP/VB6/VBScript and ASP.NET/C# that consists of a website part, a SOAP-like web service part, and a queue-processing part that processes incoming files in a hot folder. We are used to running under load balancers (Microsoft or other makes). Often we need to communicate between the different load-balanced servers. Currently we do this through the SQL Server database that is common to all nodes; however, this comes with a performance penalty, as each message requires a transaction and continual polling from the other nodes. What would be better ways to achieve this? Tom Appelby
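
    One common alternative to polling a shared database is a message broker: nodes publish events and the broker pushes them to the other nodes as they arrive, with no transaction-per-poll overhead. A minimal sketch using RabbitMQ through the pika library (the broker host and queue name are assumptions):

        # Node-to-node messaging through RabbitMQ instead of DB polling.
        import pika

        conn = pika.BlockingConnection(pika.ConnectionParameters(host="broker-host"))
        ch = conn.channel()
        ch.queue_declare(queue="node-events", durable=True)

        # Publisher side: any node announces an event.
        ch.basic_publish(exchange="", routing_key="node-events",
                         body=b"file-received: invoice-42.xml")

        # Consumer side: other nodes are pushed the message as it arrives.
        def on_message(channel, method, properties, body):
            print("got event:", body)
            channel.basic_ack(delivery_tag=method.delivery_tag)

        ch.basic_consume(queue="node-events", on_message_callback=on_message)
        ch.start_consuming()   # blocks, dispatching events as they arrive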

    Read the article

  • COM+ Connection Pooling Doesn't Appear to be working on SQL Server 2005 Cluster

    - by kmacmahon
    We have a COM+ data layer that utilizes connection pooling. It is deployed to 3 clusters: 2 SQL Server 2000 environments and 1 SQL Server 2005 environment. We noticed today that our monitoring software is reporting thousands of logins per minute on the SQL Server 2005 box. I did some tracing in both environments, and Profiler reports this for the 2000 boxes:

        sp_reset_connection
        SQL CALL
        sp_reset_connection
        SQL CALL
        sp_reset_connection
        SQL CALL

    and this for the 2005 box:

        Audit Logout
        sp_reset_connection
        Audit Login
        SQL CALL
        Audit Logout
        sp_reset_connection
        Audit Login
        SQL CALL
        Audit Logout
        sp_reset_connection
        Audit Login
        SQL CALL

    Is there some configuration for SQL Server 2005, different from SQL Server 2000, that we might be missing and that would be creating this issue?

    Read the article

  • Register for the Solaris 11.1 and Solaris Cluster webcast!

    - by Karoly Vegh
    On 7 November there will be a live webcast about Oracle Solaris 11.1 and Oracle Solaris Cluster that you do not want to miss: the Online Launch Event "Oracle Solaris 11 - Innovations for your Data Center". This live webcast will have three sessions:
      - Executive Keynote: Oracle Solaris 11 - Innovations for your data center
      - Oracle Technical Session: Oracle Solaris 11.1
      - Oracle Technical Session: Oracle Solaris Cluster
    There will be a live Q&A session, but feel free to tweet as well with #solaris. See you there! -- charlie

    Read the article

  • Intel launches Parallel Studio XE 2013 and Cluster Studio XE 2013, its tool suites for boosting parallel applications

    Intel launches Parallel Studio XE 2013 and Cluster Studio XE 2013, its tool suites for boosting parallel applications. From real-time data analytics to the processing of large volumes of scientific data, parallelism occupies an ever-growing share of the development world. Tools such as those Intel has just announced make it possible to optimize and analyze parallel applications, which are known for their great complexity. The products are Parallel Studio XE 2013 and Cluster Studio XE 2013, for the C/C++ and Fortran languages on Windows and Linux.

    Read the article

  • Hadoop safemode recovery - taking a lot of time

    - by Algorist
    Hi, we are running our cluster on Amazon EC2 and using Cloudera scripts to set up Hadoop. On the master node, we start the services below:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start secondarynamenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker'

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop dfsadmin -safemode wait'

    On the slave machines, we run the services below:

        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker'

    The main problem we are facing is that HDFS safemode recovery takes more than an hour, and this causes delays in our job completion. Below are the main log messages:

    1. domU-12-31-39-0A-34-61.compute-1.internal 10/05/05 20:44:19 INFO ipc.Client: Retrying connect to server: ec2-184-73-64-64.compute-1.amazonaws.com/10.192.11.240:8020. Already tried 21 time(s).

    2. The reported blocks 283634 needs additional 322258 blocks to reach the threshold 0.9990 of total blocks 606499. Safe mode will be turned off automatically.

    The first message appears in the task trackers' logs because the job tracker is not started; the job tracker didn't start because of HDFS safemode recovery. The second message is logged during the recovery process. Is there something I am doing wrong? How much time does a normal HDFS safemode recovery take? Will there be any speedup from not starting the task trackers until the job tracker is started? Are there any known Hadoop problems on Amazon clusters? Thanks for your help. Regards, Bala Mudiam
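
    One sequencing option (an assumption on my part, not part of the Cloudera scripts) is to block until the namenode reports that safe mode is off before starting the jobtracker and the tasktrackers, so the tasktrackers never spin retrying a jobtracker that is not up yet:

        # Wait for HDFS to leave safe mode, then start dependent daemons.
        # Wraps the same CLI the scripts above use; paths are illustrative.
        import subprocess, time

        HADOOP_BIN = "/usr/lib/hadoop/bin"   # hypothetical $HADOOP_HOME/bin

        def safemode_off():
            out = subprocess.run([HADOOP_BIN + "/hadoop", "dfsadmin", "-safemode", "get"],
                                 capture_output=True, text=True).stdout
            return "OFF" in out              # the CLI prints "Safe mode is ON/OFF"

        while not safemode_off():
            time.sleep(30)                   # poll rather than hanging forever

        subprocess.run([HADOOP_BIN + "/hadoop-daemon.sh", "start", "jobtracker"])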

    Read the article

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often the file contains about 300,000 entries. The service exposes an interface with just two simple methods:

        TaskHandle sendEmails(String ftpFilePath);
        ProcessStatus checkProcessStatus(TaskHandle taskHandle);

    The method sendEmails() returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary. Processing a single txt file might take a long time, and restarting one node in the cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at any given time. But once a fail-over happens, the second service should continue working from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?
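
    The usual pattern for "continue where the other node stopped, never send twice" is to persist a checkpoint - the index of the last confirmed send - in shared durable storage, committing it only after each send succeeds, so the failover node resumes at the first unconfirmed address. A language-agnostic sketch (Python here; the shared store is reduced to a file on shared disk for illustration, where in practice it would be the cluster's database):

        # Resumable batch mailer: progress is checkpointed after every
        # confirmed send, so a failover node resumes instead of restarting.
        import os, smtplib
        from email.message import EmailMessage

        CHECKPOINT = "/shared/task-42.offset"    # hypothetical shared-disk path

        def load_offset():
            try:
                with open(CHECKPOINT) as f:
                    return int(f.read())
            except FileNotFoundError:
                return 0

        def save_offset(i):
            tmp = CHECKPOINT + ".tmp"
            with open(tmp, "w") as f:
                f.write(str(i))
                f.flush()
                os.fsync(f.fileno())
            os.replace(tmp, CHECKPOINT)          # atomic swap on POSIX

        def send_all(addresses, smtp_host="localhost"):
            with smtplib.SMTP(smtp_host) as smtp:
                for i in range(load_offset(), len(addresses)):
                    msg = EmailMessage()
                    msg["From"] = "noreply@example.com"
                    msg["To"] = addresses[i]
                    msg["Subject"] = "Hello"
                    msg.set_content("...")
                    smtp.send_message(msg)
                    save_offset(i + 1)           # commit progress only after success

    Note the remaining window between a successful send and the checkpoint write: a crash there can still duplicate one message, so strict no-duplicates delivery requires committing the send and the checkpoint in a single transaction in the shared store.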

    Read the article

  • SAPPHIRE NOW: SAP opens a site for testing HANA, its In-Memory Computing solution at the heart of its "innovation strategy"

    SAPPHIRE NOW: SAP opens a site for testing HANA, its In-Memory Computing solution at the heart of its "innovation strategy". Another day, another mood at SAP's SAPPHIRE NOW, currently being held in Madrid. While yesterday's presentation by Jim Hagemann Snabe was placed under the banner of science fiction, today's, by Vishal Sikka, executive member of the SAP board, was placed under that of mythology and ancient Greece. The message, however, confirmed the one introduced by the company's co-CEO: Cloud, Mobility and In-Memory...

    Read the article

  • Apache Cassandra version 0.7 released: the flagship NoSQL DBMS of cloud computing, used by Facebook and Twitter

    Apache Cassandra version 0.7 released: the flagship NoSQL DBMS of cloud computing, used by Facebook and Twitter. Update of 12/01/2011, by Idelways. The Apache Foundation has just announced the availability of version 0.7 of Cassandra, the NoSQL database management system designed to handle massive quantities of data. Among this version's new features is the arrival of secondary indexes, an efficient way of executing queries on local storage nodes from the client side. Another new feature is support for "Large Rows", which allow up to 2 billion columns...

    Read the article
