Search Results

Search found 1914 results on 77 pages for 'mongrel cluster'.


  • Thoughts about shared storage (NFS, Lustre) [closed]

    - by user134880
    Now I have a small cluster with a total of 8 nodes: 6 of them are computing nodes (Apache and VMware) and 2 are for storage. The 2 storage nodes are identical; each storage server is a Linux box with 8 x 1 TB WD RE4 drives in soft RAID 10. The 1st box is the master and the 2nd is the slave, and data is mirrored with DRBD. We export NFSv4 shares to Apache (for the document root) and iSCSI to VMware. Right now everything is working well and is stable, but it will soon be time to upgrade our system. I have been thinking of Lustre. Does anyone have real experience with Lustre or medium-sized NFS clusters? Would it be a good idea to just upgrade the servers and change the HDDs to 3 TB? With NFS we will always have only 2 servers to maintain (one primary and one slave). Thanks. QUESTIONS: 1) Has anyone used Lustre, in production? I have seen a lot of info about how hard Lustre is to set up because you need to compile your own kernel and patches, but those are answers from newbies. Is there anyone who has used Lustre for some period of time? 2) About disk upgrades - that is only a description of strategy. I'm not asking whether 3 TB is enough or not; I'm just asking whether it is right to simply replace the HDDs instead of adding a new server (as with Lustre). Thanks again.
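
    As an aside on the NFSv4 side of such a setup, the export itself is small. A hedged sketch (the path and client subnet below are hypothetical):

        # /etc/exports on the active storage node
        /export/docroot  192.168.0.0/24(rw,sync,no_subtree_check,fsid=0)

        # Apply the new export table, then verify it
        exportfs -ra
        exportfs -v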

    Read the article

  • What applications can be used in a Red Hat/CentOS cluster?

    - by Sandra
    Hi, When I look at the Red Hat cluster manuals, they only explain how to install the cluster, not what applications can use it. I am new to clusters, so I don't know these things =) Let's say I want a 3-node high-performance cluster; what applications would work with it? Also, how does an application talk to the cluster? Does the application need to have been written to support clusters? Sandra
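
    On the last point: with the Red Hat Cluster Suite, applications usually do not need to be cluster-aware at all; rgmanager wraps an ordinary init script (plus a floating IP) as a failover service. A hedged cluster.conf fragment, with hypothetical names and addresses:

        <rm>
          <service autostart="1" name="webservice">
            <!-- floating IP that moves with the service -->
            <ip address="192.168.1.100" monitor_link="1"/>
            <!-- unmodified application, managed via its init script -->
            <script file="/etc/init.d/httpd" name="httpd"/>
          </service>
        </rm>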

    Read the article

  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an Active-Active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT the use of Windows NLB. I'm aware that this may not be a best practice or a recommended setup; however, the setup is to be configured as below: two web servers running IIS 7.5 (needing common storage for web content) for HA, and another set of two servers as a SQL cluster in active-passive mode for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I'd appreciate it if someone replied with both the pros & cons of the setup.
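
    On the ARR half of the question: ARR load balances via a server farm defined in applicationHost.config on the ARR node(s). A hedged fragment of roughly what that looks like (server names and the chosen algorithm are placeholders):

        <webFarms>
          <webFarm name="iisFarm" enabled="true">
            <server address="web1" enabled="true" />
            <server address="web2" enabled="true" />
            <applicationRequestRouting>
              <loadBalancing algorithm="WeightedRoundRobin" />
            </applicationRequestRouting>
          </webFarm>
        </webFarms>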

    Read the article

  • How can I format an SD card with a more robust Linux-usable filesystem with a specific cluster size for better write performance?

    - by Harvey
    Goal: a microSD card formatted...

    - for best write performance
    - for use only with embedded Linux
    - for better reliability (random power failures may occur)
    - using a 64 kB cluster size

    I'm using an 8 GB microSD card for data storage inside an embedded Linux/ARM device. The SD card is not removable. I've been using ext3 instead of the pre-installed FAT32 because it seems to better handle random power failures during writes. However, I kept noticing that my write performance is always best with the pre-installed FAT32 from Kingston. If I reformat the card with FAT32, the performance still suffers. After browsing Wikipedia, I stumbled upon the following comment saying that some cards are optimized for specific cluster sizes. In my case, the Kingston comes pre-formatted with a 64 kB cluster size.

    Risks of reformatting: "Reformatting an SD card with a different file system, or even with the same one, may make the card slower, or shorten its lifespan. Some cards use wear leveling, in which frequently modified blocks are mapped to different portions of memory at different times, and some wear-leveling algorithms are designed for the access patterns typical of the file allocation table on a FAT16 or FAT32 device.[60] In addition, the preformatted file system may use a cluster size that matches the erase region of the physical memory on the card; reformatting may change the cluster size and make writes less efficient."
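
    For reference, cluster/block size is fixed at mkfs time. A hedged sketch (the device node is a placeholder; double-check it before formatting, and note the ext3 options assume reasonably recent e2fsprogs):

        # FAT32 with 64 KiB clusters: 128 sectors/cluster x 512-byte sectors
        mkfs.vfat -F 32 -s 128 -S 512 /dev/mmcblk0p1

        # ext3 blocks top out at 4 KiB; stride/stripe-width at least aligns
        # allocation to a 64 KiB region (16 x 4 KiB) matching the erase unit
        mkfs.ext3 -b 4096 -E stride=16,stripe-width=16 /dev/mmcblk0p1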

    Read the article

  • Need to reformat SQL Cluster Disk. How do I recover my SQL installation?

    - by I.T. Support
    We need to reformat the SQL cluster disk in our SQL cluster. The drive contains the shared installation files for SQL as well as databases. My concern is how SQL/the cluster will react after we wipe the disk resource. Questions: Is there a defined procedure for this? How should we back up and restore the disk? After the reformat, how do we get the clustered SQL Server back online? Thanks
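
    For the "back online" part, the usual shape of the operation on a cluster of that era is sketched below with cluster.exe (the group and resource names are placeholders, and this assumes full backups of the databases and shared binaries are taken first):

        :: Take the SQL Server group offline before touching the disk
        cluster group "SQL Group" /offline

        :: ...back up files, reformat the shared disk, restore files...

        :: Bring the disk resource back, then the rest of the group
        cluster resource "Disk Q:" /online
        cluster group "SQL Group" /online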

    Read the article

  • Fortran recursion segmentation faults

    - by ConnorG
    Hey all - I have to design and implement a Fortran routine to determine the size of clusters on a square lattice, and it seemed extremely convenient to code the subroutine recursively. However, whenever my lattice size grows beyond a certain value (around 200/side), the subroutine consistently segfaults. Here's my cluster-detection routine:

        RECURSIVE SUBROUTINE growCluster(lattice, adj, idx, area)
            INTEGER, INTENT(INOUT) :: lattice(:), area
            INTEGER, INTENT(IN)    :: adj(:,:), idx
            lattice(idx) = -1
            area = area + 1
            IF (lattice(adj(1,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(1,idx), area)
            IF (lattice(adj(2,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(2,idx), area)
            IF (lattice(adj(3,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(3,idx), area)
            IF (lattice(adj(4,idx)).GT.0) &
                CALL growCluster(lattice, adj, adj(4,idx), area)
        END SUBROUTINE growCluster

    where adj(1,n) represents the north neighbor of site n, adj(2,n) represents the west, and so on. What would cause the erratic segfault behavior? Is the cluster just "too huge" for large lattice sizes?
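
    The likely culprit is a stack overflow rather than anything in the logic: each recursive call pushes a new frame, and on a 200x200 lattice a single spanning cluster can drive the recursion tens of thousands of frames deep. A hedged first check from the shell (bash; the executable name is a placeholder) is to raise the process stack limit before running, or to convert the recursion into a loop over an explicit work list:

        # Default stack is often only 8 MB
        ulimit -s
        # Raise it for this shell, then re-run the simulation
        ulimit -s unlimited
        ./cluster_sim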

    Read the article

  • IP failover with 2 nodes on different subnet: cannot ping virtual IP from second node?

    - by quanta
    I'm going to set up a redundant failover Redmine:

    - another instance was installed on the second server without problem
    - MySQL (running on the same machine as Redmine) was configured as master-master replication

    Because they are in different subnets (192.168.3.x and 192.168.6.x), it seems that VIPArip is the only choice.

    /etc/ha.d/ha.cf on node1:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth1 node2.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    /etc/ha.d/ha.cf on node2:

        logfacility none
        debug 1
        debugfile /var/log/ha-debug
        logfile /var/log/ha-log
        autojoin none
        warntime 3
        deadtime 6
        initdead 60
        udpport 694
        ucast eth0 node1.ip
        keepalive 1
        node node1
        node node2
        crm respawn

    crm configure show:

        node $id="6c27077e-d718-4c82-b307-7dccaa027a72" node1
        node $id="740d0726-e91d-40ed-9dc0-2368214a1f56" node2
        primitive VIPArip ocf:heartbeat:VIPArip \
            params ip="192.168.6.8" nic="lo:0" \
            op start interval="0" timeout="20s" \
            op monitor interval="5s" timeout="20s" depth="0" \
            op stop interval="0" timeout="20s" \
            meta is-managed="true"
        property $id="cib-bootstrap-options" \
            stonith-enabled="false" \
            dc-version="1.0.12-unknown" \
            cluster-infrastructure="Heartbeat" \
            last-lrm-refresh="1338870303"

    crm_mon -1:

        ============
        Last updated: Tue Jun 5 18:36:42 2012
        Stack: Heartbeat
        Current DC: node2 (740d0726-e91d-40ed-9dc0-2368214a1f56) - partition with quorum
        Version: 1.0.12-unknown
        2 Nodes configured, unknown expected votes
        1 Resources configured.
        ============
        Online: [ node1 node2 ]
        VIPArip (ocf::heartbeat:VIPArip): Started node1

    ip addr show lo:

        1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
            link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
            inet 127.0.0.1/8 scope host lo
            inet 192.168.6.8/32 scope global lo
            inet6 ::1/128 scope host
               valid_lft forever preferred_lft forever

    I can ping 192.168.6.8 from node1 (192.168.3.x):

        # ping -c 4 192.168.6.8
        PING 192.168.6.8 (192.168.6.8) 56(84) bytes of data.
        64 bytes from 192.168.6.8: icmp_seq=1 ttl=64 time=0.062 ms
        64 bytes from 192.168.6.8: icmp_seq=2 ttl=64 time=0.046 ms
        64 bytes from 192.168.6.8: icmp_seq=3 ttl=64 time=0.059 ms
        64 bytes from 192.168.6.8: icmp_seq=4 ttl=64 time=0.071 ms
        --- 192.168.6.8 ping statistics ---
        4 packets transmitted, 4 received, 0% packet loss, time 3000ms
        rtt min/avg/max/mdev = 0.046/0.059/0.071/0.011 ms

    but I cannot ping the virtual IP from node2 (192.168.6.x) or from outside. Did I miss something?

    PS: you probably want to set IP2UTIL=/sbin/ip in the /usr/lib/ocf/resource.d/heartbeat/VIPArip resource agent script if you get something like this:

        Jun 5 11:08:10 node1 lrmd: [19832]: info: RA output: (VIPArip:stop:stderr) 2012/06/05_11:08:10 ERROR: Invalid OCF_RESKEY_ip [192.168.6.8]

    http://www.clusterlabs.org/wiki/Debugging_Resource_Failures

    Reply to @DukeLion: Which router receives RIP updates? When I start the VIPArip resource, ripd is run with the configuration file below (on node1):

    /var/run/resource-agents/VIPArip-ripd.conf:

        hostname ripd
        password zebra
        debug rip events
        debug rip packet
        debug rip zebra
        log file /var/log/quagga/quagga.log
        router rip
        !nic_tag
        no passive-interface lo:0
        network lo:0
        distribute-list private out lo:0
        distribute-list private in lo:0
        !metric_tag
        redistribute connected metric 3
        !ip_tag
        access-list private permit 192.168.6.8/32
        access-list private deny any
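
    Since the VIP lives on lo:0 as a /32 and only becomes reachable once RIP propagates a host route, a hedged first check is whether that route ever arrives at node2 (this assumes Quagga's vtysh is available and talking to the ripd instance the resource agent started):

        # On node2: did a RIP-learned route for the VIP show up?
        vtysh -c 'show ip rip'
        ip route get 192.168.6.8

        # On node1: are RIP updates (UDP port 520) actually leaving eth1?
        tcpdump -ni eth1 udp port 520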

    Read the article

  • BizTalk Server 2009 - Architecture Options

    - by StuartBrierley
    I recently needed to put forward a proposal for a BizTalk 2009 implementation, and as a part of this I needed to describe some of the basic architecture options available for consideration. While I already had an idea of the type of environment that I would be looking to recommend, I felt that presenting a range of options while trying to explain some of the strengths and weaknesses of those options was a good place to start. These outline architecture options should be equally valid for any version of BizTalk Server from 2004, through 2006 and R2, up to 2009.

    The following diagram shows a crude representation of the common implementation options to consider when designing a BizTalk environment. Each of these options provides differing levels of resilience in the case of failure or disaster, with the latter options also providing more scope for performance tuning and scalability.

    Some of the options presented make use of clustering. Clustering may best be described as a technology that automatically allows one physical server to take over the tasks and responsibilities of another physical server that has failed. Given that all computer hardware and software will eventually fail, the goal of clustering is to ensure that mission-critical applications will have little or no downtime when such a failure occurs. Clustering can also be configured to provide load balancing, which should generally lead to performance gains and increased capacity and throughput.

    (A) Single Servers

    This option is the most basic BizTalk implementation that should be considered. It involves the deployment of a single BizTalk server in conjunction with a single SQL server. This configuration does not provide for any resilience in the case of the failure of either server. It is, however, the cheapest and easiest to implement of the options available. Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster. The common edition of BizTalk used in single server implementations is the standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this edition is limited to scaling up the implementation, not scaling out the number of servers in use. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.

    (B) Single BizTalk Server with Clustered SQL Servers

    This option uses a single BizTalk server with a cluster of SQL servers. By utilising clustered SQL servers we can ensure that there is some resilience in the implementation in respect of the databases that BizTalk relies on to operate. The clustering of two SQL servers is possible with the standard edition, but to go beyond this would require the enterprise edition. While this option offers improved resilience over option (A), it still presents a potential single point of failure at the BizTalk server. Using a single BizTalk server does not provide for the level of performance tuning that is otherwise available when using more than one BizTalk server in a cluster. The common edition of BizTalk used in single server implementations is the standard edition. It should be noted, however, that if future demand requires increased capacity for a solution, this edition is limited to scaling up the implementation, not scaling out the number of servers in use. You are also unable to take advantage of multiple message boxes, which would allow us to balance the SQL load in the event of any bottlenecks in this area of the implementation. Any need to scale out the solution would require an upgrade to the enterprise edition of BizTalk.

    (C) Clustered BizTalk Servers with Clustered SQL Servers

    This option makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in the case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition, but to go beyond this would require the enterprise edition. The use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning any implemented solutions. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling out the solution as future demand requires. This might be seen as the middle cost option, providing a good level of protection in the case of failure and a decent level of future proofing, but at a higher cost than the single BizTalk server implementations.

    (D) Clustered BizTalk Servers with Clustered SQL Servers - with disaster recovery/service continuity

    This option is similar to (C) and makes use of a cluster of BizTalk servers with a cluster of SQL servers to offer high availability and resilience in case of failure of either of the server types involved. Clustering of BizTalk is only available with the enterprise edition of the product. Clustering of two SQL servers is possible with the standard edition, but to go beyond this would require the enterprise edition. As with (C), the use of a BizTalk cluster also provides for the ability to balance load across the servers and gives more scope for performance tuning the implemented solution. It is also possible to add more BizTalk servers to an existing cluster, giving scope for scaling the solution out as future demand requires. In this scenario, however, we would be including some form of disaster recovery or service continuity. An example of this would be making use of multiple sites, with the BizTalk server cluster operating across sites to offer resilience in case of the loss of one or more sites. In this scenario there are options available for the SQL implementation depending on the network implementation: making use of either one cluster per site or a single SQL cluster across the network. A multi-site SQL implementation would require some form of data replication across the sites involved. This is obviously an expensive and complex option, but it does provide an extraordinary amount of protection in the case of failure.

    Read the article

  • Application throws NotSerializableException when run on a JBoss cluster

    - by Kalpana
    Environment: JBoss 5.1.0, JBoss Seam 2.2.0. While trying to get my application running in a clustered environment, I am getting the following exception after login (post-login we store the currentUser in the JBoss Seam session context): java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty. How do I resolve this?

        java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1156)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at java.util.ArrayList.writeObject(ArrayList.java:570)
            at sun.reflect.GeneratedMethodAccessor339.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1509)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1474)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at java.util.HashMap.writeObject(HashMap.java:1001)
            at sun.reflect.GeneratedMethodAccessor338.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:945)
            at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1461)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1392)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.serialize(SimpleCachableMarshalledValue.java:271)
            at org.jboss.ha.framework.server.SimpleCachableMarshalledValue.writeExternal(SimpleCachableMarshalledValue.java:252)
            at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java:1421)
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1390)
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1150)
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:326)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:460)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallMap(CacheMarshaller200.java:569)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:370)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallCommand(CacheMarshaller200.java:519)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:314)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallCommand(CacheMarshaller200.java:519)
            at org.jboss.cache.marshall.CacheMarshaller200.marshallObject(CacheMarshaller200.java:314)
            at org.jboss.cache.marshall.CacheMarshaller300.marshallObject(CacheMarshaller300.java:47)
            at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(CacheMarshaller200.java:191)
            at org.jboss.cache.marshall.CacheMarshaller200.objectToObjectStream(CacheMarshaller200.java:136)
            at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:182)
            at org.jboss.cache.marshall.VersionAwareMarshaller.objectToBuffer(VersionAwareMarshaller.java:52)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:369)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:341)
            at org.jboss.cache.util.concurrent.WithinThreadExecutor.submit(WithinThreadExecutor.java:82)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher.invokeRemoteCommands(CommandAwareRpcDispatcher.java:206)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:748)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:716)
            at org.jboss.cache.RPCManagerImpl.callRemoteMethods(RPCManagerImpl.java:721)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:161)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:135)
            at org.jboss.cache.interceptors.BaseRpcInterceptor.replicateCall(BaseRpcInterceptor.java:107)
            at org.jboss.cache.interceptors.ReplicationInterceptor.handleCrudMethod(ReplicationInterceptor.java:160)
            at org.jboss.cache.interceptors.ReplicationInterceptor.visitPutDataMapCommand(ReplicationInterceptor.java:113)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.base.CommandInterceptor.handleDefault(CommandInterceptor.java:131)
            at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(AbstractVisitor.java:60)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.TxInterceptor.attachGtxAndPassUpChain(TxInterceptor.java:301)
            at org.jboss.cache.interceptors.TxInterceptor.handleDefault(TxInterceptor.java:283)
            at org.jboss.cache.commands.AbstractVisitor.visitPutDataMapCommand(AbstractVisitor.java:60)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.CacheMgmtInterceptor.visitPutDataMapCommand(CacheMgmtInterceptor.java:97)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:116)
            at org.jboss.cache.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:178)
            at org.jboss.cache.interceptors.InvocationContextInterceptor.visitPutDataMapCommand(InvocationContextInterceptor.java:64)
            at org.jboss.cache.commands.write.PutDataMapCommand.acceptVisitor(PutDataMapCommand.java:104)
            at org.jboss.cache.interceptors.InterceptorChain.invoke(InterceptorChain.java:287)
            at org.jboss.cache.invocation.CacheInvocationDelegate.invokePut(CacheInvocationDelegate.java:705)
            at org.jboss.cache.invocation.CacheInvocationDelegate.put(CacheInvocationDelegate.java:519)
            at org.jboss.ha.cachemanager.CacheManagerManagedCache.put(CacheManagerManagedCache.java:277)
            at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.JBossCacheWrapper.put(JBossCacheWrapper.java:148)
            at org.jboss.web.tomcat.service.session.distributedcache.impl.jbc.AbstractJBossCacheService.storeSessionData(AbstractJBossCacheService.java:405)
            at org.jboss.web.tomcat.service.session.ClusteredSession.processSessionReplication(ClusteredSession.java:1166)
            at org.jboss.web.tomcat.service.session.JBossCacheManager.processSessionRepl(JBossCacheManager.java:1937)
            at org.jboss.web.tomcat.service.session.JBossCacheManager.storeSession(JBossCacheManager.java:309)
            at org.jboss.web.tomcat.service.session.InstantSnapshotManager.snapshot(InstantSnapshotManager.java:51)
            at org.jboss.web.tomcat.service.session.ClusteredSessionValve.handleRequest(ClusteredSessionValve.java:147)
            at org.jboss.web.tomcat.service.session.ClusteredSessionValve.invoke(ClusteredSessionValve.java:94)
            at org.jboss.web.tomcat.service.session.LockingValve.invoke(LockingValve.java:62)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
            at org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
            at org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
            at java.lang.Thread.run(Thread.java:619)

        16:38:35,789 ERROR [CommandAwareRpcDispatcher] java.io.NotSerializableException: org.jboss.seam.util.AnnotatedBeanProperty
        16:38:35,789 WARN [/a12] Failed to replicate session YwBL69cG-zdm0m5CvzNj3Q__
        java.lang.RuntimeException: Failure to marshal argument(s)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:374)
            at org.jboss.cache.marshall.CommandAwareRpcDispatcher$ReplicationTask.call(CommandAwareRpcDispatcher.java:341)
            (the remainder of this second trace repeats the same frames as above, from WithinThreadExecutor.submit down to java.lang.Thread.run)

    Read the article

  • Sharing transactions between web applications, which run in the same cluster

    - by pihentagy
    We (will) have the following architecture:

    - Base.war will be a self-contained Spring-Hibernate application
    - All applications will run under GlassFish, and may be clustered
    - E1.war will sit on top of Base.war, extending its functionality
    - There could be further extensions (E2.war, E3.war, …) sitting on top of Base.war
    - Either war could start a transaction, and transactions could span between wars
    - Without shutting down Base.war, or any other Ex.war, it should be possible to upgrade an Ey.war

    Is there a solution for this with a Spring-Hibernate-GlassFish environment?

    Read the article

  • Cluster Graph Visualization using python

    - by AlgoMan
    I am assembling the different visualization tools that are available in the Python language. I found Treemap (http://pypi.python.org/pypi/treemap/1.05). Can you suggest some other tools that are available? I am exploring different ways of visualizing web data.

    Read the article

  • Best practices for building a simple, scalable cluster on Amazon EC2 for a Java web app

    - by Alex B
    I want to build a Java web app and deploy it on EC2. It will be written in Java and will use MySQL. I was hoping to get some pointers on the actual deployment process and configuration. In particular I'm interested in the following topics:

    - machine images (DIY vs ready-made)
    - MySQL replication and backup to S3
    - ways of deploying and redeploying the app to EC2 without interruptions
    - firewalls?
    - load balancing and auto scaling
    - cloudtools (or alternative tools)
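
    For the firewall and provisioning pieces, a hedged sketch with the AWS CLI (which post-dates this question; all IDs and names below are placeholders):

        # Security group acting as the firewall: open HTTP and SSH
        aws ec2 create-security-group --group-name webapp-sg --description "web app"
        aws ec2 authorize-security-group-ingress --group-name webapp-sg \
            --protocol tcp --port 80 --cidr 0.0.0.0/0
        aws ec2 authorize-security-group-ingress --group-name webapp-sg \
            --protocol tcp --port 22 --cidr 0.0.0.0/0

        # Launch an instance from a machine image (DIY or ready-made)
        aws ec2 run-instances --image-id ami-12345678 --instance-type t2.micro \
            --key-name my-key --security-groups webapp-sg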

    Read the article

  • castle scheduler - cluster

    - by cvista
    Hi, We're using the Castle scheduler component: http://using.castleproject.org/display/Comp/Castle.Components.Scheduler?showChildren=false I have a WCF service which creates the tasks, and that does its job fine. I then have a console app running (it will be a Windows service eventually) which should keep an eye out for tasks to run. The thing is, each creates its own scheduler in the DB, but they both have the same cluster ID. Should the console app be able to run the tasks created by the WCF service? If not, how can I make it do that? Cheers w://

    Read the article

  • Why does GetClusterShape return null when the cluster specification was retrieved through the GetClusteredShapes method?

    - by Markus Olsson
    Suppose I have a Virtual Earth shape layer called shapeLayer1 (my creative energy is apparently at an all-time low). When I call the GetClusteredShapes method I get an array of VEClusterSpecification objects that represent each and every one of my currently visible clusters; no problem there. But when I call the GetClusterShape() method it returns null... null! Why on earth would it do that? I used Firebug to confirm that the private variable of the VEClusterSpecification that's supposed to hold a reference to the shape is indeed null, so it's not the method that's causing the problem. Some have suggested that this is actually documented behavior:

        Returns null if a VEClusterSpecification object was returned from the VEShapeLayer.GetClusteredShapes Method

    But looking at the current MSDN documentation for the VEShape class, it says:

        Returns if a VEClusterSpecification object was returned from the VEShapeLayer.GetClusteredShapes Method

    Is this a bug or a feature? Are there any known workarounds or (if it is a bug) some plan for when they are going to fix it?

    Read the article

  • What is the best way for communication between cluster nodes

    - by Tom
    I have an application written in a combination of ASP/VB6/VBScript and ASP.NET/C# that consists of a website part, a SOAP-like web service part, and a queue-processing part that processes incoming files in a hot folder. We are used to running under load balancers (Microsoft or other makes). Often we need to communicate between the different load-balanced servers. Currently we do this through the SQL Server database that is common to all nodes; however, this comes with a performance penalty, as each message requires a transaction and continual polling from the other nodes. What would be better ways to achieve this? Tom Appelby

    Read the article

  • COM+ Connection Pooling Doesn't Appear to be working on SQL Server 2005 Cluster

    - by kmacmahon
    We have a COM+ data layer that utilizes connection pooling. It's deployed to 3 clusters: 2 SQL Server 2000 environments and 1 SQL Server 2005 environment. We noticed today that our monitoring software is reporting thousands of logins per minute on the SQL Server 2005 box. I did some tracing in both environments, and Profiler reports this for the 2000 boxes:

        sp_reset_connection SQL CALL
        sp_reset_connection SQL CALL
        sp_reset_connection SQL CALL

    and this for the 2005 box:

        Audit Logout
        sp_reset_connection
        Audit Login SQL CALL
        Audit Logout
        sp_reset_connection
        Audit Login SQL CALL
        Audit Logout
        sp_reset_connection
        Audit Login SQL CALL

    Is there some configuration for SQL Server 2005, different from SQL Server 2000, that we might be missing and that would be creating this issue?

    Read the article

  • Register for the Solaris 11.1 and Solaris Cluster webcast!

    - by Karoly Vegh
    On 7 November there will be a live webcast about Oracle Solaris 11.1 and Oracle Solaris Cluster that you do not want to miss: the Online Launch Event, "Oracle Solaris 11 - Innovations for your Data Center". This live webcast will have three sessions:

    - Executive Keynote: Oracle Solaris 11 - Innovations for your data center
    - Oracle Technical Session: Oracle Solaris 11.1
    - Oracle Technical Session: Oracle Solaris Cluster

    There will be a live Q&A session, but feel free to tweet as well with #solaris. See you there! -- charlie

    Read the article

  • Intel launches Parallel Studio XE 2013 and Cluster Studio XE 2013, its tool suites for boosting parallel applications

    Intel launches Parallel Studio XE 2013 and Cluster Studio XE 2013, its tool suites for boosting parallel applications. From real-time data analytics to the processing of large volumes of scientific data, parallelism occupies an ever more important place in the world of software development. Tools such as the ones Intel has just announced make it possible to optimize and analyze parallel applications, which are notorious for their great complexity. The new products are Parallel Studio XE 2013 and Cluster Studio XE 2013, for the C/C++ and Fortran languages on Windows and Linux.

    Read the article

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries; most often the file contains about 300,000. The service exposes an interface with just two simple methods:

        TaskHandle sendEmails(String ftpFilePath);
        ProcessStatus checkProcessStatus(TaskHandle taskHandle);

    The sendEmails() method returns a TaskHandle through which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary. Processing a single txt file might take a long time, and restarting one node in the cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at a given time. But once a fail-over happens, the second service instance should continue from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?

    Read the article

  • How to Achieve OC4J RMI Load Balancing

    - by fip
    This is an old Oracle SOA and OC4J 10G topic. In fact, this is not even a SOA topic per se. Questions about RMI load balancing arise when you develop custom web applications that access human tasks running off a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field about how OC4J RMI load balancing works. Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public. Here is the tech note:

    Overview

    A typical use case in Oracle SOA is that you are building a web based, custom human task UI that will interact with the task services housed in a remote BPEL 10G cluster. Or, in a more generic way, you are just building a web based application in Java that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as an RMI client. Then immediately you must ask yourself the following questions:

    1. How do I make sure that the web application, as an RMI client, evenly distributes its load across all the nodes in the remote OC4J cluster?
    2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case when one of the remote OC4J nodes fails, my web application will continue to function?

    That is the topic of how to achieve load balancing with an OC4J RMI client.

    Solutions

    You need to configure and code RMI load balancing in two places:

    1. The provider URL can be specified with a comma separated list of URLs, so that the initial lookup lands at one of the available URLs.
    2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup. (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI)

    More details below.

    About the PROVIDER_URL

    The job of the JNDI property java.naming.provider.url is, when the client looks up a new context for the very first time in the client session, to provide a list of RMI contexts. The value of the property is either a single URL or a comma separated list of URLs.

    A single URL, for example:

        opmn:ormi://host1:6003:oc4j_instance1/appName1

    A comma separated list of multiple URLs, for example:

        opmn:ormi://host1:6003:oc4j_instance1/appName,
        opmn:ormi://host2:6003:oc4j_instance1/appName,
        opmn:ormi://host3:6003:oc4j_instance1/appName

    When the client looks up a new Context for the very first time in the client session, it sends a query to the OPMN server(s) referenced by the provider URL. The OPMN host and port specify the destination of the query, and the OC4J instance name and appName are effectively its "where clause".

    When the PROVIDER_URL references a single OPMN server

    Let's consider the case when the provider URL references only a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4Js within the cluster, even though there is only a single OPMN server in the provider URL. A context represents a particular starting point at a particular server for subsequent object lookups. For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then OPMN will return the following contexts (provided that host1, host2 and host3 are all in the same cluster):

    - appName on oc4j_instance1 on host1
    - appName on oc4j_instance1 on host2
    - appName on oc4j_instance1 on host3

    Please note that one OPMN server is sufficient to find the list of all contexts in the entire cluster that satisfy the JNDI lookup query. You can run an experiment: shut down appName on host1 and observe that OPMN on host1 will still be able to return you appName on host2 and appName on host3.

    When the PROVIDER_URL references a comma separated list of multiple OPMN servers

    When the JNDI property java.naming.provider.url references a comma separated list of multiple URLs, the lookup returns exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster. The purpose of having multiple OPMN servers is to provide high availability for the initial context creation, such that if OPMN on host1 is unavailable, the client will try the lookup via OPMN on host2, and so on. After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the third URL from the list of JNDI URLs will not stop the client from getting the EJB on the third server.

    About the oracle.j2ee.rmi.loadBalance property

    After the client acquires the list of contexts, it caches it at the client side as the "list of available RMI contexts". This list includes all the servers in the destination cluster and stays in the cache until the client session (JVM) ends. The RMI load balancing against the destination cluster happens at the client side, as the client switches between the members of the list. Whether and how often the client refreshes the Context from the list of contexts is based on the value of oracle.j2ee.rmi.loadBalance. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists the available values:

    - client: If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation.
    - context: Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked.
    - lookup: Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup().

    Please note that regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the "refresh" only occurs at the client. The client can only choose from the "list of available contexts" that was returned and cached from the very first lookup; that is, the client merely gets a new Context object from the cached "list of available RMI contexts". The client will NOT go to the OPMN server again to get the list. That also implies that if you add a node to the server cluster AFTER the client's initial lookup, the client will not know about it, because neither the server nor the client initiates a refresh of the "list of available servers" to reflect the new node.

    About High Availability (i.e. Resilience Against Node Failure of the Remote OC4J Cluster)

    What we have discussed above is load balancing; let's also discuss high availability. This is how high availability works in RMI: when the client uses a context but gets an exception such as "socket is closed", it knows that the server referenced by that Context is problematic and will try to get another unused Context from the "list of available contexts". Again, this is the list that was returned and cached at the very first lookup in the entire client session.
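
    Putting the two settings together, the client-side JNDI environment from this tech note ends up looking roughly like the sketch below (hosts, ports, instance/app names and credentials are placeholders; the initial context factory shown is the usual OC4J one):

        # jndi.properties for the RMI client (values are placeholders)
        java.naming.factory.initial=oracle.j2ee.rmi.RMIInitialContextFactory
        java.naming.provider.url=opmn:ormi://host1:6003:oc4j_instance1/appName,opmn:ormi://host2:6003:oc4j_instance1/appName,opmn:ormi://host3:6003:oc4j_instance1/appName
        java.naming.security.principal=oc4jadmin
        java.naming.security.credentials=welcome1
        oracle.j2ee.rmi.loadBalance=context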

    Read the article

  • Shrink NTFS Windows 7 Partition with GParted

    - by user15961
    I am running a dual-boot system with Windows 7 and Ubuntu 10.10. Initially I allocated about 20 GB for my Ubuntu partition; however, I quickly ran out of that space and am now looking to expand the partition. Currently my NTFS partition (450 GB) has about 130 GB of free space. I tried using GParted to shrink the partition but encountered the following error. I booted into Windows so I could run chkdsk, but the countdown freezes at 1 upon reboot. I tried multiple methods to resolve that issue but nothing seems to work. Finally I gave up, and now I just want to know the best way to force GParted to shrink the partition regardless of the errors. I don't really have anything important and I don't mind risking the data; I just don't want to wipe the entire NTFS partition, because I don't have a Windows install CD and might require Windows later on for some programs. I tried using sudo ntfsresize, but that spews out the same error as GParted... Any ideas?

        Check and repair file system (ntfs) on /dev/sda2  00:00:09  (ERROR)
        calibrate /dev/sda2  00:00:00  (SUCCESS)
            path: /dev/sda2
            start: 36944325
            end: 976771119
            size: 939826795 (448.14 GiB)
        check file system on /dev/sda2 for errors and (if possible) fix them  00:00:09  (ERROR)
        ntfsresize -P -i -f -v /dev/sda2
        ntfsresize v2.0.0 (libntfs 10:0:0)
        Device name        : /dev/sda2
        NTFS volume version: 3.1
        Cluster size       : 4096 bytes
        Current volume size: 481191318016 bytes (481192 MB)
        Current device size: 481191319040 bytes (481192 MB)
        Checking for bad sectors ...
        Checking filesystem consistency ...
        Cluster 63468 is referenced multiple times!
        Cluster 63469 is referenced multiple times!
        Cluster 63465 is referenced multiple times!
        Cluster 63466 is referenced multiple times!
        Cluster 63467 is referenced multiple times!
        Cluster 165621 is referenced multiple times!
        Cluster 165622 is referenced multiple times!
        Cluster 165623 is referenced multiple times!
        Cluster 165624 is referenced multiple times!
        ERROR: Filesystem check failed!
        ERROR: 9 clusters are referenced multiply times. NTFS is inconsistent.
        Run chkdsk /f on Windows then reboot it TWICE!
        The usage of the /f parameter is very IMPORTANT!
        No modification was and will be made to NTFS by this software until it gets repaired.
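
    Since chkdsk never gets to run, one hedged fallback (given that the data is expendable) is ntfsfix from ntfs-3g/ntfsprogs, which resets the journal and clears the dirty flag but repairs far less than chkdsk; multiply-referenced clusters may well still require a real chkdsk /f:

        # Attempt a basic NTFS metadata repair, then dry-run the resize
        sudo ntfsfix /dev/sda2
        sudo ntfsresize --no-action --size 320G /dev/sda2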

    Read the article
