Search Results

Search found 1914 results on 77 pages for 'mongrel cluster'.

Page 28/77

  • Slicing the EDG

    - by Antony Reynolds
    Different SOA Domain Configurations

    In this blog entry I would like to introduce three different configurations for a SOA environment. I have omitted load balancers and OTD/OHS as they introduce a whole new round of discussion. For each possible deployment architecture I have identified some of the advantages.

    Super Domain

    This is a single EDG-style domain for everything needed for SOA/OSB. It extends the standard EDG slightly but otherwise assumes a single "super" domain. This is basically the SOA EDG. I have broken out JMS servers and Coherence servers to improve scalability and reduce dependencies.

    Key Points
    - Separate JMS servers allow them to be kept up independently of the rest of the SOA domain, allowing JMS clients to post messages even if the rest of the domain is unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

    Benefits
    - Single administration point (one Admin Server).
    - Closely follows the EDG, with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.

    Drawbacks
    - Patching is an all-or-nothing affair.
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Multiple Domains

    This extends the EDG into multiple domains, allowing separate management and update of these domains. I see this type of configuration quite often with customers, although some don't have OWSM, others don't have separate Coherence, etc. SOA and BAM are kept in the same domain, as little benefit is obtained by separating them.

    Key Points
    - Separate JMS servers allow them to be kept up independently of the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.

    Benefits
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).

    Drawbacks
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires the use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions (a WLST sketch follows at the end of this entry).
    - Startup time for SOA may be slow if a large number of composites are deployed.

    Shared Service Environment

    This model extends the previous multiple-domain arrangement to provide a true shared service environment. It allows multiple additional SOA domains and/or other domains to take advantage of the shared services. Only one non-shared domain is shown, but there could be several, allowing groups of applications to share patching independently of other application groups.

    Key Points
    - Separate JMS servers allow them to be kept up independently of the rest of the SOA domain, allowing JMS clients to post messages even if other domains are unavailable.
    - JMS servers are only used to host application-specific JMS destinations; SOA/OSB JMS destinations remain in the relevant SOA/OSB managed servers.
    - Separate Coherence servers allow the OSB cache to be offloaded from the OSB servers.
    - Coherence can be used by other components as a shared infrastructure data grid service.
    - The Coherence cluster may be managed by WLS but is more likely run as a standalone Coherence cluster.
    - The shared SOA domain hosts Human Workflow tasks, BAM, and common "utility" composites.
    - A single OSB domain provides the "Enterprise Service Bus".
    - All domains use the same OWSM policy store (MDS-WSM).

    Benefits
    - Follows the EDG, but in separate domains and with the addition of application-specific JMS servers and standalone Coherence servers for OSB caching and application-specific caches.
    - The Coherence grid can be scaled independently of OSB/SOA.
    - JMS queues provide for inter-application communication.
    - The patch lifecycles of OSB/SOA/JMS are no longer lock-stepped.
    - JMS may be kept running independently of other domains, allowing applications to insert messages for later consumption by SOA/OSB.
    - OSB may be kept running independently of other domains, allowing service virtualization to continue regardless of other domains' availability.
    - All domains use the same OWSM policy store (MDS-WSM).
    - Supports large numbers of deployed composites in multiple domains.
    - Single URL for Human Workflow end users.
    - Single URL for BAM end users.

    Drawbacks
    - Multiple domains to manage and configure.
    - Multiple Admin Servers (a single view requires the use of Grid Control).
    - Multiple Admin Servers/WSM clusters waste resources.
    - Additional homes are needed to enjoy the benefits of separate patching.
    - Cross-domain trust needs setting up to simplify cross-domain interactions.
    - Human Workflow needs to be specially configured to point to the shared services domain.

    Summary

    The alternatives in this blog allow patching to have different impacts, depending on the model chosen. Each organization must decide the tradeoffs for itself. One extreme is to go for the shared services model and have one domain per SOA application; this requires a lot of administration of the multiple domains. The other extreme is to have a single super domain; this makes the entire enterprise susceptible to an outage at the same time due to patching or other domain-level changes. Hopefully this blog will help your organization choose the right model for you.
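    Since cross-domain trust shows up as a drawback in both multi-domain models, here is a minimal WLST (Jython) sketch of enabling cross-domain security on one domain. The credentials, admin URL, and domain name are illustrative assumptions; each participating domain needs the matching configuration and credential mappings as well.

        # Hedged WLST sketch -- connection details and domain name are hypothetical.
        connect('weblogic', 'welcome1', 't3://adminhost:7001')
        edit()
        startEdit()
        # Enable cross-domain security on this domain's SecurityConfigurationMBean.
        cd('SecurityConfiguration/soa_domain')
        set('CrossDomainSecurityEnabled', 'true')
        save()
        activate()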

    Read the article

  • Solaris Web Magazine JP

    - by kazun
    [Japanese-language magazine page; the original text and inline CSS were lost to encoding damage. Recoverable details: feature articles on Oracle OpenWorld Tokyo 2012 and Oracle Solaris Studio 12.3; an Oracle Solaris 11 article series; product information for Oracle Solaris, Oracle Solaris Studio and Oracle Solaris Cluster; OTN download links for Oracle Solaris 11, Solaris 10, Oracle Solaris Cluster, Oracle Solaris Studio and Oracle Linux; seminar listings dated 2012/5/21 and 2012/5/23; and links to Oracle Software Delivery Cloud, My Oracle Support, Oracle PartnerNetwork, the Oracle Solaris Knowledge Zone and Oracle University.]

    Read the article

  • Solaris turns 20 - Oracle announces Solaris 11.1!

    - by OTN-J Master
    [Japanese-language post; most of the original text was lost to encoding damage. Recoverable details: the US OTN announcement of Oracle Solaris 11.1, previewed at Oracle OpenWorld on October 4th with some 300 new features, coinciding with the 20th anniversary of Solaris; a webcast covering Oracle Solaris 11.1 and Oracle Solaris Cluster scheduled for November 7th, 2012 at 8:00, with a follow-up session on November 8th.]

    Read the article

  • MySQL Certifications Overview Seminar

    - by Yusuke.Yamamoto
    [Japanese-language seminar summary, dated 2011/07/25; most of the original text was lost to encoding damage. Recoverable details: an overview of the MySQL certification tracks - Associate, Administrator, Developer and Cluster - and how to prepare for each exam:]
    - MySQL 5 Certified Associate
    - MySQL 5 Developer Certified Professional
    - MySQL 5 Database Administrator Certified Professional
    - MySQL 5.1 Cluster Database Administrator Certified Expert
    Recording and slides:
    http://otndnld.oracle.co.jp/ondemand/otn-seminar/movie/20110725MySQL_Cert.wmv
    http://www.oracle.com/technetwork/jp/ondemand/db-new/20110725-ou-cert-455765-ja.pdf

    Read the article

  • Why is Windows Task Scheduler trying to launch multiple instances?

    - by Paul H
    We have a number of Windows scheduled tasks that run on one Server 2008 web server (not R2) which is in a cluster. We recently moved from the original web server cluster to a new web server cluster (Server 2008, not R2). The new web server (in the cluster) running the Windows tasks is set up the same as the original, we believe. BUT we now find that on the new Windows server the Task Scheduler seems to want to instantly start each task three times. If we set the option to queue up a new task we get:

        Event ID 324: Task Scheduler queued instance "{9a1a8411-b042-45ff-8e6b-89874df230d7}" of task "\Client Reporting" and will launch it as soon as instance "{2bcc3df6-ea3b-4453-90c2-75b8b1946388}" completes.

    If we set the option to stop an existing task we get:

        Event ID 323: Task Scheduler stopped instance "{e685a910-b32b-414e-85fd-96bbe54314a2}" of task "\Client Reporting" in order to launch new instance "{4db66265-1f51-4ede-8535-ac7c3cb5c4c1}".

    Ticked settings:
    - Allow task to be run on demand.
    - Run task as soon as possible after a scheduled start is missed.
    - Stop the task if running for longer than 1 hour.
    - If the running task does not end when requested, force it to stop.
    - Start the task only if the computer is on AC power.
    - Stop the task if the computer switches to battery power.

    Selected option: if the task is already running, stop the existing instance.

    Note: we moved the tasks from one server to another in the cluster to see if it was the Task Scheduler on the particular server we'd picked causing the problem. Same behaviour. Could it be something to do with the build of the new servers? We have very similar tasks set up on another server cluster that work OK without all this multiple starting. Comparing those tasks to the ones here, there does not seem to be anything obviously different in the settings available to us through the Task Scheduler options.

    Trigger: the task is scheduled to be triggered daily, once an hour, and to be stopped if it exceeds this time. Action: runs a .bat file.

    What could be causing this, and where can we look to see what logic is causing the tasks to start multiple times in this way?
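    One place to look: the Task Scheduler operational event log records every launch, stop, and queue decision (the Event IDs 323/324 quoted above live there). A hedged Python sketch that pulls the recent entries via Windows's built-in wevtutil, so you can line up which trigger fired each extra instance; the event-ID filter and count are illustrative:

        # Minimal diagnostic sketch; assumes Windows's built-in wevtutil is on PATH.
        import subprocess

        QUERY = "*[System[(EventID=322 or EventID=323 or EventID=324)]]"
        out = subprocess.check_output([
            "wevtutil", "qe", "Microsoft-Windows-TaskScheduler/Operational",
            "/q:" + QUERY,   # only launch-collision events
            "/c:50",         # last 50 matches
            "/rd:true",      # newest first
            "/f:text",
        ])
        print(out.decode("utf-8", errors="replace"))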

    Read the article

  • Opscenter repair service times out. ERROR: Requested range intersects a local range [...]

    - by jlemire-zs
    My production cluster has had the repair service enabled since April 16th, with the default 9-day time to completion, and repairs would complete properly. However, since May 22nd, it has been disabled automatically by OpsCenter. From /var/log/opscenter/opscenterd.log:

        [...]
        2014-06-03 21:13:47-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4019838962446882275L, -4006140687792135587L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service
        [...]

    From /var/log/opscenter/repair_service/zs_prod.log:

        [...]
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Repair task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) timed out after 3600 seconds.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: Task (<Node 10.1.0.22='6417880425364517165'>, (-4006140687792135587L, -4006140687792135586L), set(['zs_logging', 'OpsCenter'])) has failed 1 times.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: 101 errors have ocurred out of 100 allowed.
        2014-06-03 22:16:44-0400 [zs_prod] ERROR: More than 100 errors during repair service, shutting down repair service
        2014-06-03 22:16:44-0400 [zs_prod] INFO: Stopping repair service

    On the nodes on which the repair fails, from /var/log/cassandra/system.log:

        ERROR [RMI TCP Connection(93502)-10.1.0.22] 2014-06-03 20:12:28,858 StorageService.java (line 2560) Repair session failed:
        java.lang.IllegalArgumentException: Requested range intersects a local range but is not fully contained in one; this would lead to imprecise repair
            at org.apache.cassandra.service.ActiveRepairService.getNeighbors(ActiveRepairService.java:164)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:128)
            at org.apache.cassandra.repair.RepairSession.<init>(RepairSession.java:117)
            at org.apache.cassandra.service.ActiveRepairService.submitRepairSession(ActiveRepairService.java:97)
            at org.apache.cassandra.service.StorageService.forceKeyspaceRepair(StorageService.java:2620)
            at org.apache.cassandra.service.StorageService$5.runMayThrow(StorageService.java:2556)
            at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)

    These errors, which only occur while the repair service is running, are the only errors these nodes experience. Outside of the repair task, the Cassandra cluster works perfectly. I am running OpsCenter 4.1.2 with a 6-node DSE 4.0.2 cluster installed on Linux virtual machines. The nodes run a vanilla installation of Ubuntu Server 12.04 64-bit, and DSE was installed and secured according to the provided installation documentation. I have been experiencing this problem on my development cluster for a while too (with DSE 4.0.0, 4.0.1 and 4.0.2), but I thought it was caused by some configuration error on my part; the problem appeared spontaneously there at some point as well. The Cassandra cluster has been working very smoothly with a good write throughput. It is very stable and has enough resources to work with. We did not notice any problems with the applications that depend on it.
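    The exception says the requested repair range straddles a boundary between two of the node's local token ranges. While the repair service is down, one hedged workaround is to drive subrange repairs yourself with nodetool's -st/-et options (present in the Cassandra 2.0 line that DSE 4.0.x ships), one local range at a time, so no request can cross a boundary. A Python sketch; the token pair is copied from the log purely as an illustration, and in practice you would read the ranges from `nodetool ring`:

        # Hedged sketch, not the official fix: subrange repairs via nodetool.
        import subprocess

        # Hypothetical list of this node's local token ranges.
        local_ranges = [(-4019838962446882275, -4006140687792135587)]

        for start, end in local_ranges:
            subprocess.check_call([
                "nodetool", "repair",
                "-st", str(start), "-et", str(end),
                "zs_logging",  # keyspace name taken from the error message
            ])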

    Read the article

  • JBossMQ - Clustered Queues/NameNotFoundException: QueueConnectionFactory error

    - by mfarver
    I am trying to get an application working on a JBoss cluster. It uses queues internally, and the developer claims that it should work correctly in a clustered environment. I have JBossMQ set up as an HA singleton on the cluster. The application works correctly on whichever node is currently running the queue, but fails on the other nodes with a "javax.naming.NameNotFoundException: QueueConnectionFactory not bound" error. I can look at JNDIView from the jmx-console and see that the QueueConnectionFactory class indeed only appears on the primary node in the Global context. Is there a way to see the cluster's JNDI listing instead of each server's? The steps I took from a default JBoss 4.2.3.GA installation were to use the "all" configuration, then remove /server/all/deploy/hsqldb-ds.xml and /deploy-hasingleton/jms/hsqldb-jdbc2-service.xml, copying the example/jms/mysql-jdbc2-service.xml file into their place (editing that file to use DefaultDS instead of MySqlDS). Finally, I created a mysql-ds.xml file in the deploy directory pointing "DefaultDS" at an empty database. I created a -services.xml file in the deploy directory with the queue definition, like the one below:

        <server>
          <mbean code="org.jboss.mq.server.jmx.Queue"
                 name="jboss.mq.destination:service=Queue,name=myfirstqueue">
            <depends optional-attribute-name="DestinationManager">
              jboss.mq:service=DestinationManager
            </depends>
          </mbean>
        </server>

    All of the other cluster features are working: the servers list each other in the view, and sessions are replicating back and forth. The JBoss documentation is somewhat light in this area; is there another setting I might have missed? Or is this likely to be a code issue (is there different code to do a JNDI lookup in a clustered environment)? Thanks
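    On that last question: in a JBoss 4.x cluster, cluster-wide bindings are normally resolved through HA-JNDI (default port 1100) rather than each node's plain JNDI port (1099), which would explain why only the singleton's node sees the factory. A hedged sketch of the client-side jndi.properties for the HA-JNDI lookup path; the host names are illustrative:

        java.naming.factory.initial=org.jnp.interfaces.NamingContextFactory
        java.naming.factory.url.pkgs=org.jboss.naming:org.jnp.interfaces
        # list several cluster nodes on the HA-JNDI port, not the plain 1099 port
        java.naming.provider.url=node1:1100,node2:1100
        # allow automatic discovery of other HA-JNDI servers as a fallback
        jnp.disableDiscovery=false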

    Read the article

  • Cisco 2900XL - SNMP - Get MAC address of device connected to an interface

    - by ankit
    Hello all, basically what I want to do is find out the MAC address of a device plugged into an interface on the switch (FastEthernet0/1, for example). Reading through the switch documentation, I found that I can configure an SNMP trap to notify me of any new MAC address the switch detects, by using the command:

        snmp-server enable traps mac-notification

    but for some reason my switch does not support this feature. The only options I see are:

        CORE_SWITCH(config)#snmp-server enable traps ?
          c2900            Enable SNMP c2900 traps
          cluster          Enable Cluster traps
          config           Enable SNMP config traps
          entity           Enable SNMP entity traps
          hsrp             Enable SNMP HSRP traps
          snmp             Enable SNMP traps
          vlan-membership  Enable VLAN Membership traps
          vtp              Enable SNMP VTP traps
          <cr>

    So the other way would be for me to run a cron job on my gateway to poll the switch periodically using SNMP to get new MAC addresses. I have looked everywhere but can't seem to find the OID that would provide me this information. Any help would be very much appreciated! Here's the output from "show version" on my switch:

        Cisco Internetwork Operating System Software
        IOS (tm) C2900XL Software (C2900XL-C3H2S-M), Version 12.0(5.4)WC(1), MAINTENANCE INTERIM SOFTWARE
        Copyright (c) 1986-2001 by cisco Systems, Inc.
        Compiled Tue 10-Jul-01 11:52 by devgoyal
        Image text-base: 0x00003000, data-base: 0x00333CD8

        ROM: Bootstrap program is C2900XL boot loader

        CORE_SWITCH uptime is 1 hour, 24 minutes
        System returned to ROM by power-on
        System image file is "flash:c2900XL-c3h2s-mz.120-5.4.WC.1.bin"

        cisco WS-C2912-XL (PowerPC403GA) processor (revision 0x11) with 8192K/1024K bytes of memory.
        Processor board ID FAB0409X1WS, with hardware revision 0x01
        Last reset from power-on
        Processor is running Enterprise Edition Software
        Cluster command switch capable
        Cluster member switch capable
        12 FastEthernet/IEEE 802.3 interface(s)
        32K bytes of flash-simulated non-volatile configuration memory.
        Base ethernet MAC Address: 00:01:42:D0:67:00
        Motherboard assembly number: 73-3397-08
        Power supply part number: 34-0834-01
        Motherboard serial number: FAB040843G4
        Power supply serial number: DAB05030HR8
        Model revision number: A0
        Motherboard revision number: C0
        Model number: WS-C2912-XL-EN
        System serial number: FAB0409X1WS
        Configuration register is 0xF

    thanks, -ankit
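    One OID worth trying, offered as a hedged suggestion: the bridge forwarding table (BRIDGE-MIB dot1dTpFdbTable, 1.3.6.1.2.1.17.4.3), which on Catalyst IOS is read per VLAN using "community@vlan" indexing. A minimal Python sketch of the periodic poll using net-snmp's snmpwalk; the management address, community string and VLAN number are illustrative:

        # Hedged polling sketch; assumes net-snmp's snmpwalk is installed.
        import subprocess

        SWITCH = "192.0.2.10"      # hypothetical management address
        COMMUNITY = "public"       # hypothetical community string
        VLAN = 1                   # poll one VLAN at a time
        FDB_ADDRESS_OID = "1.3.6.1.2.1.17.4.3.1.1"  # dot1dTpFdbAddress

        out = subprocess.check_output([
            "snmpwalk", "-v1",
            "-c", "%s@%d" % (COMMUNITY, VLAN),  # community@vlan indexing
            SWITCH, FDB_ADDRESS_OID,
        ])
        print(out.decode())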

    Read the article

  • All websites migrated from server running IIS6 to IIS7

    - by Leah
    Hi, I hope someone will be able to help me with this. We have recently migrated all of our clients' sites to a new server running IIS7; all the sites were originally running on a server running IIS6. Ever since the migration, lots of our clients have been reporting error messages. There seem to be quite a number of issues related to sending emails, and we have also had the following error message reported by several different clients:

        Server Error in '/' Application.
        --------------------------------------------------------------------------------
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    I have read elsewhere that this error can appear if a button is clicked before the whole page has finished loading. But as this error has now appeared on multiple sites, and only since the server migration, it seems to me that it must be something else. I was wondering if someone could tell me if there is something specific which needs to be changed for .NET sites when they are moved from a server running IIS6 to a server running IIS7? I don't deal with the actual servers very much, so I'm afraid this is very much a grey area for me. Any help would be very much appreciated.
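    One specific thing worth checking, as a hedged suggestion rather than a confirmed diagnosis: the error text itself points at <machineKey>. If the new servers are auto-generating keys, each server (and each app-pool recycle) validates viewstate with a different key. Pinning an explicit, identical key on every server in web.config is the usual cure; the values below are placeholders to replace with keys you generate yourself:

        <system.web>
          <machineKey
              validationKey="[64+ hex characters generated for your farm]"
              decryptionKey="[48+ hex characters generated for your farm]"
              validation="SHA1"
              decryption="AES" />
        </system.web>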

    Read the article

  • HTML5 video (mp4 and ogv) problems in Safari and Firefox - but Chrome is all good

    - by qryss
    Hi folks, I have the following code:

        <video width="640" height="360" controls id="video-player" poster="/movies/poster.png">
          <source src="/movies/640x360.m4v" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
          <source src="/movies/640x360.ogv" type='video/ogg; codecs="theora, vorbis"'>
        </video>

    I'm using Rails (Mongrel in development and Mongrel+Apache in production). Chrome (Mac and Win) can play either file (tested by enabling one source tag at a time), whether locally or from my production servers. Safari (Mac and Win) can play the mp4 file fine locally but not from production. Firefox 3.6 won't play the video on either OS; I just get a grey cross in the middle of the video player area. I've made sure that both Mongrel and Apache in each case have the right MIME types set. From Chrome's results I know there is nothing inherently wrong with my video files or the way the files are being asked for or delivered. Anyone got any clues? Or even a clue as to how to diagnose the problem? For Firefox I looked at https://developer.mozilla.org/En/Using_audio_and_video_in_Firefox where it refers to an 'error' event and an 'error' attribute. It seems the 'error' event is thrown pretty well straightaway, and at that time there is no error attribute. Very helpful... :( Help enormously appreciated! Thanks in advance... Chris

    Read the article

  • Big Data – Buzz Words: What is Hadoop – Day 6 of 21

    - by Pinal Dave
    In yesterday's blog post we learned what NoSQL is. In this article we will take a quick look at one of the four most important buzz words that go around Big Data: Hadoop.

    What is Hadoop?

    Apache Hadoop is an open-source, free, Java-based software framework that offers a powerful distributed platform to store and manage Big Data. It is licensed under an Apache V2 license. It runs applications on large clusters of commodity hardware and processes thousands of terabytes of data on thousands of nodes. Hadoop is inspired by Google's MapReduce and Google File System (GFS) papers. A major advantage of the Hadoop framework is that it provides reliability and high availability.

    What are the core components of Hadoop?

    There are two major components of the Hadoop framework, and each of them performs one of its essential tasks.

    - Hadoop MapReduce is the method of splitting a larger data problem into smaller chunks and distributing them to many different commodity servers. Each server has its own set of resources and processes its chunk locally. Once a commodity server has processed its data, it sends the results back collectively to the main server. This is effectively a process whereby we handle large data sets efficiently. (We will understand this in tomorrow's blog post; a minimal word-count sketch also appears at the end of this post.)
    - Hadoop Distributed File System (HDFS) is a virtual file system. There is a big difference between HDFS and any other file system: when we move a file onto HDFS, it is automatically split into many small pieces. These small chunks of the file are replicated and stored on other servers (usually 3) for fault tolerance and high availability. (We will understand this in the day after tomorrow's blog post.)

    Besides the above two core components, the Hadoop project also contains the following modules:

    - Hadoop Common: common utilities for the other Hadoop modules
    - Hadoop YARN: a framework for job scheduling and cluster resource management

    There are a few other projects related to Hadoop (like Pig and Hive) which we will gradually explore in later blog posts.

    A Multi-node Hadoop Cluster Architecture

    Now let us quickly look at the architecture of a multi-node Hadoop cluster. A small Hadoop cluster includes a single master node and multiple worker or slave nodes. As discussed earlier, the entire cluster contains two layers - the MapReduce layer and the HDFS layer - and each layer has its own relevant components. The master node consists of a JobTracker, TaskTracker, NameNode and DataNode. A slave or worker node consists of a DataNode and TaskTracker. It is also possible for a slave or worker node to be only a data node or only a compute node; that flexibility is, in fact, a key feature of Hadoop. In this introductory blog post we will stop here while describing the architecture of Hadoop; in a future post of this series we will explore the various components of the Hadoop architecture in detail.

    Why Use Hadoop?

    There are many advantages to using Hadoop. Let me quickly list them here:

    - Robust and Scalable: we can add new nodes as needed, as well as modify them.
    - Affordable and Cost Effective: we do not need any special hardware to run Hadoop; we can just use commodity servers.
    - Adaptive and Flexible: Hadoop is built keeping in mind that it will handle structured and unstructured data.
    - Highly Available and Fault Tolerant: when a node fails, the Hadoop framework automatically fails over to another node.

    Why is Hadoop named Hadoop?

    In 2005 Hadoop was created by Doug Cutting and Mike Cafarella while working at Yahoo. Doug Cutting named Hadoop after his son's toy elephant.

    Tomorrow

    In tomorrow's blog post we will discuss the buzz word MapReduce.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
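    To make the split/process/collect idea concrete, here is a minimal word-count mapper and reducer in the Hadoop Streaming style (Streaming pipes data through any executable via stdin/stdout). This is an illustrative sketch, not code from the original post; the submission command in the comment assumes hypothetical paths:

        # wordcount.py -- minimal Hadoop Streaming sketch (illustrative).
        # Submit with something like (jar and HDFS paths assumed):
        #   hadoop jar hadoop-streaming.jar -input in/ -output out/ \
        #     -mapper "python wordcount.py map" -reducer "python wordcount.py reduce"
        import sys

        def mapper():
            # emit "<word>\t1" for every word on stdin
            for line in sys.stdin:
                for word in line.split():
                    print("%s\t1" % word)

        def reducer():
            # the framework sorts by key, so equal words arrive adjacent
            current, count = None, 0
            for line in sys.stdin:
                word, n = line.rstrip("\n").split("\t")
                if word != current:
                    if current is not None:
                        print("%s\t%d" % (current, count))
                    current, count = word, 0
                count += int(n)
            if current is not None:
                print("%s\t%d" % (current, count))

        if __name__ == "__main__":
            mapper() if sys.argv[1:] == ["map"] else reducer()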

    Read the article

  • WebLogic job scheduling

    - by XpiritO
    Hello, overflowers :) I'm trying to implement a WebLogic job scheduling example, to test my cluster's fail-over capabilities for scheduled tasks (to ensure that these tasks are executed in a fail-over scenario). With this in mind, I've been following this example and trying to configure everything accordingly. Here are the steps I've done so far:

    - Configured a cluster with 1 admin server (AdminServer) and 2 managed instances (Noddy and Snoopy);
    - Set up database tables (using Oracle XE): ACTIVE and WEBLOGIC_TIMERS;
    - Set up a data source to access the DB and associated it with the scheduling tasks under "Settings for cluster" > "Scheduling";
    - Implemented a job (TimerListener) and a servlet to initialize the job scheduling, as follows:

        package timedexecution;

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.io.Serializable;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import commonj.timers.Timer;
        import commonj.timers.TimerListener;
        import commonj.timers.TimerManager;

        public class TimerServlet extends HttpServlet {

            private static final long serialVersionUID = 1L;

            protected static void logMessage(String message, PrintWriter out) {
                out.write("<p>" + message + "</p>");
                System.out.println(message);
            }

            @Override
            public void service(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                PrintWriter out = response.getWriter();
                out.println("<html>");
                out.println("<head><title>TimerServlet</title></head>");
                try {
                    logMessage("service() entering try block to intialize the timer from JNDI", out);
                    InitialContext ic = new InitialContext();
                    TimerManager jobScheduler = (TimerManager) ic.lookup("weblogic.JobScheduler");
                    logMessage("jobScheduler reference " + jobScheduler, out);
                    // execute this job every 30 seconds
                    jobScheduler.schedule(new ExampleTimerListener(), 0, 30 * 1000);
                    logMessage("Timer scheduled!", out);
                    logMessage("service() started the timer", out);
                    logMessage("Started the timer - status:", out);
                } catch (NamingException ne) {
                    String msg = ne.getMessage();
                    logMessage("Timer schedule failed!", out);
                    logMessage(msg, out);
                } catch (Throwable t) {
                    logMessage("service() error initializing timer manager with JNDI name weblogic.JobScheduler " + t, out);
                }
                out.println("</body></html>");
                out.close();
            }

            private static class ExampleTimerListener implements Serializable, TimerListener {
                private static final long serialVersionUID = 8313912206357147939L;

                public void timerExpired(Timer timer) {
                    SimpleDateFormat sdf = new SimpleDateFormat();
                    System.out.println("timerExpired() called at " + sdf.format(new Date()));
                }
            }
        }

    Then I executed the servlet to start the scheduling on the first managed instance (Noddy server), which returned as expected:

        service() entering try block to intialize the timer from JNDI
        jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
        Timer scheduled!
        service() started the timer
        Started the timer - status:

    Which resulted in the creation of 2 rows in my DB tables. WEBLOGIC_TIMERS table state after servlet execution:

        "EDIT"; "TIMER_ID"; "LISTENER"; "START_TIME"; "INTERVAL"; "TIMER_MANAGER_NAME"; "DOMAIN_NAME"; "CLUSTER_NAME";
        ""; "Noddy_1268653040156"; "[datatype]"; "1268653040156"; "30000"; "weblogic.JobScheduler"; "myCluster"; "Cluster"

    ACTIVE table state after servlet execution:

        "EDIT"; "SERVER"; "INSTANCE"; "DOMAINNAME"; "CLUSTERNAME"; "TIMEOUT";
        ""; "service.SINGLETON_MASTER"; "6382071947583985002/Noddy"; "QRENcluster"; "Cluster"; "10.03.15"

    However, the job is not executed as scheduled. It should print a message on the server's log output (Noddy.out file) with a timestamp, saying that the timer had expired. It doesn't. My log files state as follows. Admin server log (myCluster.log file):

        ####<15/Mar/2010 10H45m GMT> <Warning> <Cluster> <test-ad> <Noddy> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268649925727> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

    Noddy server log (Noddy.out file):

        service() entering try block to intialize the timer from JNDI
        jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
        Timer scheduled!
        service() started the timer
        Started the timer - status:
        <15/Mar/2010 10H45m GMT> <Warning> <Cluster> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

    (Noddy.log file):

        ####<15/Mar/2010 11H24m GMT> <Info> <Common> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268652270128> <BEA-000628> <Created "1" resources for pool "TxDataSourceOracle", out of which "1" are available and "0" are unavailable.>
        ####<15/Mar/2010 11H37m GMT> <Info> <Cluster> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1268653040226> <BEA-000182> <Job Scheduler created a job with ID Noddy_1268653040156 for TimerListener with description timedexecution.TimerServlet$ExampleTimerListener@2ce79a>
        ####<15/Mar/2010 11H39m GMT> <Info> <JDBC> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268653166307> <BEA-001128> <Connection for pool "TxDataSourceOracle" closed.>

    Can anyone help me discover what's wrong with my configuration? Thanks in advance for your help!
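    In case it helps anyone hitting the same TimerMaster warning: the cluster-wide job scheduler depends on the cluster having both a job scheduler data source and a migration (leasing) basis configured, not just the two tables. A WLST sketch of what that configuration might look like; the credentials and URL are hypothetical, and the MBean attribute names are assumptions to verify against the ClusterMBean documentation for your WebLogic version, not tested code:

        # Hedged WLST sketch; names (myCluster, TxDataSourceOracle) come from the
        # post above, the attribute names are assumptions -- verify before use.
        connect('weblogic', 'welcome1', 't3://adminhost:7001')
        edit()
        startEdit()
        cd('Clusters/myCluster')
        set('MigrationBasis', 'database')   # database leasing for singleton election
        cmo.setDataSourceForJobScheduler(getMBean('/JDBCSystemResources/TxDataSourceOracle'))
        save()
        activate()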

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP setups handily. However, all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain instead of the HTML, but I only included the HTML as it's closer to the use case I was investigating.

    My testing box:
    - Core i7-2600 Intel CPU (8 threads, 4 cores)
    - 8GB DDR3 RAM
    - Fedora 16 64-bit
    - Node.js v0.6.13
    - Nginx v1.0.13
    - PHP v5.3.10 (with PHP-FPM)

    My test scripts. Node.js script (adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html):

        var cluster = require('cluster');
        var http = require('http');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
          // Fork workers.
          for (var i = 0; i < numCPUs; i++) {
            cluster.fork();
          }
          cluster.on('death', function (worker) {
            console.log('worker ' + worker.pid + ' died');
          });
        } else {
          // Worker processes have an HTTP server.
          http.Server(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n');
            for (var i = 0; i < 10000; i++) {
              res.write('<p>Hello world</p>\n');
            }
            res.end('</body>\n</html>');
          }).listen(80);
        }

    PHP script:

        <?php
        echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n";
        for ($i = 0; $i < 10000; $i++) {
            echo "<p>Hello world</p>\n";
        }
        echo "</body>\n</html>";

    My results. Node.js:

        $ ab -n 500 -c 20 http://speedtest.dev/
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   14.603 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95066500 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    34.24 [#/sec] (mean)
        Time per request:       584.123 [ms] (mean)
        Time per request:       29.206 [ms] (mean, across all concurrent requests)
        Transfer rate:          6357.45 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       2
        Processing:    94  547 405.4    424    2516
        Waiting:        0  331 399.3    216    2284
        Total:         95  547 405.4    424    2516

        Percentage of the requests served within a certain time (ms)
          50%    424
          66%    607
          75%    733
          80%    813
          90%   1084
          95%   1325
          98%   1843
          99%   2062
         100%   2516 (longest request)

    PHP/Nginx:

        $ ab -n 500 -c 20 http://speedtest.dev/test.php
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:        nginx/1.0.13
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /test.php
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   0.130 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95109000 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    3849.11 [#/sec] (mean)
        Time per request:       5.196 [ms] (mean)
        Time per request:       0.260 [ms] (mean, across all concurrent requests)
        Transfer rate:          715010.65 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       1
        Processing:     3    5   0.7      5       7
        Waiting:        1    4   0.7      4       7
        Total:          3    5   0.7      5       7

        Percentage of the requests served within a certain time (ms)
          50%      5
          66%      5
          75%      5
          80%      6
          90%      6
          95%      6
          98%      6
          99%      6
         100%      7 (longest request)

    Additional details: again, what I'm looking for is to find out whether I'm doing something wrong with Node.js or whether it is really just this slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche it could fit well; however, these test results (which I really hope I made a mistake with, as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone the JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with "$ ab -n 20 -c 20 http://speedtest.dev/" and consistently received a total test time of greater than 0.900 seconds.

    Read the article

  • INS-40719 error when installing Oracle RAC?

    - by Data-Base
    I'm trying to (learn how to) install Oracle RAC 11g on CentOS 6. All went OK so far, but I get an INS-40719 error message regarding the SCAN name. I do not have a DNS server and I'm not going to try to use one in this setup. I added this line to /etc/hosts:

        192.168.244.100 rac-cluster

    then used "rac-cluster" as the SCAN name, and it's still not working, with the same error message! Can anyone guide me on how to make it work?

    1. Do I have to add "192.168.244.100 rac-cluster" to /etc/hosts on both nodes?
    2. Do I need to edit/add anything else on the nodes?

    cheers
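    For what it's worth, a hedged sketch of a DNS-less lab layout: the same /etc/hosts file replicated on every node, with public, VIP, and SCAN entries, and the SCAN given as a fully qualified name. All host names and addresses except the SCAN line are illustrative; note that the installer may still warn that a hosts-file SCAN resolves to only a single address, which is generally acceptable for a lab:

        # /etc/hosts -- replicated on every node (illustrative addresses/names)
        192.168.244.11   rac1.localdomain         rac1         # node 1 public
        192.168.244.12   rac2.localdomain         rac2         # node 2 public
        192.168.244.21   rac1-vip.localdomain     rac1-vip     # node 1 virtual IP
        192.168.244.22   rac2-vip.localdomain     rac2-vip     # node 2 virtual IP
        192.168.244.100  rac-cluster.localdomain  rac-cluster  # SCAN (single address)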

    Read the article

  • Can I have different ESX hosts accessing the same LUN over different protocols?

    - by Kevin Kuphal
    I currently have a cluster of two ESX 3.5U2 servers connected directly via Fibre Channel to a NetApp 3020 cluster. These hosts mount four VMFS LUNs for virtual machine storage. Currently these LUNs are only made available via our Fibre Channel initiator in the NetApp configuration. If I were to add an ESXi host to the cluster for internal IT use, can I:

    1. Make the same VMFS LUNs available via the iSCSI initiator on the NetApp
    2. Connect this ESXi host to those LUNs via iSCSI
    3. Do all of this while the existing two ESX hosts are connected to those LUNs via Fibre Channel

    Does anyone have experience with this type of mixed-protocol environment, specifically with NetApp?

    Read the article

  • Doubts about Cloud Infrastructure

    - by Pravin
    Maybe a little more of the same questions that others have asked, but I wanted to clarify my doubts. For some years I have run my hosting company (a reseller of ESDS) and I've done well so far, but I am determined to bring quality and server technology to another level. So far I have understood that there is a difference between cluster and cloud servers: clusters function as load balancers that distribute roles across different servers and use the least overloaded ones, while a cloud is the union of multiple servers which is then virtualized; unlike a cluster, it allows the virtualized environment to use the CPU and RAM resources of all the servers. My approach is to use 3 dedicated servers to create a cloud server. My doubts:

    1. Are these cloud servers reserved only for big companies (either because the union of the servers is done in hardware, or in software at a high price)?
    2. What characteristics should these servers meet? Which software should be used, and is it readily available?

    Thanks for your time, Cheers!

    Read the article

  • Unable to access virtual IP after configuring NLB in Windows 2008

    - by krish
    I have two web servers (WS1, WS2). I have added the NLB component on both machines. On WS1, I added a cluster with an IP (e.g. xx.yy.zz.100) and added WS1 and WS2 to the same cluster. I then deployed the application on WS1 and WS2 and tried accessing the cluster IP address from WS1 and WS2; the app opened. Now I have a test machine which is in the same domain as WS1 and WS2. I tried accessing the application with the clustered IP from the test machine and it did not work. However, it works when accessed with the dedicated IPs of WS1 and WS2. But to make use of NLB I have to access the app with the clustered IP. Help ASAP would be appreciated.

    Read the article

  • Hyper-V 2012 and P2000 SAS SAN

    - by user155950
    Hi, I am having major problems setting up a Hyper-V 2012 cluster on a P2000 SAS SAN, running System Center VMM 2012 SP1. I am unable to see any storage with which to create my cluster. Has anyone experienced anything similar? Under Fabric and Storage I can't add the P2000; all I can do is use Storage Spaces in Server Manager to create a storage pool and virtual disk. This lets me create a file share which I can add to VMM, but I still can't see any disk on which to create a cluster. I am just about at the point where I want to tear my hair out, wipe the servers and stick VMware on them, because I know that works; I have set up several systems like this in the past. The Hyper-V servers can see the storage, and in Server Manager on my management machine it appears that both servers can see the same disk. VMM is running on the same machine and it can't see any disk. Help..... Thanks Mike

    Read the article

  • re-point LM to a new vCenter (share same database)

    - by CapiZikus
    1) I'm planning to create a new vCenter server whose database points to the same DB as the current vCenter (the one LM is pointing to at the moment). Then I'm planning to repoint LM to the new vCenter (the new one will see the same ESX hosts, datastores, etc.). Will LM be okay if I do this?

    2) The current VC is a dedicated server, and the new vCenter will be a VM. The current vCenter has its database installed on the local machine (including Update Manager). I'm planning to move the local DB to a clustered DB, then point the current vCenter to this new cluster and make sure everything is working before promoting the new one. Update Manager will also have its own VM and point to a new DB cluster. Is there anything else I've missed or need to pay more attention to? thanks

    Read the article

  • What is the best OS for a server hosting a simple Ruby on Rails based pastebin

    - by Koning Baard XIV
    I have created a simple pastebin in Ruby on Rails and Python. I want to host it on an intranet, and it will have about 1000 users. I want to use one Apache server with a cluster of Mongrel servers. The server itself is a 2 GHz Intel Centrino with 2 GB RAM. What do you think is the best OS to host this? I thought about Damn Small Linux or a custom LFS system. Ubuntu servers come with loads of stuff I don't need. Maybe there are some better OSes? It must be capable of:

    - Running Apache
    - Running Ruby
    - Running Python
    - Running Mongrel with Ruby on Rails
    - SSH

    Can anyone recommend me one? Thanks. PS: I am not going to run Windows Server or Mac OS X Server (Macs are expensive).

    Read the article

  • HAProxy NGInx SSL setup

    - by Niclas
    I've been looking around at different setups for a server cluster supporting SSL, and I would like to benchmark my idea with you.

    Requirements:
    - All servers in the cluster should be reachable under the same full domain name (http and https).
    - Routing to subsystems is done by URI matching in HAProxy.
    - All URIs must support SSL.

    Wish: centralized routing rules.

              +---<-----http-----<---+
              |                      |
        Inet -->HA--+---https---> NGInx_SSL_1..N
                    |
                    +---http----> Apache_1..M
                    |
                    +---http----> NodeJS

    Idea: configure HA to route all SSL traffic (mode=tcp, algorithm=source) to an NGInx cluster that terminates SSL and turns the https traffic into http, then re-pass the http traffic from NGInx to HA, which performs normal load balancing based on the HA config. My question is simply: is this the best way to configure this, based on the requirements above?
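    As a reference point for the discussion, a hedged haproxy.cfg sketch of that split - TCP passthrough for 443, HTTP routing for 80. The backend addresses, ACL, and names are illustrative assumptions, not a tested configuration:

        frontend https_in
            bind *:443
            mode tcp                       # pass SSL through untouched
            default_backend nginx_ssl

        backend nginx_ssl
            mode tcp
            balance source                 # the "algorithm=source" requirement
            server ssl1 10.0.0.11:443 check
            server ssl2 10.0.0.12:443 check

        frontend http_in
            bind *:80
            mode http                      # URI-based routing happens here
            acl to_node path_beg /node
            use_backend nodejs if to_node
            default_backend apache

        backend apache
            mode http
            balance roundrobin
            server web1 10.0.0.21:80 check
            server web2 10.0.0.22:80 check

        backend nodejs
            mode http
            server node1 10.0.0.31:8000 check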

    Read the article

  • VMWARE V2V migration without shared storage

    - by TheCleaner
    I would like to do the following:

    1. Take a physical box running W2K8R2 and SQL 2008 R2 and do a P2V on it to a 4.1 Enterprise-licensed cluster. (No worries here, I can do that part.)
    2. Take the existing physical box that is freed up and install VMware Hypervisor 5.0 on it. (Again, I can do this part.)
    3. Do a V2V migration of the VM created in step 1 from the Enterprise 4.1 cluster to the standalone host. They are NOT using shared storage.

    Step 3 is where I'm confused as to what my best option is. I found an article online talking about using Veeam FastSCP: just shutting down the VM on the cluster, removing it from inventory, copying the files over to the new host, and then adding it to inventory. Is that the best way to accomplish this?

    Read the article

  • Amazon EC2 hostnames

    - by Firefly
    I'm currently trying to set up a Tigase cluster on Amazon EC2 instances in a VPC, and I'm having trouble getting it to work because the hostnames of the instances are not "full DNS names". According to the Tigase documentation:

        Please note the proper DNS configuration is critical for the cluster to work correctly. Make sure the 'hostname' command returns a full DNS name on each cluster node.

    Can anyone explain what a full DNS name is and how I can set my instances to use one? Currently my instances get a default hostname of the form "ip-10-0-0-20".
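    A "full DNS name" here means a fully qualified domain name (FQDN) such as node1.example.internal, rather than the bare "ip-10-0-0-20" label EC2 assigns. A hedged sketch of one way to pin FQDNs on each instance; the names and addresses are illustrative, and you would still need to make the hostname change persistent via your distribution's mechanism (e.g. /etc/sysconfig/network on RH-style systems):

        # /etc/hosts on every cluster node (illustrative)
        10.0.0.20  node1.tigase.internal  node1
        10.0.0.21  node2.tigase.internal  node2

        # then, on node1:
        hostname node1.tigase.internal   # `hostname` now returns the FQDN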

    Read the article

  • WebSphere Application Server EJB Optimization

    - by Chris Aldrich
    We are working on developing a Java EE based application. Our application is Java 1.5 compatible and will be deployed to WAS ND 6.1.0.21 with the EJB 3.0 and Web Services feature packs. The configuration is currently one cell with two clusters, and each cluster will have two nodes. Our application, or our system as I should rather say, comes in two or three parts.

    Part 1: an EAR deployed to one cluster that contains third-party vendor code combined with customization code. Their code is EJB 2.0 compliant and has a lot of remote home interfaces.

    Part 2: an EAR deployed to the same cluster as the first EAR. This EAR contains EJB 3s that make calls into the EJB 2s supplied by the vendor and the custom code. These EJB 3s are used by the JSF UI also packaged with the EAR, and some of them are also exposed as web services (JAX-WS 2.0 with SOAP 1.2 compliance) for other clients.

    Part 3: there may be other services that do not depend on our vendor/custom code app. These services will be EJB 3.0s and web services that are deployed to the other cluster.

    Per a recommendation from some IBM staff on site here, communication between nodes in a cluster can be EJB RMI, but if we are going across clusters and/or other cells, then the communication should be web services. That said, some of us are wondering about performance and about optimizing communication speed for the applications that will use our web services and EJBs. Right now most EJBs are exposed as remote (and our vendor set theirs up that way, rather than also exposing local home interfaces). We are wondering if WAS does any optimizations between apps in the same node/cluster space. If two apps are installed in the same area and they call each other via a remote home interface, is WAS smart enough to make it a local home interface call? Are there other optimization techniques? Should we consider them, or not? What are the costs and benefits? Here is the question from one of our team members, as sent in their email:

        The question is: supposing we develop our EJBs as remote EJBs, where our UI controller code is talking to our EXT Java services via EJB 3, what are our options for performance optimization when both the EJB server and client are running in the same container?

    As one point of reference, Google has given me some very old WebSphere performance tuning documentation from 2000 that explains a tuning configuration you can set to enable call-by-reference for EJB communication when both sides are in the same application server JVM. It states the following:

        Because EJBs are inherently location independent, they use a remote programming model. Method parameters and return values are serialized over RMI-IIOP and returned by value. This is the intrinsic RMI "Call By Value" model. WebSphere provides the "No Local Copies" performance optimization for running EJBs and clients (typically servlets) in the same application server JVM. The "No Local Copies" option uses "Call By Reference" and does not create local proxies for called objects when both the client and the remote object are in the same process. Depending on your workload, this can result in a significant overhead savings. Configure "No Local Copies" by adding the following two command line parameters to the application server JVM:

            -Djavax.rmi.CORBA.UtilClass=com.ibm.CORBA.iiop.Util
            -Dcom.ibm.CORBA.iiop.noLocalCopies=true

        CAUTION: The "No Local Copies" configuration option improves performance by changing "Call By Value" to "Call By Reference" for clients and EJBs in the same JVM. One side effect of this is that Java object derived (non-primitive) method parameters can actually be changed by the called enterprise bean.

    We will also be using Process Server 6.2 and WESB 6.2 in the future. Any ideas? Recommendations? Thanks

    Read the article
