Search Results

Search found 1781 results on 72 pages for 'cluster'.

  • Mixing both local and nonlocal addresses on three switches

    - by klew
    I have four computers with nonlocal addresses like 150.X.X.X. Now I'm also getting a few more computers that should only be accessible through a gateway (they will form a computing cluster), and their addresses are 10.0.0.X. I also wanted to include those four older computers in this new cluster, but I want them to remain accessible from the internet on their nonlocal addresses, so I would like to set them up with both 150.X.X.X and 10.0.0.X addresses. I've set this up as interface eth0:0, since each machine has only one NIC. The new computers have their own switch, and the old computers have their own switch as well; both are connected to a third switch. The problem is that the old computers can see each other (I can ping them), and the new computers can see each other, but I can't ping an old computer from a new computer or vice versa. Pinging on the nonlocal addresses, however, works as expected. I looked into the switch configuration and didn't find anything useful. I have no idea what I missed here. Can somebody help? All computers run Ubuntu Server 10.04.
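
    For reference, here is a minimal sketch of how the dual addressing described above might look in /etc/network/interfaces on Ubuntu 10.04 (addresses and interface names are illustrative, not from the question):

      auto eth0
      iface eth0 inet static
          address 150.254.10.21    # placeholder public address
          netmask 255.255.255.0
          gateway 150.254.10.1

      # alias carrying the private cluster address on the same NIC
      auto eth0:0
      iface eth0:0 inet static
          address 10.0.0.21
          netmask 255.255.255.0

    If each host looks like this and pings still fail only on 10.0.0.X across the third switch, a common culprit is VLAN membership: if the two edge switches' uplink ports sit in different VLANs on the third switch, the private traffic never crosses between them, while the 150.X.X.X traffic still works because it is routed through the gateway.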

    Read the article

  • Relevance and Necessity of SNMP

    - by Adam Tannon
    Edit: I am in the process of designing a Java-based monitoring tool that will send back periodic "health checks" from a Java app deployed to a cluster of GlassFish servers. I am trying to figure out the best protocol for this monitoring tool to use when sending information back to the monitoring server. After some initial research, it seems like SNMP is just a protocol for monitoring-type applications to communicate the "health status" of something (a part of a network, a server, a cluster, an application, etc.) to the rest of the network. If the above is incorrect, please correct me! Assuming the generalization is more or less accurate, my next question is: why is this a protocol?! In the age of REST/SOAP/TCP protocols, why is there a need for a standardized protocol that only fits one type of application (monitoring)? In other words, if I'm a developer assigned to build a new monitoring tool that periodically polls a server and reports on its CPU and available memory, what advantages does SNMP give me over just POSTing to a RESTful API via plain ol' HTTP? I'm sure I'm missing something here - I just need someone to help connect the dots! Thanks in advance!
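
    For comparison, here is roughly what the two approaches look like from the monitoring server's side (the hostnames, community string, and REST endpoint are made up for illustration; the OID is the standard HOST-RESOURCES-MIB processor load):

      # SNMP: any off-the-shelf NMS can poll a well-known OID,
      # with no custom integration code on either end
      snmpwalk -v2c -c public appserver01 1.3.6.1.2.1.25.3.3.1.2

      # Ad-hoc REST: every consumer has to know this one custom contract
      curl -X POST http://monitor.example.com/api/health \
           -H "Content-Type: application/json" \
           -d '{"host":"appserver01","cpu":0.42,"freeMemMb":512}'

    The usual answer to the question is standardization: SNMP predates REST by decades, and its MIB/OID scheme lets generic tools (Nagios, OpenNMS, commercial NMS products) monitor your app without either side writing integration code. For a bespoke tool talking only to your own server, plain HTTP is a perfectly reasonable choice.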

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before:

      <!-- APR library loader. Documentation at /docs/apr.html -->
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <!-- Initialize Jasper prior to webapps being loaded. Documentation at /docs/jasper-howto.html -->
      <Listener className="org.apache.catalina.core.JasperListener" />
      <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
      <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

      <!-- Global JNDI resources. Documentation at /docs/jndi-resources-howto.html -->
      <GlobalNamingResources>
        <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users -->
        <Resource name="UserDatabase" auth="Container"
                  type="org.apache.catalina.UserDatabase"
                  description="User database that can be updated and saved"
                  factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                  pathname="conf/tomcat-users.xml" />
      </GlobalNamingResources>

      <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container".
           Note: A "Service" is not itself a "Container", so you may not define subcomponents
           such as "Valves" at this level. Documentation at /docs/config/service.html -->
      <Service name="Catalina">

        <!-- The connectors can use a shared executor; you can define one or more named thread pools -->
        <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> -->

        <!-- A "Connector" represents an endpoint by which requests are received and responses
             are returned. Documentation at:
             Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
             Java AJP Connector: /docs/config/ajp.html
             APR (HTTP/AJP) Connector: /docs/apr.html
             Define a non-SSL HTTP/1.1 Connector on port 8080 -->
        <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />

        <!-- A "Connector" using the shared thread pool -->
        <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> -->

        <!-- Define a SSL HTTP/1.1 Connector on port 8443. This connector uses the JSSE
             configuration; when using APR, the connector should use the OpenSSL style
             configuration described in the APR documentation -->
        <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> -->

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

        <!-- An Engine represents the entry point (within Catalina) that processes every request.
             The Engine implementation for Tomcat standalone analyzes the HTTP headers included
             with the request and passes them on to the appropriate Host (virtual host).
             Documentation at /docs/config/engine.html -->
        <!-- You should set jvmRoute to support load-balancing via AJP, i.e.:
             <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> -->
        <Engine name="Catalina" defaultHost="localhost">

          <!-- For clustering, please take a look at the documentation at:
               /docs/cluster-howto.html (simple how-to)
               /docs/config/cluster.html (reference documentation) -->
          <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> -->

          <!-- The request dumper valve dumps useful debugging information about the request
               and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html -->
          <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> -->

          <!-- This Realm uses the UserDatabase configured in the global JNDI resources under
               the key "UserDatabase". Any edits that are performed against this UserDatabase
               are immediately available for use by the Realm. -->
          <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>

          <!-- Define the default virtual host.
               Note: XML Schema validation will not work with Xerces 2.2. -->
          <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                     xmlValidation="false" xmlNamespaceAware="false"> -->

            <!-- SingleSignOn valve, share authentication between web applications.
                 Documentation at: /docs/config/valve.html -->
            <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->

            <!-- Access log processes all example. Documentation at: /docs/config/valve.html -->
            <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                       prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> -->

          <!-- </Host> -->

          <Host name="www.rebootradio.nu">
            <Alias>rebootradio.nu</Alias>
            <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/>
          </Host>
        </Engine>
      </Service>
      </Server>

    The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now, with the latest version of XAMPP and Tomcat, it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.
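
    Two things worth checking for that blanket 404, offered as assumptions rather than a confirmed fix: Tomcat's default welcome files are index.html, index.htm, and index.jsp, so a folder containing only default.jsp can return 404 for "/" unless a welcome-file is declared; and the Host above has no appBase attribute. A hedged variant to try:

      <Host name="www.rebootradio.nu" appBase="webapps">
        <Alias>rebootradio.nu</Alias>
        <Context path="" docBase="D:/services/http/rebootradio.nu" reloadable="true"/>
      </Host>

    plus, in the site's WEB-INF/web.xml:

      <welcome-file-list>
        <welcome-file>default.jsp</welcome-file>
      </welcome-file-list>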

    Read the article

  • Cloning a WebCenter Portal Managed Server

    - by Maiko Rocha
    I had to run some tests on a WebCenter Portal application deployed in a cluster. I've got a development VM with WebCenter PS4 (this also works on PS5), and I was trying to figure out how I could easily add a new managed server to my single-node domain and make it a cluster. Creating the machine and cluster is a piece of cake; you can do it pretty quickly through the WLS Console. Now, you'd guess that using the clone option in the WLS Console would do the magic of cloning an existing instance, right? Well, it does, but all you get is an "empty" managed server, with no target libraries. It was a good surprise to find that WebCenter provides a way of cloning an existing WebCenter Portal managed server through a simple WLST command: cloneWebCenterManagedServer.

    This is a screenshot of my starting point. I want to clone the WC_CustomPortal managed server. These are the steps to clone it:

    1. On the command line, invoke WLST. It should be at <ORACLE_HOME_for_component>/common/bin/wlst.sh. In my case, it is ./product/Middleware/WebCenterPortal/common/bin/wlst.sh

    2. Connect to the Admin Server:

      connect('<wls_admin_username>','<password>','t3://<server>:<port>')

    3. Execute the following command:

      wls:/webcenter/serverConfig> cloneWebCenterManagedServer(baseManagedServer='WC_CustomPortal', newManagedServer='WC_CustomPortal2', newManagedServerPort=8893, verbose=1)

    I've turned on verbose output on purpose so I could see what the script was doing while executing. This is the output:

      [...]
      Creating the Managed Server "WC_CustomPortal2"
      MBean type Server with name WC_CustomPortal2 has been created successfully.
      Targeting the library "oracle.bi.adf.model.slib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.bi.adf.view.slib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.bi.adf.webcenter.slib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.wsm.seedpolicies#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.jsp.next#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.dconfig-infra#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "orai18n-adf#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.adf.dconfigbeans#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.pwdgen#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.jrf.system.filter" to the Managed Server "WC_CustomPortal2"
      Targeting the library "adf.oracle.domain#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "adf.oracle.businesseditor#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.adf.management#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "adf.oracle.domain.webapp#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "jsf#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "jstl#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "UIX#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "ohw-rcf#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "ohw-uix#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.adf.desktopintegration.model#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.adf.desktopintegration#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.bi.jbips#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.bi.composer#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.skin#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.composer#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.framework.core#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.sdp.client#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.soa.workflow.wc#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.soa.worklist.webapp#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.ucm.ridc.app-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "p13n-app-lib-base#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "p13n-core-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "jaxrs-framework-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "jersey-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "wcps-util-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "wcps-services-client-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "content-app-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "content-web-lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.framework#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.framework.view#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.forum.dependency#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.jive.dependency#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.spaces.fwk#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the library "oracle.webcenter.activitygraph.lib#[email protected]" to the Managed Server "WC_CustomPortal2"
      Targeting the datasource "mds-CustomPortalDS" to the Managed Server "WC_CustomPortal2"
      Targeting the datasource "WebCenter-CustomPortalDS" to the Managed Server "WC_CustomPortal2"
      Targeting the datasource "Activities-CustomPortalDS" to the Managed Server "WC_CustomPortal2"
      Targeting the application "wsil-wls" to the Managed Server "WC_CustomPortal2"
      Targeting the application "DMS Application#11.1.1.1.0" to the Managed Server "WC_CustomPortal2"
      Targeting the application "ViewHandlerOverride_webapp1#V2.0" to the Managed Server "WC_CustomPortal2"
      Targeting the application "ViewHandlerOverride_application1#V2.0" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "JRF Startup Class" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "JPS Startup Class" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "ODL-Startup" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "Audit Loader Startup Class" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "AWT Application Context Startup Class" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "JMX Framework Startup Class" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "Web Services Startup Class" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "JOC-Startup" to the Managed Server "WC_CustomPortal2"
      Targeting the startup class "DMS-Startup" to the Managed Server "WC_CustomPortal2"
      Targeting the shutdown class "JOC-Shutdown" to the Managed Server "WC_CustomPortal2"
      Targeting the shutdown class "DMSShutdown" to the Managed Server "WC_CustomPortal2"
      Validating changes ...
      Validated the changes successfully
      [...]

    And this is the newly created WC_CustomPortal2 managed server showing up in the WebLogic Console. Here is the full reference to WebCenter Portal Custom WLST Commands. Special thanks to Todd Vender for pointing this one out! :-)
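
    If you need to repeat this across environments, the interactive steps above can also be wrapped in a small script and passed to wlst.sh non-interactively. A minimal sketch; the file name, credentials, and admin URL are placeholders:

      # clone_ms.py - wraps the cloneWebCenterManagedServer call shown above
      connect('weblogic', '<password>', 't3://adminhost:7001')
      cloneWebCenterManagedServer(baseManagedServer='WC_CustomPortal',
                                  newManagedServer='WC_CustomPortal2',
                                  newManagedServerPort=8893,
                                  verbose=1)
      disconnect()

      # run it with:
      # ./product/Middleware/WebCenterPortal/common/bin/wlst.sh clone_ms.py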

    Read the article

  • What is bondib1 used for on SPARC SuperCluster with InfiniBand, Solaris 11 networking & Oracle RAC?

    - by user12620111
    A co-worker asked the following question about a SPARC SuperCluster InfiniBand network:

    > On the database nodes, the RAC nodes communicate over the cluster_interconnect. This is the
    > 192.168.10.0 network on bondib0 (according to the ./crs/install/crsconfig_params NETWORKS setting).
    > What is bondib1 used for? Is it an HA counterpart in case bondib0 dies?

    This is my response:

    Summary: bondib1 is currently only being used for outbound cluster interconnect traffic.

    Details: bondib0 is the cluster_interconnect:

      $ oifcfg getif
      bondeth0  10.129.184.0  global  public
      bondib0  192.168.10.0  global  cluster_interconnect
      ipmpapp0  192.168.30.0  global  public

    bondib0 and bondib1 are on 192.168.10.1 and 192.168.10.2 respectively:

      # ipadm show-addr | grep bondi
      bondib0/v4static  static   ok           192.168.10.1/24
      bondib1/v4static  static   ok           192.168.10.2/24

    Hostnames tied to the IPs are node1-priv1 and node1-priv2:

      # grep 192.168.10 /etc/hosts
      192.168.10.1    node1-priv1.us.oracle.com   node1-priv1
      192.168.10.2    node1-priv2.us.oracle.com   node1-priv2

    For the 4-node RAC interconnect: each node has 2 private IP addresses on the 192.168.10.0 network, and each IP address has an active InfiniBand link and a failover InfiniBand link. Thus, the 4-node RAC interconnect is using a total of 8 IP addresses and 16 InfiniBand links.

    bondib1 isn't being used for the Virtual IP (VIP):

      $ srvctl config vip -n node1
      VIP exists: /node1-ib-vip/192.168.30.25/192.168.30.0/255.255.255.0/ipmpapp0, hosting node node1
      VIP exists: /node1-vip/10.55.184.15/10.55.184.0/255.255.255.0/bondeth0, hosting node node1

    bondib1 is on bondib1_0 and fails over to bondib1_1:

      # ipmpstat -g
      GROUP       GROUPNAME   STATE     FDT       INTERFACES
      ipmpapp0    ipmpapp0    ok        --        ipmpapp_0 (ipmpapp_1)
      bondeth0    bondeth0    degraded  --        net2 [net5]
      bondib1     bondib1     ok        --        bondib1_0 (bondib1_1)
      bondib0     bondib0     ok        --        bondib0_0 (bondib0_1)

    bondib1_0 goes over net24:

      # dladm show-link | grep bond
      LINK                CLASS     MTU    STATE    OVER
      bondib0_0           part      65520  up       net21
      bondib0_1           part      65520  up       net22
      bondib1_0           part      65520  up       net24
      bondib1_1           part      65520  up       net23

    net24 is IB Partition FFFF:

      # dladm show-ib
      LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS
      net24        21280001A1868A  21280001A1868C  2    up     FFFF
      net22        21280001CEBBDE  21280001CEBBE0  2    up     FFFF,8503
      net23        21280001A1868A  21280001A1868B  1    up     FFFF,8503
      net21        21280001CEBBDE  21280001CEBBDF  1    up     FFFF

    On Express Module 9, port 2:

      # dladm show-phys -L
      LINK              DEVICE       LOC
      net21             ibp4         PCI-EM1/PORT1
      net22             ibp5         PCI-EM1/PORT2
      net23             ibp6         PCI-EM9/PORT1
      net24             ibp7         PCI-EM9/PORT2

    Outbound traffic on the 192.168.10.0 network will be multiplexed between bondib0 and bondib1:

      # netstat -rn
      Routing Table: IPv4
        Destination           Gateway           Flags  Ref     Use     Interface
      -------------------- -------------------- ----- ----- ---------- ---------
      192.168.10.0         192.168.10.2         U        16    6551834 bondib1
      192.168.10.0         192.168.10.1         U         9    5708924 bondib0

    There is a lot more traffic on bondib0 than on bondib1:

      # /bin/time snoop -I bondib0 -c 100 > /dev/null
      Using device ipnet/bondib0 (promiscuous mode)
      100 packets captured
      real         4.3
      user         0.0
      sys          0.0
      (100 packets in 4.3 seconds = 23.3 pkts/sec)

      # /bin/time snoop -I bondib1 -c 100 > /dev/null
      Using device ipnet/bondib1 (promiscuous mode)
      100 packets captured
      real        13.3
      user         0.0
      sys          0.0
      (100 packets in 13.3 seconds = 7.5 pkts/sec)

    Half of the packets on bondib0 are outbound (from self). The remaining packets are split evenly among the other nodes in the cluster:

      # snoop -I bondib0 -c 100 | awk '{print $1}' | sort | uniq -c
      Using device ipnet/bondib0 (promiscuous mode)
      100 packets captured
        49 node1-priv1.us.oracle.com
        24 node2-priv1.us.oracle.com
        14 node3-priv1.us.oracle.com
        13 node4-priv1.us.oracle.com

    100% of the packets on bondib1 are outbound (from self), but the headers in the packets indicate that they are from the IP address associated with bondib0:

      # snoop -I bondib1 -c 100 | awk '{print $1}' | sort | uniq -c
      Using device ipnet/bondib1 (promiscuous mode)
      100 packets captured
       100 node1-priv1.us.oracle.com

    The destinations of the bondib1 outbound packets are split evenly between node3 and node4:

      # snoop -I bondib1 -c 100 | awk '{print $3}' | sort | uniq -c
      Using device ipnet/bondib1 (promiscuous mode)
      100 packets captured
        51 node3-priv1.us.oracle.com
        49 node4-priv1.us.oracle.com

    Conclusion: bondib1 is currently only being used for outbound cluster interconnect traffic.
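
    As a quick way to see the multiplexing for yourself, the routing decision for any single peer can be inspected directly (a sketch, assuming Solaris 11 route(1M) behavior; 192.168.10.5 stands in for another node's private address):

      # route get 192.168.10.5          (shows the interface chosen for that peer)
      # netstat -rn | grep 192.168.10   (the cumulative per-route Use counters shown earlier)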

    Read the article

  • What's the best way to refactor this Rails controller?

    - by Robert DiNicolas
    I'd like some advice on how to best refactor this controller. The controller builds a page of zones and modules. Page has_many zones, zone has_many modules. So zones are just clusters of modules wrapped in a container. The problem I'm having is that some modules may have specific queries that I don't want executed on every page, so I've had to add conditions. The conditions just test whether the module is on the page; if it is, the query is executed. One of the problems with this is that if I add a hundred special module queries, the controller has to iterate through each one. I would like to see these module conditions moved out of the controller, as well as all the additional custom actions. I can keep everything in this one controller, but I plan to have many apps using this controller, so it could get messy.

      class PagesController < ApplicationController
        # GET /pages/1
        # GET /pages/1.xml
        # Show is the main page rendering action; page routes are aliased in routes.rb
        def show
          #-+-+-+-+-Core Page Queries-+-+-+-+-
          @page = Page.find(params[:id])
          @zones = @page.zones.find(:all, :order => 'zones.list_order ASC')
          @mods = @page.mods.find(:all)
          @columns = Page.columns

          # restful params to influence page rendering, see routes.rb
          @fragment = params[:fragment]  # render single module
          @cluster = params[:cluster]    # render single zone
          @head = params[:head]          # render html, body and head

          #-+-+-+-+-Page Level Json Conversions-+-+-+-+-
          @metas = @page.metas ? ActiveSupport::JSON.decode(@page.metas) : nil
          @javascripts = @page.javascripts ? ActiveSupport::JSON.decode(@page.javascripts) : nil

          #-+-+-+-+-Module Specific Queries-+-+-+-+-
          # would like to refactor this process
          @mods.each do |mod|
            # Reps Module Custom Queries
            if mod.name == "reps"
              @reps = User.find(:all, :joins => :roles, :conditions => { :roles => { :name => 'rep' } })
            end
            # Listing-poc Module Custom Queries
            if mod.name == "listing-poc"
              limit = params[:limit].to_i < 1 ? 10 : params[:limit]
              PropertyEntry.update_from_listing(mod.service_url)
              @properties = PropertyEntry.all(:limit => limit, :order => "city desc")
            end
            # Talents-index Module Custom Queries
            if mod.name == "talents-index"
              @talent = params[:type]
              @reps = User.find(:all, :joins => :talents, :conditions => { :talents => { :name => @talent } })
            end
          end

          respond_to do |format|
            format.html  # show.html.erb
            format.xml   { render :xml => @page.to_xml(:include => { :zones => { :include => :mods } }) }
            format.json  { render :json => @page.to_json }
            format.css   # show.css.erb, CSS dependency manager template
          end
        end

        # for property listing ajax request
        def update_properties
          limit = params[:limit].to_i < 1 ? 10 : params[:limit]
          offset = params[:offset]
          @properties = PropertyEntry.all(:limit => limit, :offset => offset, :order => "city desc")
          # render :nothing => true
        end
      end

    So imagine a site with a hundred modules and scores of additional controller actions. I think most would agree that it would be much cleaner if I could move that code out and refactor it to behave more like a configuration.
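
    One way to get that configuration-like behavior, sketched under the assumption that module names stay unique: register each module's data requirements as a lambda keyed by name, and have show run only the entries for modules actually on the page. MODULE_QUERIES and its file location are illustrative, not existing app code:

      # config/initializers/module_queries.rb (hypothetical location)
      MODULE_QUERIES = {
        "reps" => lambda { |page, mod, params|
          { :reps => User.find(:all, :joins => :roles,
                               :conditions => { :roles => { :name => 'rep' } }) }
        },
        "listing-poc" => lambda { |page, mod, params|
          limit = params[:limit].to_i < 1 ? 10 : params[:limit]
          PropertyEntry.update_from_listing(mod.service_url)
          { :properties => PropertyEntry.all(:limit => limit, :order => "city desc") }
        },
        "talents-index" => lambda { |page, mod, params|
          { :talent => params[:type],
            :reps => User.find(:all, :joins => :talents,
                               :conditions => { :talents => { :name => params[:type] } }) }
        }
      }

      # in PagesController#show, replacing the if-cascade:
      @mods.each do |mod|
        query = MODULE_QUERIES[mod.name]
        next unless query
        query.call(@page, mod, params).each do |key, value|
          instance_variable_set("@#{key}", value)
        end
      end

    New modules then only touch the hash, never the controller, and each module costs one hash lookup instead of a linear walk through a hundred conditions.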

    Read the article

  • MySQL Connect 8 Days Away - Replication Sessions

    - by Mat Keep
    Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL. Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:

    - Global Transaction IDs + HA utilities for a self-healing cluster (yes, both automatic failover and manual switchover available!)
    - Crash-safe slaves and binlog
    - Binlog Group Commit and Multi-Threaded Slaves for high performance
    - Replication Event Checksums and Time-Delayed replication
    - and many more

    There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary.

    Saturday 29th, 13.00 - Replication Tips and Tricks, Mats Kindahl
    In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.

    Saturday 29th, 17.30 - Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann
    This session showcases the new replication features, including:
    • High performance (group commit, multithreaded slave)
    • High availability (crash-safe slaves, failover utilities)
    • Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR])
    • Data integrity (event checksums)

    Saturday 29th, 19.00 - MySQL Replication Birds of a Feather
    In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced and can share any additional replication requirements you have to further scale your critical MySQL-based workloads.

    Sunday 30th, 10.15 - Hands-On Lab: MySQL Replication, Luis Soares and Sven Sandberg
    But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, its architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.

    Sunday 30th, 13.15 - Hands-On Lab: MySQL Utilities, Chuck Bell
    Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.

    Sunday 30th, 14.45 - Eliminating Downtime with MySQL Replication, Luis Soares
    The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, result in a self-healing data infrastructure. By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business.

    Sunday 30th, 17.45 - Scaling for the Web and the Cloud with MySQL Replication, Luis Soares
    In a replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance.

    So how can you make sure you don't miss out? The good news is that registration is still open ;-) And just to whet your appetite, listen to the On-Demand webinar that presents an overview of MySQL 5.6 Replication.
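
    If you want to experiment with the GTID-based failover these sessions describe ahead of the conference, a 5.6 development-milestone server can be started with options along these lines (a sketch; option names follow the 5.6 DMRs and may change before GA):

      [mysqld]
      server-id                 = 1
      log-bin                   = mysql-bin
      log-slave-updates         = ON
      gtid-mode                 = ON
      enforce-gtid-consistency  = ON     # required while gtid-mode is ON
      master-info-repository    = TABLE  # crash-safe replication metadata
      relay-log-info-repository = TABLE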

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 23 (sys.dm_db_index_usage_stats)

    - by Tamarick Hill
    The sys.dm_db_index_usage_stats Dynamic Management View is used to return usage information about the various indexes on your SQL Server instance. Let's have a look at this DMV against our AdventureWorks2012 database so we can examine the information returned.

      SELECT * FROM sys.dm_db_index_usage_stats WHERE database_id = db_id('AdventureWorks2012')

    The first three columns in the result set represent the database_id, object_id, and index_id of a given row. You can join these columns back to other system tables to extract the actual database, object, and index names. The next four columns are probably the most beneficial columns within this DMV. First, the user_seeks column represents the number of times that a user query caused a seek operation against a particular index. The user_scans column represents how many times a user query caused a scan operation on a particular index. The user_lookups column represents how many times an index was used to perform a lookup operation. The user_updates column refers to how many times an index had to be updated due to a write operation that affected a particular index. The last_user_seek, last_user_scan, last_user_lookup, and last_user_update columns provide you with DATETIME information about when the last user seek, scan, lookup, or update operation was performed. The remaining columns in the result set are the same as the ones we previously discussed, except that instead of the various operations being generated from user requests, they are generated from system background requests.

    This is an extremely useful DMV and one of my favorites when it comes to index maintenance. As we all know, indexes are extremely beneficial for improving the performance of your read operations. But indexes have a downside as well: they slow down the performance of your write operations, and they also require additional resources for storage. For this reason, in my opinion, it is important to regularly analyze the indexes on your system to make sure the ones you have are being used efficiently. My AdventureWorks2012 database is only used for demonstrating or testing things, so I don't have a lot of meaningful information here, but on a production system, if you see an index that is never getting any seeks, scans, or lookups but is constantly getting a ton of updates, it more than likely would be a good candidate for you to consider removing. You would not be getting much benefit from the index, yet it is incurring a cost on your system due to it constantly having to be updated for your write operations, not to mention the additional storage it is consuming. You should regularly analyze your indexes to ensure you keep your database systems as efficient and lean as possible.

    One thing to note is that these DMV statistics are reset every time SQL Server is restarted. Therefore it would not be a wise idea to make decisions about removing indexes right after a server reboot or a cluster roll. If you restart your SQL Server instances frequently, for example if you schedule weekly/monthly cluster rolls, then you may not capture indexes that are only used by the weekly/monthly reports that run for business users. And if you remove them, you may have some upset people at your desk on Monday morning.

    If you would like to begin analyzing your indexes to possibly remove the ones that your system is not using, I would recommend building a process to load this DMV information into a table on a scheduled basis, depending on how frequently you perform an operation that would reset these statistics; then you can analyze the data over a period of time to get a more accurate view of which indexes are really being used and which ones are not. For more information about this DMV, please see the Books Online link below:

    http://msdn.microsoft.com/en-us/library/ms188755.aspx

    Follow me on Twitter @PrimeTimeDBA
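
    As a starting point for that kind of scheduled analysis, a query along these lines surfaces write-only indexes (the primary-key filter and thresholds are a matter of taste; run it well after the last statistics reset):

      -- candidate unused indexes: written to, but never sought, scanned, or looked up
      SELECT OBJECT_NAME(s.[object_id]) AS table_name,
             i.name                     AS index_name,
             s.user_updates
      FROM sys.dm_db_index_usage_stats AS s
      JOIN sys.indexes AS i
        ON i.[object_id] = s.[object_id]
       AND i.index_id    = s.index_id
      WHERE s.database_id = DB_ID('AdventureWorks2012')
        AND i.is_primary_key = 0
        AND s.user_seeks + s.user_scans + s.user_lookups = 0
      ORDER BY s.user_updates DESC;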

    Read the article

  • Oredev 2011 Trip Report

    - by arungupta
    Oredev held its seventh annual conference in the city of Malmo, Sweden, last week. The name "Oredev" refers to the Oresund bridge that connects Malmo with Copenhagen. There were about 1000 attendees, with several speakers from all over the world. The first two days were hands-on workshops and the next three days were sessions. There were different tracks such as Java, Windows 8, .NET, Smart Phones, Architecture, Collaboration, and Entrepreneurship. And then there was Xtra(ck), which had interesting sessions not directly related to technology.

    I gave two slide-free talks in the Java track. The first one showed how to build an end-to-end Java EE 6 application using NetBeans and GlassFish. The complete instructions to build the application are explained in detail here. This 3-tier application used Java Persistence API, Enterprise JavaBeans, Servlet, Contexts and Dependency Injection, JavaServer Faces, and Java API for RESTful Services. The source code built during the application can be downloaded here (LINK TBD).

    The second session, slide-free again, showed how to take a Java EE 6 application into production using a GlassFish cluster. It explained how to (see the asadmin sketch at the end of this post):
    - Create a 2-instance GlassFish cluster
    - Front-end it with a Web server and a load balancer
    - Demonstrate session replication and failover
    - Monitor the application using JavaScript
    The complete instructions for this session are available here.

    Oredev has an interesting way of collecting attendee feedback: the attendees drop a green, yellow, or red card in a bucket as they walk out of the session. Not everybody votes, but most do. Other than the instantaneous feedback provided on Twitter, this mechanism provides a more coarse-grained feedback loop as well. The first talk had about 67 attendees (with 23 green and 7 yellow) and the second one had 22 (11 green and 11 yellow).

    The speakers' dinner is a good highlight of the conference. It is arranged in the historic city hall, and the mayor welcomed all the speakers. As you can see in the pictures, it is a very royal building with lots of history behind it. Fortunately the dinner was a buffet with a much better variety, unlike last year when only black soup and geese were served, which was quite cultural BTW ;-) The sauna at 85F, skinny dipping in the 35F ocean, and alternating between the two at Kallbadhus is always very Swedish. I also spent a short evening at a friend's house socializing with other speakers/attendees, drinking Glogg, and eating Pepperkakor. The welcome packet at the hotel also included cinnamon rolls, recommended with cold milk, for a little more taste of Swedish culture.

    Something different at this conference was how artists from Image Think were visually capturing all the keynote speakers using images on whiteboards. Here are the images captured for Alexis Ohanian (Reddit co-founder and now running Hipmunk):

    Unfortunately I could not spend much time engaging with other speakers or attendees because I was busy preparing new hands-on lab material. But I was able to spend some time with Matthew Mccullough, Micahel Tiberg, Magnus Martensson, Mattias Karlsson, Corey Haines, Patrick Kua, Charles Nutter, Tushara, Pradeep, Shmuel, and several other folks. Here are a few pictures captured from the event, and the complete album here:

    Thank you Matthias, Emily, and Kathy for putting up a great show and giving me an opportunity to speak at Oredev. I hope to be back next year with a more vibrant representation of Java - the language and the ecosystem!
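
    For reference, the 2-instance cluster from the second talk can be stood up with a handful of asadmin commands (a sketch against GlassFish 3.1 syntax; the cluster and instance names are made up):

      asadmin create-cluster c1
      asadmin create-local-instance --cluster c1 instance1
      asadmin create-local-instance --cluster c1 instance2
      asadmin start-cluster c1
      asadmin deploy --target c1 --availabilityenabled=true myapp.war

    The --availabilityenabled flag is what turns on the session replication demonstrated in the talk.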

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 28 - November 3, 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for the week of Oct 28 - Nov 3, 2012.

    Eventually, 90% of tech budgets will be outside IT departments | ZDNet
    Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.

    ADF Mobile - Login Functionality | Andrejus Baranovskis
    "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.

    Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social
    "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up.

    ADF Essentials - The Bare Necessities | Floyd Teter
    The experiment is over... And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF.

    A review of Oracle SOA Suite 11g Administrator's Handbook | RedStack
    "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator's Handbook, by Ahmed Aboulnaga and Arun Pareek.

    Expanding the Oracle Enterprise Repository with functional documentation
    Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts.

    Podcast: Are You Future Proof? - Part 2
    In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen discuss re-tooling one's skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what's truly valuable.

    Easy way to access JPA with REST (JSON / XML) | Edwin Biemond
    Oracle ACE Edwin Biemond shows you "what is possible with JPA-RS, how easy it is and how to set up your own EclipseLink REST service."

    Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
    "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic: when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    2012 IOUG Virtualization SIG - Online Symposium on Nov 7 and Nov 8 | Kai Yu
    Oracle ACE Director Kai Yu shares information on this week's IOUG Virtualization SIG online event. Does that make it a virtual virtualization event?

    Thought for the Day
    "If McDonalds were run like a software company, one out of every hundred Big Macs would give you food poisoning - and the response would be, 'We're sorry, here's a coupon for two more.'" - Mark Minasi
    Source: SoftwareQuotes.com

    Read the article

  • Expanding the Partner Ecosystem with Third-Party Plug-ins

    - by Joe Diemer
    Oracle Enterprise Manager's extensibility capabilities are designed to allow customers and partners to adapt Enterprise Manager for management of heterogeneous environments with plug-ins and connectors. Third-party developers continue to take advantage of Oracle Enterprise Manager's Extensibility Development Kit (EDK) to build plug-ins for Enterprise Manager 12c, such as F5's BIG IP Plug-in and Entuity's Eye of the Storm Network Management Plug-In. Partners can also validate their plug-ins through the Oracle Validated Integration (OVI) program, which assures customers that the plug-in has been tested and is functionally and technically sound, is designed in a reliable and standardized manner, and operates and performs as documented. Two very recent examples of partners with beta versions of their plug-ins are Blue Medora's VMware vSphere plug-in and the NetApp Storage plug-in.

    VMware vSphere Plug-in by Blue Medora

    Blue Medora, an Oracle Partner Network (OPN) "Gold" member, just announced that it is now signing up customers to try a beta version of its new VMware vSphere plug-in for Enterprise Manager 12c. According to Blue Medora, the vSphere plug-in monitors critical VMware metrics (CPU, memory, disk, network, etc.) at the Host, VM, Cluster, and Resource Pool levels. It has minimal performance impact via an "agentless" approach that requires no installation directly on VMware servers. It has discovery capabilities for VMware Datacenters, ESX Hosts, Clusters, Virtual Machines, and Datastores. It offers integration of native VMware events into Enterprise Manager, and it provides over 300 VMware-related health, availability, performance, and configuration metrics. It comes with more than 30 out-of-the-box pre-defined thresholds and can manage VMware via a series of jobs split between cluster, host, and VM target types. The company reports that the Enterprise Manager 12c plug-in supports vSphere versions 4.0, 4.5 and 5.0. Platforms supported include Linux 64-bit, Windows, AIX, and Solaris SPARC and x86. Information about the plug-in, including how to sign up for the beta, is available at their web site at http://bluemedora.com after selecting the "Products" tab.

    NetApp Storage Plug-in

    NetApp believes the combination of storage system monitoring with comprehensive management of Oracle systems through Enterprise Manager will help customers reduce the cost and complexity of managing applications that rely on NetApp storage and Oracle technologies. So NetApp built a plug-in and reports that it provides comprehensive availability and performance information for NetApp storage systems. Using the plug-in, Oracle Enterprise Manager customers with NetApp storage solutions can track the association between databases and storage components and thereby respond quickly to faults and I/O performance bottlenecks. With the latest configuration management capabilities, one can also perform drift analysis to make sure all storage systems are configured according to established gold standards. The company is also now signing up beta customers, which can be done at the NetApp Communities site at https://communities.netapp.com/groups/netapp-storage-system-plug-in-for-oem12c-beta.

    Learn More about Enterprise Manager Extensibility

    More plug-ins from other partners are coming soon, and I'll be reporting on them here. To learn more about Enterprise Manager and how customers and partners can build plug-ins using the EDK to manage a multi-vendor data center, go to http://oracle.com/enterprisemanager in the Heterogeneous Management solution area. The site also lists the plug-ins available, with information on how to obtain them. More information about the Oracle Validated Integration program can be found at the OPN Enterprise Manager Knowledge Zone in the "Develop" tab.

    Read the article

  • ArchBeat Link-o-Rama Top 10 for October 2012

    - by Bob Rhubart
    The Top 10 most popular items shared on the OTN ArchBeat Facebook Page for October 2012.

    OAM/OVD JVM Tuning | @FusionSecExpert
    Vinay from the Oracle Fusion Middleware Architecture Group (known as the A-Team) shares a process for analyzing and improving performance in Oracle Virtual Directory and Oracle Access Manager.

    SOA Galore: New Books for Technical Eyes Only
    Shake up your technical skills with this trio of new technical books from community members covering SOA and BPM.

    Clustering ODI11g for High-Availability Part 1: Introduction and Architecture | Richard Yeardley
    "JEE agents can be deployed alongside, or instead of, standalone agents," says Rittman Meade's Richard Yeardley. "But there is one key advantage in using JEE agents and WebLogic: when you deploy JEE agents as part of a WebLogic cluster they can be configured together to form a high availability cluster." Learn more in Yeardley's extensive post.

    Solving Big Problems in Our 21st Century Information Society | Irving Wladawsky-Berger
    "I believe that the kind of extensive collaboration between the private sector, academia and government represented by the Internet revolution will be the way we will generally tackle big problems in the 21st century. Just as with the Internet, governments have a major role to play as the catalyst for many of the big projects that the private sector will then take forward and exploit. The need for high bandwidth, robust national broadband infrastructures is but one such example." - Irving Wladawsky-Berger

    Eventually, 90% of tech budgets will be outside IT departments | ZDNet
    Another interesting post from ZDNet blogger Joe McKendrick about changing roles in IT.

    ADF Mobile - Login Functionality | Andrejus Baranovskis
    "The new ADF Mobile approach with native deployment is cool when you want to access phone functionality (camera, email, sms and etc.), also when you want to build mobile applications with advanced UI," reports Oracle ACE Director Andrejus Baranovskis.

    Podcast: Are You Future Proof? - Part 2
    In Part 2, practicing architects and Oracle ACE Directors Ron Batra (AT&T), Basheer Khan (Innowave Technology), and Ronald van Luttikhuizen (Vennster) discuss re-tooling one's skill set to reflect changes in enterprise IT, including the knowledge to steer stakeholders around the hype to what's truly valuable.

    ADF Mobile Custom Javascript - iFrame Injection | John Brunswick
    The ADF Mobile Framework provides a range of out-of-the-box components to add within your AMX pages, according to John Brunswick. But what happens when "an out of the box component does not directly fulfill your development need? What options are available to extend your application interface?" John has an answer.

    Oracle Solaris 11.1 update focuses on database integration, cloud | Mark Fontecchio
    TechTarget editor Mark Fontecchio reports on the recent Oracle Solaris 11.1 release, with comments from IDC's Al Gillen.

    Architects Matter: Making sense of the people who make sense of enterprise IT
    Why do architects matter? Oracle Enterprise Architect Eric Stephens suggests that you ask yourself this question the next time you take the elevator to the Oracle offices on the 45th floor of the Willis Tower in Chicago, Illinois (or any other skyscraper, for that matter). If you had to take the stairs to get to those offices, who would you blame? "You get the picture," he says. "Architecture is essential for any necessarily complex structure, be it a building or an enterprise." (Read the article)

    Thought for the Day
    "I will contend that conceptual integrity is the most important consideration in system design. It is better to have a system omit certain anomalous features and improvements, but to reflect one set of design ideas, than to have one that contains many good but independent and uncoordinated ideas." - Frederick P. Brooks
    Source: SoftwareQuotes.com

    Read the article

  • Virtualized data centre – Part three: Architecture

    - by marc dekeyser
    Having the basics (like discussed in the previous articles) is all good and well, but how do we get started on this?! It can be quite daunting after all! From my own point of view I can absolutely confirm your worries and concerns, but also tell you that it is not as hard as it seems! Deciding on what kind of motherboard to buy, which processor and how much memory is an activity you will spend quite some time researching. And that is not even mentioning storage! All in all it comes down to setting your expectations and your budget. Probably adjusting your expectations according to your budget :).
    Processors: As a rule of thumb you want VT-d (virtualization) technology built in to the processor, allowing you to have 64-bit machines running on your host.
    Memory: The more the better! If you are building a home lab don't bother with ECC unless you are going to run machines that absolutely should be on all the time and your comfort depends on it!
    Motherboard: Depends on what you are going to do with storage. If you are going the NAS way then the number of SATA ports/RAID capabilities does not really matter. If you decide to have a single server with lots of dedicated storage it obviously matters how many SATA ports you will have; alternatively you could use a RAID controller (but these set you back a pretty penny if you want one – DELL 6i's are usually available for a good bargain if you can find one!). Easiest is to get one with a built-in (on-board) graphics card, as a discrete card just adds more heat, power usage and possible points of failure.
    Networking: Just like your choice of motherboard, the networking side tends to depend on how you want to go. A single virtualization host with local storage can usually get away with having a single network card; a cluster or server which uses iSCSI storage tends to have more than one teamed up :).
    Storage: The dreaded beast from the dark! The horror which lives in the forest! The most difficult decision you are going to make in the building of your lab. Why, you might ask? Simple, my friend: having the right choice of storage can make or break your virtualization solution. The performance of your storage choice will have an important impact on the responsiveness of your virtual machines and the deployment of new machines. It also makes a run at your budget! If you decide to go the NAS route you will be dropping a lot more money than if you just had a bunch of disks sitting in a server and manually distributed the virtual machines over the disks.
    Platform: I'm a Microsoftee so Hyper-V is a dead giveaway for me. If you are interested in using VMware I won't stop you, but the rest of my posts will be oriented on Server 2012 Hyper-V (aka 3.0)!
    What did I use? Before someone asks me this in the comments I'll give you a quick rundown of what I am using:
    - Intel 2.4 GHz quad core processors (i something something)
    - 24 GB DDR3 memory
    - Single disk in each server (might look at this as I move the servers to 2012)
    - Synology DS1812+ NAS
    - 3 network interfaces where possible
    - HP 1800 ProCurve managed switch
    I decided to spring for the NAS as I will also be using it for backups and media storage (which is working out quite nicely with my Xbox 360, I must say). At the time of building my 2 boxes (over a year and a half ago) these set me back about 900 euros each, so I imagine you can build the same or better for a lower price. The next article will diagram what I want to achieve and start a build on the Hyper-V 3.0 cluster!

    Read the article

  • About the K computer

    - by nospam(at)example.com (Joerg Moellenkamp)
    Okay – after getting yet another mail because of the new #1 on the Top500 list, I want to add some comments from my side:
    Yes, the system is using SPARC processors. And that is great news for a SPARC fan like me. It is using the SPARC64 VIIIfx processor from Fujitsu clocked at 2 GHz.
    No, it isn't the only one. Most people are saying there are two systems in the Top500 list using SPARC (#77 JAXA and #1 K), but in fact there are three. The Tianhe-1 (#2 on the Top500 list) supercomputer contains 2048 Galaxy "FT-1000" 1 GHz 8-core processors. Don't know it? The FeiTeng-1000 – this proc is an 8-core, 8-threads-per-core, 1 GHz processor made in China. And it's SPARC based. By the way – this sounds really familiar to me – perhaps the people just took the open-sourced UltraSPARC-T2 design, because some of the parameters sound just too similar. However it looks like Tianhe-1 is using the SPARCs as input nodes and not as compute nodes.
    No, I don't see it as the next M-series processor. Simple reason: you can't create SMP systems out of them – it simply doesn't have the functionality to do so. Even when there are multiple CPUs on a single board, they are not connected like an SMP/NUMA machine to shared memory – they are connected with the cluster interconnect (in this case the Tofu interconnect) and work like a large cluster.
    Yes, it has a lot of oomph in Linpack – however I assume a lot came from the extensions to the SPARCv9 standard.
    No, Linpack has no relevance for any commercial workload – Linpack is such a special load that even some HPC people are arguing that it isn't really a good benchmark for HPC. It's embarrassingly parallel, and it can work with relatively small interconnects compared to the interconnects in SMP systems (however we are getting into spheres where SMP interconnects were just a few years ago). Amdahl isn't hitting that hard when running Linpack.
    Yes, it's a good move to use SPARC. At some time in the last 10 years there was an interesting twist in perception: SPARC was considered the proprietary architecture and x86 the open architecture. However it's vice versa – try to create an x86 clone and you have a lot of intellectual property problems; create a SPARC clone and you have to spend 100 bucks or so to get the specification from the SPARC Foundation and develop your own SPARC processor. Fujitsu has been doing this for a long time now, so they had their own processor, their own know-how. So why was SPARC a good choice? Well – essentially Fujitsu can do what they want with their core, as it is their core: for example adding the extensions to the SPARCv9 chipset. Getting Intel to create extensions to x86 to help you with your product is a little bit harder. So Fujitsu could do what they needed to do with their processor in order to create such a supercomputer.
    No, the K is really using no FPGAs or GPUs as accelerators. The K is really using the CPU for this job. Yes, it has a significantly enhanced FPU capable of executing 8 instructions in parallel.
    No, it doesn't run Solaris. Yes, it uses Linux. No, it doesn't hurt me ... as my colleague Roland Rambau (he knows a lot about HPC) once said to me ... it doesn't matter which OS it is, as long as it stays out of the way of the workload in HPC.

    Read the article

  • Biztalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

    - by FullOfQuestions
    Hi, we are planning a BizTalk 2009 setup in which we have 2 BizTalk application servers and 2 DB servers (the DB servers being in an active/passive cluster). All servers are running Windows Server 2008 R2. As part of our application, we will have incoming traffic via the MSMQ, FILE and SOAP adapters. We also have a requirement for high availability and load balancing.
    Let's say I create two different BizTalk hosts and assign the FILE receive handler to the first one and the MSMQ receive handler to the second one. I now create two host instances for each of the two hosts (i.e. one for each of my two physical servers). After reviewing the BizTalk documentation, this is what I know so far:
    - For FILE (receive), high availability and load balancing will be achieved by BizTalk automatically because I set up a host instance on each of the two servers in the group.
    - MSMQ (receive) requires BizTalk host clustering to ensure high availability (host clustering however requires Windows Failover Clustering to be set up as well). No load-balancing option is clear here.
    - SOAP (receive) requires NLB to achieve load balancing and high availability (if one server goes down, NLB will direct traffic to the other).
    This is where I'm completely puzzled and I desperately need your help: is it possible to have a Windows Failover Cluster and NLB set up at the same time on the two application servers? If yes, then please tell me how. If no, then please explain to me how anyone is achieving high availability and load balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive! Your help is greatly appreciated, M

    Read the article

  • A catalogue of Cassandra log messages: What is the correct interpretation?

    - by knorv
    The following is a complete catalogue of all log messages generated by Cassandra 0.6 when stress-testing a Cassandra installation over an extended period of time:
    - AntiEntropyService: Sending AEService tree for (,) to: []
    - CassandraDaemon: Binding thrift service to localhost/N.N.N.N:N
    - CassandraDaemon: Cassandra starting up...
    - ColumnFamilyStore: has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='.../cassandra/commitlog/CommitLog-N.log', position=N)
    - ColumnFamilyStore: Enqueuing flush of Memtable()@N
    - CommitLog: Discarding obsolete commit log: CommitLogSegment(.../cassandra/commitlog/CommitLog-N.log)
    - CommitLog: Log replay complete
    - CommitLog: Replaying .../cassandra/commitlog/CommitLog-N.log, ...
    - CommitLogSegment: Creating new commitlog segment .../cassandra/commitlog/CommitLog-N.log
    - CompactionManager: Compacted to .../cassandra/data//-N-Data.db. N/N bytes for N keys. Time: Nms.
    - CompactionManager: Compacting [org.apache.cassandra.io.SSTableReader(path='.../cassandra/data//-N-Data.db'), ...]
    - DatabaseDescriptor: Auto DiskAccessMode determined to be mmap
    - GCInspector: GC for ConcurrentMarkSweep: N ms, N reclaimed leaving N used; max is N
    - GCInspector: GC for ParNew: N ms, N reclaimed leaving N used; max is N
    - Memtable: Completed flushing .../cassandra/data//-N-Data.db
    - Memtable: Writing Memtable()@N
    - SSTable: Deleted .../cassandra/data//-N-Data.db
    - SSTableDeletingReference: Deleted .../cassandra/data//-N-Data.db
    - SSTableReader: Sampling index for .../cassandra/data//-N-Data.db
    - StorageService: Starting up server gossip
    - SystemTable: Saved ClusterName found: Test Cluster
    - SystemTable: Saved ClusterName not found. Using Test Cluster
    - SystemTable: Saved Token found: N
    - SystemTable: Saved Token not found. Using N
    For each of the log messages listed, what is the correct interpretation?
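    As an aside, a quick way to get an overview of such a catalogue is to group the lines by the emitting class. A minimal sketch in Python (the split on the first colon is an assumption based on the "Component: message" shape above; real log lines also carry level and timestamp prefixes that would need stripping first):
        # Minimal sketch: group Cassandra log lines by the class that emitted them.
        # Assumes the "Component: message" shape shown in the catalogue above.
        from collections import Counter

        catalogue = [
            "CommitLog: Log replay complete",
            "StorageService: Starting up server gossip",
            "SystemTable: Saved Token found: N",
        ]

        by_component = Counter(line.split(":", 1)[0] for line in catalogue)
        for component, count in by_component.most_common():
            print(component, count)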

    Read the article

  • Hadoop safemode recovery - taking a lot of time

    - by Algorist
    Hi, we are running our cluster on Amazon EC2 and using the Cloudera scripts to set up Hadoop. On the master node we start the services below:
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start namenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start secondarynamenode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start jobtracker'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop dfsadmin -safemode wait'
    On the slave machines we run the services below:
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start datanode'
        $AS_HADOOP '"$HADOOP_HOME"/bin/hadoop-daemon.sh start tasktracker'
    The main problem we are facing is that HDFS safemode recovery is taking more than an hour, and this is causing delays in our job completion. Below are the main log messages:
    1. domU-12-31-39-0A-34-61.compute-1.internal 10/05/05 20:44:19 INFO ipc.Client: Retrying connect to server: ec2-184-73-64-64.compute-1.amazonaws.com/10.192.11.240:8020. Already tried 21 time(s).
    2. The reported blocks 283634 needs additional 322258 blocks to reach the threshold 0.9990 of total blocks 606499. Safe mode will be turned off automatically.
    The first message is thrown in the tasktrackers' logs because the jobtracker is not started; the jobtracker didn't start because of the HDFS safemode recovery. The second message is thrown during the recovery process.
    Is there something I am doing wrong? How much time does normal HDFS safemode recovery take? Will there be any speedup from not starting the tasktrackers until the jobtracker is started? Are there any known Hadoop problems on Amazon clusters? Thanks for your help. Regards, Bala Mudiam
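    One way to act on that last question is to gate the tasktracker start on safemode having cleared, rather than starting everything at once. A minimal sketch, assuming the hadoop CLI is on the PATH; the daemon script path is illustrative and would need to match your installation:
        #!/usr/bin/env python
        # Minimal sketch: wait until HDFS leaves safemode, then start the tasktracker.
        import subprocess
        import time

        def safemode_off():
            # 'hadoop dfsadmin -safemode get' prints e.g. "Safe mode is ON"
            out = subprocess.check_output(["hadoop", "dfsadmin", "-safemode", "get"])
            return b"OFF" in out.upper()

        while not safemode_off():
            time.sleep(30)  # poll every 30 seconds

        # Safemode has cleared: now start the dependent daemon (path is an assumption).
        subprocess.call(["/usr/lib/hadoop/bin/hadoop-daemon.sh", "start", "tasktracker"])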

    Read the article

  • Scalable / Parallel Large Graph Analysis Library?

    - by Joel Hoff
    I am looking for good recommendations for scalable and/or parallel large-graph analysis libraries in various languages. The problems I am working on involve significant computational analysis of graphs/networks with 1-100 million nodes and 10 million to 1+ billion edges. The largest SMP computer I am using has 256 GB memory, but I also have access to an HPC cluster with 1000 cores, 2 TB aggregate memory, and MPI for communication.
    I am primarily looking for scalable, high-performance graph libraries that could be used in either single- or multi-threaded scenarios, but parallel analysis libraries based on MPI or a similar protocol for communication and/or distributed memory are also of interest for high-end problems. Target programming languages include C++, C, Java, and Python. My research to date has come up with the following possible solutions for these languages:
    - C++: The most viable solutions appear to be the Boost Graph Library and Parallel Boost Graph Library. I have looked briefly at MTGL, but it is currently slanted more toward massively multithreaded hardware architectures like the Cray XMT.
    - C: igraph and SNAP (Small-world Network Analysis and Partitioning); the latter uses OpenMP for parallelism on SMP systems.
    - Java: I have found no parallel libraries here yet, but JGraphT and perhaps JUNG are leading contenders in the non-parallel space.
    - Python: igraph and NetworkX look like the most solid options, though neither is parallel. There used to be Python bindings for BGL, but these are now unsupported; the last release in 2005 looks stale now.
    Other topics here on SO that I've looked at have discussed graph libraries in C++, Java, Python, and other languages. However, none of these topics focused significantly on scalability. Does anyone have recommendations they can offer based on experience with any of the above or other library packages when applied to large graph analysis problems? Performance, scalability, and code stability/maturity are my primary concerns. Most of the specialized algorithms will be developed by my team, with the exception of any graph-oriented parallel communication or distributed memory frameworks (where the graph state is distributed across a cluster).
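    For a sense of what the single-threaded Python end of this spectrum looks like, here is a minimal NetworkX sketch (graph size and metrics chosen purely for illustration; NetworkX keeps the whole graph in Python objects in memory, so it suits prototyping rather than the billion-edge end of the stated range):
        # Minimal NetworkX sketch: build a synthetic graph and run two common analyses.
        # Illustrative only -- pure Python, in-memory, single-threaded.
        import networkx as nx

        g = nx.erdos_renyi_graph(n=10000, p=0.001, seed=42)  # random test graph

        print("connected components:", nx.number_connected_components(g))

        # Approximate betweenness centrality by sampling k pivot nodes,
        # since the exact algorithm quickly becomes impractical on large graphs.
        bc = nx.betweenness_centrality(g, k=100, seed=42)
        top5 = sorted(bc, key=bc.get, reverse=True)[:5]
        print("most central nodes:", top5)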

    Read the article

  • Padding is invalid and cannot be removed

    - by Ajay
    I have hosted an ASP.NET 2.0 site. Every day I am getting the error "Padding is invalid and cannot be removed" 2-3 times. The backend used is SQL Server 2005, and the site is controlled via the Plesk 9.2 control panel. Pooling is enabled in IIS with a timeout of 120 minutes. Can that be the reason for this? I have not used any encryption except for the stored passwords (MD5). The error message is:
    "Base Exception: Padding is invalid and cannot be removed. - Source: System.Web - TargetSite: Void ThrowError(System.Exception, System.String, System.String, Boolean) - Message: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster."
    The system log (Application) says:
    "Event code: 4009 Event message: Viewstate verification failed. Reason: The viewstate supplied failed integrity check. Event detail code: 50203 ViewStateException information: Exception message: Invalid viewstate. Port: 31235 User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1)"
    I have hosted it on a dedicated server and not in a web farm. Will keeping a static machine key help resolve this issue? Please guide me on this.

    Read the article

  • Hadoop streaming with Python and python subprocess

    - by Ganesh
    I have established a basic Hadoop master/slave cluster setup and am able to run MapReduce programs (including Python) on the cluster. Now I am trying to run a Python code which accesses a C binary, and so I am using the subprocess module. I am able to use Hadoop streaming for a normal Python code, but when I include the subprocess module to access a binary, the job fails. As you can see in the logs below, the hello executable is recognised for the packaging, but the code still fails to run:
        packageJobJar: [/tmp/hello/hello, /app/hadoop/tmp/hadoop-unjar5030080067721998885/] [] /tmp/streamjob7446402517274720868.jar tmpDir=null
        JarBuilder.addNamedStream hello
        ...
        12/03/07 22:31:32 INFO mapred.FileInputFormat: Total input paths to process : 1
        12/03/07 22:31:32 INFO streaming.StreamJob: getLocalDirs(): [/app/hadoop/tmp/mapred/local]
        12/03/07 22:31:32 INFO streaming.StreamJob: Running job: job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:31:32 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:31:32 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:31:33 INFO streaming.StreamJob: map 0% reduce 0%
        12/03/07 22:32:05 INFO streaming.StreamJob: map 100% reduce 100%
        12/03/07 22:32:05 INFO streaming.StreamJob: To kill this job, run:
        12/03/07 22:32:05 INFO streaming.StreamJob: /usr/local/hadoop/bin/../bin/hadoop job -Dmapred.job.tracker=master:54311 -kill job_201203062329_0057
        12/03/07 22:32:05 INFO streaming.StreamJob: Tracking URL: http://master:50030/jobdetails.jsp?jobid=job_201203062329_0057
        12/03/07 22:32:05 ERROR streaming.StreamJob: Job not Successful!
        12/03/07 22:32:05 INFO streaming.StreamJob: killJob...
        Streaming Job Failed!
    The command I am trying is:
        hadoop jar contrib/streaming/hadoop-*streaming*.jar -mapper /home/hduser/MARS.py -reducer /home/hduser/MARS_red.py -input /user/hduser/mars_inputt -output /user/hduser/mars-output -file /tmp/hello/hello -verbose
    where hello is the C executable. It is a simple helloworld program which I am using to check the basic functioning. My Python code is:
        #!/usr/bin/env python
        import subprocess
        subprocess.call(["./hello"])
    Any help with how to get the executable to run with Python in Hadoop streaming, or help with debugging this, will get me forward in this. Thanks, Ganesh
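    A common culprit in this kind of setup, purely as a hypothesis to rule out: the -file option ships hello into the task's working directory, but the executable bit does not always survive the packaging, and any failure inside the mapper stays invisible unless it reaches stderr or the exit code. A minimal defensive rewrite of the mapper along those lines (the chmod and the explicit exit status are the assumptions being tested, not a confirmed fix):
        #!/usr/bin/env python
        # Sketch of a more defensive streaming mapper. Files shipped with -file
        # land in the task's current working directory; restore the executable
        # bit in case packaging dropped it, and surface the child's exit status.
        import os
        import stat
        import subprocess
        import sys

        binary = os.path.join(os.getcwd(), "hello")
        os.chmod(binary, os.stat(binary).st_mode | stat.S_IEXEC)

        ret = subprocess.call([binary], stderr=sys.stderr)
        if ret != 0:
            sys.stderr.write("hello exited with %d\n" % ret)
            sys.exit(ret)  # non-zero exit makes the task failure visible in the logs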

    Read the article

  • How to thoroughly purge and reinstall postgresql on ubuntu?

    - by John Mee
    Somehow I've managed to completely bugger the install of PostgreSQL on Ubuntu Karmic. I want to start over from scratch, but when I "purge" the package with apt-get it still leaves traces behind, such that the reinstall configuration doesn't run properly. After I've done:
        apt-get purge postgresql
        apt-get install postgresql
    it said:
        Setting up postgresql-8.4 (8.4.3-0ubuntu9.10.1) ...
        Configuring already existing cluster (configuration: /etc/postgresql/8.4/main, data: /var/lib/postgresql/8.4/main, owner: 108:112)
        Error: move_conffile: required configuration file /var/lib/postgresql/8.4/main/postgresql.conf does not exist
        Error: could not create default cluster. Please create it manually with
          pg_createcluster 8.4 main --start
        or a similar command (see 'man pg_createcluster').
        update-alternatives: using /usr/share/postgresql/8.4/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode.
        Setting up postgresql (8.4.3-0ubuntu9.10.1) ...
    I have an "/etc/postgresql" with nothing in it, and "/etc/postgresql-common/" has a 'pg_upgradecluster.d' directory and root.crt and user_clusters files. /etc/passwd has a postgres user; the purge script doesn't appear to touch it. There have been a bunch of symptoms, each of which I work through only to expose the next. Right this second, when I run that "pg_createcluster..." command it complains that '/var/lib/postgresql/8.4/main/postgresql.conf does not exist', so I'll go find one of those, but I'm sure that won't be the end of it. Is there not some easy one-liner (or two) which will burn it all completely and let me start over?

    Read the article

  • malloc works, cudaHostAlloc segfaults?

    - by Mikhail
    I am new to CUDA and I want to use cudaHostAlloc. I was able to isolate my problem to the following code. Using malloc for host allocation works; using cudaHostAlloc results in a segfault, possibly because the allocated area is invalid? When I dump the pointer in both cases it is not null, so cudaHostAlloc returns something...
    Works:
        in_h = (int*) malloc(length*sizeof(int)); // works
        for (int i = 0; i < length; i++) in_h[i] = 2;
    Doesn't work:
        cudaHostAlloc((void**)&in_h, length*sizeof(int), cudaHostAllocDefault);
        for (int i = 0; i < length; i++) in_h[i] = 2; // segfaults
    Standalone code:
        #include <stdio.h>
        #include <stdlib.h>  // for exit()/EXIT_FAILURE/EXIT_SUCCESS (was missing)

        void checkDevice()
        {
            cudaDeviceProp info;
            int deviceName;
            // Note: no CUDA return codes are checked anywhere in this program,
            // so a failed call (e.g. no usable device) goes unnoticed.
            cudaGetDevice(&deviceName);
            cudaGetDeviceProperties(&info, deviceName);
            if (!info.deviceOverlap)
            {
                printf("Compute device can't use streams and should be discarded.");
                exit(EXIT_FAILURE);
            }
        }

        int main()
        {
            checkDevice();
            int *in_h;
            const int length = 10000;
            cudaHostAlloc((void**)&in_h, length*sizeof(int), cudaHostAllocDefault);
            printf("segfault comming %p\n", (void*)in_h); // %d on a pointer is undefined; %p is correct
            for (int i = 0; i < length; i++)
            {
                in_h[i] = 2;
            }
            free(in_h); // note: pinned memory should be released with cudaFreeHost, not free
            return EXIT_SUCCESS;
        }
    Invocation:
        [id129]$ nvcc fun.cu
        [id129]$ ./a.out
        segfault comming 327641824
        Segmentation fault (core dumped)
    Details: The program is run in interactive mode on a cluster. I was told that an invocation of the program from the compute node pushes it to the cluster. I have not had any trouble with other homemade toy CUDA code.

    Read the article

  • Google Maps API v3: map click event raised when clicking a MarkerClusterer cluster?

    - by lucian.jp
    I have a Google Maps API v3 map object on a page that uses MarkerClusterer. I have a function that needs to run when the map is clicked, so it is registered as:
        google.maps.event.addListener(map, 'click', function (event) { CallMe(event.latLng); });
    My problem is as follows: when I click on a MarkerClusterer cluster, instead of behaving like a marker (which raises only its own click event, not the map's), it raises the click event from the map. To test this I have generated an alert from the MarkerClusterer click:
        google.maps.event.addListener(markerClusterer, "clusterclick", function (cluster) { alert('MarkerClusterer click event'); });
    The clusterclick event fires after the map object's click event, so removing the map object's listener is not a solution. Is there any way to test, inside the map's click event, whether the click was on a cluster? Or a way to replicate the marker behaviour and not raise the map's click event when clusterclick is called? Google and the documentation didn't help me. Thx

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >