Search Results

Search found 4878 results on 196 pages for 'sun cluster'.

Page 15/196 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Sane patch schedule for Windows 2003 cluster

    - by sixlettervariables
    We've got a cluster of 75 Win2k3 nodes at work in a coarse-grained compute cluster. The cluster is behind a mountain of firewalls and resides in its own VLAN. Jobs of all sizes and types run on the cluster, and all of the executables running are custom-made. (ed: additional notes on our executables) The jobs range from 30 seconds to 7 days in duration and may contain one executable or 2000 sub-jobs (of short duration). Obviously we are trying to avoid the situation where IT schedules a reboot during a 7-day production job. We have scheduling software which accommodates all of the normal tasks for a coarse-grained cluster, and we can control which machines are active for submission, etc. If WSUS were in some way scriptable (or the client could state its availability for shutdown) we could coordinate the two systems and help out. Currently, the patch schedule is the Sunday after Patch Tuesday, regardless of what is running on the cluster. We have to ask for an exemption every time we want to delay patching a machine for a long-running production job. Basically, while our group is responsible for the machines, we have little control over IT's patch schedule. Is patching monthly on Microsoft's schedule sane for a production Windows cluster? Are there software hooks in WSUS where we could say, "please don't reboot just yet"?

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • Single CAS web application in a cluster

    - by Dolf Dijkstra
    Recently a customer wanted to set up a cluster of CAS nodes to be used together with WebCenter Sites. In the process of setting this up they realized that they needed to create a web application per managed server. They did not want this management burden, but would like to have one web application deployed to multiple nodes. The reason a unique application per node is needed is that the web application contains information that must be unique per node: the postfix for the ticket id. My customer would like to externalize the node-specific configuration to either a specific classpath per managed server or to system properties set at startup. It turns out that the postfix for ticket ids is managed through a property, host.name, and that this property can be externalized. The host.name property is used in /webapps/cas/WEB-INF/spring-configuration/uniqueIdGenerators.xml. It is set in /webapps/cas/WEB-INF/spring-configuration/propertyFileConfigurer.xml in a PropertyPlaceholderConfigurer. The documentation for PropertyPlaceholderConfigurer (http://static.springsource.org/spring/docs/2.0.x/api/org/springframework/beans/factory/config/PropertyPlaceholderConfigurer.html) indicates that the properties defined through the PropertyPlaceholderConfigurer can be externalized. To enable this externalization you would need to change host.properties so it is generic and can be reused for all the managed servers: host.name=${cluster.node.id}. The next step is to change the startup scripts for the managed servers and add a system property -Dcluster.node.id=<something unique and stable>. Voilà, the postfix is externalized and the web application can be shared amongst the cluster nodes.
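    Not part of the original post, but a minimal sketch of the two pieces described above. The host.properties layout and the "cas-node-01" value are illustrative placeholders, and the startup-script variable assumes a WebLogic-style JAVA_OPTIONS hook.

      # host.properties -- identical on every managed server; the node-specific
      # part comes from a JVM system property instead of being hard-coded:
      #
      #   host.name=${cluster.node.id}
      #
      # Startup script of each managed server: give every node a unique,
      # stable id ("cas-node-01" is only an example value).
      JAVA_OPTIONS="${JAVA_OPTIONS} -Dcluster.node.id=cas-node-01"
      export JAVA_OPTIONS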

    Read the article

  • Mahout - Clustering - "naming" the cluster elements

    - by Mark Bramnik
    I'm doing some research and I'm playing with Apache Mahout 0.6. My purpose is to build a system which will name different categories of documents based on user input. The documents are not known in advance, and I also don't know which categories I have while collecting these documents. But I do know that all the documents in the model should belong to one of the predefined categories. For example: let's say I've collected N documents that belong to 3 different groups: Politics, Madonna (pop star), Science fiction. I don't know which document belongs to which category, but I know that each one of my N documents belongs to one of those categories (e.g. there are no documents about, say, basketball among these N docs). So I came up with the following idea: Apply Mahout clustering (for example k-means with k=3) on these documents. This should divide the N documents into 3 groups. This should be kind of my model to learn with. I still don't know which document really belongs to which group, but at least the documents are now clustered by group. Ask the user to find any document on the web that should be about 'Madonna' (I can't show the user any of my N documents; that's a restriction). Then I want to measure the 'similarity' of this document to each one of the 3 groups. I expect the similarity between the user_doc and the documents in the Madonna group of the model to be higher than the similarity between the user_doc and the documents about politics. I've managed to produce the clusters of documents using the 'Mahout in Action' book. But I don't understand how I should use Mahout to measure similarity between a 'ready' cluster group of documents and one given document. I thought about re-running the clustering with k=3 for N+1 documents with the same centroids (in terms of k-means clustering) and seeing where the new document falls, but maybe there is another way to do that? Is this possible with Mahout, or is my idea conceptually wrong? (An example in terms of the Mahout API would be really good.) Thanks a lot, and sorry for the long question (couldn't describe it better). Any help is highly appreciated. P.S. This is not a homework project :)
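    Not from the question itself, but a sketch of the clustering step described above using the Mahout 0.6 command-line drivers; the directory names are placeholders and the exact option set can vary slightly between Mahout releases.

      # Turn the raw documents into sequence files, then into TF-IDF vectors.
      mahout seqdirectory -i /data/docs -o /data/seqfiles -c UTF-8
      mahout seq2sparse -i /data/seqfiles -o /data/vectors -wt tfidf

      # Cluster the vectors into k=3 groups (politics / Madonna / sci-fi);
      # -cl also writes the per-document cluster assignments.
      mahout kmeans \
        -i /data/vectors/tfidf-vectors \
        -c /data/initial-centroids \
        -o /data/clusters \
        -k 3 -x 20 -cl \
        -dm org.apache.mahout.common.distance.CosineDistanceMeasure

    For the user-supplied document, one option is to vectorize it with the same dictionary that seq2sparse produced and compare that vector to each cluster centroid with the same distance measure, rather than re-running k-means on N+1 documents.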

    Read the article

  • When the cooling in the server room is not enough: a Sun Cooling Door for the Database Machine

    - by Fekete Zoltán
    Thanks to the Database Machine's enormous performance, it generally needs far less cooling than if we were cooling a separate high-end server and separate high-end storage! If, however, the remaining cooling capacity of our server room is not sufficient, and we are not satisfied with the "ordinary washing powder", then a further cooling trick is needed. The Sun Cooling Door models offer a solution for this, for example the 5200 and 5600 models.

    Read the article

  • Oracle Sun Java Roadshow

    - by Lajos Sárecz
    Next week, on May 20, the Oracle Sun Java Roadshow conference will be held at the Ramada Plaza Budapest Hotel. How does a teaser for a Java conference fit into this blog? Knowing the planned program, it is perhaps not so surprising: one of the talks will cover what's new in the recently released Oracle Berkeley DB. Although Oracle acquired Berkeley DB back in February 2006, there has been no Oracle event in Hungary since then where it was discussed in any depth, so I encourage everyone not to miss this opportunity.

    Read the article

  • Installed Sun Java 6 - configuration problem when running as sudo

    - by HorusKol
    I have installed Sun Java 6 on an Ubuntu server and set an environment variable in the default profile as per the instructions at http://www.edugate.ie/workshop-guides/shibboleth-2-identity-provider-installation-linux-debian-or-ubuntu. I then try to run an installer for a Java servlet, but when I run it as myself, it cannot create the required directory in /opt. When I run it with sudo, I am told that JAVA_HOME is not correct and it doesn't even start the installer. Shouldn't this be coming from /etc/profile like it is for my normal user?
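    A sketch of common ways to carry JAVA_HOME across sudo (not from the original question; the installer name is a placeholder). /etc/profile is only read by login shells, and sudo resets the environment by default, which is why JAVA_HOME disappears for the root session.

      # Option 1: preserve the calling user's environment for this one command
      sudo -E ./install-servlet.sh

      # Option 2: pass the variable explicitly on the command line
      sudo JAVA_HOME=/usr/lib/jvm/java-6-sun ./install-servlet.sh

      # Option 3: whitelist it permanently in sudoers (edit with visudo)
      #   Defaults env_keep += "JAVA_HOME"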

    Read the article

  • Oracle Sun Java Roadshow - Budapest

    - by peter.nagy
    Another Java Roadshow event is coming, still organized by Sun Hungary, who have run these events before. Alongside the many foreign speakers there will be Hungarian ones as well. The topics are quite current: Java at Oracle; Java in Embedded (SE, ME); Java for Business; What is New in Java - JavaFX, JDK 7. So do come, listen, and join the discussion. Logistics: May 20, 2010, 08:30-16:30, Ramada Plaza Budapest Hotel (1036 Budapest, Árpád fejedelem útja 94).

    Read the article

  • ForgeRock Picks Up Sun's Open Source Identity

    Datamation: "Among the promises of open source software is that there is no vendor lock-in. It's a promise that new open source startup ForgeRock is aiming to deliver upon by supporting and extending the OpenSSO open source single sign-on and identity management platform formerly supported by Sun Microsystems."

    Read the article

  • Last Day At Sun

    Wild Webmink: "Today is my last day of employment at Sun (well, it became Oracle on March 1st in the UK but you know what I mean). I am a few months short of my 10th anniversary there..."

    Read the article

  • Similar But Not The Same

    - by rickramsey
    A few weeks ago we published an article that explained how to use Oracle Solaris Cluster 3.3 5/11 to provide a virtual, multitiered architecture for Oracle Real Application Cluster (Oracle RAC) 11.2.0.2. We called it ... How to Deploy Oracle RAC on Zone Clusters Welllllll ... we just published another article just like it. Except that it's different. The earlier article was for Oracle RAC 11.2.0.2. This one is for Oracle RAC 11.2.0.3. This one describes how to do the same thing as the earlier one --create an Oracle Solaris Zone cluster, install and configure Oracle Grid Infrastructure and Oracle RAC in the zone cluster, and create an Oracle Solaris Cluster resource for Oracle RAC-- but for version 11.2.0.3 of Oracle RAC. Even though the objective is the same, and the version is only a dot-dot-dot release away, the process is quite different. So we decided to call it: How to Deploy Oracle RAC 11.2.0.3 on Zone Clusters Hope you can keep the different versions clear in your head. If not, let me know, and I'll try to make them easier to distinguish. - Rick

    Read the article

  • VDI and Synergy

    - by katsumii
    (The original Japanese text of this post is garbled in this copy; only fragments survive.) The recoverable pieces describe an Oracle Sun Ray based desktop-virtualization setup (Sun Ray 3 clients reaching Windows desktops through Windows Server Remote Desktop Services and Sun Ray Software), used side by side with a notebook PC and the Synergy keyboard/mouse-sharing tool, and they reference Bug #3002 - Mouse Pointer Invisible on Client PC - Synergy ("Mouse pointer does not move") as an issue seen when combining Remote Desktop/RDP with Synergy.

    Read the article

  • Red Hat cluster: Failure of one of two services sharing the same virtual IP tears down IP

    - by js01
    I'm creating a 2+1 failover cluster under Red Hat 5.5 with 4 services of which 2 have to run on the same node, sharing the same virtual IP address. One of the services on each node needs a (SAN) disk, the other doesn't. I'm using HA-LVM. When I shut down (via ifdown) the two interfaces connected to the SAN to simulate SAN failure, the service needing the disk is disabled, the other keeps running, as expected. Surprisingly (and unfortunately), the virtual IP address shared by the two services on the same machine is also removed, rendering the still-running service useless. How can I configure the cluster to keep the IP address up?

    Read the article

  • What "pieces" are needed in order to set up a cluster of physical servers?

    - by Chris Dutrow
    Background: Currently, we use Rackspace cloud servers. We have no intention to stop using them, but would like to look into setting up a cluster of physical servers (probably desktop computers in the $400 range with 8 GB of memory each) to offset some of our load and work as a secondary, more powerful, less reliable system. To put things in perspective, we can buy comparable desktop computers for the same price as we pay in one month to rent them on Rackspace Cloud. I understand that this is generally a dumb idea. However, in this particular instance, the server cluster is needed for its computation power. It is not mission-critical, it does not host a consumer-facing website, and if it goes down for a day or two, it's not really a problem. Currently, we have access to business-class Verizon FiOS. If I understand correctly, we can get at least 25 dedicated IP addresses with this service, which should be enough. Requirements: Each server runs Linux CentOS 6.3. Some of the servers run Python and execute processes from a task queue (Redis or RabbitMQ). Some of the servers are capable of serving static files and Python-driven REST APIs. Some of the servers host a Cassandra database cluster. One or more of the servers are Redis database servers. One or more of the servers are PostgreSQL servers. Questions: What kind of router or switch is needed? We would like the computers to be able to communicate effectively with each other via internal IP addresses. This is especially important for communicating with the servers hosting Redis, which need to be able to respond to requests very quickly. Are there special switches or routers that need to be used to connect the servers together? Are desktop computers OK for this? We have found that we are mostly RAM-bottlenecked; I understand that some servers have highly superior CPUs, but I'm not sure we need CPU power as much as we need RAM, which is cheap in desktop computers. Will we have problems with the Wi-Fi cards in the desktops or any other unexpected hardware limitation? What tools should be used to "image" the servers? For example, when we get an installation right for a Redis server or Cassandra node, are there tools that come with Linux CentOS 6.3 to image the server to a USB drive or something like that? Or do we need to use some other software for this? What other things are we missing that we should be concerned about? Thanks so much!
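    On the imaging question only: not from the original post, but one low-tech approach that needs nothing beyond the stock CentOS tools is a raw dd image taken from a live/rescue environment. A sketch; the device names, mount points and file names are placeholders, and tools such as Clonezilla or a kickstart-based install are common alternatives.

      # Capture an image of a configured node (boot a live/rescue system first
      # so /dev/sda is not mounted read-write while it is being copied).
      dd if=/dev/sda bs=4M | gzip -c > /mnt/usb/redis-node.img.gz

      # Restore the image onto an identically sized disk in another machine.
      gunzip -c /mnt/usb/redis-node.img.gz | dd of=/dev/sda bs=4M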

    Read the article

  • Cannot access Apache remotely

    - by MMRUSer
    I have set up and configured Apache 2.2 on Red Hat EL, but I cannot access it remotely (through a web browser). Here's my Apache log:
    [Sun Apr 11 05:58:12 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
    [Sun Apr 11 05:58:12 2010] [notice] Digest: generating secret for digest authen$
    [Sun Apr 11 05:58:12 2010] [notice] Digest: done
    [Sun Apr 11 05:58:13 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
    [Sun Apr 11 05:59:32 2010] [notice] caught SIGTERM, shutting down
    [Sun Apr 11 06:06:38 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
    [Sun Apr 11 06:06:38 2010] [notice] Digest: generating secret for digest authen$
    [Sun Apr 11 06:06:38 2010] [notice] Digest: done
    [Sun Apr 11 06:06:39 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
    [Sun Apr 11 06:10:13 2010] [notice] caught SIGTERM, shutting down
    [Sun Apr 11 06:14:29 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
    [Sun Apr 11 06:14:29 2010] [notice] Digest: generating secret for digest authen$
    [Sun Apr 11 06:14:29 2010] [notice] Digest: done
    [Sun Apr 11 06:14:29 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
    [Sun Apr 11 06:37:05 2010] [notice] caught SIGTERM, shutting down
    [Sun Apr 11 06:37:05 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbi$
    [Sun Apr 11 06:37:05 2010] [notice] Digest: generating secret for digest authen$
    [Sun Apr 11 06:37:05 2010] [notice] Digest: done
    [Sun Apr 11 06:37:05 2010] [notice] Apache/2.2.3 (Red Hat) configured -- resumi$
    http://x.x.x.x.x/ is not working.
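    The log shows Apache starting cleanly, so the usual suspects are the Listen directive and the host firewall. These checks are not from the original post; a hedged sketch, assuming the stock RHEL /etc/httpd layout:

      # Is Apache listening on all interfaces (not just 127.0.0.1)?
      netstat -tlnp | grep httpd
      grep -i '^Listen' /etc/httpd/conf/httpd.conf

      # RHEL ships with iptables enabled; port 80 is often blocked by default.
      iptables -L -n | grep -w 80
      iptables -I INPUT -p tcp --dport 80 -j ACCEPT   # temporary test rule
      service iptables save                           # persist it if that was the problem

      # Test locally first, then from a remote machine.
      curl -I http://localhost/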

    Read the article

  • Hibernate+PostgreSQL throws JDBCConnectionException: Cannot open connection

    - by Omer Salmanoglu
    My hibernate.cfg.xml settings (only the property values survived the paste, in order): false, true, auto, thread, org.postgresql.Driver, 1234, jdbc:postgresql://localhost/postgres, postgres, public, org.hibernate.dialect.PostgreSQLDialect, false, true, org.hibernate.transaction.JDBCTransactionFactory, false, false, false, false, 100. When I press the "Save" button I get the following exception:
    javax.servlet.ServletException: org.hibernate.exception.JDBCConnectionException: Cannot open connection javax.faces.webapp.FacesServlet.service(FacesServlet.java:325)
    root cause javax.faces.el.EvaluationException: org.hibernate.exception.JDBCConnectionException: Cannot open connection javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:102) com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102) javax.faces.component.UICommand.broadcast(UICommand.java:315) javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
    root cause org.hibernate.exception.JDBCConnectionException: Cannot open connection org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:98) org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52) org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:449) org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167) org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142) org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85) org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1463) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344) $Proxy108.beginTransaction(Unknown Source) com.yemex.beans.CompanyBean.saveOrUpdate(CompanyBean.java:52) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.apache.el.parser.AstValue.invoke(AstValue.java:196) org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276) com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:98) javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88) com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102) javax.faces.component.UICommand.broadcast(UICommand.java:315) javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
    root cause
java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost/postgres java.sql.DriverManager.getConnection(Unknown Source) java.sql.DriverManager.getConnection(Unknown Source) org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446) org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167) org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142) org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85) org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1463) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344) $Proxy108.beginTransaction(Unknown Source) com.yemex.beans.CompanyBean.saveOrUpdate(CompanyBean.java:52) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.apache.el.parser.AstValue.invoke(AstValue.java:196) org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276) com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:98) javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88) com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102) javax.faces.component.UICommand.broadcast(UICommand.java:315) javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
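    The last root cause ("No suitable driver found for jdbc:postgresql://...") usually means the PostgreSQL JDBC driver jar is not on the web application's classpath. This is not from the original post; a hedged sketch, assuming a Tomcat deployment and a driver jar named postgresql-8.4-702.jdbc4.jar (adjust names and paths to your setup):

      # Option 1: bundle the driver with the webapp itself.
      cp postgresql-8.4-702.jdbc4.jar /path/to/tomcat/webapps/myapp/WEB-INF/lib/

      # Option 2: make it available to the whole container instead.
      cp postgresql-8.4-702.jdbc4.jar /path/to/tomcat/lib/

      # Restart the container so the new jar is picked up, then verify that the
      # driver class really is inside the jar that was deployed.
      unzip -l /path/to/tomcat/webapps/myapp/WEB-INF/lib/postgresql-*.jar | grep org/postgresql/Driver.class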

    Read the article

  • JSF HIBERNATE POSTGRESQL

    - by user312619
    When I press "Save" button I get an exception like that ; javax.servlet.ServletException: org.hibernate.exception.JDBCConnectionException: Cannot open connection javax.faces.webapp.FacesServlet.service(FacesServlet.java:325) root cause javax.faces.el.EvaluationException: org.hibernate.exception.JDBCConnectionException: Cannot open connection javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:102) com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102) javax.faces.component.UICommand.broadcast(UICommand.java:315) javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:312) root cause org.hibernate.exception.JDBCConnectionException: Cannot open connection org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:98) org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:52) org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:449) org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167) org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142) org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85) org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1463) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344) $Proxy108.beginTransaction(Unknown Source) com.yemex.beans.CompanyBean.saveOrUpdate(CompanyBean.java:52) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.apache.el.parser.AstValue.invoke(AstValue.java:196) org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276) com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:98) javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88) com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102) javax.faces.component.UICommand.broadcast(UICommand.java:315) javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:312) root cause java.sql.SQLException: No suitable driver found for jdbc:postgresql://localhost/postgres java.sql.DriverManager.getConnection(Unknown Source) java.sql.DriverManager.getConnection(Unknown Source) 
org.hibernate.connection.DriverManagerConnectionProvider.getConnection(DriverManagerConnectionProvider.java:133) org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446) org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167) org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142) org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85) org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1463) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.hibernate.context.ThreadLocalSessionContext$TransactionProtectionWrapper.invoke(ThreadLocalSessionContext.java:344) $Proxy108.beginTransaction(Unknown Source) com.yemex.beans.CompanyBean.saveOrUpdate(CompanyBean.java:52) sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) java.lang.reflect.Method.invoke(Unknown Source) org.apache.el.parser.AstValue.invoke(AstValue.java:196) org.apache.el.MethodExpressionImpl.invoke(MethodExpressionImpl.java:276) com.sun.faces.facelets.el.TagMethodExpression.invoke(TagMethodExpression.java:98) javax.faces.component.MethodBindingMethodExpressionAdapter.invoke(MethodBindingMethodExpressionAdapter.java:88) com.sun.faces.application.ActionListenerImpl.processAction(ActionListenerImpl.java:102) javax.faces.component.UICommand.broadcast(UICommand.java:315) javax.faces.component.UIViewRoot.broadcastEvents(UIViewRoot.java:775) javax.faces.component.UIViewRoot.processApplication(UIViewRoot.java:1267) com.sun.faces.lifecycle.InvokeApplicationPhase.execute(InvokeApplicationPhase.java:82) com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101) com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118) javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)

    Read the article

  • How many domains can you configure on a Sun M5000 system?

    - by Andre Miller
    We have a few Sun M5000 servers with the following configuration: each system has 2 system boards, each containing 2 x 2.5 GHz quad-core processors; each system board has 16 GB of RAM; each system has 4 x 300 GB disks. I would like to know how many hardware domains I can configure per system. Do I need one system board per domain (implying a total of 2 domains), or can I create 4 domains, each with one CPU?

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technologies based on popular applications, search engines, web servers, rich SMS messages, email clients, weather forecasts and so on have a predominant role in our lives. More and more technologies are used to analyze/track our behavior, try to detect patterns, and offer us "the best/right user experience", from Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL, TimesTen), so once you are familiar with one of these technologies, let's say with Coherence aggregators, you will find the whole Hadoop/MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS.
    The whole setup is fairly simple: Install an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server on a Linux x64 server (or VirtualBox appliance). Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1. Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen. Check the Java version of your Linux server with the command: java -version java version "1.7.0_40" Java(TM) SE Runtime Environment (build 1.7.0_40-b43) Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode) Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1. Modify your .bash_profile: export HADOOP_HOME=/u01/hadoop-1.2.1 export PATH=$PATH:$HADOOP_HOME/bin export HIVE_HOME=/u01/hive-0.11.0 export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin (also see my sample .bash_profile). Set up ssh trust for the Hadoop process; this is a mandatory step, and in our case we have to establish a "local trust" as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost. We will run a "pseudo-Hadoop cluster", in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
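    The configuration step described next touches core-site.xml, hdfs-site.xml and mapred-site.xml, whose exact contents are not reproduced in this excerpt. Purely as an illustration (not the author's files), a minimal single-node core-site.xml could look like the sketch below, with values inferred from the /u01/hadoop-1.2.1/data layout and the hdfs://localhost:19000 URIs used later in the walkthrough.

      <?xml version="1.0"?>
      <!-- Sketch only: a minimal single-node $HADOOP_HOME/conf/core-site.xml;
           the name-node URI and temp directory are inferred from paths that
           appear elsewhere in this article. -->
      <configuration>
        <property>
          <name>fs.default.name</name>
          <value>hdfs://localhost:19000</value>
        </property>
        <property>
          <name>hadoop.tmp.dir</name>
          <value>/u01/hadoop-1.2.1/data</value>
        </property>
      </configuration>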
We need to "fine tune" some Hadoop configuration files, we have to go at our $HADOOP_HOME/conf, and modify the files: core-site.xml hdfs-site.xml mapred-site.xml check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version As Hadoop is managing our "clustered HDFS" file system we have to create "the mount point" and format it , the mount point will be declared to core-site.xml as: The layout under the /u01/hadoop-1.2.1/data will be created and used by other hadoop components (MapReduce = /mapred/...) HDFS is using the /dfs/... layout structure format the HDFS hadoop file system: Start the java components for the HDFS system As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configurations: Once our HDFS Hadoop setup is done you can use the HDFS file system to store data ( big data : )), and plug them back and forth to Oracle Databases by the means of the Big Data Connectors ( which is the next configuration step). You can create / use a Hive db, but in our case we will make a simple integration of "raw data" , through the creation of an External Table to a local Oracle instance ( on the same Linux box, we run the Hadoop HDFS one node cluster and one Oracle DB). Download some public "big data", I use the site: http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from the OTN (oraosch-2.2.0.zip), unzip it to your local file system (see picture below) Modify your environment in order to access the connector libraries , and make the following test: [oracle@dg1 bin]$./hdfs_stream Usage: hdfs_stream locationFile [oracle@dg1 bin]$ Load the data to the Hadoop hdfs file system: hadoop fs -mkdir bgtest_data hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt [oracle@dg1 bg-data-raw]$hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt Found 1 items -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt Check the content of the HDFS with the browser UI: Start the Oracle database, and run the following script in order to create the Oracle database user, the Oracle directories for the Oracle Big Data Connector (dg1 it’s my own db id replace accordingly yours): #!/bin/bash export ORAENV_ASK=NO export ORACLE_SID=dg1 . 
oraenv sqlplus /nolog <<EOF CONNECT / AS sysdba; CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin'; CREATE USER BGUSER IDENTIFIED BY oracle; GRANT CREATE SESSION, CREATE TABLE TO BGUSER; GRANT EXECUTE ON sys.utl_file TO BGUSER; GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER; CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs'; GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER; CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data'; GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER; EOF Put the following in a file named t3.sh and make it executable, hadoop jar $OSCH_HOME/jlib/orahdfs.jar \ oracle.hadoop.exttab.ExternalTable \ -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \ -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \ -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \ -D oracle.hadoop.exttab.columnCount=7 \ -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \ -D oracle.hadoop.connection.user=BGUSER \ -D oracle.hadoop.exttab.printStackTrace=true \ -createTable --noexecute then test the creation fo the external table with it: [oracle@dg1 samples]$ ./t3.sh ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory Oracle SQL Connector for HDFS Release 2.2.0 - Production Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved. Enter Database Password:] The create table command was not executed. The following table would be created. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files would be created. osch-20131022081035-74-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results: The create table command succeeded. CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED; The following location files were created. 
    osch-20131022081719-3239-1 contains 1 URI, 54103 bytes 54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt This is the view from SQL Developer: and finally the number of lines in the Oracle table, imported from our Hadoop HDFS cluster: SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB"; COUNT(*) ---------- 1151 In a future post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simplistic approach is just a step to show you how this unstructured-data world can be integrated with the Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the Oracle-related technologies:
    NoSQL: Introduction to Oracle NoSQL Database; Using Oracle NoSQL Database
    Big Data: Introduction to Big Data; Oracle Big Data Essentials; Oracle Big Data Overview
    Oracle Data Integrator: Oracle Data Integrator 12c: New Features; Oracle Data Integrator 11g: Integration and Administration; Oracle Data Integrator: Administration and Development; Oracle Data Integrator 11g: Advanced Integration and Development
    Oracle Coherence 12c: Oracle Coherence 12c: New Features; Oracle Coherence 12c: Share and Manage Data in Clusters
    Oracle GoldenGate 11g: Oracle GoldenGate 11g: Fundamentals for Oracle; Oracle GoldenGate 11g: Fundamentals for SQL Server; Oracle GoldenGate 11g Fundamentals for Oracle; Oracle GoldenGate 11g Fundamentals for DB2; Oracle GoldenGate 11g Fundamentals for Teradata; Oracle GoldenGate 11g Fundamentals for HP NonStop; Oracle GoldenGate 11g Management Pack: Overview; Oracle GoldenGate 11g Troubleshooting and Tuning; Oracle GoldenGate 11g: Advanced Configuration for Oracle
    Other Resources: Apache Hadoop: http://hadoop.apache.org/ is the homepage for these technologies. "Hadoop: The Definitive Guide", 3rd Edition, by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references. About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He worked in the banking sector and with AT&T and telco companies, giving him extensive experience with production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Comm Suite throughout the EMEA region.

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >