Search Results

Search found 34162 results on 1367 pages for 'oracle products distributed document capture'.


  • no ocijdbc10 in java.library.path

    - by B.Z.B
    Hey all, I've been plagued by this issue: whenever I try to run my app in Eclipse, I get this error:

        2011-02-23 09:55:08,388 ERROR (com.xxxxx.services.factory.ServiceInvokerLocal:21) - java.lang.UnsatisfiedLinkError: no ocijdbc10 in java.library.path

    I've tried following the steps I found here with no luck. I've tried this on an XP VM as well as Windows 7 (although on Windows 7 I get a different error, below):

        java.lang.UnsatisfiedLinkError: no ocijdbc9 in java.library.path

    I've made sure my Oracle client is OK (by running TOAD), and I also re-added the classes12.jar / ojdbc14.jar files to my WEB-INF/lib folder, taken directly from my %ORACLE_HOME% folder (also re-added them to the lib path). I've also tried adding just the ojdbc14.jar without classes12.jar. In the XP VM my PATH variable is set to C:\Program Files\Java\jdk1.6.0_24\bin;C:\ORACLE\product\10.2.0.1\BIN, and I'm using Tomcat server 5.0. Any suggestions appreciated.
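    A hedged note, not from the thread: this UnsatisfiedLinkError means the OCI JDBC driver cannot find its native ocijdbc DLL on java.library.path, and the pure-Java thin driver avoids native libraries entirely. A minimal sketch with placeholder host, SID, and credentials (requires an ojdbc jar on the classpath):

        // Hypothetical connection check; dbhost, ORCL, scott/tiger are placeholders.
        import java.sql.Connection;
        import java.sql.DriverManager;

        public class ThinDriverCheck {
            public static void main(String[] args) throws Exception {
                // jdbc:oracle:oci:@... needs the native client on java.library.path;
                // jdbc:oracle:thin:@host:port:sid does not.
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:ORCL", "scott", "tiger");
                System.out.println(conn.getMetaData().getDatabaseProductVersion());
                conn.close();
            }
        }

    If OCI is genuinely required (for example, for TNS features), the alternative is making sure the directory containing ocijdbc10.dll is on java.library.path for the Tomcat JVM itself.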

    Read the article

  • VirtualBox error with Ubuntu virtual machine

    - by user2985363
    I am trying to work on a coding project and cannot open my Ubuntu virtual machine with Oracle VM VirtualBox. I took a snapshot yesterday at about 11, and it was working fine; several times I closed and reopened it. Today when I tried to open it, I kept getting the error below:

        Failed to open a session for the virtual machine Ubuntu 12.04 32-bit.
        VM cannot start because the saved state file 'C:\Users\Tyler\VirtualBox VMs\Ubuntu 12.04 32-bit\Snapshots\2014-01-30T19-59-05-976647800Z.sav' is invalid (VERR_FILE_NOT_FOUND). Deleted the saved state prior to starting the VM.

    I tried deleting the file as it said, but none of the snapshots would open still. The file is still in my recycling bin. What can I do? Also, I took the 1/31 snapshot today before I deleted the previous one.
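    A hedged suggestion, not from the thread: VirtualBox can discard a broken saved state from the command line, which forces the VM to cold-boot instead of resuming from the missing .sav file. The VM name below is taken from the error message:

        REM Discards the saved state so the next start is a fresh boot
        VBoxManage discardstate "Ubuntu 12.04 32-bit"

    Any unsaved in-guest state from the suspended session is lost, but the VM itself and its disks are untouched.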

    Read the article

  • Divs' visibility with JavaScript - problem

    - by sammville
    I am trying to use divs to display content on my page. This is controlled with an onchange handler on a select menu. It works, but the problem is that I want one div to close when another one is opened: the divs open fine, but opening one does not close the others. Example code is below (a possible fix is sketched after it). What am I doing wrong?

    JavaScript:

        if (document.getElementById('catgry').value == '01') {
            document.getElementById('post04').style.visibility = "visible";
            document.getElementById('post04').style.display = "";
            document.getElementById('post07').style.visibility = "hidden";
            document.getElementById('post07').style.display = "none";
        } else if (document.getElementById('catgry').value == '02') {
            document.getElementById('post02').style.visibility = "visible";
            document.getElementById('post02').style.display = "";
            document.getElementById('post04').style.visibility = "hidden";
            document.getElementById('post04').style.display = "none";
            document.getElementById('post07').style.visibility = "hidden";
            document.getElementById('post07').style.display = "none";
        }

    HTML:

        <div id="post04" style="visibility:hidden; display:none;">
            <table class="posttb">
                <tr>
                    <td width="30%">Author</td>
                    <td><input type="text" name="author" size="30" class="postfd"></td>
                </tr>
            </table>
        </div>
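    A hedged sketch of one fix: hide every panel first, then reveal only the selected one, so no previously opened div stays visible. The panel ids come from the question; the mapping from select values to panels is assumed from the branches above:

        var panelFor = { "01": "post04", "02": "post02" }; // assumed mapping

        function showPanel() {
            var panels = ["post02", "post04", "post07"];
            // Hide all panels unconditionally...
            for (var i = 0; i < panels.length; i++) {
                var el = document.getElementById(panels[i]);
                el.style.visibility = "hidden";
                el.style.display = "none";
            }
            // ...then show only the one matching the current selection.
            var id = panelFor[document.getElementById('catgry').value];
            if (id) {
                var target = document.getElementById(id);
                target.style.visibility = "visible";
                target.style.display = "";
            }
        }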

    Read the article

  • How to have partial incremental synchronizations based on a GUID?

    - by Gonçalo Veiga
    I need to synchronize an SQL Server database to Oracle through an Oracle Transparent Gateway. The synchronization is performed in batches, so I need to get the next set of data from the point where I left off. The problem I'm having is that the only field I have in the source to help me is a GUID. If it were a number I could just order by it, keep the last one processed, and restart the process by getting the records that are greater than my recorded number. This won't work with a GUID. Any ideas?
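    A hedged sketch of one common approach, not from the thread: add a monotonic column on the SQL Server side and batch on it, leaving the GUID alone. Table and column names here are hypothetical:

        -- Add an ever-increasing batch key alongside the GUID
        ALTER TABLE SourceTable ADD SyncSeq BIGINT IDENTITY(1,1) NOT NULL;

        -- Each batch resumes from the last key it processed
        SELECT TOP (1000) *
        FROM SourceTable
        WHERE SyncSeq > @LastProcessedSeq
        ORDER BY SyncSeq;

    A rowversion column would serve the same purpose and also covers updated rows, at the cost of being a binary value rather than a number.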

    Read the article

  • How to capture a 'sub-section' of a URL in a rewrite rule?

    - by George Edison
    I know the title is a little bit strange, but here is what the URLs look like:

        /user/xxx/page
        /user/xxx/page?error=yyy

    The rule for the first URL looks something like this:

        RewriteRule ^user/(\d+)/page$ something.pl?id=$1 [L]

    And to make it work with the second URL, it becomes:

        RewriteRule ^user/(\d+)/page(error=\d+)?$ something.pl?id=$1 [L]

    My question is: how do I capture the error number? I tried both of these:

        RewriteRule ^user/(\d+)/page(error=(\d+))?$ something.pl?id=$1&error=$2 [L]
        RewriteRule ^user/(\d+)/page(error=(\d+))?$ something.pl?id=$1&error=$3 [L]

    But it isn't working. How can I do this?
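    A hedged sketch of the likely culprit and fix: the pattern in a RewriteRule is matched against the URL path only, never the query string, so error=(\d+) can never capture anything there. The query string is matched in a RewriteCond, whose capture groups are referenced as %1:

        # Query string present: capture the error number via %1
        RewriteCond %{QUERY_STRING} ^error=(\d+)$
        RewriteRule ^user/(\d+)/page$ something.pl?id=$1&error=%1 [L]

        # No (matching) query string: the plain rule as before
        RewriteRule ^user/(\d+)/page$ something.pl?id=$1 [L]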

    Read the article

  • Facebook page linking to external site sign-up process, capture permission to write to wall in process?

    - by steve
    Hi all, I've had a good hunt through the archive but can't find anyone trying to do this; hopefully someone familiar with the Facebook API can confirm whether it's possible. Basically, I have a client who wants to replicate their membership sign-up process in a tab on their Facebook page. The form would still submit to their own website for processing; we'd just be replicating the form fields. As an additional requirement, they want to capture people's Facebook user IDs and get permission to post back to a user's wall at the same time. The idea is that once the user is a member, we can post back to their wall so their friends see that they've signed up. So, basically, a sanity check that: 1) these things are possible to do; and 2) the best method to build the form in a FB page - I'm guessing using JS to create all the fields and Ajax to submit to the external site? Thanks, Steve

    Read the article

  • PL/SQL exception and Java programs

    - by edwards
    Hi. Business logic is coded in PL/SQL packages, procedures, and functions. Java programs call the PL/SQL packages, procedures, and functions to do database work. The PL/SQL programs store exceptions in Oracle tables whenever an exception is raised. How would my Java programs get the exceptions, given that each exception, instead of being propagated from PL/SQL to Java, is persisted to an Oracle table and the procedures/functions just return 1 or 0? Sorry folks, I should have added this constraint much earlier and avoided the confusion: as with many legacy projects, we don't have the freedom to modify the stored procedures.
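    A hedged sketch of one way to surface those persisted errors on the Java side without touching the stored procedures: when the status flag signals failure, read the error table and rethrow. The procedure name, error-table layout, and correlation key below are all hypothetical:

        // Sketch assumes: import java.sql.*; conn is an open Connection, and the
        // legacy convention that a return value of 0 means "failed, see error table".
        void callAndCheck(Connection conn, String requestId) throws SQLException {
            try (CallableStatement cs = conn.prepareCall("{? = call pkg_work.do_work(?)}")) {
                cs.registerOutParameter(1, Types.INTEGER);
                cs.setString(2, requestId);
                cs.execute();
                if (cs.getInt(1) == 0) {
                    try (PreparedStatement ps = conn.prepareStatement(
                            "SELECT error_code, error_msg FROM error_log WHERE request_id = ?")) {
                        ps.setString(1, requestId);
                        try (ResultSet rs = ps.executeQuery()) {
                            if (rs.next()) {
                                // Turn the logged row back into a Java exception
                                throw new SQLException(rs.getString("error_msg"),
                                                       rs.getString("error_code"));
                            }
                        }
                    }
                }
            }
        }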

    Read the article

  • MemoryStream instance timing help

    - by rod
    Hi All, is it OK to instantiate a MemoryStream at the top of my method, do a bunch of stuff to it, and then use it? For instance:

        public static byte[] TestCode()
        {
            MemoryStream m = new MemoryStream();
            // ... whole bunch of stuff in between ...
            // finally
            using (m)
            {
                return m.ToArray();
            }
        }

    Updated code:

        public static byte[] GetSamplePDF()
        {
            using (MemoryStream m = new MemoryStream())
            {
                Document document = new Document();
                PdfWriter.GetInstance(document, m);
                document.Open();
                PopulateTheDocument(document);
                document.Close();
                return m.ToArray();
            }
        }

        private static void PopulateTheDocument(Document document)
        {
            Table aTable = new Table(2, 2);
            aTable.AddCell("0.0");
            aTable.AddCell("0.1");
            aTable.AddCell("1.0");
            aTable.AddCell("1.1");
            document.Add(aTable);
            for (int i = 0; i < 20; i++)
            {
                document.Add(new Phrase("Hello World, Hello Sun, Hello Moon, Hello Stars, Hello Sea, Hello Land, Hello People. "));
            }
        }

    My point was to try to reuse building the byte code. In other words, build up any kind of document and then send it to the TestCode() method.

    Read the article

  • Significance of date 02/31/2157?

    - by dpatchery
    I work in a large-scale IT support environment. Twice now we have seen an invalid date of 02/31/2157 being inserted into an Oracle DATE column. So far I have not been able to reproduce this problem, but it appears to happen occasionally when a user attempts to save '00/00/0000' into the column. I believe the value originates from a PowerBuilder DataWindow update. The application uses myriad libraries for all sorts of technologies, so this question may be a bit vague, but: has anyone seen the date 02/31/2157 in some established library that Oracle could be defaulting to when some other invalid date is entered? Perhaps an end-of-time concept analogous to the beginning-of-time date of 1/1/1970?

    Read the article

  • PL/SQL embedded insert into table that may not exist

    - by Richard
    Hi, I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to the execute-immediate style of dynamic SQL, where you have to delimit quotes, etc.):

        -- a contrived example
        PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE ) IS
        BEGIN
            -- drop table, create table with explicit column list
            CreateReportTableForCustomer;
            INSERT INTO TEMP_TABLE VALUES ( customer, reportdate );
        END;
        /

    The problem here is that Oracle checks whether TEMP_TABLE exists and has the correct number of columns, and throws a compile error if it doesn't exist. So I was wondering if there's any way around that? Essentially I want to use a placeholder for the table name, to trick Oracle into not checking whether the table exists.
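    A hedged sketch of the standard workaround: since static SQL is checked at compile time, the insert has to become dynamic SQL, but bind variables and the q'[]' alternative quoting (Oracle 10g+) keep it nearly as readable as the embedded form:

        PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE ) IS
        BEGIN
            CreateReportTableForCustomer;
            -- Parsed at run time, so the table only has to exist by then
            EXECUTE IMMEDIATE
                q'[INSERT INTO temp_table VALUES (:1, :2)]'
                USING customer, reportdate;
        END;
        /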

    Read the article

  • A MongoDB find() that matches when all $and conditions match the same sub-document?

    - by MichaelOryl
    If I have a set of MongoDB documents like the following, what can I do to get a find() result that only returns the families who have 2 pets who all like liver? Here is what I expected to work:

        db.delegation.find({ pets: 2, $and: [{ 'foods.name': 'liver' }, { 'foods.allLike': true }] })

    Here is the document collection:

        {
            "_id" : ObjectId("5384888e380efca06276cf5e"),
            "family" : "smiths",
            "pets" : 2,
            "foods" : [
                { "name" : "chicken", "allLike" : true },
                { "name" : "liver", "allLike" : false }
            ]
        },
        {
            "_id" : ObjectId("4384888e380efca06276cf50"),
            "family" : "jones",
            "pets" : 2,
            "foods" : [
                { "name" : "chicken", "allLike" : true },
                { "name" : "liver", "allLike" : true }
            ]
        }

    What I end up getting is both families, because they both have at least one food marked as true for allLike. It seems that the two conditions in the $and are true if any foods sub-document matches, but what I want is for the two conditions to match the same sub-document as a pair. As is, I get the Jones family back (as I want) but also Smith (which I don't). Smith gets returned because the chicken sub-doc has allLike set to true and the liver sub-doc has a name of 'liver': the conditions are matching across separate foods sub-docs, and I want them to match as a pair on one foods document. This code is not the real use case, obviously. I have one, but I've simplified it to protect the innocent...
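    A hedged sketch of the usual answer: $elemMatch constrains all of its conditions to a single element of the array, which is exactly the "match as a pair" behavior the $and cannot express:

        // Matches only families where one and the same foods element
        // has name "liver" and allLike true (Jones, but not Smith)
        db.delegation.find({
            pets: 2,
            foods: { $elemMatch: { name: "liver", allLike: true } }
        })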

    Read the article

  • Tunneling through SSH for 1521 port access?

    - by A T
    I am developing locally on my computer, using my own Apache server with PHP configured. My database, however, is located on a remote Oracle 11g Database Server. We were also given a separate remote server for hosting our .html and .php files, but only FTP access has been provided there, and development is far too slow waiting for the FTP connection to push. So I decided to develop locally but still use the remote DB server. Unfortunately that gives me an error. I'm not sure how, or where, to integrate tunnelling: do I add something to the oci_connect HOST in my PHP file, or do I encapsulate my whole environment over SSH?
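    A hedged sketch, assuming the Oracle listener is on the default port 1521 and some SSH-accessible gateway host can reach it (hostnames are placeholders): forward a local port to the listener, then point oci_connect at the local end of the tunnel.

        # Local port 1521 now tunnels to the database server's listener
        ssh -N -L 1521:dbserver.example.com:1521 user@gateway.example.com

    With the tunnel up, only the PHP connection string changes, e.g. oci_connect($user, $pass, 'localhost:1521/ORCL'), so nothing else in the local environment has to move.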

    Read the article

  • Is there a way to capture a bitmap from a WPF window using native C++?

    - by Mike Caron
    Imagine a document window in an MDI application which contains a child WPF window, say a sidebar for example. How can one get a bitmap containing both the WPF pixels AND the GDI (non-WPF) pixels? I've discovered that when making my thumbnail preview for the Win7 taskbar app icon hover, I get black in the parts of the preview where the WPF pixels should be. My current method simply grabs a bitmap capture of the document window. Then I get a DC for the preview, make a memory DC from it, and select my bitmap into it. Then I do some size adjustments and bitblt the memory DC to the real DC. I'm guessing that the BitBlt operation doesn't take into account the fact that the WPF pixels are hardware accelerated and therefore need to be grabbed from the graphics hardware. All the stuff in GDI is managed just fine, though, and when there are no WPF child windows, the preview image looks fine. I'm wondering if it's at all possible to grab a bitmap of the WPF window from native C++. Then I can blt that onto the black area of the previous preview.
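    A hedged sketch of one native-side approach: on Windows 8.1 and later, PrintWindow with the PW_RENDERFULLCONTENT flag asks the window to render itself, including DirectX/WPF-composited content, into a GDI memory DC, so the capture no longer has black where the WPF pixels were:

        #include <windows.h>

        // Captures hwnd (including composited WPF content, Win 8.1+) into an HBITMAP.
        HBITMAP CaptureWindow(HWND hwnd, int width, int height)
        {
            HDC screenDc = GetDC(nullptr);
            HDC memDc = CreateCompatibleDC(screenDc);
            HBITMAP bmp = CreateCompatibleBitmap(screenDc, width, height);
            HGDIOBJ old = SelectObject(memDc, bmp);
            // 0x00000002 == PW_RENDERFULLCONTENT on SDKs that lack the define
            PrintWindow(hwnd, memDc, 0x00000002);
            SelectObject(memDc, old);
            DeleteDC(memDc);
            ReleaseDC(nullptr, screenDc);
            return bmp; // caller owns the bitmap and must DeleteObject() it
        }

    On older systems, an alternative is rendering the WPF child with RenderTargetBitmap on the managed side and compositing that into the GDI capture.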

    Read the article

  • Query on triggers

    - by RBA
    I created a trigger in Oracle:

        SQL> CREATE OR REPLACE TRIGGER student_after_insert
          2  AFTER INSERT
          3  ON student
          4  FOR EACH ROW
          5  BEGIN
          6  @hello.pl
          9  END student_after_insert;
         10  /

    The contents of hello.pl are:

        BEGIN
            DBMS_OUTPUT.PUT_LINE('hello world');
        END;

    And the result is pretty good: the content of hello.pl is displayed on screen when a record is inserted. Now, the query is: when I change the content of the hello.pl file after exiting from Oracle, and then log in again, it doesn't show the updated contents; instead it shows the previous content. I noticed that if I drop the trigger and create it again, it works fine. Why is this happening, and what is the solution to this problem?

    Read the article

  • Why can't we use strong ref cursor with dynamic SQL Statement?

    - by Vineet
    Hi all, I am trying to use a strong ref cursor with a dynamic SQL statement, but it gives an error; when I use a weak cursor it works. Please explain the reason, and please forward me any link about Oracle server architecture covering how compilation and parsing are done in the Oracle server. This is the error, along with the code:

        ERROR at line 6:
        ORA-06550: line 6, column 7:
        PLS-00455: cursor 'EMP_REF_CUR' cannot be used in dynamic SQL OPEN statement
        ORA-06550: line 6, column 2:
        PL/SQL: Statement ignored

        DECLARE
            -- Creating a strong REF CURSOR; employees is a table
            TYPE ref_cur_type IS REF CURSOR RETURN employees%ROWTYPE;
            emp_ref_cur ref_cur_type;
            emp_rec     employees%ROWTYPE;
        BEGIN
            OPEN emp_ref_cur FOR 'SELECT * FROM employees';
            LOOP
                FETCH emp_ref_cur INTO emp_rec;
                EXIT WHEN emp_ref_cur%NOTFOUND;
            END LOOP;
        END;
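    A hedged explanation with a sketch: a strong ref cursor's RETURN clause is a compile-time contract, and the compiler cannot type-check a SQL string that only exists at run time, hence PLS-00455. A weakly typed cursor (SYS_REFCURSOR) carries no compile-time row type, so it is the one allowed in a dynamic OPEN:

        DECLARE
            emp_ref_cur SYS_REFCURSOR;           -- weak: no compile-time row type
            emp_rec     employees%ROWTYPE;
        BEGIN
            OPEN emp_ref_cur FOR 'SELECT * FROM employees';
            LOOP
                FETCH emp_ref_cur INTO emp_rec;  -- type checked at fetch time
                EXIT WHEN emp_ref_cur%NOTFOUND;
            END LOOP;
            CLOSE emp_ref_cur;
        END;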

    Read the article

  • Store Business Rules in XML Document, Validate afterwards in Java, how?

    - by JavaPete
    Example XML rules document:

        <user>
            <username>
                <not-null/>
                <capitals value="false"/>
                <max-length value="15"/>
            </username>
            <email>
                <not-null/>
                <isEmail/>
                <max-length value="40"/>
            </email>
        </user>

    How do I implement this? I'm starting from scratch. What I currently have is a User class and a UserController which saves the User object in the DB (through a Service layer and DAO layer), basic Spring MVC. However, I can't use Spring MVC validation in our model classes; I have to use an XML document so an admin can change the rules. I think I need a pattern which dynamically builds an algorithm based on what is provided by the XML rules document, but I can't seem to think of anything other than a massive amount of if-statements. I also have nothing for the parsing yet, and I'm not sure how I'm going to (de)couple it from the actual implementation of the validation process.
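    A hedged sketch of the usual way out of the if-statement pile: treat each rule element name as a key into a registry of small validator objects, so adding a rule is a map entry rather than another branch. Everything below (class name, signatures) is illustrative, not an existing API:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.BiPredicate;

        public class RuleRegistry {
            // Each predicate receives (fieldValue, ruleArgument) and says pass/fail
            private final Map<String, BiPredicate<String, String>> rules = new HashMap<>();

            public RuleRegistry() {
                rules.put("not-null", (v, arg) -> v != null && !v.isEmpty());
                rules.put("max-length", (v, arg) -> v == null || v.length() <= Integer.parseInt(arg));
                rules.put("capitals", (v, arg) -> Boolean.parseBoolean(arg)
                        || v == null || v.equals(v.toLowerCase()));
                rules.put("isEmail", (v, arg) -> v == null || v.matches(".+@.+\\..+"));
            }

            /** ruleName and valueAttr come from the parsed XML element, e.g. max-length / "15". */
            public boolean check(String ruleName, String valueAttr, String fieldValue) {
                BiPredicate<String, String> rule = rules.get(ruleName);
                return rule == null || rule.test(fieldValue, valueAttr);
            }
        }

    The XML parser (DOM or JAXB) then just walks each field's child elements and calls check(), which keeps the parsing decoupled from the validation itself.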

    Read the article

  • Selecting a sequence NEXTVAL for multiple rows

    - by stringpoet
    I am building a SQL Server job to pull data from SQL Server into an Oracle database through a linked server. The table I need to populate has a sequence for the name ID, which is my primary key. I'm having trouble figuring out a way to do this simply, without some lengthy code. Here's what I have so far for the SELECT portion (some actual names obfuscated):

        SELECT (SELECT NEXTVAL
                FROM OPENQUERY(MYSERVER, 'SELECT ORCL.NAME_SEQNO.NEXTVAL FROM DUAL')),
               psn.BirthDate,
               psn.FirstName,
               psn.MiddleName,
               psn.LastName,
               c.REGION_CODE
        FROM Person psn
        LEFT JOIN MYSERVER..ORCL.COUNTRY c ON c.COUNTRY_CODE = psn.Country

    MYSERVER is the linked Oracle server; ORCL is obviously the schema. Person is a local table on the SQL Server database where the query is being executed. When I run this query, I get the same exact NEXTVAL for all records. What I need is for it to generate a new value for each returned record. I found this similar question, with its answers, but am unsure how to apply it to my case (if even possible): Query several NEXTVAL from sequence in one statement
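    A hedged sketch of one way around this, not from the linked answers: the OPENQUERY subquery is evaluated once, so instead of fetching NEXTVAL per row from the SQL Server side, let Oracle assign the key itself with a trigger on the target table. Trigger, table, and column names are illustrative:

        CREATE OR REPLACE TRIGGER name_bi
        BEFORE INSERT ON name_table
        FOR EACH ROW
        WHEN (new.name_id IS NULL)
        BEGIN
            SELECT name_seqno.NEXTVAL INTO :new.name_id FROM dual;
        END;
        /

    The SQL Server job then inserts only the data columns through the linked server and never mentions the sequence at all.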

    Read the article

  • Java date long value changed after insert, select query

    - by StackExploded
    AFAIK, the Java Date type is independent of time zone, meaning it represents a specific moment in time as a long value. I found something really weird here. This is the original value I tried to insert:

        (http-0.0.0.0-9080-4) 1352955600000  <-- long integer
        (http-0.0.0.0-9080-4) Thu Nov 15 00:00:00 EST 2012  <-- user-friendly format

    After I finished inserting it into the Oracle 11g database, the value had changed:

        (http-0.0.0.0-9080-4) 1352952000000
        (http-0.0.0.0-9080-4) Wed Nov 14 23:00:00 EST 2012

    How could this happen? The weirder thing is that it only happens in specific environments, such as JBoss. I'm currently using the environments below:

        java 1.6
        ibatis 2.34
        jboss-5.1 (server)
        tomcat 6.0 (local)
        oracle 11g

    Is there anybody who can give me a clue or a helpful link? It's really bugging me!
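    A hedged observation with a sketch: the two longs differ by exactly one hour, which smells like the JDBC driver interpreting the value in two different default time zones (the JBoss JVM's versus the local one). One way to pin this down is to bind the timestamp with an explicit Calendar so the driver stops using the JVM default; ps below is assumed to be the PreparedStatement doing the insert:

        // Bind using an explicit zone instead of the JVM default
        java.util.Calendar utc =
                java.util.Calendar.getInstance(java.util.TimeZone.getTimeZone("UTC"));
        ps.setTimestamp(1, new java.sql.Timestamp(1352955600000L), utc);

    Comparing java.util.TimeZone.getDefault() on JBoss and on the local Tomcat (or checking for a -Duser.timezone JVM flag) would confirm whether the two environments disagree.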

    Read the article

  • Call a Javascript Function with Jquery

    - by danit
    I have two functions in JavaScript:

        function getWindowWidth() {
            var x = 0;
            if (self.innerWidth) {
                x = self.innerWidth;
            } else if (document.documentElement && document.documentElement.clientWidth) {
                x = document.documentElement.clientWidth;
            } else if (document.body) {
                x = document.body.clientWidth;
            }
            return x;
        }

        function getWindowHeight() {
            var y = 0;
            if (self.innerHeight) {
                y = self.innerHeight;
            } else if (document.documentElement && document.documentElement.clientHeight) {
                y = document.documentElement.clientHeight;
            } else if (document.body) {
                y = document.body.clientHeight;
            }
            return y;
        }

    These appear to get the width and height of the browser window, based on the size of the viewport (I could be wrong here). What I have done is embed a toolbar above this document; this toolbar can be hidden or shown at various points. I need the above functions to be called when I use the following jQuery:

        $("#Main").animate({top: "89px"}, 200);

    Any assistance greatly appreciated!
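    A hedged sketch: jQuery's animate() accepts a completion callback as its third argument, so the measurements can run as soon as the toolbar animation finishes:

        $("#Main").animate({ top: "89px" }, 200, function () {
            var w = getWindowWidth();
            var h = getWindowHeight();
            // use w and h here, e.g. to resize #Main to the remaining space
        });

    If the functions also need to run on every window resize, the same calls can be bound with $(window).resize(...).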

    Read the article

  • SOA Suite Integration: Part 2: A basic BPEL process

    - by Anthony Shorten
    This is the next in the series about SOA Suite integration with Oracle Utilities Application Framework. One of the first scenarios I am going to illustrate in this series is building a basic BPEL process using Web Service calls to the Oracle Utilities Application Framework.

    The scenario is this: I will pass in the userid, and the BPEL process will call out to the AS-User Web Service we created in Part 1. This is just a basic test that also illustrates how to import the Web Service into SOA Suite. To use this scenario, you will need access to Oracle SOA Suite, access to a copy of any Oracle Utilities Application Framework based product, and Oracle JDeveloper (to build the process).

    First of all, you need to start Oracle JDeveloper and create a new SOA Project to house the BPEL process. For the purposes of this example I will call the project simpleBPEL and verify that SOA is part of the project. I will select "Composite with BPEL" to denote it as a BPEL process. The same process can be used to create a Mediator or OSB project (refer to the JDeveloper documentation on those technologies). For this example I will use BPEL 1.1 as my specification standard (BPEL 2.0 can also be used if desired). I give the individual BPEL process the name simpleBPEL (you can use a different name, but I wanted to keep the project and process the same for this example). I will also build a Synchronous BPEL Process, as I want a response from the Web Service. I will leave the defaults to save time. I now have a blank canvas to build my BPEL process against.

    Note: for simplicity I am going to use as much defaulting as possible. In fact, I am not going to specify an input schema for the incoming call, as I will use the basic single field used by BPEL as the default.

    The first step is to import the AS-User Web Service into my BPEL project. To do this I use the standard Web Service BPEL component from the Component Palette to import the WSDL into the BPEL project. Now the tricky part (a joke): you drag and drop the component from the Palette onto the right side of the canvas, in the Partner Links swim lane. This swim lane is reserved for Partner Links that have a Partner Role (i.e. being called rather than calling).

    When you drop the Web Service onto the canvas, the Create Web Service wizard is invoked to ask for details of the Web Service. At this point you give the BPEL node a name; I have used the name RetrieveUser. I placed the WSDL URL from the XAI Inbound Service screen in the WSDL URL field. Once you specify the URL, you can press the Find existing WSDLs button to load the information into BPEL from the call. You will notice the Port Type is prefilled with the port from the WSDL. I also suggest that you check "copy wsdl and its dependent artifacts into the project" if you intend to work on the BPEL process offline. If you do not check this, your target application must be accessible whenever you work on the BPEL process (and that is not always convenient).

    Note: the perceptive among you will notice that the URL specified in this example is different from the URL in the last post. The reason is that for the demonstrations I shifted to a new server and did not redo all of the past screen captures.

    If you copy the WSDL into the project, you will get an information screen about Localize Files. It is just a confirmation screen. The last confirmation screen is a summary of the partner link (the main tab is locked for editing at this stage). At this stage you have successfully imported the Web Service.

    To complete the setup of the Web Service, you need to set the credentials for the Web Service to use. Refer to the past post on how to do that.

    Now to use the Web Service. To call the Web Service (as it is just imported, not connected to the BPEL process yet), you must add an Invoke action to your BPEL process. To do this, select the Invoke action from the BPEL Constructs zone on the Component Palette and drop it on the edit nodes between the receiveInput and replyOutput nodes. This will create an empty Invoke action. You will notice some connectors on the Invoke node. Grab the node closest to your Web Service and drag it to connect the Invoke to your Web Service. This instructs BPEL to use the Invoke to call the Web Service.

    Once the Invoke action is connected to the Web Service, an Edit Invoke dialog is displayed. At this point I suggest you name the Invoke node. It is important to name the nodes straight away, and name them appropriately, so you can trace the logic; I used InvokeUser as the name in this example. To complete the node configuration, you must create variables to hold the input and output for the call. To do this, click Automatically Create Input Variable on the Edit Invoke dialog. You will be presented with a default variable name; it uses the node name as a prefix (that is why it is important to name the node before hitting this button). You can name the variable anything, but I usually take the default. Repeat the same for the output variable. You now have a completed node for invoking the service.

    You now have a very basic BPEL process which contains an input, an invoke, and an output node. It is not complete yet, though. You need to tell the BPEL process how to pass data from the input to the invoke step, and how to take the output from the service call and pass it back to the service.

    You now need to add an Assign node to assign the input to the Web Service. To do this, select the Assign activity from the BPEL Constructs zone in the Component Palette. Drag and drop the Assign activity between the receiveInput and InvokeUser nodes, as you want to pass data between these two nodes. You have now added a new Assign node to your BPEL process.

    Double clicking the node allows you to specify its name; I used AssignUser to describe that I am assigning user data. On the Copy Rules tab you can specify the mapping between the input variable inputVariable/payload/process/input string and the input variable for the Web Service call. We are passing data from the input to BPEL to the relevant input variable on the Web Service. This is simply drag and drop between the two data structures. In the example, I am mapping the input to the user element in my Web Service, as the user is the primary key for the object. The fields become linked (which means data from the source will be copied to the target).

    Almost there. You now need to process the output from the Web Service call into the outputVariable of the client call. I have decided to pass back one piece of data, the name associated with the user, by concatenating the firstName and lastName elements from the Web Service call. To do this I will use a Transform, as it is not just a matter of an Assign action; it is a concatenation operation. This also illustrates how you can use BPEL functionality to transform data from a Web Service call. As with the other components, you drag and drop the Transform component to the appropriate place in the BPEL process.

    In this case we want to transform the output from the Web Service call, so we want it between the InvokeUser action and the replyOutput action. The Transform component is actually part of the Oracle extensions to the BPEL specification. Double clicking the Transform node allows you to name the node; in this example I used TransformName. To complete the transform, I need to tell the product the source of the transformation and the target of the transform. In the example, the source is the InvokeUser output variable. I also named the mapper file TransformName. By clicking the + or pencil icon next to the map, I can create the map.

    The mapping screen shows the source and target schemas for me to map across. As with the Assign, I can map the relevant elements. In my example, I first map the firstName from the Web Service to the result element. As I want to concatenate the names, I drop the concat function on the call line. I then attach the last name to the function to indicate the concatenation of the fields. By default the names will be concatenated with no space; to make the name legible, I add a space between the fields by clicking the function and adding a space in the call. I now have a completed mapping.

    I can now save the whole project, as my BPEL process is complete. As you can see, the following happens:

    - We accept input from the client (the userid for the call) in the receiveInput step.
    - We assign that value to the input parameters for the Web Service call in the AssignUser step.
    - We invoke the Web Service call to retrieve the data from the product in the InvokeUser step.
    - We take the output from the InvokeUser step and concatenate the names in the TransformName step.
    - We pass back the data in the replyOutput step.

    At this point we can deploy the BPEL process to the SOA Suite server. I will not cover this aspect, as it is really all SOA Suite specific (it is all done via Oracle JDeveloper). Now we need to test the service in SOA Suite. We will use the Fusion Middleware Control test facility. I will assume that credentials have also been set up as per our previous post (or else you will get a 401 error). You navigate to the deployed BPEL process within Fusion Middleware Control and select the Test Service option. Specify some test data in the payload at the bottom of the Test Service screen; in my case I am returning my own userid information. On the Response tab you will see the result: it works. You can verify the steps using the Audit trace facility on individual calls.

    As you can see, this is a basic BPEL process, but you get the idea: importing the Web Service is pretty straightforward. You can create more sophisticated BPEL processes using the full facilities in Oracle SOA Suite. I have just shown you the basic principles.

    Read the article

  • Is RTD Stateless or Stateful?

    - by [email protected]
    Yes.

    A stateless service is one where each request is an independent transaction that can be processed by any of the servers in a cluster. A stateful service is one where state is kept in a server's memory from transaction to transaction, thus necessitating the proper routing of requests to the right server. The main advantage of stateless systems is simplicity of design. The main advantage of stateful systems is performance.

    I'm often asked whether RTD is a stateless or stateful service, so I wanted to clarify this issue in depth so that RTD's architecture will be properly understood. The short answer is: "RTD can be configured as a stateless or stateful service."

    The performance difference between stateless and stateful systems can be very significant, and while in a call center implementation it may be reasonable to use a pure stateless configuration, a web implementation that produces thousands of requests per second is practically impossible with a stateless configuration.

    RTD's performance is orders of magnitude better than most competing systems. RTD was architected from the ground up to achieve this performance. Features like automatic and dynamic compression of prediction models, automatic translation of metadata to machine code, lack of interpreted languages, and separation of model building from decisioning all contribute to achieving this performance level. Because of this focus on performance, we decided to have RTD's default configuration work in a stateful manner. By being stateful, RTD requests are typically handled in a few milliseconds when repeated requests come to the same session.

    Now, those readers that have participated in implementations of RTD know that RTD's architecture is also focused on reducing Total Cost of Ownership (TCO), with features like automatic model building, automatic time windows, automatic maintenance of database tables, automatic evaluation of data mining models, automatic management of models partitioned by channel, geography, etcetera, and hot swapping of configurations.

    How do you reconcile the need for a low TCO and the need for performance? How do you get the performance of a stateful system with the simplicity of a stateless system? The answer is that you make the system behave like a stateless system to the exterior, but you let it automatically take advantage of situations where being stateful is better.

    For example, one of the advantages of stateless systems is that you can route a message to any server in a cluster, without worrying about sending it to the same server that was handling the session in previous messages. With an RTD stateful configuration you can still route the message to any server in the cluster, so from the point of view of the configuration of other systems, it is the same as a stateless service. The difference comes in performance, because if the message arrives at the right server, RTD can serve it without any external access to the session's state, thus tremendously reducing processing time. In typical implementations it is not rare to have high percentages of messages routed directly to the right server, while those that are not are easily handled by forwarding the messages to the right server. This architecture usually provides the best of both worlds, with performance and simplicity of configuration.

    Configuring RTD as a pure stateless service

    A pure stateless configuration requires session data to be persisted at the end of handling each and every message, and that data to be reloaded at the beginning of handling any new message. This is, of course, the root of the inefficiency of these configurations. It is also the reason why many "stateless" implementations actually do keep state, to take advantage of a request coming back to the same server. Nevertheless, if the implementation requires a pure stateless decision service, this is easy to configure in RTD. The way to do it is:

    - Mark every Integration Point to close the session at the end of processing the message
    - In the Session entity, persist the session data on closing the session
    - In the Session entity, check whether a persisted version exists and load it

    An excellent solution for persisting the session data is Oracle Coherence, which provides a high performance, distributed cache that minimizes the performance impact of persisting and reloading the session. Alternatively, the session can be persisted to a local database.

    An interesting feature of the RTD stateless configuration is that it can cope with serializing concurrent requests for the same session. For example, if a web page produces two requests to the decision service, these requests could come concurrently to the decision service and be handled by different servers. Most stateless implementations would have the two requests step on each other when saving the state, or fail one of the messages. When properly configured, RTD will make one message wait for the other before processing.

    A Word on Context

    Using the context of a customer interaction typically increases lift significantly. For example, offer success in a call center could double if the context of the call is taken into account. For this reason, it is important to utilize the contextual information in decision making. To make the contextual information available throughout a session, it needs to be persisted. When there is a well defined owner for the information, there is no problem, because in case of a session restart the information can be easily retrieved. If there is no official owner of the information, then RTD can be configured to persist it.

    Once again, RTD provides the flexibility to ensure high performance when it is adequate to allow for some loss of state in the rare cases of server failure. For example, in a heavily used web site that serves 1000 pages per second, the navigation history may be stored in the in-memory session. In such sites it is typical that there is no OLTP system that stores all the navigation events, so if an RTD server were to fail, it would be possible for the navigation up to that point to be lost (note that a new session would be immediately established on one of the other servers). In most cases the loss of this navigation information would be acceptable, as it would happen rarely. If it is desired to save this information, RTD would persist it every time the visitor navigates to a new page. Note that this practice is preferred whether RTD is configured in a stateless or stateful manner.

    Read the article

  • Common Live Upgrade problems

    - by user12611829
    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    Live Upgrade copies over ZFS root clone

    This was introduced in Solaris 10 10/09 (u8), and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.

        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        Comparing source boot environment file systems with the file system(s)
        you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Creating file systems on boot environment .
        Creating file system for in zone on .
        The error indicator -----
        /usr/lib/lu/lumkfs: test: unknown operator zfs
        Populating file systems on boot environment .
        Checking selection integrity.
        Integrity check OK.
        Populating contents of mount point .
        This should not happen ------
        Copying.
        Ctrl-C and cleanup

    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem: the root file system filling up - again thanks to a redundant copy. This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

        # patchadd -p | grep 121431
        Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
        Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
        # unzip 121431-58
        # patchadd 121431-58
        Validating patches...
        Loading patches installed on the system... Done!
        Loading patches requested to install. Done!
        Checking patches that you specified for installation. Done!
        Approved patches will be installed in this order: 121431-58
        Checking installed patches...
        Executing prepatch script...
        Installing patch packages...
        Patch 121431-58 has been successfully installed.
        See /var/sadm/patch/121431-58/log for details
        Executing postpatch script...
        Patch packages installed: SUNWlucfg SUNWlur SUNWluu
        # lucreate -n s10u9-baseline
        Checking GRUB menu...
        System has findroot enabled GRUB
        Analyzing system configuration.
        INFORMATION: Unable to determine size or capacity of slice .
        Comparing source boot environment file systems with the file system(s)
        you specified for the new boot environment.
        Determining which file systems should be in the new boot environment.
        INFORMATION: Unable to determine size or capacity of slice .
        Updating boot environment description database on all BEs.
        Updating system configuration files.
        Creating configuration for boot environment .
        Source boot environment is .
        Creating boot environment .
        Cloning file systems from boot environment to create boot environment .
        Creating snapshot for on .
        Creating clone for on .
        Setting canmount=noauto for in zone on .
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        Saving existing file in top level dataset for BE as //boot/grub/menu.lst.prev.
        File propagation successful
        Copied GRUB menu from PBE to ABE
        No entry for BE in GRUB menu
        Population of boot environment successful.
        Creation of boot environment successful.

    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.

        # cat /etc/lu/ICF.3
        s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
        s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
        s10u8-baseline:/vbox:pandora/vbox:zfs:0
        s10u8-baseline:/setup:pandora/setup:zfs:0
        s10u8-baseline:/export:pandora/export:zfs:0
        s10u8-baseline:/pandora:pandora:zfs:0
        s10u8-baseline:/panroot:panroot:zfs:0
        s10u8-baseline:/workshop:pandora/workshop:zfs:0
        s10u8-baseline:/export/iso:pandora/iso:zfs:0
        s10u8-baseline:/export/home:pandora/home:zfs:0
        s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
        s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0

    Solaris 10 9/10 introduces a new autoregistration file

    This one is actually mentioned in the Oracle Solaris 9/10 release notes. I know, I hate it when that happens too. Here's what the "error" looks like.

        # luupgrade -u -s /mnt -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        ERROR: The auto registration file does not exist or incomplete.
        The auto registration file is mandatory for this upgrade.
        Use -k argument along with luupgrade command.
        autoreg_file is path to auto registration information file.
        See sysidcfg(4) for a list of valid keywords for use in this file.
        The format of the file is as follows.
        oracle_user=xxxx
        oracle_pw=xxxx
        http_proxy_host=xxxx
        http_proxy_port=xxxx
        http_proxy_user=xxxx
        http_proxy_pw=xxxx
        For more details refer "Oracle Solaris 10 9/10 Installation Guide:
        Planning for Installation and Upgrade".

    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade. Here is an example.

        # echo "autoreg=disable" > /var/tmp/no-autoreg
        # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
        System has findroot enabled GRUB
        No entry for BE in GRUB menu
        Copying failsafe kernel from media.
        61364 blocks
        miniroot filesystem is
        Mounting miniroot at
        #######################################################################
        NOTE: To improve products and services, Oracle Solaris communicates
        configuration data to Oracle after rebooting.
        You can register your version of Oracle Solaris to capture this data
        for your use, or the data is sent anonymously.
        For information about what configuration data is communicated and how
        to control this facility, see the Release Notes or
        www.oracle.com/goto/solarisautoreg.
        INFORMATION: After activated and booted into new BE ,
        Auto Registration happens automatically with the following Information
        autoreg=disable
        #######################################################################
        Validating the contents of the media .
        The media is a standard Solaris media.
        The media contains an operating system upgrade image.
        The media contains version .
        Constructing upgrade profile to use.
        Locating the operating system upgrade program.
        Checking for existence of previously scheduled Live Upgrade requests.
        Creating upgrade profile for BE .
        Checking for GRUB menu on ABE .
        Saving GRUB menu on ABE .
        Checking for x86 boot partition on ABE.
        Determining packages to install or upgrade for BE .
        Performing the operating system upgrade of the BE .
        CAUTION: Interrupting this process may leave the boot environment
        unstable or unbootable.

    The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    Technocrati Tags: Oracle Solaris Patching Live Upgrade

    Read the article
