Search Results

Search found 27143 results on 1086 pages for 'component based'.


  • Converting table based layout into a div/css based one.

    - by nimo9367
    I'm supposed to rewrite the UI for a rather large web application. The layout is completely based on tables, and if I could somehow semi-automatically convert the tables into divs it would save me a huge amount of time. What are the best practices for doing something like this? Is it a good idea at all? The application uses layout files (containing something similar to helpers) that are parsed into HTML at runtime, and the application itself also outputs HTML at specific places. So the work will consist of converting these helpers and the HTML-output code within the application.
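
    One way to approach the semi-automatic part is a small script that renames table markup to divs carrying CSS hooks. This is only a rough sketch using Python and BeautifulSoup; the class names are made up:

        # rough sketch: rename table markup to divs and tag them with CSS classes (bs4 assumed installed)
        from bs4 import BeautifulSoup

        def tables_to_divs(html):
            soup = BeautifulSoup(html, "html.parser")
            for tag_name, css_class in (("table", "tbl"), ("tr", "tbl-row"), ("td", "tbl-cell"), ("th", "tbl-head")):
                for tag in soup.find_all(tag_name):
                    tag.name = "div"                              # keep the children, change the element type
                    tag["class"] = tag.get("class", []) + [css_class]
                    for attr in ("cellpadding", "cellspacing", "border", "width", "align", "valign"):
                        tag.attrs.pop(attr, None)                 # drop presentational attributes
            return str(soup)

    The converted markup still needs a stylesheet (for example display: table / table-row / table-cell on the new classes) before it renders the same, and anything the application emits at runtime would still have to be converted by hand.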

    Read the article

  • Validating an e-mail field based on a specific domain

    - by mmsa
    I have a site that will require users to log in. The client will only allow users from their company's domain to gain access. I need to validate the field based on an e-mail domain address, i.e. only allow e-mail addresses from @mycompany.com to go through. Can this be done with the jquery.validate plugin? I see where you can check whether an address is a valid e-mail, but I'd also like to make sure it matches a specific pattern (@mycompany.com). Any help would be appreciated!
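
    A sketch of one way to do this with jquery.validate's addMethod; the rule name and the form/field names here are assumptions:

        // sketch: custom rule restricting the address to one domain (jquery.validate assumed loaded)
        $.validator.addMethod("companyEmail", function (value, element) {
            return this.optional(element) || /^[^@\s]+@mycompany\.com$/i.test(value);
        }, "Please use your @mycompany.com address.");

        $("#loginForm").validate({
            rules: {
                email: { required: true, email: true, companyEmail: true }
            }
        });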

    Read the article

  • Android Loading & Playing Sound Based on String

    - by Chance
    I'm currently working on a simple Android app, and right now I am trying to get it to load and play sounds. The problem is that I want the sound it uses to be chosen based on a string (with the same name as the sound file). The reason for this is simplicity, both in the code and when adding to it later. Unfortunately I can't just drop a string in place of a reference to the actual sound, but is there some way for me to compare a string against the entire raw folder to find the matching sound, or some other alternative short of defining every sound manually? Thank you for your time.
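
    A sketch of the usual trick for this: resolve the resource id at runtime with getIdentifier(). The class and log tag names are made up:

        // sketch: look up res/raw/<name> by string and play it; returns quietly if no such resource exists
        import android.content.Context;
        import android.media.MediaPlayer;
        import android.util.Log;

        public final class SoundByName {
            public static void play(Context context, String name) {
                int resId = context.getResources().getIdentifier(name, "raw", context.getPackageName());
                if (resId == 0) {
                    Log.w("SoundByName", "No raw resource called " + name);
                    return;
                }
                MediaPlayer player = MediaPlayer.create(context, resId);
                player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
                    @Override
                    public void onCompletion(MediaPlayer mp) {
                        mp.release();   // free the player once the clip has finished
                    }
                });
                player.start();
            }
        }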

    Read the article

  • Writing an AI for a turn-based board game

    - by Cyril
    Hi, I'm currently programming a board game (8x8) for which I need to develop an AI. I have read a lot of articles about AI in board games, minimax with or without alpha-beta pruning, but I don't really know how to implement it; I don't know where to start. About my game: it is turn-based, each player has pieces on the board, and on each turn a player picks one piece and either moves it (1 or 2 cells max) or clones it (1 cell max). At the moment I have a really stupid AI which chooses a random piece and then a random move to play. Could you please give me some clues on how to implement this functionality? Best regards
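
    A language-agnostic starting point, shown here as a Python sketch of minimax with alpha-beta pruning; Board, legal_moves(), apply() and evaluate() stand in for whatever the game code already provides:

        # sketch: depth-limited minimax with alpha-beta pruning; returns (score, best_move)
        import math

        def minimax(board, depth, alpha, beta, maximizing):
            if depth == 0 or board.is_game_over():
                return board.evaluate(), None        # heuristic score from the AI player's point of view
            best_move = None
            if maximizing:
                best = -math.inf
                for move in board.legal_moves():
                    score, _ = minimax(board.apply(move), depth - 1, alpha, beta, False)
                    if score > best:
                        best, best_move = score, move
                    alpha = max(alpha, best)
                    if beta <= alpha:                # opponent already has a better option elsewhere
                        break
            else:
                best = math.inf
                for move in board.legal_moves():
                    score, _ = minimax(board.apply(move), depth - 1, alpha, beta, True)
                    if score < best:
                        best, best_move = score, move
                    beta = min(beta, best)
                    if beta <= alpha:
                        break
            return best, best_move

    For a clone-or-move game of this kind, a first evaluate() could simply be (own pieces minus opponent pieces); the random AI then becomes "play whatever minimax returns at depth 3 or 4".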

    Read the article

  • Using iPhotoGallery in a tab bar based application

    - by suranga
    Hi, I need to use a photo gallery similar to the one iPhone uses. For this I have downloaded the iPhotoGallery source code from google svn repository. (http://code.google.com/p/iphotogallery/). I need to know how to use this project in our own tab bar based application. i.e. I want to load the thumbs view when I go to a particular tab in my application. Does anybody have a clue?

    Read the article

  • Emacs auto-minor-mode based on extension

    - by vermiculus
    I found this question somewhat on topic, but is there a way in Emacs to set a minor mode (or a list thereof) based on the file extension? For example, it's pretty easy to find out that major modes can be manipulated like so:

        (setq auto-mode-alist (cons '("\\.notes$" . text-mode) auto-mode-alist))

    What I'd ideally like to be able to do is:

        (setq auto-minor-mode-alist (cons '("\\.notes$" . auto-fill-mode) auto-minor-mode-alist))

    The accepted answer of the linked question mentions hooks, specifically temp-buffer-setup-hook. To use this, you have to add a function to the hook like so:

        (add-hook 'temp-buffer-setup-hook 'my-func-to-set-minor-mode)

    My question is two-fold: Is there an easier way to do this, similar to major modes? If not, how would one write the function for the hook? It needs to check the file path against a regular expression and, if it matches, activate the desired mode (e.g. auto-fill-mode).
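
    For the second half of the question, a sketch of such a hook function; it uses find-file-hook rather than temp-buffer-setup-hook, which is an assumption about where the check should run:

        ;; sketch: turn on auto-fill-mode for any visited file whose name ends in .notes
        (defun my-set-minor-modes-by-extension ()
          (when (and buffer-file-name
                     (string-match "\\.notes\\'" buffer-file-name))
            (auto-fill-mode 1)))
        (add-hook 'find-file-hook 'my-set-minor-modes-by-extension)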

    Read the article

  • Sorting an array based on its value

    - by MaxerDude
    I have an array. Sample:

        $array {
            [0] { [something]=1; [something2]=2; }
            [1] { [something]=2; [something2]=4; }
            [2] { [something]=5; [something2]=2; }
        }

    I want to order the array based on the key something, so it will look like:

        $array {
            [0] { [something]=5; [something2]=2; }
            [1] { [something]=2; [something2]=4; }
            [2] { [something]=1; [something2]=2; }
        }
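
    A sketch of the usual way to do this in PHP with usort(); the spaceship operator assumes PHP 7 or later:

        // sketch: sort the outer array in place, descending by the 'something' key
        usort($array, function ($a, $b) {
            return $b['something'] <=> $a['something'];
        });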

    Read the article

  • TSQL to find the start and end date (set based)

    - by priyanka.sarkar_2
    I have the data below:

        Name    Date
        A       2011-01-01 01:00:00.000
        A       2011-02-01 02:00:00.000
        A       2011-03-01 03:00:00.000
        B       2011-04-01 04:00:00.000
        A       2011-05-01 07:00:00.000

    The desired output is:

        Name    StartDate                   EndDate
        A       2011-01-01 01:00:00.000     2011-04-01 04:00:00.000
        B       2011-04-01 04:00:00.000     2011-05-01 07:00:00.000
        A       2011-05-01 07:00:00.000     NULL

    How can I achieve this using TSQL in a set-based approach? The DDL is as under:

        DECLARE @t TABLE(PersonName VARCHAR(32), [Date] DATETIME)
        INSERT INTO @t VALUES('A', '2011-01-01 01:00:00')
        INSERT INTO @t VALUES('A', '2011-01-02 02:00:00')
        INSERT INTO @t VALUES('A', '2011-01-03 03:00:00')
        INSERT INTO @t VALUES('B', '2011-01-04 04:00:00')
        INSERT INTO @t VALUES('A', '2011-01-05 07:00:00')
        Select * from @t
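
    One set-based sketch uses the gaps-and-islands pattern; LEAD() assumes SQL Server 2012 or later (an older version would need a self-join or OUTER APPLY instead):

        -- sketch: group consecutive rows per person, then pair each island's start with the next island's start
        ;WITH numbered AS (
            SELECT PersonName, [Date],
                   ROW_NUMBER() OVER (ORDER BY [Date])
                 - ROW_NUMBER() OVER (PARTITION BY PersonName ORDER BY [Date]) AS grp
            FROM @t
        ), islands AS (
            SELECT PersonName, MIN([Date]) AS StartDate
            FROM numbered
            GROUP BY PersonName, grp
        )
        SELECT PersonName, StartDate,
               LEAD(StartDate) OVER (ORDER BY StartDate) AS EndDate
        FROM islands
        ORDER BY StartDate;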

    Read the article

  • LINQ Query based on user preferences

    - by Chris Phelps
    How can I do this better (so it actually works)? I have a LINQ query that includes an Order By based on a user preference: the user can decide whether they would like the results ordered ascending or descending.

        If fuserclass.SortOrder = "Ascending" Then
            Dim mydat = (From c In dc.ITRS Order By c.Date Ascending Select c)
        Else
            Dim mydat = (From c In dc.ITRS Order By c.Date Descending Select c)
        End If
        For Each mydata In mydat   ' <<< error: "mydat is not declared"

    I know I could put my For Each loop inside the If and Else, but it seems silly to have the same code twice. I know you know of a better way :)
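
    A sketch of the usual fix: declare the variable once, outside the If, and only assign it inside. The element type name is an assumption:

        ' sketch: mydat stays in scope after the If/Else, so the loop is written only once
        Dim mydat As IEnumerable(Of ITR)   ' ITR = assumed entity type behind dc.ITRS
        If fuserclass.SortOrder = "Ascending" Then
            mydat = From c In dc.ITRS Order By c.Date Ascending Select c
        Else
            mydat = From c In dc.ITRS Order By c.Date Descending Select c
        End If

        For Each mydata In mydat
            ' ... shared loop body ...
        Next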

    Read the article

  • Coarse classing based on weight of evidence in r

    - by user3619169
    How can I use weight of evidence to bin continuous data in R? For example, I have this data:

        Recency
        364 91 692 13 126 4 40 93 13 33 262 12 136 21 88 16 4 19 24 89
        36 5 274 125 740 6 13 715 591 443 104 853 260 125 62 357 559 155 163 16
        433 91 1380 96 374 130 574 101 5 11 34 401 13 215 168

    What should the command be to bin this variable into different groups based on weight of evidence (in other words, coarse classing)? The output I want is:

        Group I:   Recency < 200
        Group II:  Recency 200-400
        Group III: Recency > 400

    Thanks
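
    For the coarse classing itself, a base-R sketch with cut(); the break points 200 and 400 are taken from the desired output rather than computed from the weight of evidence:

        # sketch: bucket Recency into the three coarse classes
        recency <- c(364, 91, 692, 13, 126, 4, 40, 93, 13, 33)   # ... rest of the column
        groups  <- cut(recency,
                       breaks = c(-Inf, 200, 400, Inf),
                       labels = c("<200", "200-400", ">400"))
        table(groups)                                            # row counts per class

    A genuine WoE-driven binning would derive those break points from the target variable instead of hard-coding them.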

    Read the article

  • Permission based access control

    - by jellysaini
    I am trying to implement permission-based access control in ASP.NET. To implement this I have created some database tables that hold all the information about which permissions are assigned to which roles and which roles are assigned to which users. I am checking the permissions in the business access layer. Right now I have a method which checks the permissions of the user: if the user has permission it proceeds, otherwise it redirects to another page. I want to know whether the following is possible:

        class User
        {
            [PermissionCheck(UserID, ObjectName, OperationName)]
            public DataTable GetUser()
            {
                // coding for user
            }
        }

    I have seen this in MVC3. Can I create it in ASP.NET? If yes, how can I implement it?
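
    Plain ASP.NET has no pipeline that reads such an attribute for you, but a sketch of doing it by hand with reflection could look like this; the attribute and guard names are made up:

        // sketch: a declarative permission attribute plus a guard that checks it via reflection
        using System;

        [AttributeUsage(AttributeTargets.Method)]
        public class PermissionCheckAttribute : Attribute
        {
            public string ObjectName { get; private set; }
            public string OperationName { get; private set; }
            public PermissionCheckAttribute(string objectName, string operationName)
            {
                ObjectName = objectName;
                OperationName = operationName;
            }
        }

        public static class PermissionGuard
        {
            // plug the existing business-layer check in here
            public static Func<int, string, string, bool> HasPermission;

            public static void Check(object target, string methodName, int userId)
            {
                var method = target.GetType().GetMethod(methodName);
                var attr = (PermissionCheckAttribute)Attribute.GetCustomAttribute(
                               method, typeof(PermissionCheckAttribute));
                if (attr != null && !HasPermission(userId, attr.ObjectName, attr.OperationName))
                    throw new UnauthorizedAccessException("Permission denied: " + attr.OperationName);
            }
        }

    Note that attribute arguments must be compile-time constants, so the current user id has to be supplied when the check runs rather than inside the attribute itself.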

    Read the article

  • JAAS : on Callback ( Interesting based on HTTP headers )

    - by VJS
    I am using NameCallback and PasswordCallback for the username and password. A popup appears in the browser, and when I enter the username and password, JAAS authenticates my request. In Wireshark I can see the 401 Unauthorized response (with a WWW-Authenticate header) arrive; when I enter the username/password, an HTTP request with credentials (an Authorization header) is generated and sent to the server.

    My requirement: I don't want the popup to appear. My application on the other server already has the username/password, so once it receives the 401 it should, based on some logic, generate an HTTP request with the Authorization header/credentials and send it back.

    Flow: User -> Other Server -> My Tomcat 5.5. On the other server, nobody is available to enter the username/password manually; the application is deployed and should simply generate an HTTP request with credentials and send it back to Tomcat. Is there another callback that behaves like this? I would also appreciate feedback on the overall approach.
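
    If Basic authentication is acceptable, the calling server does not need a callback at all; it can answer (or pre-empt) the 401 by setting the Authorization header itself. A sketch; java.util.Base64 assumes Java 8, older JVMs would need commons-codec or similar:

        // sketch: open a connection that already carries Basic credentials, so no popup or second challenge is needed
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;

        public final class PreemptiveBasicAuth {
            public static HttpURLConnection open(String url, String user, String password) throws Exception {
                String token = Base64.getEncoder().encodeToString(
                        (user + ":" + password).getBytes(StandardCharsets.UTF_8));
                HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                conn.setRequestProperty("Authorization", "Basic " + token);
                return conn;
            }
        }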

    Read the article

  • file names based on file content

    - by Mark
    In other words, I'm looking for an algorithm to generate a unique, reasonable-length filename based on a file's binary content. Two files that have the same binary content should get the same name. Obviously there are limits to this: presumably you couldn't have unique, reasonable-length filenames for every member of a large set of large files differing only at a handful of bit positions. But presumably there is some heuristic or best approximation, for example one that exploits known attributes of typical image files. If I had the name of an algorithm that does this I could google it and find other approaches as well.
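
    The standard way to get "same bytes, same name" is a cryptographic hash of the content; a Python sketch:

        # sketch: name a file after the SHA-256 of its content, keeping the original extension
        import hashlib
        import os

        def content_name(path, chunk_size=1 << 20):
            digest = hashlib.sha256()
            with open(path, "rb") as handle:
                for chunk in iter(lambda: handle.read(chunk_size), b""):
                    digest.update(chunk)
            return digest.hexdigest() + os.path.splitext(path)[1]

    Note that a hash gives the opposite of the "similar files, similar names" heuristic hinted at in the question: files differing by a single bit get completely different names. Grouping near-duplicates would need something like a perceptual hash of the decoded image instead.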

    Read the article

  • ASP.Net Gridview, How to activate Edit Mode based on ID (DataKey)

    - by Jon P
    I have a page, let's call it SourceTypes.aspx, that has a GridView displaying a list of source types. Part of the GridView is a DataKey, SourceTypeID. If a SourceTypeID is passed to the page via a query string, how do I put the GridView into edit mode for the appropriate row based on that SourceTypeID? The GridView is bound to a SqlDataSource object. I have a feeling I am going to kick myself when the answer appears! I have looked at "Putting a gridview row in edit mode programmatically" but it is somewhat lacking in specifics.
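
    A sketch of one way to do it in the GridView's DataBound event; the query-string key and control names are assumptions:

        // sketch: find the row whose DataKey matches the query string, flip it to edit mode, rebind once
        protected void GridView1_DataBound(object sender, EventArgs e)
        {
            string requested = Request.QueryString["SourceTypeID"];
            if (string.IsNullOrEmpty(requested) || GridView1.EditIndex >= 0)
                return;   // nothing requested, or we are already in edit mode after the rebind

            for (int i = 0; i < GridView1.Rows.Count; i++)
            {
                if (string.Equals(GridView1.DataKeys[i].Value.ToString(), requested))
                {
                    GridView1.EditIndex = i;
                    GridView1.DataBind();
                    break;
                }
            }
        }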

    Read the article

  • SUM a pair of COUNTs from two tables based on a time variable

    - by Kevin O.
    Been searching for an answer to this for the better part of an hour without much luck. I have two regional tables laid out with the same column names and I can put out a result list for either table based on the following query (swap Table2 for Table1):

        SELECT Table1.YEAR, FORMAT(COUNT(Table1.id),0) AS Total
        FROM Table1
        WHERE Table1.variable='Y'
        GROUP BY Table1.YEAR

    Ideally I'd like to get a result that gives me a total sum of the counts by year, so instead of:

        | REGION 1       |    | REGION 2       |
        | YEAR  | Total  |    | YEAR  | Total  |
        | 2010  | 5      |    | 2010  | 1      |
        | 2009  | 2      |    | 2009  | 3      |
        |       |        |    | 2008  | 4      |

    I'd have:

        | MERGED         |
        | YEAR  | Total  |
        | 2010  | 6      |
        | 2009  | 5      |
        | 2008  | 4      |

    I've tried a variety of JOINs and other ideas but I think I'm caught up on the SUM and COUNT issue. Any help would be appreciated, thanks!
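
    A sketch of one way to merge them: UNION ALL the two per-region counts and sum the result (table and column names copied from the question):

        -- sketch: combine both regions, then add the yearly counts together
        SELECT yr AS `YEAR`, FORMAT(SUM(cnt), 0) AS Total
        FROM (
            SELECT Table1.YEAR AS yr, COUNT(Table1.id) AS cnt
            FROM Table1 WHERE Table1.variable = 'Y' GROUP BY Table1.YEAR
            UNION ALL
            SELECT Table2.YEAR, COUNT(Table2.id)
            FROM Table2 WHERE Table2.variable = 'Y' GROUP BY Table2.YEAR
        ) AS merged
        GROUP BY yr
        ORDER BY yr DESC;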

    Read the article

  • changing body class based on user's local time

    - by John
    I'm trying to add a body class of 'day' if it's 6am-5pm and 'night' otherwise, based on the user's local time. I tried the following but it didn't work. Any ideas? In the head:

        <script>
        function setTimesStyles() {
            var currentTime = new Date().getHours();
            if (currentTime > 5 && currentTime < 17) {
                document.body.className = 'day';
            } else {
                document.body.className = 'night';
            }
        }
        </script>

    In the body:

        <body onload="setTimeStyles();">

    Also, is there a more elegant way to achieve what I need?
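
    One likely culprit is simply the name mismatch between setTimesStyles in the script and setTimeStyles in the onload attribute. A sketch that also drops the inline handler:

        // sketch: consistent function name, attached from script instead of an inline onload attribute
        function setTimeStyles() {
            var hour = new Date().getHours();
            document.body.className = (hour >= 6 && hour < 17) ? 'day' : 'night';
        }
        window.addEventListener('load', setTimeStyles);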

    Read the article

  • Sort List based on dynamically generated numbers in C++

    - by user367322
    I have a list of objects ("Move"s in this case) that I want to sort based on their calculated evaluation. So I have the list, and a bunch of numbers, each "associated" with an element of the list. I now want to sort the list elements so that the first element has the lowest associated number and the last has the highest. Once the items are ordered I can discard the associated numbers. How do I do this? This is roughly what my code looks like:

        list<Move> moves = board.getLegalMoves(board.turn);
        for (i = moves.begin(); i != moves.end(); ++i) {
            // ...
            a = max;   // <-- number associated with current Move
        }
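
    A sketch of the usual approach (C++11 and a copyable Move assumed): keep each Move next to its score in a vector of pairs, sort that, then copy the moves back out:

        // sketch: sort moves by an associated score, lowest first, then discard the scores
        #include <algorithm>
        #include <list>
        #include <utility>
        #include <vector>

        std::list<Move> sortByScore(const std::list<Move>& moves,
                                    const std::vector<double>& scores)  // scores[i] belongs to the i-th move
        {
            std::vector<std::pair<double, Move> > scored;
            std::size_t i = 0;
            for (std::list<Move>::const_iterator it = moves.begin(); it != moves.end(); ++it, ++i)
                scored.push_back(std::make_pair(scores[i], *it));

            std::stable_sort(scored.begin(), scored.end(),
                             [](const std::pair<double, Move>& a, const std::pair<double, Move>& b) {
                                 return a.first < b.first;
                             });

            std::list<Move> result;
            for (std::size_t j = 0; j < scored.size(); ++j)
                result.push_back(scored[j].second);
            return result;
        }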

    Read the article

  • Delphi - change text color based on current record value

    - by Welliam
    I'm using DB controls connected to a Firebird database, and I have a simple DB label whose text color I want to change based on the value in the current record. The user navigates using the DBNavigator, and I wrote code in the navigator's button click handler. The problem is that the code always reads the previous record's value, not the current one, so the color is wrong. For example:

        procedure <navigator button click>;
        begin
          if table1.FieldByName('field1').AsString = 'val1' then
            <dblabel.textcolor> := red
          else
            <dblabel.textcolor> := green;
        end;

    But as I said, the value is one record behind. Why is that, and what is the best approach for changing the label's text color? Thanks
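
    Rather than hooking the navigator's buttons, one common place for this kind of logic is the dataset's AfterScroll event, which fires after the record pointer has moved. A sketch; the component names are assumptions:

        // sketch: AfterScroll runs after the record pointer has moved, so the field value is current
        procedure TForm1.Table1AfterScroll(DataSet: TDataSet);
        begin
          if DataSet.FieldByName('field1').AsString = 'val1' then
            DBText1.Font.Color := clRed
          else
            DBText1.Font.Color := clGreen;
        end;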

    Read the article

  • Show the specific field on mysql table based on active date

    - by mrjimoy_05
    Suppose that I have three tables:

        A) Table UsrHeader
        -----------------
        UsrID | UsrName
        -----------------
        1     | Abc
        2     | Bcd

        B) Table UsrDetail
        -------------------------------
        UsrID | UsrLoc | Date
        -------------------------------
        1     | LocA   | 10 Aug 2012
        1     | LocB   | 15 Aug 2012
        2     | LocA   | 10 Aug 2012

        C) Table Trx
        -----------------------------
        TrxID | TrxDate     | UsrID
        -----------------------------
        1     | 10 Aug 2012 | 1
        2     | 16 Aug 2012 | 1
        3     | 11 Aug 2012 | 2

    What I want to do is to show the result like this:

        ---------------------------------------
        TrxID | TrxDate     | UsrID | UsrLoc
        ---------------------------------------
        1     | 10 Aug 2012 | 1     | LocA
        2     | 16 Aug 2012 | 1     | LocB
        3     | 11 Aug 2012 | 2     | LocA

    Notice that there is one user but different locations. Based on the UsrDetail table, the user moved to another location on a specific date, so every transaction should show the user's location as of that date. I have tried this code but with no luck:

        SELECT trx.TrxID, trx.TrxDate, trx.UsrID, User.UsrName, User.UsrLoc
        FROM trx
        INNER JOIN (
            SELECT UsrHeader.UsrID, UsrHeader.UsrName, UserDetail.UsrLoc
            FROM UsrHeader
            INNER JOIN (
                SELECT UsrDetail.UsrID, UsrDetail.UsrLoc, UsrDetail.Date
                FROM UsrDetail
            ) AS UserDetail ON UserDetail.UsrID = UsrHeader.UsrID
        ) AS User ON User.UsrID = trx.UsrID AND trx.TrxDate >= User.Date

    How can I do that? Thanks.
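
    A sketch of one way to pick the "location as of the transaction date": join each transaction to the latest UsrDetail row on or before TrxDate (table and column names copied from the question):

        -- sketch: for every transaction, take the newest detail row whose Date is not after TrxDate
        SELECT t.TrxID, t.TrxDate, t.UsrID, d.UsrLoc
        FROM Trx AS t
        JOIN UsrDetail AS d
          ON d.UsrID = t.UsrID
         AND d.`Date` = (SELECT MAX(d2.`Date`)
                         FROM UsrDetail AS d2
                         WHERE d2.UsrID = t.UsrID
                           AND d2.`Date` <= t.TrxDate)
        ORDER BY t.TrxID;

    UsrHeader can be joined in the same way if UsrName is also needed in the output.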

    Read the article

  • Divide a array into multiple (individual) arrays based on a bin size in python

    - by user1492449
    I have an array like this:

        -0.68285  -6.919616
        -0.7876   -14.521115
        -0.64072  -43.428411
        -0.05368  -11.561341
        -0.43144  -34.768892
        -0.23268  -10.793603
        -0.22216  -50.341101
        -0.41152  -90.083377
        -0.01288  -84.265557
        -0.3524   -24.253145

    How do I split this array into individual arrays based on the value in column 1, with a bin width of 0.1? I want my output to be something like this:

        array1 = [[-0.05368, -11.561341], [-0.01288, -84.265557]]
        array2 = [[-0.23268, -10.79360], [-0.22216, -50.341101]]
        array3 = [[-0.3524, -24.253145]]
        array4 = [[-0.43144, -34.768892], [-0.41152, -90.083377]]
        array5 = [[-0.68285, -6.919616], [-0.64072, -43.428411]]
        array6 = [[-0.7876, -14.521115]]
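
    A NumPy sketch: compute an integer bin id per row from column 0 and group on it. The grouping matches the desired output, though the order of the sub-arrays may differ:

        # sketch: bin id = floor(value / 0.1); rows sharing a bin id end up in the same sub-array
        import numpy as np

        data = np.array([[-0.68285,  -6.919616],
                         [-0.7876,  -14.521115],
                         [-0.64072, -43.428411],
                         [-0.05368, -11.561341],
                         [-0.43144, -34.768892],
                         [-0.23268, -10.793603],
                         [-0.22216, -50.341101],
                         [-0.41152, -90.083377],
                         [-0.01288, -84.265557],
                         [-0.3524,  -24.253145]])

        bin_ids = np.floor(data[:, 0] / 0.1).astype(int)
        arrays = [data[bin_ids == b] for b in np.unique(bin_ids)]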

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:

    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
    JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
    JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
    JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.

    1. Recap and Prerequisites

    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples, which also include instructions on how to verify that the objects are set up correctly.

    WebLogic Server Objects

        Object Name            Type                JNDI Name
        TestConnectionFactory  Connection Factory  jms/TestConnectionFactory
        TestJMSQueue           JMS Queue           jms/TestJMSQueue
        eis/wls/TestQueue      Connection Pool     eis/wls/TestQueue

    Schema XSD File

    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.

    stringPayload.xsd

        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>

    JMS Message

    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:

        <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

    JDeveloper Connection

    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to.

    2. Create a BPEL Composite with a JMS Adapter Partner Link

    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema.

    There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate.

    But it has one disadvantage as a demonstrative model. When a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor the messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution would be to pause consumption for the queue and resume it when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.)

    We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won't complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code which will print the message to standard output, so we can view it in the SOA server's log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one. Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite. Create a JMS Adapter Partner Link In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterRead Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Consume Message Operation Name: Consume_message Consume Operation Parameters Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created in a previous example. JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string . Press Next and Finish, which will complete the JMS Adapter configuration.Save the project. Create a BPEL Component Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK. Wire the JMS Adapter to the BPEL Component Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level. 3 . 
Complete the BPEL Process Design Invoke the BPEL Flow via the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received. At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor. Add a Java Embedding Activity First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary.: System.out.println("JmsAdapterReadSchema process picked up a message"); oracle.xml.parser.v2.XMLElement inputPayload =    (oracle.xml.parser.v2.XMLElement)getVariableData(                           "Receive1_Consume_Message_InputVariable",                           "body",                           "/ns2:exampleElement");   String inputString = inputPayload.getFirstChild().getNodeValue(); System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue()); Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server. 3. Test the Composite Shut Down the JmsAdapterReadSchema Composite After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0] . 
Press the Shut Down button to disable the composite and confirm the following popup. Monitor Messages in the JMS Queue In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail. Send a Test Message In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console. Monitor the SOA Server’s Output A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected. Re-Enable the JmsAdapterReadSchema Composite In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema Note that you can also monitor the payload received by the process, by navigating to the the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too. 4 . Troubleshooting This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc. check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery

    Read the article

  • Different Service behaviors per endpoint

    - by Preben Huybrechts
    The situation We are implementing different sort of security on some WCF service. ClientCertificate, UserName & Password and Anonymous. We have 2 ServiceBehaviorConfigurations, one for httpBinding and one for wsHttpBinding. (We have custom authorization policies for claim based security) As a requirement we need different endpoints for each service. 3 endpoints with httpBinding and 1 with wsHttpBinding. Example for one service: basicHttpBinding : Anonymous basicHttpBinding : UserNameAndPassword basicHttpBinding : BasicSsl wsHttpBinding : BasicSsl The Problem Part 1: We cannot specify the same service twice, once with the http service configuration and once with the wsHttp service configuration. Part 2: We cannot specify service behaviors on an endpoint. (Throws and exception, No endpoint behavior was found... Service behaviors cant be set to endpoint behaviours) The Config For part 1: <services> <service name="Namespace.MyService" behaviorConfiguration="securityBehavior"> <endpoint address="http://server:94/MyService.svc/Anonymous" contract="Namespace.IMyService" binding="basicHttpBinding" bindingConfiguration="Anonymous"> </endpoint> <endpoint address="http://server:94/MyService.svc/UserNameAndPassword" contract="Namespace.IMyService" binding="basicHttpBinding" bindingConfiguration="UserNameAndPassword"> </endpoint> <endpoint address="https://server/MyService.svc/BasicSsl" contract="Namespace.IMyService" binding="basicHttpBinding" bindingConfiguration="BasicSecured"> </endpoint> </service> <service name="Namespace.MyService" behaviorConfiguration="wsHttpCertificateBehavior"> <endpoint address="https://server/MyService.svc/ClientCert" contract="Namespace.IMyService" binding="wsHttpBinding" bindingConfiguration="ClientCert"/> </service> </services> Service Behavior configuration: <serviceBehaviors> <behavior name="securityBehavior"> <serviceAuthorization serviceAuthorizationManagerType="Namespace.AdamAuthorizationManager,Assembly"> <authorizationPolicies> <add policyType="Namespace.AdamAuthorizationManager,Assembly" /> </authorizationPolicies> </serviceAuthorization> </behavior> <behavior name="wsHttpCertificateBehavior"> <serviceMetadata httpGetEnabled="false" httpsGetEnabled="true"/> <serviceAuthorization serviceAuthorizationManagerType="Namespace.AdamAuthorizationManager,Assembly"> <authorizationPolicies> <add policyType="Namespace.AdamAuthorizationManager,Assembly" /> </authorizationPolicies> </serviceAuthorization> <serviceCredentials> <clientCertificate> <authentication certificateValidationMode="PeerOrChainTrust" revocationMode="NoCheck"/> </clientCertificate> <serviceCertificate findValue="CN=CertSubject"/> </serviceCredentials> </behavior> How can we specify a different service behaviour on the WsHttpBinding endpoint? Or how can we apply our authorization policy in a different way for wsHttpBinding then basicHttpBinding. We would use endpoint behavior but we can't specify our authorization policy on an endpoint behavior

    Read the article

  • DCOM Authentication Fails to use Kerberos, Falls back to NTLM

    - by Asa Yeamans
    I have a webservice that is written in Classic ASP. In this web service it attempts to create a VirtualServer.Application object on another server via DCOM. This fails with Permission Denied. However I have another component instantiated in this same webservice on the same remote server, that is created without problems. This component is a custom-in house component. The webservice is called from a standalone EXE program that calls it via WinHTTP. It has been verified that WinHTTP is authenticating with Kerberos to the webservice successfully. The user authenticated to the webservice is the Administrator user. The EXE to webservice authentication step is successful and with kerberos. I have verified the DCOM permissions on the remote computer with DCOMCNFG. The default limits allow administrators both local and remote activation, both local and remote access, and both local and remote launch. The default component permissions allow the same. This has been verified. The individual component permissions for the working component are set to defaults. The individual component permissions for the VirtualServer.Application component are also set to defaults. Based upon these settings, the webservice should be able to instantiate and access the components on the remote computer. Setting up a Wireshark trace while running both tests, one with the working component and one with the VirtualServer.Application component reveals an intresting behavior. When the webservice is instantiating the working, custom, component, I can see the request on the wire to the RPCSS endpoint mapper first perform the TCP connect sequence. Then I see it perform the bind request with the appropriate security package, in this case kerberos. After it obtains the endpoint for the working DCOM component, it connects to the DCOM endpoint authenticating again via Kerberos, and it successfully is able to instantiate and communicate. On the failing VirtualServer.Application component, I again see the bind request with kerberos go to the RPCC endpoing mapper successfully. However, when it then attempts to connect to the endpoint in the Virtual Server process, it fails to connect because it only attempts to authenticate with NTLM, which ultimately fails, because the webservice does not have access to the credentials to perform the NTLM hash. Why is it attempting to authenticate via NTLM? Additional Information: Both components run on the same server via DCOM Both components run as Local System on the server Both components are Win32 Service components Both components have the exact same launch/access/activation DCOM permissions Both Win32 Services are set to run as Local System The permission denied is not a permissions issue as far as I can tell, it is an authentication issue. Permission is denied because NTLM authentication is used with a NULL username instead of Kerberos Delegation Constrained delegation is setup on the server hosting the webservice. 
The server hosting the webservice is allowed to delegate to rpcss/dcom-server-name The server hosting the webservice is allowed to delegate to vssvc/dcom-server-name The dcom server is allowed to delegate to rpcss/webservice-server The SPN's registered on the dcom server include rpcss/dcom-server-name and vssvc/dcom-server-name as well as the HOST/dcom-server-name related SPNs The SPN's registered on the webservice-server include rpcss/webservice-server and the HOST/webservice-server related SPNs Anybody have any Ideas why the attempt to create a VirtualServer.Application object on a remote server is falling back to NTLM authentication causing it to fail and get permission denied? Additional information: When the following code is run in the context of the webservice, directly via a testing-only, just-developed COM component, it fails on the specified line with Access Denied. COSERVERINFO csi; csi.dwReserved1=0; csi.pwszName=L"terahnee.rivin.net"; csi.pAuthInfo=NULL; csi.dwReserved2=NULL; hr=CoGetClassObject(CLSID_VirtualServer, CLSCTX_ALL, &csi, IID_IClassFactory, (void **) &pClsFact); if(FAILED( hr )) goto error1; // Fails here with HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED) hr=pClsFact->CreateInstance(NULL, IID_IUnknown, (void **) &pUnk); if(FAILED( hr )) goto error2; Ive also noticed that in the Wireshark Traces, i see the attempt to connect to the service process component only requests NTLMSSP authentication, it doesnt even attmept to use kerberos. This suggests that for some reason the webservice thinks it cant use kerberos...

    Read the article

  • How would I handle input with a Game Component?

    - by Aufziehvogel
    I am currently having problems from finding my way into the component-oriented XNA design. I read an overview over the general design pattern and googled a lot of XNA examples. However, they seem to be right on the opposite site. In the general design pattern, an object (my current player) is passed to InputComponent::update(Player). This means the class will know what to do and how this will affect the game (e.g. move person vs. scroll text in a menu). Yet, in XNA GameComponent::update(GameTime) is called automatically without a reference to the current player. The only XNA examples I found built some sort of higher-level Keyboard engine into the game component like this: class InputComponent: GameComponent { public void keyReleased(Keys); public void keyPressed(Keys); public bool keyDown(Keys); public void override update(GameTime gameTime) { // compare previous state with current state and // determine if released, pressed, down or nothing } } Some others went a bit further making it possible to use a Service Locator by a design like this: interface IInputComponent { public void downwardsMovement(Keys); public void upwardsMovement(Keys); public bool pausedGame(Keys); // determine which keys pressed and what that means // can be done for different inputs in different implementations public void override update(GameTime); } Yet, then I am wondering if it is possible to design an input class to resolve all possible situations. Like in a menu a mouse click can mean "click that button", but in game play it can mean "shoot that weapon". So if I am using such a modular design with game components for input, how much logic is to be put into the InputComponent / KeyboardComponent / GamepadComponent and where is the rest handled? What I had in mind, when I heard about Game Components and Service Locator in XNA was something like this: use Game Components to run the InputHandler automatically in the loop use Service Locator to be able to switch input at runtime (i.e. let player choose if he wants to use a gamepad or a keyboard; or which shall be player 1 and which player 2). However, now I cannot see how this can be done. First code example does not seem flexible enough, as on a game pad you could require some combination of buttons for something that is possible on keyboard with only one button or with the mouse) The second code example seems really hard to implement, because the InputComponent has to know in which context we are currently. Moreover, you could imagine your application to be multi-layered and let the key-stroke go through all layers to the bottom-layer which requires a different behaviour than the InputComponent would have guessed from the top-layer. The general design pattern with passing the Player to update() does not have a representation in XNA and I also cannot see how and where to decide which class should be passed to update(). At most time of course the player, but sometimes there could be menu items you have to or can click I see that the question in general is already dealt with here, but probably from a more elobate point-of-view. At least, I am not smart enough in game development to understand it. I am searching for a rather code-based example directly for XNA. And the answer there leaves (a noob like) me still alone in how the object that should receive the detected event is chosen. Like if I have a key-up event, should it go to the text box or to the player?

    Read the article

  • Case studies for successful service (project) based software development businesses without constant overtime from its employees [closed]

    - by Ryan Taylor
    I work for an IT company that is primarily services (project) based rather than product based. All software engineers are salaried. The company has set a new expectation that everyone should work 48 hours per week instead of 40. Note, this isn't occasional overtime due to crunches; this is the new 40. The reasoning is that this enables the company to provide benefits to its employees, such as monetary incentives and training, because the company is more profitable: more hours worked = more billable hours = larger profit. I understand the need for profitability and the occasional crunch time, and I have put in the extra hours when it was needed and beneficial to the project. However, I am also very sensitive to work-life balance and have raised my concerns about the new expectation. My employer is open to other methods of increasing profitability, so I hold out hope that we can turn things around before it becomes a horrible place to work. How does a services-based company become more profitable without increasing the number of hours expected from its salaried employees? Are there any case studies showing the pros and cons of consistent overtime? Are there any case studies of a successful service-based business model (for software development companies) that does not require consistent overtime from its employees?

    Read the article
